Posted about 15 years ago by Liraz Siri
We've just uploaded to SourceForge our first ever Debian-based virtual appliance: a beta of TurnKey Core on the rock stable Lenny release.
DOWNLOAD BETA:
105MB ISO (changelog) (sig) (manifest)
It has about the same features as the Ubuntu 10.04 based Core beta we released a couple of weeks ago, with a few minor exceptions (e.g., grub instead of grub-pc, byobu not included).
Ubuntu-Debian chimeras considered harmful
Most users probably don't realize it, but a handful of our current crop of "Ubuntu based" appliances are actually Ubuntu-Debian chimeras. The package management system (APT) is technically capable of mixing packages from different distributions. We took advantage of that to configure some TurnKey appliances to get security updates directly from Debian for certain packages that were not supported on Ubuntu.
Unfortunately it's a relatively complicated hack that relies on poorly documented, rarely used, and consequently buggy APT functionality.
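For the curious, such a mixed-origin setup looks roughly like this (a minimal sketch with a hypothetical package name "foo", not our exact configuration): add a Debian repository alongside the Ubuntu ones, then use APT pinning to pull only specific packages from it.

# /etc/apt/sources.list (excerpt)
deb http://archive.ubuntu.com/ubuntu hardy main universe
deb http://security.debian.org lenny/updates main

# /etc/apt/preferences
# keep Debian packages out by default (priority below the standard 500)...
Package: *
Pin: release o=Debian
Pin-Priority: 100

# ...except the hypothetical package "foo", which tracks Debian security updates
Package: foo
Pin: release o=Debian
Pin-Priority: 990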
It hasn't back-fired yet, but we'd rather not wait for that to happen.
TurnKey appliances are configured to auto-update security fixes by default, so safety and robustness are key concerns. We don't want to risk breaking anything in the future. Better safe than sorry!
So from now on, no more chimeras. The upcoming Ubuntu Lucid based appliances will be 100% Ubuntu, even if that means some packages don't get security updates.
Are Debian based appliances worth the trouble?
This brings us to our dilemma. Guaranteed security updates for all packages are a big deal, at least for us. And only Debian provides that.
Which got us thinking. How much extra work would it take to also build a Debian-based TurnKey Core? And would the interest from the community justify the effort?
Bottom line: it was a bit harder than we anticipated but we made it happen and now we need the community's help in figuring out if it matters.
Though we haven't committed to it yet, we are seriously considering Debian-based builds of all TurnKey Linux appliances. But that depends on the feedback we get from you and the level of interest in this.
Frankly, we don't have the resources to thoroughly test both Debian and Ubuntu based builds of all TurnKey appliances.
That means to pull this off we'll need all the help we can get testing Betas, providing feedback on issues that come up, filing and triaging bug reports, etc.
If you care about TurnKey on Debian, we'll need you to step up to the plate and help us make it happen.
Ubuntu vs. Debian: the story so far...
So far TurnKey has been known as an Ubuntu based open source project, so this move towards Debian may come as a surprise to some, but those of you who have been following closely know that Debian support has actually been in our sights since TurnKey's conception.
One of our first polls asked: would you prefer virtual appliances based on Ubuntu or Debian?
Results so far (based on 763 votes):
62%: Ubuntu for both client and server roles
23%: Debian for server roles, Ubuntu for client roles
15%: Debian for both client and server roles
Despite a clear preference for Ubuntu (which is better known due to its popularity on the desktop), a significant 38% still prefer Debian for server roles.
This resonates with us because when TurnKey was still on the drawing board a couple of years ago we debated Ubuntu vs. Debian extensively. In the end Ubuntu won by a slim margin but it was a tough call!
I'll talk a little about the thought process behind that decision because, despite a couple of years going by, the big picture hasn't really changed.
Back then we were mainly using Ubuntu on our desktops and Debian on our servers and frankly Debian seemed like a more natural choice for a server-oriented virtual appliance library.
The "main" problem with Ubuntu is that only a subset of packages in the "main" component are officially supported with security updates.
By contrast, Debian supported all 25,000 packages with carefully backported, well tested security fixes that could be safely applied to a production system. Debian is also rock solid in terms of stability, which is something you usually want in a server operating system, even when it comes at the expense of having the latest package versions.
On the other hand, Ubuntu 8.04 (Hardy), a Long Term Support version, had just been released, and Debian Lenny was still a work in progress. Stability comes at a price, and it's one of the main reasons for Debian's notoriously slow release cycle.
But we didn't want to wait for who knew how long, or start a new project on an old distribution...
Plus, we shared Ubuntu's values regarding making open source accessible to everyone, not just savvy experts. That meant encouraging a community atmosphere in which everyone was welcome and treated with respect. Which is why we adopted the Ubuntu Code of Conduct. Unfortunately, the Debian community had a reputation for being more withdrawn and elitist.
Ubuntu's popular appeal was also a factor. Let's face it, today Ubuntu has far better name recognition than Debian, though I think that's mostly due to superior marketing and more effective leadership. Having a wealthy benevolent dictator that can bankroll the operation definitely has a few advantages (and disadvantages!).
But keep in mind that though Ubuntu and Debian do inevitably compete for users in some areas, they aren't really in direct opposition. In fact, every 6 months a new version of Ubuntu begins its life from a snapshot of the unstable Debian version in development.
Certainly Ubuntu deserves credit for pushing the envelope in areas like usability, but it's Debian's self-governed volunteer workforce of over 1600 developers who do much of the unglamorous heavy lifting.
Long story short, try the beta and tell us what you think. Obviously we have immense respect for both distributions and we'd like to hear your views on where we should take it from here.
Posted about 15 years ago by Alon Swartz
Well, it took a little longer than expected, but we are pleased to announce that TurnKey Core, the common base for all appliances, has been released based on Ubuntu 10.04 LTS (Lucid Lynx).
Ubuntu 10.04 LTS will be supported for five years. This is a beta release, so take it for a spin and let us know what you think. If you come across any issues, please report them. If you have ideas on how to make it better, let us know.
DOWNLOAD BETA:
144MB ISO (changelog) (sig) (manifest)
All other (beta) appliances based on Ubuntu 10.04 LTS will be released in batches in the following weeks leading up to the official release, which is planned for the beginning of August. This is to coincide with the release of Ubuntu 10.04.1, which is recommended for production deployment.
Changes
Bootsplash
The bootsplash menu has been updated. "Install to hard disk" is now the first option, selected by default. "Live system" has been renamed to "Try without installing". A warning message is now displayed when running in live non-persistent mode.
Recommended packages _not_ installed by default (APT)
This is not really a change from TurnKey Core 8.04; it's actually the same configuration. It's notable because Ubuntu (since 8.10) installs recommends by default. We chose to keep the old configuration because TurnKey appliances are minimal and only include what needs to be included. We believe this is the right decision; if you think differently, we'd love to hear your thoughts.
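For reference, this behavior is controlled by a single APT setting; a minimal sketch (the file name below is illustrative; any file under /etc/apt/apt.conf.d will do):

# /etc/apt/apt.conf.d/01norecommends (file name is illustrative)
APT::Install-Recommends "false";

The same effect is available per-invocation with apt-get install --no-install-recommends.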
Byobu - Screen for human beings
While attending the Ubuntu Developer Summit (UDS) for Maverick, I was introduced to byobu by its developer, Dustin Kirkland. I found byobu much more user friendly than screen, as well as more informative thanks to its notification plugins (e.g., memory and processor usage, package upgrades, clock). We decided not only to include it in Core, but also to launch it by default. Again, we'd love to hear your thoughts on this decision.
To get you started, here are some of the keyboard shortcuts (see the manual for more info: man byobu):
F2 - Create a new window
F3 - Move to previous window
F4 - Move to next window
F6 - Detach from this session
F8 - Re-title a window
F9 - Configuration Menu
Improved terminal
The bash configuration has been customized to include colored output (ls, grep, etc.) as well as a prompt capped at 2 directory levels (e.g., instead of /usr/share/doc/foo/bar/xyz only bar/xyz will be displayed). The bash-completion package is also installed by default, which we find very useful. In addition, we have also added ~/bashrc.d support seeded with some configuration scripts, one of them being penv, which Liraz and I use all the time; more on that later...
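Our exact scripts differ, but a rough sketch of this kind of bash configuration (PROMPT_DIRTRIM is one way to get the trimmed-prompt effect in bash 4 and later):

# sketch of ~/.bashrc additions
export PROMPT_DIRTRIM=2          # trim \w in the prompt to the last 2 path components
alias ls='ls --color=auto'       # colored output
alias grep='grep --color=auto'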
Syslog upgrade
The system and kernel logging packages (sysklogd and klogd) have been replaced with rsyslog, an enhanced multi-threaded syslogd with awesome features. This change is in line with Ubuntu, which made the same move in Ubuntu 9.10. The Webmin syslog configuration has been tweaked accordingly.
GRUB-PC (aka. GRUB2)
Our installer (di-live) has gone through a major upgrade and now supports GRUB-PC, a cleaner design than its predecessor with more advanced features. The default configuration has been slightly tweaked to display a timeout by default, run in console mode, and be more verbose.
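On a grub-pc system those defaults live in /etc/default/grub. A minimal sketch of tweaks along those lines (illustrative values, not necessarily the ones we ship; run update-grub afterwards to apply):

# /etc/default/grub (illustrative values)
GRUB_TIMEOUT=5                  # show the boot menu with a visible timeout
GRUB_TERMINAL=console           # run the menu in plain console mode
GRUB_CMDLINE_LINUX_DEFAULT=""   # drop "quiet splash" for a more verbose boot

update-grub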
All other changes are available in the changelog.
Features
Base distribution: Ubuntu 10.04 LTS
Runs on bare metal in addition to most types of virtual machines (e.g., VMWare, VirtualBox, Xen HVM, KVM).
Installable Live CD ISO:
    Supports installation to an available storage device.
    Supports running in a non-persistent (demo) mode.
Auto-updated on firstboot and daily with the latest security patches.
Easy to use configuration console:
    Displays basic usage information.
    Configure networking (supports multiple network interfaces).
    Reboot or shutdown appliance.
Ajax web shell (shellinabox) - SSH client not required.
User friendly screen wrapper (byobu) launched by default on login.
Easy to use web management interface (Webmin):
    Listens on port 12321 (uses SSL).
    Mac OS X themed.
    Network modules:
        Firewall configuration (with example configuration).
        Network configuration.
    System modules:
        Configure time, date and timezone.
        Configure users and groups.
        Manage software packages.
        Change passwords.
        System logs.
    Tool modules:
        Text editor.
        Shell commands.
        Simple file upload/download.
        File manager (requires Java support in the browser).
        Custom commands.
Regenerates cryptographic keys on first boot:
    SSL certificate used by webmin, apache2, lighttpd - /etc/ssl/certs/cert.pem.
    SSH keys.
Console auto login when running in live/demo mode.
Default credentials (for Webmin and SSH):
    username root
    no password (user sets password during installation)
Call for testing and feedback
We need your help in testing the beta releases, and your feedback to make the official release rock! What are you waiting for? Get it here.
Posted about 15 years ago by Liraz Siri
A few days ago I noticed the average load on a newly set up VPS was too high relative to its workload. The server was noticeably sluggish. Load was high, and a page on the wiki it was running would occasionally take 10 seconds to load.
I didn't have too much time to kill but I decided I would at least try picking off the lowest hanging fruit before upgrading to a beefier, more expensive plan which would cost a few hundred dollars extra a year.
When I noticed swap usage was relatively high, I suspected the server was constantly moving memory in and out of swap.
One of the first things you should try when you're tuning the performance of a server is to reduce memory usage so that it doesn't use as much swap and can use more of your memory for disk buffering. Accessing buffered blocks from memory is a gazillion times faster than accessing them from disk.
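A quick way to confirm a server is actively swapping, rather than just holding stale pages in swap, is to watch the swap-in/swap-out counters:

free -m       # memory and swap usage, in MB
vmstat 1 5    # sustained non-zero si/so columns mean memory is moving in and out of swap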
So how do you reduce memory usage without reducing functionality? By tuning the trade off between memory and CPU time.
One example of this trade-off in practice is pre-forking, a common technique used by many systems to improve user-perceived performance. The principle is that by starting processes in advance of when they're needed, users are spared the overhead of waiting for a process to start up before it can serve them.
The catch is that keeping spare processes around requires memory.
When you have a limited amount of real memory, any performance gains from pre-forking can quickly evaporate due to reduced IO performance.
On this server, we didn't really have that much activity for things like mail or web, but many subsystems were configured (by default) to pre-fork at least 5 worker processes in advance.
This is a reasonable configuration for a dedicated server with lots of memory and hundreds of users, but not ideal for that cheap VPS you just setup for your workgroup.
I reduced memory consumption by over 200MB by simply decreasing the number of preforked worker processes for apache (20-25MB per process), spamassassin (30MB per process), and saslauthd. Average load dropped immediately by about 80%.
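For apache, the relevant knobs are the prefork MPM settings. A minimal sketch with values scaled down for a small VPS (illustrative numbers, not the exact ones I used):

# Apache 2.2 prefork MPM (e.g., in /etc/apache2/apache2.conf)
<IfModule mpm_prefork_module>
    StartServers          1     # don't pre-fork a crowd of workers on a memory-starved box
    MinSpareServers       1
    MaxSpareServers       3
    MaxClients           10     # cap total workers so memory usage stays bounded
</IfModule>

Similar knobs exist for spamassassin (spamd's --max-children option) and saslauthd (the -n option).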
Total time spent picking this low hanging fruit: 5 minutes.
Posted about 15 years ago by Alon Swartz
Towards the end of last year we decided it was time to start working on an idea we've been toying with for a while. Mapping out the feature set was fun, and a lot of the current and future features are based on feedback we received from you guys and gals, as well as many related questions and comments from around the net.
Today after a last round of internal testing we are pleased to announce that we've launched TurnKey Hub into private beta:
https://hub.turnkeylinux.org
We've just sent out the first batch of invites. If you've already requested an invite, you should receive one as we roll them out. Please be patient. We initially have limited capacity so it's first come, first served.
What is this TurnKey Hub?
The short version: TurnKey cloud deployment: simplified.
The slightly longer version: An easy way to launch and manage TurnKey Linux appliances in the Amazon EC2 cloud (more clouds and VPS providers on the way).
A simple Amazon EC2 console optimized for TurnKey
Launch TurnKey appliances at the click of a button.
TurnKey optimized firewall templates.
Configure custom passwords on launch.
Authenticate with personal SSH key in addition to EC2 keypairs.
Automatically set up EBS devices and Elastic IPs on launch.
Easier management with descriptive labels for all assets.
Unified interface for all regions and all your EC2 accounts.
And that's just the tip of the iceberg. There's much more in development...
Upcoming features
More clouds: Support all clouds and VPS providers.
Backup: Automatic encrypted appliance backups.
Migration: Automatically restore backups anywhere.
You decide: Suggest features and help us prioritize.
In other words this is just the first modest step in a much more ambitious plan to continue making TurnKey easier to use, as Liraz recently explained:
"Imagine being able to develop your site on a locally running appliance (e.g., running in VirtualBox or VMWare). Then, when you're ready you can automatically migrate your appliance, with all your customizations to a cloud hosting provider of your choice."
So once you receive your invite, take the Hub for a spin, let us know what you think. We'd like to know how to make this better. What new features you'd like to see implemented. That sort of thing.
Remember, you can request an invite here.
Posted about 15 years ago by Liraz Siri
In a previous post I explained why we decided to convert most of the images on this site from PNG to JPG and how we used ImageMagick to batch it.
What I didn't get into is how we updated Drupal, our CMS, to point to all these newly converted files. Manually uploading and updating nearly two hundred new images through a web form is time consuming and boring. Finding a non-labor-intensive solution can also be time consuming... but so much more interesting!
Once you know how, it's not really that difficult.
Here's a simple example:
<?php
error_reporting(E_ALL);

require_once './includes/bootstrap.inc';
drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);

$results = db_query('select nid from node where type="appliance"');
while ($result = db_fetch_object($results)) {
    $node = node_load($result->nid);

    # useful if you don't know $node like the back of your hand
    print "<pre>";
    print htmlspecialchars(print_r($node, TRUE), ENT_QUOTES);
    print "</pre>";

    # somewhat useless example edit
    $name = strtolower($node->field_name[0]['value']);
    $node->field_name[0]['value'] = $name;

    node_save($node);
}
?>
Save this as a temporary PHP script in your web root, point your browser to the location, and voila! You just lowercased the value of the CCK name field for all appliance nodes.
For your custom edits you should of course replace the value passed to db_query(). You can use Views to help you create this SQL query:
Leveraging Views as an SQL query builder is especially helpful with more complex queries.
But wait, there's a catch!
It turns out CCK file fields are a special case and you can't edit them with a simple node_save(). That's because $node contains a reference to the file field object, not the object itself.
This slowed me down for a bit, but eventually I figured out how to use drupal_write_record to update the files table:
<?php
error_reporting(E_ALL);

require_once './includes/bootstrap.inc';
drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);

function f(&$s) {
    $s = str_replace("png", "jpg", $s);
}

drupal_flush_all_caches();

$results = db_query('select nid from node where type="appliance"');
while ($result = db_fetch_object($results)) {
    $node = node_load($result->nid);

    $file = $node->field_icon[0];
    f($file['filename']);
    f($file['filepath']);
    f($file['filemime']);

    $realpath = "/path/to/webroot/" . $file['filepath'];
    $stat = stat($realpath);
    if (!$stat)
        continue;

    $file['filesize'] = $stat[7];  # stat index 7 is the file size in bytes

    print "<pre>";
    print htmlspecialchars(print_r($file, TRUE), ENT_QUOTES);
    drupal_write_record('files', $file, 'fid');
    print "</pre>";
}
drupal_flush_all_caches();
?>
Posted over 15 years ago by Alon Swartz
By reducing the file size of your CSS, JavaScript and images, as well as the number of unnecessary browser requests made to your site, you can drastically reduce the load time of your application's pages, not to mention the load on your server.
Yahoo has created a list of the 35 best practices to speed up your website, a recommended read for any web developer. I wanted to summarize a few I recently implemented in a Django application.
In a nutshell:
Serve compressed static media files. This one is obvious: the smaller the file, the quicker it is to download.
Once a browser has already downloaded the static media, it shouldn't have to download it again. And indeed it doesn't: the server responds with 304 Not Modified, which means the cached file is still up-to-date.
However, the HTTP request is still sent. This is unnecessary and can be avoided by setting the HTTP Expires header to a date in the future.
When files are updated, you need to give them a new URL so the browser will request them. This is referred to as versioning. A common scheme is to use the modification time/date of the file in its URL, either as part of the name or in the query string. This is obviously tedious and error prone, so automation is required.
Django-Compress (CSS/JS compression and auto-versioning)
django-compress provides an automated system for compressing CSS and JavaScript files, with built-in support for auto-versioning. I was pleasantly surprised by the ease of integration, and was up and running within a couple of minutes.
Installation
Firstly, django-compress depends on csstidy, so let's install it.
apt-get update
apt-get install csstidy
Now, grab the latest tarball from github.
wget http://github.com/pelme/django-compress/tarball/master
Unpack and add it to your site (or the Python path).
tar -zxf pelme-django-compress-*.tar.gz
cp -a pelme-django-compress-*/compress /path/to/django/apps/compress
Update settings.py to enable django-compress, auto-updates, and versioning.
settings.py
COMPRESS = True
COMPRESS_AUTO = True
COMPRESS_VERSION = True
...
INSTALLED_APPS = (
    ...
    'compress',
)
Configuration
Configure which CSS and JavaScript files to compress and auto-version.
settings.py
COMPRESS_CSS = {
    'screen': {
        'source_filenames': ('css/style.css', 'css/foo.css', 'css/bar.css'),
        'output_filename': 'compress/screen.?.css',
        'extra_context': {
            'media': 'screen,projection',
        },
    },
}
COMPRESS_JS = {
    'all': {
        'source_filenames': ('js/jquery-1.2.3.js', 'js/jquery-preload.js'),
        'output_filename': 'compress/all.?.js',
    },
}
Note: The '?' will be substituted with the epoch time (i.e., the version).
I hate hardcoding; it's just too error-prone and doesn't scale, so I used this primitive helper function to auto-generate the required configuration.
utils.py
import os

# generate config for django-compress
# alpha-numeric ordering for customization as required
def compress_cfg(media_root, dir, output):
    files = os.listdir(os.path.join(media_root, dir))
    files.sort()
    return {'source_filenames': tuple(os.path.join(dir, f) for f in files),
            'output_filename': output}
Using the above helper function, the hardcoded files can be replaced with the following for auto-generation (after MEDIA_ROOT is defined).
settings.py
from utils import compress_cfg
...
COMPRESS_CSS = {'screen': compress_cfg(MEDIA_ROOT, 'css', 'compress/screen.?.css')}
COMPRESS_JS = {'all': compress_cfg(MEDIA_ROOT, 'js', 'compress/all.?.js')}
Usage
Now you can update your templates to use the compressed auto-versioned files, for example:
base.html
{% load compressed %}
<html>
<head>
{% compressed_css 'screen' %}
{% compressed_js 'all' %}
</head>
...
Image optimization and versioning
With regards to image optimization, take a look at Liraz's post on PNG vs JPG: 6 simple lessons you can learn from our mistakes.
It would be great if django-compress supported image versioning, but it currently doesn't.
I found this nice snippet which provides a template tag for asset versioning, such as images, but it only solves half the problem, the other half being images specified in CSS.
If you need to go the image versioning route, a more complete solution is django-static, which also does CSS/JS compression, though I prefer django-compress.
Currently, I have not implemented image versioning. It's just not worth the complexity: I don't have too many images, and I don't plan to change them often (hopefully). The Expires header is good enough for now. (Premature optimization is the root of all evil.)
HTTP Expires header
As discussed above, without an Expires header, the browser will request media files on every page load, and will receive a 304 Not Modified response if the cached media files are up-to-date.
Setting a far-future Expires header is possible, and recommended, now that you have versioned CSS and JavaScript files.
In the following examples I have set images to expire 2 hours after they are accessed, but you can tweak this to your specific use case. CSS and JavaScript (basically) never expire.
Depending on your media server, add your custom configuration, enable the expires module (mod_expires on Apache, mod_expire on lighttpd), and reload the webserver.
Apache2
ExpiresActive On
ExpiresByType image/jpeg "access plus 2 hours"
ExpiresByType image/png "access plus 2 hours"
...
ExpiresByType text/css "access plus 10 years"
ExpiresByType application/x-javascript "access plus 10 years"
Lighttpd
server.modules = (
    "mod_expire",
    ...
)

$HTTP["url"] =~ "\.(jpg|png|gif|ico)$" {
    expire.url = ("" => "access plus 2 hours")
}
$HTTP["url"] =~ "\.(css|js)$" {
    expire.url = ("" => "access plus 10 years")
}
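To verify the headers are actually being served, do a quick check with curl (the URL below is hypothetical) and look for the Expires: line in the response:

curl -I http://www.example.com/media/compress/all.1273490471.js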
Lastly, don't forget to enable compression (gzip) on your media server for an extra load-time performance gain.
Posted over 15 years ago by Liraz Siri
Page load times are important. Amazon insiders estimate that every 100 ms increase in latency costs Amazon roughly 1% of profit.
Simply put: visitors hate slow sites, so don't make them wait.
Unfortunately, many web sites, including this one up until recently, are slowed down by inefficiently encoded images. Note that there are many other components to page load times and if you're looking to optimize your web site you should analyze and understand all of them. But today we're just going to focus on the images.
Our mistake was very simple: we used PNG for everything. Of course, we realized other encoding formats such as JPG existed, we just didn't have a good awareness of when you should use one and not the other. Bzzzzt.
PNG is a lossless compression format: that means it compresses images without losing any quality. But it's not economical to encode most images in a lossless format such as PNG, when the loss of quality using JPG is barely perceptible to the human eye and a JPG might only take up a quarter of the space.
For example, previously the front page weighed 701KB and was visibly slow to load. A large component of that weight was the PNG appliance icons. Batch converting the icons to JPG shaved 400KB off, a 60% reduction!
Gotcha! JPG doesn't support transparency (but don't waste your time trying to optimize PNGs): since we initially depended on icon transparency in the web design we resisted making the change and tried optimizing the PNGs first. We tried every tool we could find. Lossless optimizers such as pngcrush and optipng only yielded a 3-4% decrease in filesize. Lossy compressors such as pngnq slashed the file size dramatically by reducing the color depth but the decrease in image quality was often unacceptable. Some icons (e.g., Drupal) looked OK, but others looked terrible.
A few examples. In the first row are originals, second row reduced to 256 colors, third row 128, fourth row 64:
Bottom line: PNG optimization is a poor substitute for JPG. We concluded that it was better to modify the web design and compromise on that than accept a 3X increase in load time for the front page.
Don't compromise on resolution, compromise on compression: after we finished with the front page we turned our attention to the screenshots section. To compensate for the size of PNG encoded screenshots we had previously reduced the resolution of the images, but that's a bad move because it dramatically reduces the readability of a screenshot.
For the same size we discovered you could encode a much higher resolution JPG image at high quality such that the user would barely notice any compression artifacts. Overall this would provide a much nicer, more usable screenshot without increasing bandwidth requirements.
Long story short, resolution is very important to the perception of quality and it's one of the last things you want to give up when you are reducing bandwidth requirements.
Keep PNG "masters" for future image manipulation: I can't stress this enough. JPG is a lossy format, and every time you re-encode a JPG the quality deteriorates. After a few passes compression artifacts pile up and can become easily visible to the human eye.
For this reason it's best to always keep around masters of your images in PNG format in case you want to perform edits.
When PNG is better than JPG: sometimes you just don't want to compress an image with JPG. As I mentioned earlier, if you really need transparency JPG is out. We've also discovered that small, simple images may actually compress better using PNG than JPG. It seems to depend on how much is "going on" in the image.
Use imagemagick to batch convert PNG to JPG: manually re-encoding PNG images into JPG is boring, especially if you're converting a large number of images.
ImageMagick is the swiss-army knife of command line image manipulation tools. Using it saved us a ton of time:
apt-get install imagemagick
convert -flatten -background white file.png file.jpg
Convert an entire directory:
for f in *.png; do
    n=$(echo "$f" | sed 's/\.png$/.jpg/')
    convert -flatten -background white "$f" "$n"
done
Was this post helpful to you? Share your experience, post a comment!