Posted almost 15 years ago by Liraz Siri
We've pushed out new RC (Release Candidate) builds for part 1 of the upcoming TurnKey Linux 11.0 release and we need your help testing them! See the appliance pages for download links.
The current crop of release candidates only includes Ubuntu Lucid based ISO images for now. Debian Lenny based images will follow, as will builds specially optimized for the full range of supported virtualization and hosting platforms (e.g., VM build, EC2 AMIs, ESX4, Xen, Eucalyptus, etc.).
In part 1 we've updated all the existing appliances except for Zimbra and Openbravo, which we haven't gotten to yet. They'll be in part 2 along with all the fantastic new appliances developed during the community development contest.
We broke up work on the next release into two parts because with all the new appliances the size of the appliance library is doubling, and that was a bit too much to handle in one go.
Also, Alon is flying off to Orlando today for the Ubuntu Developer Summit and we wanted to have something out the door to show off to our friends in the Ubuntu community.
New versioning scheme
Instead of date-based versions, the new versioning scheme tracks TurnKey Core, regardless of the date or the underlying distribution.
In other words a TurnKey Linux 11.0 appliance based on Ubuntu Lucid and a TurnKey Linux 11.0 appliance based on Debian Lenny are the same version because they have the same Core features.
We're also hoping to get rid of some of the confusion caused by date-based releases. For example, we pushed out the maintenance release 2009.10-2 in April 2010, but the version confused many people into thinking the appliances were obsolete, even though Hardy will be supported for another 3 years and security updates were installed on first boot and daily afterwards.
Now built-in: TKLBAM (TurnKey Linux Backup and Migration)
TKLBAM, which we announced a few weeks back as a separate component, is now tightly integrated with TurnKey out of the box. This includes a spiffy new Webmin module so users who dislike the command line can fully enjoy TKLBAM from the comfort of their web browser. Now you can backup and migrate fully working systems anywhere with literally just a few mouse clicks.
As an old school Unix guy who loves the command line, what I like the most about the new Webmin module is how it facilitates easy discovery of TKLBAM's functionality and ties everything together seamlessly. No need to memorize funny command line flags or decipher manual pages for configuration options. At a glance you can see what can be done. If you don't understand something you just click on the question marks for embedded help pop-ups.
Screenshot #1: backup menu
Screenshot #2: restore menu
New Core features
Besides TKLBAM integration, we've made a broad range of improvements to the Core features shared by all appliances. In fact one could argue more has changed between the last beta and the release candidate than between the previous release and the beta.
Basic configuration dialogs on first boot: Previously only ISO builds would prompt you, during installation, to configure the appliance (e.g., set passwords). Users of the pre-installed builds (e.g., the virtual machine build for VMware/VirtualBox) had to do basic configuration by hand.
With the new mechanism, the user experience is consistent for all build types. Also the first boot configuration mechanism is modular so appliance customizers can easily add their own hooks. Alon will be documenting how it works shortly.
LVM (Logical Volume Management) support: instead of installing the appliance filesystem to a naked partition, we set up LVM by default. This makes it much easier to adjust storage capacity later. It also adds basic support for filesystem snapshots, which future versions of TKLBAM may leverage to support new kinds of databases.
auto-apt-archive: uses GeoIP to automatically configure the closest APT package archive for maximum network performance.
Command line convenience: a range of small improvements that make working with the command line a bit more comfortable:
smart, programmable bash shell completion: helps you get more done with fewer keystrokes.
support for $HOME/.bashrc.d hooks (see my blog post about shell hooks)
persistent environment variables (see $HOME/.bashrc.d/penv):
penv-set foo=bar
exit
# later...
echo $foo
Inverted webshell color scheme: real men use white on black for their command line shells. And they like it!
Display system info in the login motd: seen in the above screenshot.
Appliance specific changes
The latest and greatest: all appliances are now built using the newest software packages available with Long Term Support from Ubuntu. For software not available in the package management system (e.g., redmine), the latest stable software versions from upstream are used instead.
A range of performance optimizations: for example...
All LAMP-based appliances now include xcache for PHP opcode caching, which improves PHP performance: code only has to be recompiled by PHP into opcode when it changes, not on every page load.
All Rails-based appliances now come with Ruby Enterprise Edition, which improves memory performance, traditionally one of Ruby's weak points.
Many small improvements and bug fixes
Exciting potential for new bugs: as yet unidentified.
Distribution level changes
Upgraded base distribution: from Ubuntu 8.04.3 to Ubuntu 10.04.1
The good: newer packages based on a new Ubuntu LTS release which will be supported for 5 years.
The bad: roughly 50MB in additional bloat for all appliances. This is mostly a consequence of bloated dependencies within the new Ubuntu version. Some of the dependencies are unnecessary for our use case (e.g., plymouth) but the only way to get rid of them is to fork the packages and we'd rather avoid that.
Fortunately, the upcoming Debian Lenny based builds do not have this extra bloat so anyone who cares will have an alternative.
No more pinning: As discussed earlier, we've discontinued the use of complex package management configurations (pinning) to create hybrids of multiple Ubuntu releases.
In the works...
Debian Lenny based equivalent builds
PostgreSQL support for TKLBAM
Part 2 of the TurnKey 11.0 release, with dozens of new appliances
64-bit support (it's about time!)
That's all folks! Any questions?
Posted almost 15 years ago by Liraz Siri
Musings on Lisp by a routinely Pythonic programmer
Last week I did some maintenance on various Python projects I haven't touched in years (literally), and I was surprised by how easy, almost trivial it was to reorient myself and make the necessary changes.
That observation came at the right time because I've been reading up on Lisp dialects for a while now and questioning whether or not I should be programming in Lisp instead. Lisp enthusiasts (converts?) certainly make persuasive arguments, typically advocating Lisp as the one-true-language with near religious zeal.
As programming languages go, I'm not very loyal. Over the years I've programmed in the usual assortment of languages (e.g., C, C++, Java). Then I discovered how much more productive I could be using a high-level interpreted language and I fell in love. With Perl. The affair lasted a few years until I realized she was a capricious, evil mistress and I left her in disgust for a younger, more elegant programming language: Python. With which I've been living happily ever after. No complaints. I have a happy text life. But sometimes, lying in bed late at night, my mind wanders and I wonder if there isn't something even better out there for me. Just waiting to be discovered...
Could Lisp be that temptress?
Here's the thing: I've done my homework and it seems to be true that the various Lisp dialects are inherently more powerful programming languages than Python.
The reason for this is that Lisp isn't really a language at all. It's more of a mathematical entity. You might say it wasn't invented but discovered. In fact if there is intelligent life in the universe and they have computers I'll bet they have something equivalent to Lisp too.
You see, other "languages" are compiled/interpreted into a tree of instructions. With Lisp, you write programs directly in a lightweight text representation of the instruction tree. The learning curve is substantial. If you've programmed in other languages, many things need to be unlearned. And the result, with all of those parentheses, can look intimidating to someone whose mind isn't used to Lisp.
What makes Lisp so powerful is that there is no distinction between data and code. Code is data. Data is code. Lisp macros take advantage of this by allowing you to execute code at compile time that generates your runtime program. Features that need to be built into other languages (e.g., support for object-oriented programming) can be added to Lisp as a standard library of sorts. At its core you can think of Lisp as a sort of meta-programming language.
All of this makes Lisp extremely powerful, but... wait for it... this power comes with a price.
Compared with Python, Lisp is much harder to read and much more verbose. Lisp makes it easier for a programmer to create mind-numbingly complex code. If you think a big ball of mud is bad, imagine a program implemented in a programming language that is itself a big ball of mud. Of course, you can write good and bad programs in any language, but Lisp gives you more rope to hang yourself with since you can redefine anything and reprogram the language itself.
Since Lisp is so flexible, when you try to understand a Lisp program there is a decent chance you are looking at a program written in a domain-specific programming language invented by the programmer, which you need to learn and understand before you can understand the program. It won't necessarily help much that this language superficially shares the parse-tree syntax of Lisp. Two functionally equivalent Lisp programs may be as different under the hood as two programs implemented in Haskell vs Erlang.
Whether it makes sense to pay the price for all this power ultimately depends on the type of programs you are writing and the type of programmers you have in your team.
In my mind, just because you can implement your own programming language doesn't mean you should. I think it's only rarely a good idea. Programming languages are hard to design well. Usually, I'd rather let Guido and the other language design experts in the Python community carefully work out what features a language should have rather than hack out my own.
Unless the inherent complexity of a program is very deep (e.g., AI-like tasks) and to dig in you actually do need the power to extend your programming language. In that case, Lisp might be a good choice if you have programmers you can trust not to abuse it.
Note that many famous programmers who know both still prefer Python for most programming tasks. Where Lisp is optimized for power, Python is optimized for simplicity and ease of use.
Posted almost 15 years ago by Alon Swartz
In preparation for TurnKey's upcoming release based on Ubuntu Lucid 10.04 LTS, we are knocking off todo list items. One of them is code-named auto-apt-archive. As you can guess from its name, the objective is to configure the closest APT package archive mirror, automatically, without user intervention. It does this by leveraging a new GeoIP service provided by the TurnKey Hub.
Using the closest archive is usually much faster, lessens the load on Ubuntu's main package archive (which has been the default up until now), and in certain circumstances is cheaper (for example, bandwidth within Amazon EC2 regions is free).
BTW, TurnKey EC2 builds already include a similar optimization, which leverages ec2metadata to get the instance's associated region and construct the URL for the region-specific Ubuntu APT archive.
The new auto-apt-archive solution will replace the old Amazon EC2 ad hoc solution, and will also be included in all TurnKey builds, whether bare-metal, virtual machine, VPS or cloud deployment.
So how does it work?
Firstly, you might recall a post I made last month, with the somewhat similar title Finding the closest data center using GeoIP and indexing. The GeoIP implementation details are similar, so I won't repeat them here.
For those interested in how auto-apt-archive works, it goes something like this:
On firstboot, auto-apt-archive is called by an inithook, which contacts the Hub requesting the closest Ubuntu APT package archive, and updates APT sources lists accordingly.
The Hub looks up the requesting IP address using GeoIP to find the associated country code which is used in the archive URL.
Ubuntu have implemented a wildcard domain configuration for the archive mirrors, making the URL construction really simple. In the case that there is no local APT archive in your country, you will be routed to Ubuntu's main package archive. When one does become available, you'll automatically be routed there.
http://$CC.archive.ubuntu.com/ubuntu
What about Amazon EC2, you ask? Well, the Hub checks if the IP address is associated with an Amazon EC2 instance it launched, and if so, returns the region-specific archive URL.
http://$REGION.archive.ubuntu.com/ubuntu
In the future, when we add more Cloud deployment options to the Hub which have local APT package archives, they will be automatically supported as well.
And lastly, don't forget that Debian appliances are in the works, so Debian APT package archives are also supported. Debian haven't implemented wildcard DNS, so the Hub looks up the best archive in an index (similar to the Amazon region indexes), and returns the archive URL.
http://ftp.$CC.debian.org/debian
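To make the lookup flow above concrete, here's a minimal sketch of the URL construction in Python. This is my own illustration, not the Hub's actual code: the GeoIP country lookup and EC2 region detection happen on the Hub side, so the sketch just takes their results as parameters and builds the archive URLs shown above.

def archive_url(cc=None, ec2_region=None, distro="ubuntu"):
    # cc: GeoIP country code such as "de"; ec2_region: such as "us-east-1"
    if distro == "ubuntu":
        if ec2_region:
            # EC2 instances launched by the Hub get their region-specific mirror
            return "http://%s.archive.ubuntu.com/ubuntu" % ec2_region
        if cc:
            # Ubuntu's wildcard DNS routes $CC to a local mirror, or to the
            # main archive if no local mirror exists for that country
            return "http://%s.archive.ubuntu.com/ubuntu" % cc
        return "http://archive.ubuntu.com/ubuntu"
    # Debian has no wildcard DNS, so cc comes from the Hub's mirror index
    return "http://ftp.%s.debian.org/debian" % (cc or "us")

print(archive_url(cc="de"))                    # http://de.archive.ubuntu.com/ubuntu
print(archive_url(ec2_region="us-east-1"))     # http://us-east-1.archive.ubuntu.com/ubuntu
print(archive_url(cc="de", distro="debian"))   # http://ftp.de.debian.org/debian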
Just as with the previous geoip/index post, we need your help to tweak the indexes and mapping logic, as you have better knowledge of your connection latency and mirror speed. If you think we should associate your country/state with a different archive, please let us know.
Posted almost 15 years ago by Liraz Siri
Rik Goldman is the English and Info Systems teacher who led a team of six high school students to help develop 3 new TurnKey Linux appliances for Ampache, LimeSurvey and Elgg.
Rik is the kind of passionate teacher I wish I had in high school. An innovative thinker who isn't afraid to step outside the box to challenge his students to achieve more.
We wanted everyone to get to know Rik better, so we interviewed him for this blog post.
Who are you and what do you do?
My name is Rik Goldman. I am primarily a high school English teacher at Chelsea School in Silver Spring. In addition to teaching English 11, I also teach our advanced technology courses, Information Systems 1 and 2 (a third year has just been added). Prior to this year, Info Systems were based on A+ certification standards; however, this year I introduced Ubuntu, and the students soon demanded that we shift to a Linux+ curriculum.
Our school predominantly serves students with language-based learning disabilities that can have profound effects on reading comprehension, writing fluency, and syntax and mechanics.
Before teaching high school, I taught University and College literature and composition classes. My formal training from undergrad through ABD is literature. However, I've always been involved in technology. First web development (1994), then database development (95-98), then system administration (Solaris and Irix), and then finally I found myself as an instructional technologist rather than a professor. This is where I lost my tolerance for proprietary, closed software in education.
How did you hear about TurnKey Linux?
I had been using virtual machines in my IT classes for some time; ultimately I explored some of the appliances VMware steered me to. This brought me to Jumpbox, which seemed to open a lot of possibilities. Until I learned more about their business model. Trolling around brought me to TurnKey Linux, which provided all the solutions I was looking for at the time: Joomla, LAMP, MediaWiki, all ready to roll. I started using them and still depend on them.
Is this your first time contributing to an open source project?
For my students and I, our contributions to TKL are our first contributions to an open source project. Simultaneous to this is our contribution to Ampache. This is not without having made a strong effort (on my part): I've pursued writing docs for various projects, but either no one followed through or the projects fell apart. Now I feel a part of the TKL community as well as the Ampache community, to whom I hope to submit documentation over the summer. I've tried to contribute to TKLPatch docs as well.
What were the main challenges in getting involved with the project? How did you overcome them?
The main challenge to some degree was a lack of prior knowledge on my part. I overcame that quickly, and the next hurdle became working through the TKLPatch docs. I had difficulty following along with them; I spent about half a week on a minor patch to no avail. I overcame the difficulty by looking at example patches. I knew bash well enough to make sense of the patches, and from there I was able to establish a framework from which to build my patch. Upcoming hurdle will be understanding the post-install hooks. One other minor hurdle - and this one applies to any culture you're trying to get acclimated to - understanding the culture and the TKL community. What questions could be asked, how should they be asked, where to post what, etc.
What did you learn that surprised you the most?
What surprised me the most was how quickly the students picked up setting up a patch environment and laying out the skeleton. Given a conf script with no comments, they were able to make insightful annotations. That was the starting point for building their own conf scripts. What really surprised me the most is how supportive the open source community has been: #ampache has answered any and every question we've tossed their way, no matter what project we worked on.
What are your plans for the future?
Revising the IT curriculum to cater to students interested in A+ at the same time as serving the interests of students interested in Linux+. At the same time as doing that, I intend to teach light coding or scripting - prolly bash, python, and php is what it looks like at this point.
Bottom line, what would you like other educators in similar positions to take away from your experience?
Teach with authentic assessments in mind. Drilling students on how to access the device manager on XP serves a purpose, but giving them a meaningful task, hopefully one they have some ownership and stake in, can mean students producing resources that can be put to use by a real-world audience. This gives technology students the opportunity to understand the relationship between a technology and the community it serves. Our example: the students built an Ampache streaming media server. Not for entertainment purposes, but to stream audio texts to other students in need of text-to-speech accommodations.
Posted almost 15 years ago by Liraz Siri
It all started with a happy accident
I have a confession to make. This contest, which is directly fueling the largest expansion of the TurnKey library since the project started, is a happy accident. It wasn't something we planned. It wasn't on our summer todo list. It was just one of those unexpected, spontaneous ideas that light up the inside of your brain like a flash bulb, and demand you take action. Or else! (you won't get any sleep)
Back in June we had just launched the TurnKey Hub and were getting ready to focus all our energies on releasing TKLBAM. I logged into PayPal and noticed our donated beer budget had a sad little beer belly. It was just sitting there, giving me an accusing look. I felt guilty. Surely all those people who donated expected we would put these funds to better use. That's when it hit me. It was too much to buy beer, but not so much that we couldn't risk it all on a fun experiment...
I talked it over with Alon and on an impulse we decided to do a contest, but not just any contest. A wild and wet summer open source bonanza! With ponies!
What happened next took us both by surprise.
From the far corners of the globe... two superstars emerge
I figured if we were lucky we would attract about a dozen or so contributors who would create appliances for one or two of their favorite open source applications.
But what do I know. It turns out we got even luckier with two superstar developers who took different but complementary approaches, and blew us away with over two dozen TKLPatches for an amazing range of excellent open source software, some of which we had never even heard of before!
From India, Basil Kurian went wide with about 20 relatively straightforward new appliances, developed at a furious pace that was hard to keep up with. Like a hungry bird of prey, Basil scanned the open source savanna from above, then swooped down for kills he could snatch back whole to his nest.
From Venezuela, Adrian Moya dug in deep, with about 10 relatively challenging appliances. His attention to detail rivals some of our best work to date. Adrian, like a tiger, stalked his prey, taking a slower, more deliberate approach. He went after larger targets, sunk his teeth in and tore them to pieces.
There were a few other contributions as well but the bulk of developments can be credited directly to these two admirable and highly skilled individuals.
Looking at the big picture, what they've accomplished is not only a tribute to their own impressive abilities but also to the power of the open source ecosystem they leveraged. I can't think of any other area of human enterprise where a loosely knit band of individuals is empowered (in their spare time no less) to help enable thousands of individuals and organizations all over the world to take advantage of so much technological innovation. That's what TurnKey means to Alon and me, and we're delighted that more people are getting to share this experience!
Help! How do we figure out who should win?
It was probably best for the project that our two superstars took different, complementary approaches. But it complicates our decision, as a direct apples-to-apples comparison isn't possible. That means instead of just counting the number of submissions we actually have to think this through, and we'd like the community to help us with that.
So what's at stake? You mean besides bragging rights and immortality in the TurnKey hall of fame?
Gold: $1500 (+ a make-believe pony with a suitcase full of broken dreams)
Silver: $800
Bronze: $100
Judging criteria
Your mission, should you choose to accept it, is to help us figure out who added the most total value to the project. That means the quantity of contributions multiplied by their quality. Yes, we realize quality is inherently subjective and open to endless interpretation. We've thrown in some rough metrics and have identified dimensions of evaluation so it isn't entirely arbitrary, but let's not kid ourselves - playing with grading systems and numbers won't save us from making subjective, qualitative judgments.
To make this a bit simpler we've attempted to summarize contributions in three key dimensions:
Anticipated interest from our audience: the more people will benefit from an appliance, the more valuable it is. Unfortunately, we don't have a TurnKey crystal ball (yet), so we can only make educated guesses. Mostly we sampled data from various public resources provided by Google (e.g., Google Trends, the number of search results for a given name, the page rank of a project's web site, etc.)
Grades: minimal, modest, large and huge
Integration effort: an appliance saves users the trouble of finding the right components, integrating them into a working solution and testing it themselves. Hard appliance integrations are more valuable because they save users more trouble.
Examples of factors we took into account when we evaluated integration effort include:
how well aligned the integration is with a well defined need
the number of integrated components
whether or not pre-installation (e.g., seeding databases) was required
approximate testing difficulty
added value such as custom glue scripts
Grades: simple, involved, challenging and brilliant
Documentation: A pre-requisite to adding an appliance to the library is that we understand what it achieves, how it does it and why. Good documentation makes all of this easier for us, and also serves to educate the community and promote discussion. Truly awesome documentation goes beyond that and adds significant value to end-users by making it easier to get started (e.g., tutorials, HOWTOs, documentation resources, etc.)
Grades: minimal, good, excellent, awesome
Summary of submissions competing for the gold
Adrian Moya's TKLPatch submissions:
Asterisk: VoIP PBX
Magento: ecommerce
OpenLDAP
Alfresco: enterprise CMS
Web content filtering proxy: DansGuardian + ClamAV
Big Blue Button: web conferencing
Plone: Zope-based CMS
Bacula: network backup tool
Gitorious: Git collaborative repository hosting
Basil Kurian's TKLPatch submissions:
Mono ASP.NET: IIS replacement
vTiger CRM
Cacti: web-based graphing tool
Prestashop: e-commerce / shopping cart
SugarCRM Community Edition
Tomato cart: ecommerce
Status.NET: micro blogging
eFront: eLearning software
OrangeHRM
Vanilla forum
Groupoffice: groupware
LEMP stack: Linux, Nginx, Mysql, PHP
EyeOS: web desktop
Foswiki: structured wiki fork of TWiki
Fedena: school/campus management system
Spree: ecommerce
Retrospectiva: agile project management
Typo: rails blogware
Notes:
This is not a complete listing of submissions, just those that are most likely to be included in the upcoming release batch.
I'm linking to the forum posts where the TKLPatch submissions are discussed for future reference and so anyone who wants can make up their own mind.
Third place?
While the vast bulk of contributions came from Basil and Adrian, there were a few sporadic contributions by other community members. Here too we're interested in feedback from the community on who should win third place, or alternatively whether we should just eliminate third place and merge it with second place. Or something else...
Candidates:
Bilal: contributed a barebones Citadel appliance, and a partially completed JBoss appliance.
JedMeister: contributed a partially finished KnowledgeTree patch, researched possible integration details for the TurnKey Core Client and provided encouragement, feedback and moral support throughout the contest to all involved.
Neil Wilson: contributed a patch fixing various issues with the TurnKey Core Lucid beta.
Rik Goldman: contributed an IEP-IPP appliance that could make it easier for schools to use open source software to track students with special needs. Before the contest, Rik and his band of merry students were our most prolific community appliance developers (Ampache, LimeSurvey, Elgg). Rik also contributed excellent supplemental documentation on the TKLPatch development process.
So now what? That's where you come in!
Vote on who should win and share your thoughts with the community in the comments below. If for some reason you'd rather not share your thoughts publicly feel free to contact us in a private e-mail.
Posted almost 15 years ago by Liraz Siri
At home we canceled our cable subscription a few months ago. We hardly ever used it any more. Instead we were downloading content to a makeshift media server and watching it on our own schedule. Many of the shows I like (e.g., Colbert Report) aren't even available over here.
Upstairs we had a gorgeous big screen HDTV set that was being powered by one of my old computers, a nice P4 machine with 1GB of memory that was running the TurnKey Torrent Server appliance on bare metal.
Then it died. Traced it back to the motherboard being fried by a faulty power unit. Facing an immediate home entertainment emergency, I rummaged through the basement and found an old P3 machine with 256MB of old-style memory (i.e., the kind you can't get any more of these days).
I transferred the hard drive, but of course GNOME choked so badly with 256MB that I decided to just turn X off. So we still had our home server, but the big screen TV went dark. A sad sight.
Then a few days later I came across LXDE, a new and lightweight X11 desktop environment that really does use less memory:
The first test is simply to check how much memory is used after a fresh boot of the respective live CDs, all the way into the default desktop.
LXDE based Lubuntu: 57,908 KB
XFCE based Xubuntu: 156,852 KB
GNOME based Ubuntu: 153,840 KB
It wasn't available in 8.04 so I had never come across it before. I installed it on our ancient franken-server and I'm impressed.
The server not only works, it zips! It looks nice and modern, it's easy to configure, and pretty much everything works out of the box (e.g., auto-generated application menus, virtual desktops, cpu monitor, date and clock).
Looks like an excellent candidate for the upcoming TurnKey Core Client. Unsurprisingly, Ubuntu are so impressed with it there's talk of a new Ubuntu derivative based on it called Lubuntu.
Posted almost 15 years ago by Liraz Siri
Background: how a backup key works
In TKLBAM the backup key is a secret encrypted with a passphrase which is uploaded to the Hub. Decrypting the backup key yields the secret which is passed on to duplicity (and eventually to GnuPG) to be used as the symmetric key with which backup volumes are encrypted on backup and decrypted on restore.
When you create a new backup, or change the passphrase on an existing backup, a new backup key is uploaded to the Hub where it is stored in the key field for that backup record.
When you restore, tklbam downloads the backup key from the Hub and decrypts it locally on the computer performing the restore. Note that the Hub only allows you to download the backup key for backup records to which you have access (e.g., you are the owner).
Only you can decrypt your passphrase protected backups
All of this matters because it means that as long as you use a passphrase to protect the key, even the Hub can't decrypt your backups, only you can - provided you remember the passphrase (or failing that, at least have the escrow key stored in a safe place).
In other words, the decryption of the backup key happens locally and at no point does the passphrase reach the Hub, so we can't decrypt your backup even if you asked us to. Neither can an attacker that has theoretically compromised the Hub, or a government agency that comes kicking down our door with a court warrant.
The problem with cryptographic passphrases
But wait. If an attacker has local access to the key, his ability to run dictionary attacks to find the key's passphrase is limited only by the computational resources he can throw at it.
Remember there's a critical difference between a passphrase used for authentication purposes (e.g., to an online service) and a passphrase used for cryptographic purposes.
A passphrase used for authenticating to an online service doesn't need to be as strong as a passphrase that is used cryptographically, because with an online service, even if no explicit countermeasures are used (e.g., IP blacklisting after too many failed attempts), there is still a network between the attacker and the service. The available bandwidth places a debilitating upper limit on how many passphrases can be tried per second. Also, in practice there are usually bottlenecks in other places which would slow down an online dictionary attack even further.
But a passphrase used for cryptographic purposes assumes the attacker has access to the ciphertext, and that's a whole different ball game.
To better understand what we're up against, here's the formula for calculating the size of the passphrase search space:
log(howmany_different_possible_values ** howmany_values) / log(2)
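Here's the same calculation as a minimal Python sketch (my own illustration, not TKLBAM code). Note that the 42-bit figure quoted below for 6 "ASCII printable" characters corresponds to the full 7-bit ASCII range of 128 symbols; restricting to the 95 printable characters gives closer to 39 bits.
import math

def search_space_bits(alphabet_size, length):
    # bits of search space for a truly random passphrase
    return math.log(alphabet_size ** length) / math.log(2)

print(search_space_bits(128, 6))   # 42.0 bits (full 7-bit ASCII)
print(search_space_bits(95, 6))    # ~39.4 bits (printable ASCII only)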
For example, consider a typical 6 letter password.
6 ASCII printable letters = maximum 42-bits of search space.
That's a maximum of 4 trillion possible combinations. Which sounds like a lot. But it really isn't, since:
You can probably squeeze out about 1 million local passphrase tests per second from a modern multi-core workstation, assuming a typical passphrase encryption method is used.
This is one of those problems that are trivial to parallelize.
If you rent just 100 computers (e.g., in the cloud) you could exhaustively search through 42-bits in about 5 days.
And remember, today the bad guys often have millions of computers at their disposal via botnets.
People are very bad at choosing truly random passwords. A clever attacker will try the low hanging fruit first, so they're likely to find out your passphrase much sooner than by brute forcing blindly through the full search space.
For example, say you know a 6 letter password is much too short for an encryption key and instead you're using a longer random combination of 10,000 common English words:
2 words = 18-bits worth of search space.
3 words = 27-bits worth of search space.
4 words = 36-bits worth of search space.
English words aren't very random so your "paranoid" 3 word, 17 letter passphrase may actually be easier to crack than a truly random combination of just 4 ASCII printable characters (28-bits).
For comparison, let's see what happens if you use 6 random individual characters.
If you just use random lowercase characters the search space is reduced to 27-bits which is 32,768 times easier to search through than the full 42-bit search space of 6-letter ASCII printable passwords.
If you just use random lowercase characters and numbers, the search space is 30-bits which is 4,096 times easier to search through.
If you just use random lowercase and uppercase characters and numbers, the search space is 35-bits which is 128 times easier to search through.
The good news is that each bit of search space doubles the expense for the attacker.
The bad news is that it takes a truly random combination of 11 uppercase and lowercase characters and numbers just to reach 64-bits worth of search space, and a 10M-strong botnet could crack even that in an average of 10 days.
Bottom line: even your supposedly ultra-paranoid passphrase (e.g., r0m4n14nv4mp1r344rdv4rkn3st) of 4 random words from a dictionary of 150K words (in l33t speak) only has about 50-bits worth of entropy, despite being 27 characters long. A 10,000-strong botnet could crack that in about a day.
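To put estimates like these in perspective, here's a rough back-of-the-envelope calculator (my own sketch, using the earlier assumption of roughly a million passphrase tests per second per machine):
def crack_days(bits, machines, tests_per_second=10**6):
    # average time to hit the right passphrase: half the search space
    seconds = (2 ** bits / 2.0) / (machines * tests_per_second)
    return seconds / (60 * 60 * 24)

print(crack_days(64, 10 * 10**6))   # ~10.7 days for a 10M-strong botnet
print(crack_days(50, 10000))        # ~0.65 days for a 10,000-strong botnet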
Countermeasures: increase computational cost
Though it's impossible to prevent these attacks entirely, I've implemented a couple of countermeasures in the way TKLBAM generates passphrase protected keys:
1) The first trick is to increase how computationally expensive it is to calculate the cipher key from the passphrase:
import hashlib

def _repeat(f, input, count):
    for x in xrange(count):
        input = f(input)
    return input

def _cipher_key(passphrase, repeats):
    # stretching: repeated hashing makes each passphrase guess more expensive
    cipher_key = _repeat(lambda k: hashlib.sha256(k).digest(),
                         passphrase, repeats)
    return cipher_key
The principle is that calculating a hash costs CPU time so by feeding the hash into itself enough times we can linearly increase how expensive it is to map the passphrase-space to the key-space.
For example, repeating the hash routine 100,000 times takes about a quarter second on one of the cores of my computer. If I use all 4 cores this limits me to generating 16 cipher keys per second. Down from 1.6 million cipher keys per second. So that's one way to dramatically reduce the practical feasibility of a dictionary or exhaustive brute force attack.
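If you want to see what this costs on your own hardware, a quick timing sketch (mine, not part of TKLBAM) looks something like this:
import time
import hashlib

def time_stretch(repeats=100000):
    # measure one passphrase-to-cipher-key stretch on a single core
    key = "my secret passphrase"
    start = time.time()
    for _ in xrange(repeats):
        key = hashlib.sha256(key).digest()
    return time.time() - start

print("%.2f seconds per key" % time_stretch())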
Note that an attacker can't circumvent this calculation by searching through the key-space directly because even after we increase the cost of generating the passphrase space 100,000 times over, the cost of trying to brute force the 256-bit key-space directly is still countless trillions of times greater.
The weakness of this technique is that an attacker would have to pay the cost of mapping the passphrase-space (e.g., a dictionary) to the key-space only once when trying to crack multiple keys.
2) The second trick is to increase how computationally expensive it is to decrypt the key packet by increasing the number of times we pass it through encryption:
from Crypto.Cipher import AES

def _cipher(cipher_key):
    return AES.new(cipher_key, AES.MODE_CBC)

ciphertext = _repeat(lambda v: _cipher(cipher_key).encrypt(v),
                     _pad(plaintext), cipher_repeats)
This part of the computational expense is key-specific so trading off memory to pre-calculate the mapped key-space won't help you with this step.
Implementation notes
Embedding repeat parameters in the key packet
The current implementation hardwires 100,000 repeats of the hash, and another 100,000 repeats of the cipher.
This makes searching through the passphrase-space about 200,000 times more expensive. On my workstation it takes 0.5 seconds to encrypt or decrypt a key (per-core).
I'm not sure these are the ideal parameters but they are in the ball park of how much you can increase the computational expense before usability suffers.
That's not to say you couldn't go higher, but there's a practical upper boundary to that too. If you're willing to wait about a minute for key generation/decryption you could increase the computational expense about 100 times over and that would give you 100 times better protection or allow you to use a password that is 100 times weaker with the same protection.
Just in case, to allow the number of repeats to change or be user configurable in the future, the key formatter routine embeds the repeat parameters into the unencrypted portion of the key packet. This allows the key parsing routine to extract these parameters from the key itself so it can just as easily parse a 0.5-second key (i.e., the current default) as a 5-second or 50-second key.
Embedding a version id
Just to make sure the key format is future proof I'm also embedding a version id into it.
Embedding a version costs almost nothing (an extra byte) and makes it easier to support incompatible changes to the key format should the need arise (e.g., changing of cipher/hash, changing the format, etc.).
Worst case scenario, we increment the version and implement a new incompatible key format. Old clients won't be able to understand the new key format but will at least fail reliably, and new clients will be able to support both new and old key formats.
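For illustration, embedding the version byte and repeat parameters in an unencrypted header might look something like the following. This is a simplified sketch of the idea, not TKLBAM's actual key format.
import struct

KEY_FORMAT_VERSION = 1

def format_key_packet(ciphertext, hash_repeats, cipher_repeats):
    # unencrypted header: version byte plus both repeat parameters, so the
    # parser can recover them without knowing the passphrase
    header = struct.pack("!BLL", KEY_FORMAT_VERSION, hash_repeats, cipher_repeats)
    return header + ciphertext

def parse_key_packet(packet):
    version, hash_repeats, cipher_repeats = struct.unpack("!BLL", packet[:9])
    if version != KEY_FORMAT_VERSION:
        raise ValueError("unsupported key format version: %d" % version)
    return hash_repeats, cipher_repeats, packet[9:]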
Posted almost 15 years ago by Liraz Siri
Drum roll please...
Today, I'm proud to officially unveil TKLBAM (AKA TurnKey Linux Backup and Migration): the easiest, most powerful system-level backup anyone has ever seen. Skeptical? I would be too. But if you read all the way through you'll see I'm not exaggerating and I have the screencast to prove it. Aha!
This was the missing piece of the puzzle that has been holding up the Ubuntu Lucid based release batch. You'll soon understand why and hopefully agree it was worth the wait.
We set out to design the ideal backup system
Imagine the ideal backup system. That's what we did.
Pain free
A fully automated backup and restore system with no pain. That you wouldn't need to configure. That just magically knows what to back up and, just as importantly, what NOT to back up, to create super efficient, encrypted backups of changes to files, databases, package management state, even users and groups.
Migrate anywhere
An automated backup/restore system so powerful it would double as a migration mechanism to move or copy fully working systems anywhere in minutes instead of hours or days of error prone, frustrating manual labor.
It would be so easy you would, shockingly enough, actually test your backups. No more excuses. As frequently as you know you should, avoiding unpleasant surprises at the worst possible time.
One turn-key tool, simple and generic enough that you could just as easily use it to migrate a system:
from Ubuntu Hardy to Ubuntu Lucid (get it now?)
from a local deployment, to a cloud server
from a cloud server to any VPS
from a virtual machine to bare metal
from Ubuntu to Debian
from 32-bit to 64-bit
System smart
Of course, you can't do that with a conventional backup. It's too dumb. You need a vertically integrated backup that has system level awareness. That knows, for example, which configuration files you changed and which you didn't touch since installation. That can leverage the package management system to get appropriate versions of system binaries from package repositories instead of wasting backup space.
This backup tool would be smart enough to protect you from all the small paper-cuts that conspire to make restoring an ad-hoc backup such a nightmare. It would transparently handle technical stuff you'd rather not think about like fixing ownership and permission issues in the restored filesystem after merging users and groups from the backed up system.
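To illustrate the kind of system-level awareness this implies (a conceptual sketch of the general idea, not TKLBAM's actual implementation): on a Debian/Ubuntu system a backup tool can compare files against the MD5 checksums dpkg recorded at install time and skip anything that still matches its packaged version.
import hashlib
import glob
import os

def packaged_md5sums(info_dir="/var/lib/dpkg/info"):
    # map each packaged file to the checksum dpkg recorded when it was installed
    sums = {}
    for path in glob.glob(os.path.join(info_dir, "*.md5sums")):
        for line in open(path):
            md5, name = line.split(None, 1)
            sums["/" + name.strip()] = md5
    return sums

def needs_backup(path, sums):
    # back up files dpkg doesn't know about, or files modified since installation
    if path not in sums:
        return True
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest() != sums[path]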
Ninja secure, dummy proof
It would be a tool you could trust to always encrypt your data. But it would still allow you to choose how much convenience you're willing to trade off for security.
If data stealing ninjas keep you up at night, you could enable strong cryptographic passphrase protection for your encryption key that includes special countermeasures against dictionary attacks. But since your backup's worst enemy is probably staring you in the mirror, it would need to allow you to create an escrow key to store in a safe place in case you ever forget your super-duper passphrase.
On the other hand, nobody wants excessive security measures forced down their throats when they don't need them and in that case, the ideal tool would be designed to optimize for convenience. Your data would still be encrypted, but the key management stuff would happen transparently.
Ultra data durability
By default, your AES encrypted backup volumes would be uploaded to inexpensive, ultra-durable cloud storage designed to provide 99.999999999% durability. To put 11 nines of reliability in perspective, if you stored 10,000 backup volumes you could expect to lose a single volume once every 10 million years.
For maximum network performance, you would be routed automatically to the cloud storage datacenter closest to you.
Open source goodness
Naturally, the ideal backup system would be open source. You don't have to care about free software ideology to appreciate the advantages. As far as I'm concerned any code running on my servers doing something as critical as encrypted backups should be available for peer review and modification. No proprietary secret sauce. No pacts with a cloudy devil that expects you to give away your freedom, nay worse, your data, in exchange for a little bit of vendor-lock-in-flavored convenience.
Tall order huh?
All of this and more is what we set out to accomplish with TKLBAM. But this is not our wild eyed vision for a future backup system. We took our ideal and we made it work. In fact, we've been experimenting with increasingly sophisticated prototypes for a few months now, privately eating our own dog food, working out the kinks. This stuff is complex so there may be a few rough spots left, but the foundation should be stable by now.
Seeing is believing: a simple usage example
We have two installations of TurnKey Drupal6:
Alpha, a virtual machine on my local laptop. I've been using it to develop the TurnKey Linux web site.
Beta, an EC2 instance I just launched from the TurnKey Hub.
On both I install and initialize tklbam:
apt-get update
apt-get install tklbam
# initialize tklbam by providing it with the Hub API Key
tklbam-init QPINK3GD7HHT3A
Note that in the future, tklbam will come pre-installed on TurnKey appliances so this part will be even simpler.
I now log into Alpha's command line as root (e.g., via the console, SSH or web shell) and do the following:
tklbam-backup
It's that simple. Unless you want to change defaults, no arguments or additional configuration required.
When the backup is done a new backup record will show up in my Hub account:
To restore I log into Beta and do this:
tklbam-restore 1
That's it! To see it in action watch the video below or better yet log into your TurnKey Hub account and try it for yourself.
Quick screencast (2 minutes)
Best viewed full-screen. Having problems with playback? Try the YouTube version.
Getting started
TKLBAM's front-end interface is provided by the TurnKey Hub, an Amazon-powered cloud backup and server deployment web service currently in private beta.
If you don't have a Hub account already, either ask someone that does to send you an invite, or request an invitation. We'll do our best to grant them as fast as we can scale capacity on a first come, first served basis.
To get started log into your Hub account and follow the basic usage instructions. For more detail, see the documentation.
Feel free to ask any questions in the comments below. But you'll probably want to check with the FAQ first to see if they've already been answered.
Upcoming features
PostgreSQL support: PostgreSQL support is in development but currently only MySQL is supported. That means TKLBAM doesn't yet work on the three PostgreSQL based TurnKey appliances (PostgreSQL, LAPP, and OpenBravo).
Built-in integration: TKLBAM will be included by default in all future versions of TurnKey appliances. In the future when you launch a cloud server from the Hub it will be ready for action immediately. No installation or initialization necessary.
Webmin integration: we realize not everyone is comfortable with the command line, so we're going to look into developing a custom webmin module for TKLBAM.
Special salute to the TurnKey community
First, many thanks to the brave souls who tested TKLBAM and provided feedback even before we officially announced it. Remember, with enough eyeballs all bugs are shallow, so if you come across anything else, don't rely on someone else to report it. Speak up!
Also, as usual during a development cycle we haven't been able to spend as much time on the community forums as we'd like. Many thanks to everyone who helped keep the community alive and kicking in our relative absence.
Remember, if the TurnKey community has helped you, try to pay it forward when you can by helping others.
Finally, I'd like to give extra special thanks to three key individuals that have gone above and beyond in their contributions to the community.
By alphabetical order:
Adrian Moya: for developing appliances that rival some of our best work.
Basil Kurian: for storming through appliance development at a rate I can barely keep up with.
JedMeister: for continuing to lead as our most helpful and tireless community member for nearly a year and a half now. This guy is a frigging one man support army.
Also special thanks to Bob Marley, the legend who's been inspiring us as of late to keep jamming till the sun was shining. :)
Final thoughts
TKLBAM is a major milestone for TurnKey. We're very excited to finally unveil it to the world. It's actually been a not-so-secret part of our vision from the start. A chance to show how TurnKey can innovate beyond just bundling off the shelf components.
With TKLBAM out of the way we can now focus on pushing out the next release batch of Lucid based appliances. Thanks to the amazing work done by our star TKLPatch developers, we'll be able to significantly expand our library so by the next release we'll be showcasing even more of the world's best open source software. Stir It Up!