News
Posted almost 15 years ago by Liraz Siri
Drum roll please... Today, I'm proud to officially unveil TKLBAM (AKA TurnKey Linux Backup and Migration): the easiest, most powerful system-level backup anyone has ever seen. Skeptical? I would be too. But if you read all the way through you'll see I'm not exaggerating, and I have the screencast to prove it.

Aha! This was the missing piece of the puzzle that has been holding up the Ubuntu Lucid based release batch. You'll soon understand why, and hopefully agree it was worth the wait.

We set out to design the ideal backup system

Imagine the ideal backup system. That's what we did.

Pain free

A fully automated backup and restore system with no pain. One you wouldn't need to configure. One that just magically knows what to back up and, just as importantly, what NOT to back up, to create super-efficient, encrypted backups of changes to files, databases, package management state, even users and groups.

Migrate anywhere

An automated backup/restore system so powerful it would double as a migration mechanism, letting you move or copy fully working systems anywhere in minutes instead of hours or days of error-prone, frustrating manual labor.

It would be so easy you would, shockingly enough, actually test your backups. No more excuses. As frequently as you know you should, avoiding unpleasant surprises at the worst possible time.

One turn-key tool, simple and generic enough that you could just as easily use it to migrate a system:

- from Ubuntu Hardy to Ubuntu Lucid (get it now?)
- from a local deployment to a cloud server
- from a cloud server to any VPS
- from a virtual machine to bare metal
- from Ubuntu to Debian
- from 32-bit to 64-bit

System smart

Of course, you can't do that with a conventional backup. It's too dumb. You need a vertically integrated backup that has system-level awareness. One that knows, for example, which configuration files you changed and which you didn't touch since installation.
One that can leverage the package management system to get appropriate versions of system binaries from package repositories instead of wasting backup space. This backup tool would be smart enough to protect you from all the small paper cuts that conspire to make restoring an ad-hoc backup such a nightmare. It would transparently handle technical stuff you'd rather not think about, like fixing ownership and permission issues in the restored filesystem after merging users and groups from the backed-up system.

Ninja secure, dummy proof

It would be a tool you could trust to always encrypt your data. But it would still allow you to choose how much convenience you're willing to trade off for security.

If data-stealing ninjas keep you up at night, you could enable strong cryptographic passphrase protection for your encryption key, including special countermeasures against dictionary attacks. But since your backup's worst enemy is probably staring back at you in the mirror, it would need to allow you to create an escrow key to store in a safe place in case you ever forget your super-duper passphrase.

On the other hand, nobody wants excessive security measures forced down their throats when they don't need them, and in that case the ideal tool would be designed to optimize for convenience. Your data would still be encrypted, but the key management stuff would happen transparently.

Ultra data durability

By default, your AES-encrypted backup volumes would be uploaded to inexpensive, ultra-durable cloud storage designed to provide 99.999999999% durability. To put 11 nines of durability in perspective, if you stored 10,000 backup volumes you could expect to lose a single volume once every 10 million years. For maximum network performance, you would be routed automatically to the cloud storage datacenter closest to you.

Open source goodness

Naturally, the ideal backup system would be open source. You don't have to care about free software ideology to appreciate the advantages.
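TKLBAM's actual implementation isn't shown here, but the core idea of package-management awareness can be sketched in a few lines. This is purely illustrative (the function names and data shapes are mine, not TKLBAM's): record only what diverges from the install-time baseline, because everything else can be reinstalled from the repositories on restore.

```python
def changed_packages(baseline, current):
    """Packages installed or upgraded since the install-time baseline.

    baseline/current map package name -> version (e.g. parsed from
    'dpkg-query -W' output). Only the differences need to go into the
    backup; unchanged packages can be fetched from the repositories.
    """
    return {name: ver for name, ver in current.items()
            if baseline.get(name) != ver}

def removed_packages(baseline, current):
    """Packages present at install time but since removed."""
    return sorted(name for name in baseline if name not in current)
```

On restore, a tool following this approach would replay the diff through the package manager instead of copying binaries out of the archive.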
As far as I'm concerned, any code running on my servers doing something as critical as encrypted backups should be available for peer review and modification. No proprietary secret sauce. No pacts with a cloudy devil that expects you to give away your freedom, nay worse, your data, in exchange for a little bit of vendor-lock-in-flavored convenience.

Tall order, huh? All of this and more is what we set out to accomplish with TKLBAM. But this is not our wild-eyed vision for a future backup system. We took our ideal and we made it work. In fact, we've been experimenting with increasingly sophisticated prototypes for a few months now, privately eating our own dog food, working out the kinks. This stuff is complex, so there may be a few rough spots left, but the foundation should be stable by now.

Seeing is believing: a simple usage example

We have two installations of TurnKey Drupal6:

- Alpha, a virtual machine on my local laptop. I've been using it to develop the TurnKey Linux web site.
- Beta, an EC2 instance I just launched from the TurnKey Hub.

On both I install and initialize tklbam:

```shell
apt-get update
apt-get install tklbam

# initialize tklbam by providing it with the Hub API Key
tklbam-init QPINK3GD7HHT3A
```

Note that in the future, tklbam will come pre-installed on TurnKey appliances, so this part will be even simpler.

I now log into Alpha's command line as root (e.g., via the console, SSH or web shell) and do the following:

```shell
tklbam-backup
```

It's that simple. Unless you want to change defaults, no arguments or additional configuration are required. When the backup is done, a new backup record will show up in my Hub account.

To restore, I log into Beta and do this:

```shell
tklbam-restore 1
```

That's it! To see it in action, watch the video below, or better yet, log into your TurnKey Hub account and try it for yourself.

Quick screencast (2 minutes)

Best viewed full-screen. Having problems with playback? Try the YouTube version.
Getting started

TKLBAM's front-end interface is provided by the TurnKey Hub, an Amazon-powered cloud backup and server deployment web service currently in private beta. If you don't have a Hub account already, either ask someone who does to send you an invite, or request an invitation. We'll do our best to grant them as fast as we can scale capacity, on a first come, first served basis.

To get started, log into your Hub account and follow the basic usage instructions. For more detail, see the documentation. Feel free to ask any questions in the comments below, but you'll probably want to check the FAQ first to see if they've already been answered.

Upcoming features

- PostgreSQL support: PostgreSQL support is in development, but currently only MySQL is supported. That means TKLBAM doesn't yet work on the three PostgreSQL-based TurnKey appliances (PostgreSQL, LAPP, and OpenBravo).
- Built-in integration: TKLBAM will be included by default in all future versions of TurnKey appliances. In the future, when you launch a cloud server from the Hub it will be ready for action immediately. No installation or initialization necessary.
- Webmin integration: we realize not everyone is comfortable with the command line, so we're going to look into developing a custom Webmin module for TKLBAM.

Special salute to the TurnKey community

First, many thanks to the brave souls who tested TKLBAM and provided feedback even before we officially announced it. Remember, with enough eyeballs all bugs are shallow, so if you come across anything else, don't rely on someone else to report it. Speak up!

Also, as usual during a development cycle, we haven't been able to spend as much time on the community forums as we'd like. Many thanks to everyone who helped keep the community alive and kicking in our relative absence. Remember, if the TurnKey community has helped you, try to pay it forward when you can by helping others.
Finally, I'd like to give extra special thanks to three key individuals who have gone above and beyond in their contributions to the community. In alphabetical order:

- Adrian Moya: for developing appliances that rival some of our best work.
- Basil Kurian: for storming through appliance development at a rate I can barely keep up with.
- JedMeister: for continuing to lead as our most helpful and tireless community member for nearly a year and a half now. This guy is a frigging one-man support army.

Also special thanks to Bob Marley, the legend who's been inspiring us as of late to keep jamming till the sun was shining. :)

Final thoughts

TKLBAM is a major milestone for TurnKey. We're very excited to finally unveil it to the world. It's actually been a not-so-secret part of our vision from the start: a chance to show how TurnKey can innovate beyond just bundling off-the-shelf components.

With TKLBAM out of the way, we can now focus on pushing out the next release batch of Lucid-based appliances. Thanks to the amazing work done by our star TKLPatch developers, we'll be able to significantly expand our library, so by the next release we'll be showcasing even more of the world's best open source software. Stir It Up!
Posted almost 15 years ago by Alon Swartz
According to Murphy's Law, everything that can go wrong eventually will go wrong. This is true for backups on multiple levels. A backup is often our last line of defense when things go wrong, but so many things can go wrong with the backup itself that we usually don't find out about it until, well, horror of horrors, the backup fails.

On the surface, backups can fail for zillions of reasons. If you're ahead of the game, you can probably think of at least a dozen reasons why your backups will fail exactly when you need them the most. If you can't, you've most likely been lulled into a false sense of security. You don't even know what you don't know.

But these are just symptoms of a deeper underlying problem. Yes, backups are hard to get right, but the real stinger is that they're even harder to test. This is because restoring a system from backup into production is usually a labor-intensive, error-prone, time-consuming exercise in pain.

In the real world, it wouldn't matter if things went wrong, if (and that's a big if) we could easily simulate the worst-case scenario on demand: restore our systems from backup and verify that everything works, or verify that it doesn't, fix it, rinse, repeat. Bottom line? Few test their backups, and nobody tests them frequently enough.

Sure, if you have the resources (not many do), you can brute-force your way around the problem. For example, with the right setup you could take frequent bit-for-bit identical snapshots of the underlying storage media and send them safely off-site via a very high-bandwidth network connection (and vice versa). But such brute-force backup strategies are hugely inefficient and often impractical due to cost. In the words of Mat Kearney: what is a boy to do?
Posted almost 15 years ago by Alon Swartz
We are about to release the TurnKey Linux Backup and Migration (TKLBAM) mechanism, which aims to be the simplest way, ever, to back up a TurnKey appliance across all deployments (VM, bare-metal, Amazon EC2, etc.), as well as provide the ability to restore a backup anywhere: essentially appliance migration or upgrade.

Note: We'll be posting more details really soon. In this post I just want to share an interesting issue we solved recently.

Backups need to be stored somewhere, preferably somewhere that provides unlimited, reliable, secure and inexpensive storage. After exploring the available options, we decided on Amazon S3 for TKLBAM's storage backend.

The problem

Amazon has four data centers, called regions, spanning the world, situated in Northern California (us-west-1), Northern Virginia (us-east-1), Ireland (eu-west-1) and Singapore (ap-southeast-1).

The problem: which region should be used to store a server's backups, and how should it be determined?

One option was to require the user to specify the region during backup, but we quickly decided against polluting the user interface with options that can be confusing, and opted for a solution that could automatically determine the best region.

The solution

The map below plots the countries/states with their associated Amazon region (generated automatically from the indexes using the Google Maps API).

The solution: determine the location of the server, then look up the Amazon region closest to the server's location.

Part 1: GeoIP

This was the easy part. The TurnKey Hub is developed using Django, which ships with GeoIP support in contrib. Within a few minutes of being totally new to geo-location, I had part 1 up and running.

When TKLBAM is initialized and a backup is initiated, the Hub is contacted to get authentication credentials and the S3 address for the backup. The Hub performs a lookup on the IP address and enumerates the country/state.
In a nutshell, adding GeoIP support to your Django app is simple: install MaxMind's C library and download the appropriate dataset. Then, once you update your settings.py file, you're all set.

settings.py:

```python
GEOIP_PATH = "/volatile/geoip"
GEOIP_LIBRARY_PATH = "/volatile/geoip/libGeoIP.so"
```

code:

```python
from django.contrib.gis.utils import GeoIP

ipaddress = request.META['REMOTE_ADDR']
g = GeoIP()
g.city(ipaddress)
```

Example output:

```python
{'area_code': 609, 'city': 'Absecon', 'country_code': 'US',
 'country_code3': 'USA', 'dma_code': 504, 'latitude': 39.420898,
 'longitude': -74.497703, 'postal_code': '08201', 'region': 'NJ'}
```

Part 2: Indexing

This part was a little more complicated. Now that we have the server's location, we can look up the closest region. The problem is creating an index of each and every country in the world, as well as each US state, and associating them with their closest Amazon region.

Creating the index manually would have been really painstaking, boring and error-prone, so I devised a simple automated solution:

- Generate a mapping of country and state codes to their coordinates (latitude and longitude).
- Generate a reference map of the server farms' coordinates.
- Using a simple distance-based calculation, determine the closest region to each country/state, and finally output the index files.

I was also planning on incorporating data about internet connection speeds and trunk lines between countries, and adding weight to the associations, but decided that was overkill.

We are making the indexes available for public use (countries.index, unitedstates.index). More importantly, we need your help to tweak the indexes, as you have better knowledge and experience of your own connection latency and speed. Please let us know if you think we should associate your country/state with a different Amazon region.
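The distance-based calculation in the last step can be sketched in a few lines of Python. This is not the Hub's actual code: the region coordinates below are my own approximations, and great-circle (haversine) distance is just one reasonable choice of metric.

```python
from math import radians, sin, cos, asin, sqrt

# Approximate (lat, lon) of the four Amazon regions discussed above.
REGIONS = {
    "us-west-1": (37.8, -122.3),     # Northern California
    "us-east-1": (38.9, -77.4),      # Northern Virginia
    "eu-west-1": (53.3, -6.3),       # Ireland
    "ap-southeast-1": (1.3, 103.8),  # Singapore
}

def haversine(p1, p2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p1, *p2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def closest_region(lat, lon):
    """Return the Amazon region nearest to the given coordinates."""
    return min(REGIONS, key=lambda r: haversine((lat, lon), REGIONS[r]))
```

Running this for every country/state coordinate pair and writing out the winners is essentially how an index like countries.index can be generated.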
Posted almost 15 years ago by Liraz Siri
We're going to be doing a series of interviews with prominent TurnKey community members, so we figured it would make sense to do an interview with the founders of TurnKey (that's us!). Interviewing ourselves is a bit weird, so instead we're inviting the TurnKey community to propose the questions, which we'll answer in a separate blog post. So... ask us anything!
Posted almost 15 years ago by Alon Swartz
So you developed a Django web application and now need to deploy it into production, but you still need to actively continue development (bugfixes, tweaks, adding and testing new features, etc.). In your development environment you probably had debugging enabled, performance settings disabled, used SQLite as your database, and other settings that make development easier and faster. But in production you need to disable debugging, enable performance optimizations, and use a real database such as MySQL or PostgreSQL.

Hopefully, your development environment can also simulate your production environment, a sort of staging, so your final tests prior to deployment have the smallest possible delta. Sometimes you need to emulate the full production environment. Sometimes you need to emulate the full development environment. Sometimes a mixture of the two. This leads to the question: how do you seamlessly manage your development and production settings without adding overhead?

It turns out there is quite a lot of discussion on how to set up a Django settings.py that supports both a development and a production environment, for example:

- Completely different settings.py files (usually you configure the webserver to add the production settings to the Python path, and use the default development settings when using the dev webserver)
- By hostname
- By variable (PRODUCTION = True)

We recently came across this issue when we were ready to deploy the TurnKey Hub into production. I didn't really like the above-mentioned solutions, so this is what I came up with:

- settings.py (full settings for production)
- settings_dev.py (overrides production for full development)

If the environment variable DEVELOPMENT is set, settings_dev is used to override the production settings. I was toying with the idea of having full control over the settings via the environment, for maximum flexibility, but in the end decided against it, as it added too much complexity with not enough gain.
Our settings_dev.py looks something like this:

```python
DEBUG = True
TEMPLATE_DEBUG = True
COMPRESS_AUTO = True
SESSION_COOKIE_SECURE = False

DATABASE_ENGINE = 'sqlite3'
DATABASE_NAME = '/tmp/dev.db'
DATABASE_USER = ''
DATABASE_PASSWORD = ''

TEMPLATE_LOADERS = (
    'django.template.loaders.filesystem.load_template_source',
    'django.template.loaders.app_directories.load_template_source',
)
```

The settings.py (full settings for production) includes the following snippet at the end:

```python
if os.environ.get('DEVELOPMENT', None):
    from settings_dev import *
```
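The override pattern is easy to demonstrate outside Django as well. Here is a minimal, self-contained sketch (the names are mine, not the Hub's code): production values are the defaults, and the presence of a DEVELOPMENT environment variable swaps in the development overrides.

```python
import os

# Production defaults and the development overrides, as plain dicts.
PRODUCTION = {"DEBUG": False, "DATABASE_ENGINE": "mysql"}
DEV_OVERRIDES = {"DEBUG": True, "DATABASE_ENGINE": "sqlite3"}

def effective_settings(environ=os.environ):
    """Return production settings, overridden when DEVELOPMENT is set."""
    settings = dict(PRODUCTION)
    if environ.get("DEVELOPMENT"):
        settings.update(DEV_OVERRIDES)
    return settings
```

With this approach, switching to the dev environment is just a matter of setting the variable (e.g., DEVELOPMENT=1 ./manage.py runserver), with no code changes between environments.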
Posted almost 15 years ago by Liraz Siri
Today I'd like to spotlight TurnKey's unlikely relationship with Chelsea School, a high school in suburban Maryland. I'm going to try to tell this story on two levels: the straightforward who-what-why, and why you should care. I'll start with the latter. If it works, maybe you'll stick around for the full story.

Kids collaborate with NASA, discover cave on Mars

Recently, a 7th grade science class, using the raw data from a NASA satellite, made a remarkable discovery: a mysterious cave on Mars. I found myself fascinated not only by the discovery itself and what it means for future Mars exploration, but also by how it was made. A discovery on another planet, made not by a team of highly credentialed professionals, but by a bunch of kids! When I was in middle school this sort of thing could only happen in a cartoon. But it's 2010, and this is real life.

Granted, this isn't the sort of thing that happens every day. But I think it questions the assumption that school kids can only learn by being spoon-fed cookie-cutter, artificial assignments that bear little resemblance to real work, at least until they graduate. What if instead we could teach kids by challenging them with authentic assessments? Real solutions to real problems, and not just in science, where the bar to making new contributions is often set very high...

Open source and education: a match made in heaven?

All over the world millions of kids are studying computers and technology. Imagine what a difference we could make if we could figure out how to help the open source community embrace that opportunity. What if we could help kids explore how computer systems in the real world work in a frictionless playground that allows them to crack open the box, explore its insides and tinker?
Imagine if, with a bit of guidance, we could teach them how to leverage open source and the unprecedented wealth of knowledge Google puts at their fingertips to learn autonomously, faster and better than we could ever spoon-feed them with traditional methods. Before you know it, kids will be applying what they've learned and using open source to solve real problems in their environment.

Remember, you don't have to be an experienced developer to add value to the open source community. There are many ways to contribute. Especially now that there is such an abundance of free components, often the real challenge is in discovering the good stuff and figuring out what to mix and match to get stuff done.

A pipe dream, you say? Our recent experience suggests otherwise...

Chelsea School: greater expectations

Let me get something off my chest. I tried not to show it, but when Rik Goldman, an English teacher at Chelsea School, started posting to the TurnKey forums, I was initially somewhat skeptical. Chelsea School is a school that specializes in teaching students with language-based learning disabilities.

Rik was in the process of guiding 6 students in their attempt to leverage off-the-shelf open source software to solve a real problem they had encountered: how to serve audible editions of assigned texts to students who would otherwise have difficulty accessing these materials. The students had decided to focus their efforts on Ampache, a streaming audio and video server, which they set up as a compromise between text-to-speech software and an actual human reader.

They made it happen. In fact, not only did they leverage open source successfully in solving their local problem, they embraced the open source spirit, took the initiative, and worked with us to help make it easy for everyone to use Ampache. Chelsea School students realized that setting up an open source based solution like Ampache can require technical skills that would scare off your typical school.
Not content with just solving the problem for themselves, the students built and configured a virtual appliance in which Ampache was pre-installed and pre-configured on top of Ubuntu, a popular Linux distribution. This made it possible to distribute and run a fully functional Ampache server from a USB key drive.

If that wasn't remarkable enough, they then proceeded to develop and document a high-quality patch that would allow us to add Ampache to the next release of the TurnKey Linux Virtual Appliance Library. Their patch automatically installs the required software components and then configures filesystem permissions, the Apache web server, the MySQL database, and the Samba file sharing service. We couldn't have done it any better ourselves, and we're supposed to be the experts.

Frankly, this impressed the heck out of me. This was no toy assignment. It's a working technology product that thousands of other users all over the world will use. An authentic assessment of the skill involved in planning, developing and implementing an innovation. A genuine achievement.

And they did it the same way we would do it: by googling, reading technical documentation, and consulting with others in the community, along with a willingness to experiment through trial and error. Then, as if to show us this was no accident, in the following weeks they would do it again and again with Elgg and LimeSurvey.

Allow me to emphasize that when they started out, these high school students had no prior exposure to Ubuntu or Linux. According to Rik, the students reacted strongly to working with Ubuntu and the open source community. They were keenly interested in the community development process and were eager for feedback regarding their contributions. They read excerpts from Stallman on the philosophy behind the free software movement, and signed the Ubuntu Code of Conduct.
Four of the six students even committed to maintaining the appliances as Ampache matures, even after they've left Chelsea School.

Meet the Chelsea School team

- Rik Goldman: trained as an English professor, Rik formerly taught university and college literature and composition classes. Today Rik teaches English and technology at Chelsea School's high school division, which predominantly serves students with language-based learning disabilities that can have profound effects on reading comprehension and writing fluency. Besides literature, Rik has always been interested in computers, dabbling with everything from web development to system administration. This eventually led him to transition from English professor to instructional technologist, and it's through this experience that Rik lost his tolerance for proprietary, closed software in education.
- Adrian Madison: a junior interested in pursuing a career in Information Technology. At home his personal computer is now running Ubuntu, and he's happiest learning new command line arguments. Adrian is hoping for a summer internship where he can put his new skills to use.
- Curtis Fawcett: a senior in the software class who is debating between a major in English and a major in engineering. He's an avid reader with an incredible memory and a natural strength for critical thinking and analysis. He's putting these assets to use by taking a college class in CAD during his senior year. Curtis now runs Ubuntu on his home computer.
- Jerel Moses: also in the software class, Jerel is enthusiastic about Ubuntu and the possibilities it offers for customized distribution. He is simultaneously pursuing coursework in web and graphic design. If he chooses to pursue a career in technology, he'll be the third generation of computer techs in his family.
- Maurice Quarles: a junior who's at his best navigating a GUI efficiently to perform OS tasks and maintenance. Maurice is interested in pursuing a career in game design.
- Steven Robinson: currently a freshman in the hardware class. He's the first student to start the Info Systems course while still in middle school, and has a strong memory for hardware components and specifications. Steven's laptop is now running Ubuntu.
- David Walton: a freshman interested in pursuing a career in game design. He kept the team laughing when they ran into problems, and insists that when Windows XP shuts down, it's singing "have a nice day". David, like Steven, is an avid gamer.

Plans for the future

Rik's plans include teaching more students about technology through authentic assessments, starting an open source lab right in the school, and expanding the IT curriculum to include more advanced studies of Linux and the basics of programming/scripting languages such as Bash, Python and PHP.

How do we scale this up?

Chelsea School has proven that teaching students by contributing to open source can be a win-win for both education and the open source community. In my mind the main question now is: how do we scale this up? How do we bring together more schools and open source projects? Just imagine, if we can figure out how to make this happen on a large scale, what a difference we could make.
Posted about 15 years ago by Alon Swartz
I don't need to tell you how search improves our efficiency on the web, but using custom search engines can make your day even more efficient. Configuring your browser to use custom search engines is a massive time gain and improves your workflow. When I used Firefox, I had set up search plugins for the stuff I needed, but they took up screen real estate (even with optimizations) and weren't easily customizable or extendable.

With my migration to Google Chrome, I found setting up custom search engines a snap: no wasted screen real estate, and using keywords improved my workflow. Example: search Wikipedia by typing alt+d wp<space>search_term.

Adding custom search engines in Chrome

1. Click the spanner, choose Options.
2. Click Manage (next to default search).
3. Click Add.
4. Create your custom search engine.

Custom search engines I use

```
Name             Keyword  URL
TurnKey Linux    tkl      http://www.turnkeylinux.org/search/node/%s
Ubuntu Packages  pkgu     http://packages.ubuntu.com/%s
Debian Packages  pkgd     http://packages.debian.org/%s
GitHub           gh       http://github.com/search?q=%s
Wikipedia        wp       http://en.wikipedia.org/wiki/Special:Search?search=%s
```

Notes on package searching

I usually use apt-cache search search_term when I'm at the command line, but I find the Ubuntu and Debian web interfaces useful when exploring new packages and their dependencies while working on appliance development.

What custom search engines do you use?
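Under the hood, the keyword mechanism just substitutes your URL-encoded query into the %s placeholder. A tiny sketch of what the browser effectively does (the engine table is abbreviated from the one above; the function name is mine):

```python
from urllib.parse import quote

# Keyword -> URL template, with %s as the query placeholder.
ENGINES = {
    "tkl": "http://www.turnkeylinux.org/search/node/%s",
    "wp": "http://en.wikipedia.org/wiki/Special:Search?search=%s",
}

def resolve(omnibox_input):
    """Turn 'keyword query terms' into the engine's search URL."""
    keyword, _, query = omnibox_input.partition(" ")
    template = ENGINES[keyword]
    return template.replace("%s", quote(query))
```

For example, typing "wp turnkey linux" resolves to the Wikipedia search URL with the query percent-encoded.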
Posted about 15 years ago by Liraz Siri
Over the last few months donations have been trickling in and gradually piling up. Since there's a limit to how much beer we can reasonably drink, we've been brainstorming ideas for using that money to help the project. One of the ideas we liked the most was to sponsor an experimental contest which would hopefully stimulate community TKLPatch development, strengthen the community, and teach as many people as possible the skills needed to customize existing TurnKey appliances and create new ones. It's not rocket science, and you don't need to be a programmer or systems expert. Anyone willing to learn a few basic Linux skills can have fun doing it. If you want, we'll even teach you how! (Details below.)

We're thinking of running the contest for the next 8 weeks, then allowing another 2 weeks to summarize the results and let the community help us decide who should win (e.g., a survey on the web site).

Prizes

Winners and all honorable mentions will be celebrated in obligatory blog posts, and forever immortalized in the TurnKey hall of fame. They'll also receive full bragging rights, the undying gratitude of billions of TurnKey users, and some of our excess beer money on PayPal, which they can claim for themselves or donate to a charity of their choice.

- First prize: $1000
- Second prize: $300
- Third prize: $100

Many thanks to everyone who donated to the project! We're hoping this will put your money to good use. If you like the idea and want to increase the contest pot size, feel free to donate now and ask us to dedicate the money to that.

Help us expand the next release

We're hoping to get lots of high-quality submissions, and if we do, the timing will be just right. As many of you already know, we're in the middle of a development cycle for the new Lucid and Lenny based beta appliances, and we have our hands full upgrading the existing crop of appliances and introducing new features such as backup and migration.
Unfortunately, that means this time around we don't have the resources to expand the appliance library horizontally and add a significant number of new appliances from scratch. Unless... well, that's where our heroes come in. Adding a new appliance to the library from a TKLPatch still requires work, but it's definitely easier than creating an appliance from scratch, and that means we could get more of them done.

In other words, any high-quality TKLPatch submissions we receive from the community in the next couple of months will have a very good chance of being formally adopted into the library in time for the next release. After that, the appliance library will be frozen again for a while until the next release batch.

List of ideas

In no particular order, here's a list of ideas for appliances that have been sitting in our todo list for much too long and probably won't make it into the next release without the community's help:

- By function (components not yet determined): terminal server replacement (i.e., remote desktops for thin clients), ASP.NET replacement, web filtering proxy (e.g., privacy, ad-blocking, malware protection, content filtering), plug-in e-mail filter (e.g., spam/malware), unified threat management, load balancing reverse proxies
- eCommerce: Magento, VirtueMart, PrestaShop, Zen Cart, osCommerce, UberCart
- Content management: Alfresco, SilverStripe, Plone, KnowledgeTree, DSpace, Apache Roller, Liferay
- Messaging: DimDim, Asterisk, Openfire, Mumble, Vanilla forum, StatusNet, Zarafa, Scalix, RoundCube, OpenEMM
- Business: SugarCRM, vTiger CRM, Open HRM, Apache OFBiz, GLPI
- Monitoring: Nagios, Cacti, Zenoss, Hyperic, Zabbix, OpenNMS
- Infrastructure: OpenLDAP, RADIUS
- Development frameworks: Zope, TurboGears, JBoss
- Management: eBox, ISPConfig
- Data integration: Jasper BI, Pentaho, SnapLogic
- Backup: Amanda, Bacula
- IDS: Snort + Aanval, Snorby
- Virtualization: Proxmox VE, Xen-DTC, oVirt, Enomaly, Eucalyptus
- Misc: Apache Solr, iFolder, Icecast, OpenVPN ALS

If you have your own ideas for appliances you think would make good additions to the TurnKey library, don't worry if they're not on the list. All contributions are welcome. Work on whatever interests you.

Sign up for a live training session

Creating a TKLPatch isn't hard. Read the documentation and still not sure where to begin? No problem. If there's interest, we'll be giving free live training sessions on TurnKey's IRC channel, showing how to build an example TKLPatch step by step and answering any questions. Click here to sign up and we'll send you an e-mail with the time and date of the session.

A few guidelines

- You can improve an existing appliance or create a new appliance by patching any appliance in the TurnKey library, including the new Ubuntu Lucid and Debian Lenny TurnKey Core betas. If you're creating a new appliance, we recommend patching the new beta of TurnKey Core based on Ubuntu 10.04 Lucid. This is the latest version of Ubuntu, so it has the newest packages in its software repositories.
- It's preferable to install software through the package management system rather than directly from an upstream tarball. It's usually much easier to install and update software this way. Unfortunately, sometimes this won't be an option, because there's a lot of excellent open source software that isn't in Ubuntu's or Debian's package repositories.
In these cases try checking if the software you're looking for is at least available as a Debian package (*.deb). If not, that's OK. An appliance doesn't have to be perfect to be useful. In case it isn't obvious, if you include software from outside the official package repositories, make sure it's available under an open source license (e.g., GPL, BSD, etc.). Free as in free beer is not enough. Software in official TurnKey appliances must also be free as in liberty. Publish results as soon as you have them on the forum and/or development wiki. In general, credit for a result goes to the first person who publishes. This doesn't even have to be a finished TKLPatch, though naturally finished, high-quality submissions count moure than partial results. To avoid duplicated effort, check the development wiki before you start working on a new TKLPatch. Maybe it's already been submitted. But if you think you can make something better, don't let that stop you! You can work alone, or collaborate as part of a group. Your choice. There's no need to register. If you're part of a group, just document who should take credit (e.g., Hans Solo, Star Wars group). Results will be evaluated based on a quality and quantity of all submissions, including any integration notes, TKLPatches, or documentation submitted. At the end we'll summarize the results of all participants and set up community surveys to help us decide who should win. Have fun! Don't forget to sign up for the live training session if you're interested. Any questions? [Less]
Posted about 15 years ago by Liraz Siri
Over the last few months donations have been trickling in and gradually piling up. Since there's a limit to how much beer we can reasonably drink, we've been brainstorming ideas for using that money to help the project. One of the ideas we liked the most was to sponsor an experimental contest that would hopefully stimulate community TKLPatch development, strengthen the community, and teach as many people as possible the skills needed to customize existing TurnKey appliances and create new ones. It's not rocket science, and you don't need to be a programmer or systems expert. Anyone willing to learn a few basic Linux skills can have fun doing it. If you want, we'll even teach you how! (details below).

We're thinking of running the contest for the next 8 weeks, then allowing another 2 weeks to summarize the results and let the community help us decide who should win (e.g., via a survey on the web site).

Prizes

Winners and all honorable mentions will be celebrated in obligatory blog posts and forever immortalized in the TurnKey hall of fame. They'll also receive full bragging rights, the undying gratitude of billions of TurnKey users, and some of our excess beer money on PayPal, which they can claim for themselves or donate to a charity of their choice.

First prize: $1000
Second prize: $300
Third prize: $100

Many thanks to everyone who donated to the project! We're hoping this will put your money to good use. If you like the idea and want to increase the contest pot, feel free to donate now and ask us to dedicate the money to it.

Help us expand the next release

We're hoping to get lots of high-quality submissions, and if we do, the timing will be just right. As many of you already know, we're in the middle of a development cycle for the new Lucid and Lenny based beta appliances, and we have our hands full upgrading the existing crop of appliances and introducing new features such as backup and migration.
Unfortunately, that means this time around we don't have the resources to expand the appliance library horizontally and add a significant number of new appliances from scratch. Unless... well, that's where our heroes come in. Adding a new appliance to the library from a TKLPatch still requires work, but it's definitely easier than creating an appliance from scratch, which means we can get more of them done. In other words, any high-quality TKLPatch submissions we receive from the community in the next couple of months will have a very good chance of being formally adopted into the library in time for the next release. After that, the appliance library will be frozen again until the next release batch.

List of ideas

In no particular order, here's a list of ideas for appliances that have been sitting in our todo list for much too long and probably won't make it into the next release without the community's help:

By function (components not yet determined): terminal server replacement (i.e., remote desktops for thin clients), ASP.NET replacement, web filtering proxy (e.g., privacy, ad-blocking, malware protection, content filtering), plug-in e-mail filter (e.g., spam/malware), unified threat management, load-balancing reverse proxies.
eCommerce: Magento, VirtueMart, PrestaShop, Zen Cart, osCommerce, Ubercart.
Content management: Alfresco, SilverStripe, Plone, KnowledgeTree, DSpace, Apache Roller, Liferay.
Messaging: DimDim, Asterisk, OpenFire, Mumble, Vanilla forum, StatusNet, Zarafa, Scalix, RoundCube, OpenEMM.
Business: SugarCRM, vTiger CRM, Open HRM, Apache OFBiz, GLPI.
Monitoring: Nagios, Cacti, ZenOSS, Hyperic, Zabbix, OpenNMS.
Infrastructure: OpenLDAP, Radius.
Development frameworks: Zope, TurboGears, JBoss.
Management: eBox, ISPConfig.
Data integration: Jasper BI, Pentaho, SnapLogic.
Backup: Amanda, Bacula.
IDS: Snort + Aanval, Snorby.
Virtualization: Proxmox VE, Xen-DTC, oVirt, Enomaly, Eucalyptus.
Misc: Apache Solr, iFolder, IceCast, OpenVPN ALS.

If you have your own ideas for appliances you think would make good additions to the TurnKey library, don't worry if they're not on the list. All contributions are welcome. Work on whatever interests you.

Sign up for a live training session

Creating a TKLPatch isn't hard. Read the documentation and still not sure where to begin? No problem. If there's interest, we'll be giving free live training sessions on TurnKey's IRC channel, showing how to build an example TKLPatch step by step and answering any questions. Click here to sign up and we'll send you an e-mail with the time and date of the session.

A few guidelines

You can improve an existing appliance or create a new appliance by patching any appliance in the TurnKey library, including the new Ubuntu Lucid and Debian Lenny TurnKey Core betas. If you're creating a new appliance, we recommend patching the new beta of TurnKey Core based on Ubuntu 10.04 Lucid. This is the latest version of Ubuntu, so it has the newest packages in its software repositories. It's preferable to install software through the package management system rather than directly from an upstream tarball; it's usually much easier to install and update software this way. Unfortunately, sometimes this won't be an option, because a lot of excellent open source software isn't in Ubuntu's or Debian's package repositories.
In these cases, try checking whether the software you're looking for is at least available as a Debian package (*.deb). If not, that's OK. An appliance doesn't have to be perfect to be useful.

In case it isn't obvious, if you include software from outside the official package repositories, make sure it's available under an open source license (e.g., GPL, BSD, etc.). Free as in free beer is not enough; software in official TurnKey appliances must also be free as in liberty.

Publish results as soon as you have them on the forum and/or development wiki. In general, credit for a result goes to the first person who publishes. It doesn't even have to be a finished TKLPatch, though naturally finished, high-quality submissions count more than partial results.

To avoid duplicated effort, check the development wiki before you start working on a new TKLPatch. Maybe it's already been submitted. But if you think you can make something better, don't let that stop you!

You can work alone or collaborate as part of a group. Your choice. There's no need to register. If you're part of a group, just document who should take credit (e.g., Han Solo, Star Wars group).

Results will be evaluated based on the quality and quantity of all submissions, including any integration notes, TKLPatches, or documentation submitted. At the end we'll summarize the results of all participants and set up community surveys to help us decide who should win.

Have fun! Don't forget to sign up for the live training session if you're interested. Any questions?
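For readers wondering what a TKLPatch actually looks like on disk: per TurnKey's TKLPatch documentation, a patch is a directory containing an executable conf script (run inside the appliance root) and an optional overlay directory copied over the filesystem. The sketch below creates a minimal, hypothetical patch skeleton; the appliance specifics are placeholders, not a real submission.

```shell
# Minimal TKLPatch skeleton (layout per TurnKey's TKLPatch docs;
# the contents here are hypothetical placeholders).
mkdir -p mypatch/overlay/etc

# conf: executed inside the appliance's root filesystem during patching
cat > mypatch/conf <<'EOF'
#!/bin/bash -ex
apt-get update
# install packages, tweak config, etc.
EOF
chmod +x mypatch/conf

# Files under overlay/ are copied over the appliance filesystem,
# e.g. overlay/etc/myapp.conf would land in /etc/myapp.conf.

# The patch is then applied against a base image with something like:
#   tklpatch turnkey-core-beta.iso mypatch
```

The conf script and overlay directory are the two pieces the tklpatch tool expects; everything else (extra .debs, documentation) is optional.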
Posted about 15 years ago by Alon Swartz
Every web application needs a navigation bar. Common practice is to indicate to users where they are, usually by means of a visual aid such as a bold typeface, a different color, or an icon.

I wanted an elegant, generic, extendable solution to "highlight" a link on the navigation bar without hardcoding URLs, using ifequal tags, or using template block inheritance by specifying a navbar block in each and every template (you'd be surprised, but all of the above are often recommended).

The solution I came up with is quite simple:

No need to hardcode URLs (urlconf names are used instead).
The navbar is only specified in the base template (actually a separate template loaded by the base template).
A simple template tag plus the request context processor returns "active" if the link should be active.
Multiple URLs are supported for each link.
CSS is used to highlight the active link.

You can see the above in action on the TurnKey Hub.

On to the code

First up, we need to enable the request context processor.

settings.py:

    TEMPLATE_CONTEXT_PROCESSORS = (
        ...
        'django.core.context_processors.request',
    )

Next, create the navactive template tag. Note: the navigation sitemap I'm using is flat, but you can tweak the code to support multiple levels quite easily.

apps/<myapp>/templatetags/base_extras.py:

    from django import template
    from django.core.urlresolvers import reverse

    register = template.Library()

    @register.simple_tag
    def navactive(request, urls):
        # urls is a space-separated list of urlconf names;
        # return "active" if the current path matches any of them
        if request.path in (reverse(url) for url in urls.split()):
            return "active"
        return ""

Now for a little CSS styling...

media/css/style.css:

    .navbar .active {
        font-weight: bold;
    }

With all the above in place, we can create the navigation bar template code.
templates/navigation.html:

    {% load base_extras %}
    <div id="topbar">
      <div class="navbar">
        <div class="navbar-side navbar-left"></div>
        <div class="navbar-content">
          {% if user.is_authenticated %}
            <a class="{% navactive request 'servers help_server' %}" href="{% url servers %}">Servers</a>
            <a class="{% navactive request 'account_clouds help_registeraccount' %}" href="{% url account_clouds %}">Cloud Accounts</a>
            <a class="{% navactive request 'account_details' %}" href="{% url account_details %}">User Profile</a>
            <a href="{% url auth_logout %}">Logout</a>
          {% else %}
            <a href="{% url auth_login %}">Login</a> or
            <a href="{% url registration_register %}"><b>Sign up</b></a>
          {% endif %}
        </div>
        <div class="navbar-side navbar-right"></div>
      </div>
    </div>

And finally, include the navigation bar in the base template so it shows up on every page.

templates/base.html:

    {% include "navigation.html" %}
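The navactive tag's matching logic is easy to check in isolation. Below is a minimal plain-Python sketch of that logic, with a dictionary standing in for Django's reverse() so it runs without Django installed; the URL names and paths are hypothetical, chosen only to mirror the template above.

```python
# Stand-in for Django's reverse(): maps urlconf names to paths.
# These names and paths are hypothetical, for illustration only.
ROUTES = {
    "servers": "/servers/",
    "help_server": "/help/server/",
    "account_details": "/account/details/",
}

def navactive(request_path, urls):
    """Mirror of the template tag: return "active" if the current
    path resolves to any of the space-separated urlconf names."""
    if request_path in (ROUTES[name] for name in urls.split()):
        return "active"
    return ""

# A link can be marked active for several views, e.g. the Servers
# tab stays highlighted on both the listing and its help page:
print(navactive("/help/server/", "servers help_server"))  # active
print(navactive("/account/details/", "servers help_server"))  # (empty)
```

This also shows why the tag takes a space-separated string rather than a single name: one navbar entry often corresponds to several related views.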