
News

Posted almost 9 years ago by Aaron Seigo
Today we closed out the first (and quite successful) Kolab Summit in front of both the Kolab and openSUSE attendees with some really big news: the Roundcube team has launched a significant new development project to give Roundcube, the world's most popular free software webmail system, a modern, fluid "single-page" user interface. The UI will be rendered entirely in the browser, and the server will only do minimal business logic in support of that. The focus is on modularity (to make it easier to extend Roundcube's core features), scalability, and deployability. At the same time, the Roundcube team needs to maintain the current version (we have commitments to clients and users that stretch years into the future) as well as build a migration strategy to the new version when it becomes available. Thomas, the founder and project lead of Roundcube, gave a great presentation explaining the whole thing.

As you might imagine, achieving these goals will involve refactoring nearly the entire codebase. We plan to commit three developers along with a UI designer to the project, with the support of the Kolab Systems project management infrastructure and staff. So this is a pretty big project, but quite achievable. While discussing how best to make it all happen, the Roundcube team decided it would make sense to reach out to the entire Roundcube user community for help, and therefore launched a crowdfunding campaign today on Indiegogo. Quite a way to close out the conference!

Together, we can make this a great success! Please help spread the word, back the campaign with a pledge, and join us for what is going to be a fantastic journey. Regular updates will be posted to the crowdfunding page, and we are excited to make the run to our initial goal of $80,000 with you!
Posted almost 9 years ago by Aaron Seigo
On the first day of the Kolab Summit we announced that Kolab is getting full extended MAPI support. That was in itself a pretty fantastic announcement, but it was accompanied by announcements of instant messaging, WebRTC and collaborative editing. Here is a picture, taken over lunch today, which I think captures what the LibreOffice and WebODF people think about this direction:
Posted almost 9 years ago by Aaron Seigo
Yesterday I delivered a keynote at the openSUSE conference about the best feature of Free software: freedom. This is a message that is easy to lose sight of in the maker/creator community around free software, given the understandable focus on business goal metrics such as market penetration, developer adoption, innovation rates, etc. You can see my slides here, and the video of the presentation will be uploaded later by the conference team. (I'll link to it when it appears.) The questions after the presentation were excellent as well, and the conversations continued out into the hallways afterwards.

That was yesterday. Today, the Kolab Summit began. Georg Greve kicked things off by sharing the vision for Kolab this year (slides here). He covered three areas of focus for Kolab this year:

- Real-time collaboration: IM, WebRTC, document editing. This will allow us to complement the existing asynchronous communication Kolab excels at (email, calendaring, notes, files, etc.) with synchronous, collaborative editing.
- User experience refactor: major work is being done on the Kolab clients, in particular the Roundcube client. The goal here is to surpass what is available elsewhere in the market to keep free software as a leader in this area.
- Full extended MAPI support. Yes, Kolab will be able to support Outlook out of the box. Fully. The lead OpenChange developer is here to discuss this further later in the summit.

There are many other projects we are digging into significantly, and Kolab's system architect, Jeroen van Meeuwen, followed Georg with a technical roadmap overview. He not only filled in the details behind the three focus areas Georg highlighted, but shared our roadmap for data loss prevention, multi-factor authentication, in-web-browser encryption, Akonadi Next for the desktop client ... in short, we're very, very busy. Everything we are doing is driven by very clear use cases from people who need these tools so that they can choose free software for their collaboration needs as well.
Posted almost 9 years ago by Aaron Seigo
In my last blog entry, I mentioned that we have been working on a comprehensive data loss prevention (DLP) and audit trail system for use with Kolab, with the end goal being not only DLP but also a platform for business intelligence. In that entry I listed the three parts of the system, noting that I'd be writing about one at a time. I had hoped to jump on the first of those a day or two after writing the entry, but life and work intervened and then I was off on a short family vacation ... but now I'm back. So let's talk about the capture side of the system.

Kolab can be viewed as a set of cooperative microservices: SMTP, IMAP, LDAP, spam/virus protection, invitation auto-processing, web UI, etc. There are a couple dozen of these, and up until now they have all done the traditional, and correct, thing of logging events to a system log. This has numerous drawbacks, however:

- On a distributed system where different services are running on different hosts (physical or VMs), the result is data spread over many systems. Not great for subsequent reporting.
- At the time of logging, the events are in a "raw" state: each service likely does not know about the rest of the Kolab services and thus how its events relate to the whole system.
- With logs going through the host systems, it is difficult to ensure that they are not easily tampered with; this can be somewhat alleviated by setting up remote logging, but that only goes so far.
- Logging tends to be a firehose of data, and for our specific interests here we want a very specific sub-stream of that total flow.

So we have written yet another service whose entire job is to collect events as they are generated. This service is itself distributed, allowing collection agents to be run across a cluster running a Kolab instance, and it stores its data in a dedicated key-value store which can be housed on an isolated (and specially secured, if desired) system. The program running this service is called Egara, which is Sumerian for "storehouse", and it is written in Erlang due to its robustness (this service must simply never go down), scalability and distributed communication features. The source repository can be found here. Egara itself is part of the overall DLP/auditing system we have named Bonnie.

The high-level purpose of Egara is to create a consistent and complete history of what happens to objects within the groupware system over time. An "object" might be an email, a user account, a calendar event, a tag, a note, a todo item, etc. An event (or "what happens") includes things such as new objects, deletions, setting of flags or tags, changing state (e.g. from unread to read), starting or tearing down an authenticated session, etc. In other words, its job is to create, in real time, a complete history of who did what when. As such, I've come to view it as an automated historian for your world of groupware.

Egara itself is divided into three core parts:

- Incoming handlers: these components implement a standard behaviour and are responsible for collecting events from a specific service (e.g. cyrus-imap) and relaying them to the core application once received.
- Event normalizers: these workers process events from the new-event queue and are tasked with normalizing and augmenting the data within them, creating complete point-in-time additions to the history. Many events come in with simple references to other objects, such as a mail folder; the event normalization workers need to turn those implicit bits of information into explicit links that can be reliably followed over time.
- Middleware: these are mainly the bits that provide process supervision, and populate and manage the shared queues of events as information arrives from incoming handlers and is processed by normalizers.

This all happens asynchronously and provides guarantees of correct handling at each step (inasmuch as each reporting service allows for that). This means that individual normalizers can fail in even spectacular fashion and not disrupt the system, that an admin can halt and restart the system at will without fear of losing events (save those generated during downtime periods, assuming a full Egara take-down), etc.

Final storage is done in a Riak database, with queues managed by the Mnesia database built into Erlang's OTP system itself. Mnesia can best be thought of as a built-in Redis: entirely in-memory (fast) with disk backing (robust); just add built-in clustering and a native, first-class API for storage and retrieval (e.g. we are able to use Erlang functions to perform updates and filtering over all or part of a queue's dataset). Data in Mnesia is stored as native Erlang records, while data in Riak is stored as JSON documents.

Incoming events may arrive in any format and over any delivery mechanism. They can be parallelized, spread across a cluster of machines ... it doesn't matter. The incoming handler is tasked with translating the stream of events into an Erlang term that can be passed on to the normalizer for processing. This allows us to extend Egara in a very easy way with new service-specific handlers for virtually any dataset we wish to keep track of within Kolab or its surroundings. Normalizers will eventually also join this level of abstraction, though right now the sole worker implementation is specific to groupware data objects. Future releases of Egara will add support for different workers for different classes of events, giving a nice symmetry with the incoming event handlers. The middleware is designed to be used without modification as the system grows in capability while remaining scalable. Multiple instances can be run across different systems and the results should (eventually) be the same. I say "eventually" since in such a system one cannot guarantee the exact order of events, only the exact results after some period of time. Or, in more familiar terms, it is eventually consistent.

The whole system is quite flexible at runtime as well. One can configure which kinds of events one cares to track, which data payloads (if any) to archive, which incoming handlers to run on a given node, etc. This will expand over time to allow normalizers and their helpers to be quarantined to specific systems within a cluster.

Egara works nicely with Kolab 3.4 and Kolab Enterprise 14, though Bonnie is not officially a part of either. I expect the entire system will be folded into a future Kolab release to ease usage. It will almost certainly remain an optional component, however: not everyone needs these features, and if you don't, then there's no reason to pay the price of the runtime overhead and maintenance. That's a "50,000 foot" view of the historian component of Bonnie.
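Egara itself is written in Erlang, but to make the normalization idea more concrete, here is a rough Python sketch. It is purely illustrative: the event fields, the folder resolver and the output schema are assumptions made for this example, not Egara's actual format.

# Purely illustrative sketch -- not Egara's actual code or schema.
# It shows the general idea of event normalization: take a "raw" event
# from one service (here, an imaginary cyrus-imap style notification)
# and turn its implicit references into explicit, durable links.

import json
import time
import uuid


def normalize_imap_event(raw_event, resolve_folder_uid):
    """Turn a raw event into a self-contained history entry.

    raw_event          : dict as delivered by the (hypothetical) incoming handler
    resolve_folder_uid : callable mapping a folder path to a stable identifier,
                         so the history survives later renames of the folder
    """
    normalized = {
        "id": str(uuid.uuid4()),                  # unique id for this history entry
        "recorded_at": time.time(),               # when the historian saw it
        "event_type": raw_event["event"],         # e.g. "MessageRead", "FlagsSet"
        "actor": raw_event.get("user"),           # who did it
        "object": {
            "type": "email",
            "message_id": raw_event.get("messageId"),
            # implicit reference ("user/jane/Archive") becomes an explicit, stable link
            "folder_uid": resolve_folder_uid(raw_event["folder"]),
        },
    }
    return json.dumps(normalized)  # stored as a JSON document (Egara uses Riak for this)


# Example use with a toy resolver:
if __name__ == "__main__":
    raw = {"event": "MessageRead", "user": "jane@example.com",
           "folder": "user/jane/Archive", "messageId": "<abc123@example.com>"}
    print(normalize_imap_event(raw, resolve_folder_uid=lambda path: "folder-0042"))

The point is simply that what leaves the normalizer is self-contained: it carries the actor, the time, and explicit references that still make sense years later, independent of the service that originally reported the event.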
The next installments in this blog series will look a bit closer at the storage model, history querying and replayability, and, finally, what this means for end users and organizations running Kolab with the Bonnie suite.
Posted about 9 years ago by Aaron Seigo
Working with Kolab has kept me busy on numerous fronts since I joined near the end of last year. There is the upcoming Kolab Summit, refreshing Kolab Systems' messaging, helping with progress around Kolab Now, collaborating on development process improvement, working on the design and implementation of Akonadi Next, the occasional sales engineering call ... so I've been kept busy, and been able to work with a number of excellent people in the process, both in Kolab Systems and the Kolab community at large.

While much of that list of topics doesn't immediately bring "writing code" to mind, I have had the opportunity to work on a few "hands on keyboard, writing code" projects. Thankfully. ;) One of the more interesting ones, at least to me, has been work on an emerging data loss prevention and audit trail system for Kolab called Bonnie. It's one of those things that companies and governmental users tend to really want, but which is fairly non-trivial to achieve. There are, in broad strokes, three main steps in such a system:

- Capturing and recording events
- Storing data payloads associated with those events
- Recreating histories which can be reviewed and even restored from

I've been primarily working on the first two items, while a colleague has been focusing on the third. Since each of these points is a relatively large topic on its own, I'll be covering each individually in subsequent blog entries. We'll start in the next blog entry by looking at event capture and storage, why it is necessary (as opposed to, e.g., simply combing through system logs) and what we gain from it. I'll also introduce one of the Bonnie components, Egara, which is responsible for this set of functionality.
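The details of how Bonnie approaches each step are the subject of the upcoming posts, but as a purely conceptual sketch (the class and names below are invented for illustration and are not Bonnie's code), the three steps fit together roughly like this:

# A purely conceptual sketch of the three steps described above; the class and
# method names are invented for illustration and are not part of Bonnie.

import time


class AuditTrail:
    def __init__(self):
        self.events = []        # step 1: captured and recorded events
        self.payloads = {}      # step 2: data payloads keyed by object id

    def record(self, actor, action, object_id, payload=None):
        """Capture an event, and optionally archive the object's data payload."""
        self.events.append({"ts": time.time(), "actor": actor,
                            "action": action, "object": object_id})
        if payload is not None:
            self.payloads[object_id] = payload

    def history(self, object_id):
        """Step 3: recreate the history of a single object for review or restore."""
        timeline = [e for e in self.events if e["object"] == object_id]
        return timeline, self.payloads.get(object_id)


trail = AuditTrail()
trail.record("jane@example.com", "created", "note-1", payload="Meeting notes ...")
trail.record("jane@example.com", "deleted", "note-1")
print(trail.history("note-1"))  # full timeline plus the archived payload to restore from

The real system, of course, has to do this across a distributed cluster and with tamper-resistance in mind, which is exactly what the next posts dig into.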
Posted about 9 years ago by Aaron Seigo
Today "everyone" is online in one form or another, and it has transformed how many people connect, communicate, share and collaborate with others. To think that the Internet really only hit the mainstream some 20 years ago. It has been an amazingly ... [More] swift and far-reaching shift that has touched people's personal and professional lives. So it is no surprise that the concept of eGovernment is a hot one and much talked about. However, the reality on the ground is that governments tend not to be the swiftest sort of organizations when it comes to adopting change. (Which is not a bad thing; but that's a topic for another blog perhaps.) Figuring out how to modernize the communication and interaction of government with their constituencies seems to largely still be in the future. Even in countries where everyone is posting pictures taken on their smartphones of their lunch to all their friends (or the world ...), governments seem to still be trying to figure out how to use the Internet as an effective tool for democratic discourse. The Netherlands is a few steps ahead of most, however. They have an active social media presence which is used by numerous government offices to collaborate with each other as well as to interact with the populace. Best of all, they aren't using a proprietary, lock-in platform hosted by a private company oversees somewhere. No, they use a free software social media framework that was designed specifically for this: Pleio. They have somewhere around 100,000 users of the system and it is both actively used and developed to further the aims of the eGovernment initiative. It is, in fact, an initiative of the Programme Office 2.0 with the Treasury department, making it a purposeful program rather than simply a happy accident. In their own words: The complexity of society and the need for citizens to ask for an integrated service platform where officials can easily collaborate with each other and engage citizens. In addition, hundreds of government organizations all have the same sort of functionality needed in their operations and services. At this time, each organization is still largely trying to reinvent the wheel and independently purchase technical solutions. That could be done better. And cheaper. Gladly nowadays new resources are available to work together government-wide in a smart way and to exchange knowledge. Pleio is the platform for this. Just a few days ago it was anounced publicly that not only is the Pleio community is hard at work on improving the platform to raise the bar yet again, but that Kolab will be a part of that. A joint development project has been agreed to and is now underway as part of a new Pleio pilot project. You can read more about the collaboration here.   [Less]
Posted about 9 years ago by roundcube
This is the first service release to update the stable version 1.1. It contains some important bug fixes and improvements in error handling as well as a few new features and configuration options. See the full changelog here. It's considered stable, and we recommend updating all production installations of Roundcube to this version. Download it from roundcube.net/download. Please do back up your data before updating!
Posted about 9 years ago by Aaron Seigo
We just announced that registration and presentation proposal submission is now open for the Kolab Summit 2015, which is being held in The Hague on May 2-3. Just as Kolab itself is made up of many technologies, many technologies will be present at the summit. In addition to topics on Kolab, there will be presentations covering Roundcube, KDE Kontact and Akonadi, Cyrus IMAP, and OpenChange, among others. We have some pretty nifty announcements and reveals already lined up for the event, which will be keynoted by Georg Greve (CEO of Kolab Systems AG) and Jeroen van Meeuwen (lead Kolab architect). Along with the usual BoFs and hacking rooms, this should be quite an enjoyable event.

As an additional and fun twist, the Kolab Summit will be co-located with the openSUSE conference, which is going on at the same time, so we'll have lots of opportunity for "hallway talks" with Geekos as well. In fact, I'll be giving a keynote presentation at the openSUSE conference about freedom as innovation; a sort of "get the engines started" presentation that I hope provokes some thought and gets some energy flowing.
Posted about 9 years ago by Andreas Cordes
Hello! Seafile is open source cloud software which is free for private use. There is also a professional edition, but it is not necessary for my needs: I just want to sync files across more than one device. In the past I used ownCloud, which was pretty good for my needs. At first I decided to integrate ownCloud into Kolab as a backend, but I was a bit short on time and developing a new driver for Chwala was not so easy. After a while I noticed that Seafile is integrated in Chwala 0.3.0, and with Kolab 3.4 it is quite stable to install. Last week I got it all working, so here is my step-by-step guide:

1. Install Kolab 3.4 and test it.
It's obvious that you need Kolab for this. Please refer to the kolab.org web page for an installation guide for your distro.

2. Install Seafile and test it.
The Seafile homepage will guide you through everything you need for that. So just download, extract and run it.

3. Connect Seafile to the Kolab LDAP.
Following Seafile -> Using LDAP is exactly what I did. My installation was spread over /opt/seafile, /opt/seafile/seafile-server-latest and /mnt/seafile, but you'll find your ccnet.conf; add the LDAP part to it (please change the values according to your installation):

[LDAP]
HOST = ldap://127.0.0.1
BASE = ou=People,dc=example,dc=com
USER_DN = cn=directory manager
PASSWORD = youdon'tknowjack
LOGIN_ATTR = mail

Now test Seafile again and check that you can log in with your main Kolab mail address. Please keep in mind that each user has to log in to Seafile once in order to get the right folders.

4. If everything is ok, use Apache as a proxy for Seafile.
Following Seafile -> Deploy with Apache was ok for me as a first step. Be warned, though: this will break your Kolab if you follow the steps directly.

5. If this is ok as well, fine-tune the Apache configuration for Seafile a bit.
These lines should be changed in your apache.conf (or vhost):

## seahub
#
RewriteRule ^/(media.*)$ /$1 [QSA,L,PT]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ /seahub.fcgi$1 [QSA,L,E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]

This redirects all requests to Seafile, but if you only have one SSL certificate and one domain, you have to exclude all the Kolab modules from the rewrite. So please add the following RewriteCond lines to the config:

RewriteRule ^/(media.*)$ /$1 [QSA,L,PT]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_URI} !^/roundcubemail.*
RewriteCond %{REQUEST_URI} !^/Microsoft.*
RewriteCond %{REQUEST_URI} !^/iRony.*
RewriteCond %{REQUEST_URI} !^/chwala.*
RewriteCond %{REQUEST_URI} !^/kolab.*
RewriteRule ^(.*)$ /seahub.fcgi$1 [QSA,L,E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]

The first RewriteCond tells Apache to rewrite the URL only if the requested file does not exist. For the Kolab installation there are some more URLs that must not be checked against an existing file, because they are Aliases in the Apache sense. So we skip the rewrite when the URL starts with "roundcubemail", "Microsoft" (for ActiveSync) and so on.

Reading the configuration above:

- Rewrite all URLs starting with /media.
- Rewrite the URL when the requested file does not exist (!-f)
- AND (implicitly) the request does not start with roundcubemail (!^/roundcubemail.*)
- AND the request does not start with Microsoft
- AND the request does not start with iRony
- AND the request does not start with chwala
- AND the request does not start with kolab
- TO /seahub.fcgi plus the full request path (^(.*)$ /seahub.fcgi$1).

Now test your Kolab and Seafile again; everything should now work on the same server.
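If you want to sanity-check these rewrite exclusions without clicking through every application, a small script along the following lines can help. This is only a rough sketch: the host name and the list of paths are examples (adjust them to your own installation), and it merely confirms that each URL answers at all.

# Rough sanity check for the Apache rewrite exclusions above.
# The host and paths are examples; adjust them to your own installation.

import ssl
import urllib.error
import urllib.request

BASE = "https://www.example.com"          # your Kolab/Seafile host
PATHS = ["/roundcubemail/", "/chwala/", "/iRony/", "/kolab-webadmin/", "/"]

# Such a setup often uses a self-signed certificate; skip verification for this test only.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

for path in PATHS:
    try:
        with urllib.request.urlopen(BASE + path, context=ctx, timeout=10) as resp:
            print(path, "->", resp.status)
    except urllib.error.HTTPError as err:
        # 401/403 are fine here: the app answered, it just wants a login first.
        print(path, "->", err.code)
    except Exception as err:
        print(path, "-> FAILED:", err)

A Kolab path that suddenly returns a 404 usually means its RewriteCond exclusion is missing or misspelled, because the request was handed to seahub.fcgi instead.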
It's time to combine Chwala with Seafile, which is the most interesting part. :-) Edit your /etc/roundcubemail/config.inc.php file and add all the Seafile settings:

// seafile
$config['fileapi_backend'] = 'seafile';
$config['fileapi_seafile_host'] = 'www.example.com';
$config['fileapi_seafile_ssl_verify_peer'] = false;
$config['fileapi_seafile_ssl_verify_host'] = false;
$config['fileapi_seafile_cache_ttl'] = '14d';
$config['fileapi_seafile_debug'] = true;

That's it. Now you can use your Seafile server in Kolab as a file storage. But keep in mind that password-protected folders are not accessible through Chwala this way.

Feel free to leave any comments. Greetz!