
News

Posted almost 7 years ago by seigo
I bought an Intel Pinnacle Canyon NUC5CPYH, a.k.a. an Intel NUC barebone, for my home network. This ships with an Intel Celeron N3050 processor, which RHEL 7.1 is so kind to tell me has not been tested. However, the installation is running fine, so I expect this will “Just Work(TM)”.  
Posted almost 7 years ago by mollekopf
After having reached the first milestone of a read-only prototype of Kube, it's time to provide an outlook on what we plan to achieve in 2016. I have put together a roadmap of what I think are realistic goals for 2016. Obviously this will evolve over time, and we'll keep adjusting it as we advance faster or slower, or simply move in other directions.

Since we're building a completely new technology stack, a lot of the roadmap revolves around ensuring that we can create what we envision technology-wise, and that we have the necessary infrastructure to move fast while having confidence in the quality. It's important that we do this before growing the codebase too much, so we can still make the necessary adjustments without having too much code to adjust.

On the UX side we'll want to work on concepts and prototypes, although we'll probably keep the first implemented UIs fairly simple and standard. Over time we have to build a vision of where we want to go in the long run, so it can steer the development. This will be a long and ongoing process involving not only wireframes and mockups, but hopefully also user research and analysis of our problem space (how do we communicate, rather than how does GMail work). However, since we can't conjure that grander vision up overnight, the primary goal for us this year is a simple email client that doesn't do much, but does what it does well. Hopefully we can go beyond that with some other components available (calendar, addressbook, …), or perhaps something simple already available on mobile, but we'll have to see how fast it goes first. Overall we'll want to focus on quality rather than quantity, to prove what quality level we're able to reach and to ensure we're well lined up to move fast in the following year(s).

The Roadmap

I split the roadmap into four quarters, each having its own focus.
Note that Akonadi Next has been renamed to Sink to avoid confusion (now that Akonadi 5 is released and we had planned for Akonadi2…).

First quarter milestones:
– Read-only Kube Mail prototype.
– Fully functional Kube Mail prototype, but with a very limited feature set (read and compose mail).
– Test environment that is also usable by designers.
– Logging mechanism in Sink and potentially Kube so we can produce comprehensive logs.
– Automatic gathering of performance statistics so we can benchmark and prove progress over time.
– The code inventory [1] is completed and we know what features we used to have in Kontact.
– Sink Maildir resource.
– Start gathering requirements for Kube Mail (features, …).
– Start of UX design work.

We focus on pushing forward functionality-wise, refactoring the codebase every now and then to get a feeling for how we can build applications with the new framework. The UI is not a major focus, but we may start doing some preparatory work on how things eventually should be. Not much attention is paid to usability etc. Once we have the Kube Mail prototype ready, with a minimum set of features but a reasonable codebase and stability (so it becomes somewhat useful for those who want to give it a try), we start communicating about it more with regular blog posts etc.

Second quarter milestones:
– Build on Windows.
– Build on Mac.
– Comprehensive automated testing of the full application.
– First prototype on Android.
– First prototype on Plasma Mobile?
– Sink IMAP resource.
– Sink Kolab resource.
– Sink ICal resource.
– Start gathering performance requirements for Kube Mail (responsiveness, disk usage, …).
– Define the target feature set to reach by the end of the year.

We ensure the codebase builds on all major platforms and keeps building and working everywhere. We ensure we can test everything we need, and work out what we want to test (i.e. including the UI or not).
Kube is extended with further functionality and we develop the means to access a Kolab/IMAP server (perhaps with mail only).

Third quarter milestones:
– Prototype for Kube Shell.
– Prototype for Kube Calendar.
– Potentially prototypes for other Kube applications.
– Rough UX design for most applications that are part of Kube.
– Implementation of further features in Kube Mail according to the defined feature set.

We start working on prototypes with other datatypes, which includes data access as well as UI. The implemented UIs are not final, but we end up with a usable calendar. We keep working on the concepts and designs, and we approximately know what we want to end up with.

Fourth quarter milestones:
– Implementation of the final UI for the Kube Mail release.
– Potentially also implementation of a final UI for other components already.
– UX design for all applications "completed" (it's never complete, but we have a version that we want to implement).
– Tests with users.

We polish Kube Mail, ensure it's easy to install and set up on all platforms, and that all the implemented features work flawlessly.

Progress so far

Currently we have a prototype that has:
– A read-only maildir resource.
– HTML rendering of emails.
– Basic actions such as deleting a mail.

My plan is to hook the Maildir resource up with offlineimap, so I can start reading my mail in Kube within the next weeks ;-) Next to this we're working on infrastructure, documentation, planning, UI design… Current progress can be followed in our Phabricator projects [2][3], and the documentation, while still lagging behind, is starting to take shape in the "docs/" subdirectory of the respective repositories [4][5]. There's meanwhile also a prototype of a docker container to experiment with available [6], and the Sink documentation explains how we currently build Sink and Kube inside a docker container with kdesrcbuild.
Join the Fun

We have weekly hangouts that you are welcome to join (just contact me directly or write to the kde-pim mailing list). The notes are on notes.kde.org and are regularly sent to the kdepim mailing list as well. As you can guess, the project is in a very early state, so we're still mostly trying to get the whole framework into shape, and not so much writing the actual application. However, if you're interested in trying to build the system on other platforms, working on UI concepts, or generally tinkering with the codebase we have and helping shape what it should become, you're more than welcome to join =)

[1] git://anongit.kde.org/scratch/aseigo/KontactCodebaseInventory.git
[2] https://phabricator.kde.org/project/profile/5/
[3] https://phabricator.kde.org/project/profile/43/
[4] git://anongit.kde.org/akonadi-next
[5] git://anongit.kde.org/kontact-quick
[6] https://github.com/cmollekopf/docker/blob/master/kubestandalone/run.sh
Posted almost 7 years ago by Timotheus Pokorra
There has been a new release of Roundcube: https://roundcube.net/news/2015/12/26/updates-1.1.4-and-1.0.8-released/ I have updated the OBS package with this version, and you can run yum update or apt-get update && apt-get upgrade, depending on your Linux OS. For more details about how to create such an update, or how to work around the issue when Roundcube has been updated in the OS already but not yet in the Kolab OBS, have a look at my older blog post: http://www.pokorra.de/2015/10/new-roundcube-1-1-3-release-for-kolab-3-4-updates/
Posted almost 7 years ago by Aaron Seigo
Christian recently blogged about a small command line tool that added a bunch of useful functionality to the client demo application for interacting with Akonadi Next from the command line. This inspired me to reach into my hard drive, pull out a bit of code I'd written for a side project of mine last year, and turn the Akonadi Next command line up to 11. Say hello to akonadish.

akonadish supports all the commands Christian wrote about, and adds:
– piping and file redirect of commands, for More Unix(tm)
– the ability to be used in stand-alone scripts (#!/usr/bin/env akonadish style)
– an interactive shell featuring command history, tab completion, configuration knobs, and more

Here's a quick demo of it I recorded this evening (please excuse my stuffy nose ... recovering from a Christmas cold):

We feel this will be a big help for developers, power users and system administrators alike; in fact, we could have used a tool exactly like this for Akonadi with a client just this month ... alas, this only exists for Akonadi Next. I will continue to develop the tool in response to user need. That may include things like access to useful system information (user name, e.g.?), new Akonadi Next commands, perhaps even the ability to define custom functions that combine multiple commands into one call... it's rather flexible, all-in-all.

Adopt it for your own

Speaking of which, if you have a project that would benefit from something similar, this tool can easily be re-purposed. The Akonadi parts are all kept in their own files, while the functionality of the shell itself is entirely generic. You can add new custom syntax by adding new modules that register syntax which references functions to run in response.
A simple command module looks like this:

    namespace Example
    {

    bool hello(const QStringList &args, State &state)
    {
        state.printLine("Hello to you, too!");
        return true;
    }

    Syntax::List syntax()
    {
        return Syntax::List() << Syntax("hello", QObject::tr("Description"), &Example::hello);
    }

    REGISTER_SYNTAX(Example)

    }

Autocompletion is provided via a lambda assigned to the Syntax object's completer member:

    sync.completer = &AkonadishUtils::resourceCompleter;

and sub-commands can be added by adding a Syntax object to the children member:

    get.children << Syntax("debug", QObject::tr("The current debug level from 0 to 6"), &CoreSyntax::printDebugLevel);

Commands can be run in an event loop when async results are needed by adding the EventDriven flag:

    Syntax sync("sync", QObject::tr("..."), &AkonadiSync::sync, Syntax::EventDriven);

and autocompleters can do similarly, using the State object passed in, which provides commandStarted/commandFinished methods. All in all, pretty straightforward. If there is enough demand for it, I could even make it load commands from a plugin that matches the name of the binary (think: ln -s genericappsh myappsh), allowing it to be used entirely generically with little fuss. shrug I doubt it will come to that, but these are the possibilities that float through my head as I wait for compiles to finish. ;) For the curious, the code can be found here.
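For illustration, the module/Syntax registration pattern described above can be transcribed into a rough Python analogue. All names here are hypothetical sketches of the idea; the real akonadish is C++/Qt:

```python
class Syntax:
    """One command word: a handler, help text, optional children
    and an optional tab-completer (mirrors the C++ Syntax object)."""
    def __init__(self, keyword, help_text, handler=None):
        self.keyword, self.help_text, self.handler = keyword, help_text, handler
        self.children = []    # sub-commands, e.g. "get debug"
        self.completer = None

registry = {}

def register_syntax(syntax_list):
    """Analogue of REGISTER_SYNTAX: make a module's commands known."""
    for s in syntax_list:
        registry[s.keyword] = s

def run(line, state):
    """Dispatch a command line to the matching handler,
    descending into sub-commands as long as words keep matching."""
    words = line.split()
    if not words:
        return None
    syntax = registry.get(words[0])
    if syntax is None:
        return state.print_line("unknown command: " + words[0])
    args = words[1:]
    while args:
        child = next((c for c in syntax.children if c.keyword == args[0]), None)
        if child is None:
            break
        syntax, args = child, args[1:]
    return syntax.handler(args, state)

# The "hello" module from the post, transcribed:
class State:
    def print_line(self, text):
        print(text)

register_syntax([Syntax("hello", "Description",
                        lambda args, state: state.print_line("Hello to you, too!"))])
run("hello", State())  # prints "Hello to you, too!"
```

The same split holds as in the C++ version: the dispatch loop is entirely generic, and only the registered modules know anything about the application.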
Posted almost 7 years ago by seigo
In the past few months we've been working on implementing document editing in the Kolab web client. By document editing I mean the ability to edit Open Document Format documents, including collaborative work of many users on the same document. This is possible thanks to the WebODF editor, the Manticore service, Kolab's Chwala and Roundcube.

WebODF

WebODF is a JavaScript library that makes it easy to add Open Document Format (ODF) support to web applications. It provides a standalone text document editor, which we already use for read-only document preview.

Manticore

Manticore is a Node.js project that implements handling of (realtime) collaboration in the WebODF editor. Using websocket technology, it is responsible for merging and saving document changes and for managing collaborators. It can be used standalone (with its own user interface) or as an API.

Chwala

Kolab users should already know Chwala. It is the service used in Kolab for file storage operations. So, when you access a file in Roundcube, it will use the Chwala API. Recent development in Chwala added access rights support and handling of document editing sessions and collaborator invitations.

… and the user interface

So, we've integrated all of these components in Roundcube via the kolab_files plugin. The Files interface has been extended to provide a way to create and edit documents, with invitations to editing sessions. In the main interface you can observe a few new elements. In the toolbar I added Edit, Create and Rename buttons. On the files list you can notice a new icon which indicates editing session status. It gives info about the existence of an editing session and its creator. When you click the Create button you'll be asked for a document type, name and location. View gives a document preview in read-only mode. When you decide to edit a document which is already being edited (a non-terminated editing session exists), you'll be asked to join the session or create a new one.
The editing session is opened in a new window. On the right you'll see a list of session collaborators. As the session creator you can invite other users to the session. When you add a participant, they will be informed with a notification message in Roundcube. An invited user can join the session or decline the invitation. This is not all, and it is not yet finished. We plan to add a better way to view ongoing sessions and invitations. There are some issues to be fixed. There's still no Kolab skin support. The Manticore installation is not simple. But it's quite a nice prototype of the functionality we want.
Posted almost 7 years ago by roundcube
We just published updates to both stable versions 1.0 and 1.1, delivering important bug fixes, one of which seals a potential path traversal vulnerability reported by High-Tech Bridge Security Research Lab. A second security improvement adds some measures against brute-force attacks. See the full changelog here. Both versions are considered stable and we recommend updating all production installations of Roundcube with either of these versions. Download them from roundcube.net/download. If you prefer to patch your installation for the path traversal vulnerability only, you can find patches on our download mirrors for versions 1.0 and 1.1. As usual, don't forget to back up your data before updating!
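As an aside, the general shape of a path traversal fix is easy to illustrate. The following Python sketch is a generic illustration of the technique only, not Roundcube's actual patch (Roundcube is written in PHP), and the paths in the comments are made up:

```python
import os

def safe_join(base_dir, user_path):
    """Join a user-supplied path onto a base directory, rejecting
    traversal attempts such as '../../etc/passwd'.
    Generic illustration -- not the actual Roundcube fix."""
    base = os.path.realpath(base_dir)
    candidate = os.path.realpath(os.path.join(base, user_path))
    # After resolving '..' components and symlinks, the result
    # must still be the base directory or something inside it.
    if candidate != base and not candidate.startswith(base + os.sep):
        raise ValueError("path traversal attempt: %r" % user_path)
    return candidate

# safe_join("/var/www/skins", "larry/logo.png")   -> allowed
# safe_join("/var/www/skins", "../../etc/passwd") -> raises ValueError
```

The key point is validating the path after normalization, not before; a simple substring check on the raw input is easy to bypass.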
Posted almost 7 years ago by mollekopf
For Akonadi Next I built a little utility that I intend to call "akonadi_cmd", and it's slowly becoming useful. It started as the first Akonadi Next client, for me to experiment a bit with the API, but it recently gained a bunch of commands and can now be used for various tasks. The syntax is the following:

    akonadi_cmd COMMAND TYPE ...

The Akonadi Next API always works on a single type, so you can e.g. query for folders, or mails, but not for folders and mails. Instead you query for the mails with a folder filter, if that's what you're looking for. akonadi_cmd's syntax reflects that.

Commands

list: Allows you to execute queries and retrieve results in the form of lists. Eventually you will be able to specify which properties should be retrieved; for now it's a hardcoded list for each type. It's generally useful to check what the database contains and whether queries work.
count: Like list, but only outputs the result count.
stat: Some statistics on how large the database is, how the size is distributed across indexes, etc.
create/modify/delete: Allows you to create/modify/delete entities. Currently this is only of limited use, but it already works nicely with resources. Eventually it will allow creating/modifying/deleting all kinds of entities such as events/mails/folders/….
clear: Drops all caches of a resource but leaves the config intact. This is useful while developing, because it e.g. allows retrying a sync without having to configure the resource again.
synchronize: Allows you to synchronize a resource. For an IMAP resource that means the remote server is contacted and the local dataset is brought up to date; for a maildir resource it simply means all data is indexed and becomes queriable by akonadi. Eventually this will also allow specifying a query, to e.g. only synchronize a specific folder.
show: Provides the same contents as "list", but in a graphical tree view.
This was really just a way for me to test whether I can actually get data into a view, so I'm not sure if it will survive as a command. For the time being it's nice to compare its performance to the QML counterpart.

Setting up a new resource instance

akonadi_cmd is already the primary way to create resource instances:

    akonadi_cmd create resource org.kde.maildir path /home/developer/maildir1

This creates a resource of type "org.kde.maildir" with a "path" configuration value of "/home/developer/maildir1". Resources are stored in configuration files, so all this does is write to some config files.

    akonadi_cmd list resource

By listing all available resources we can find the identifier that was automatically assigned to the resource.

    akonadi_cmd synchronize org.kde.maildir.instance1

This triggers the actual synchronization in the resource, and from there on the data is available.

    akonadi_cmd list folder org.kde.maildir.instance1

This will get you all folders that are in the resource.

    akonadi_cmd remove resource org.kde.maildir.instance1

And this will finally remove all traces of the resource instance.

Implementation

What's perhaps interesting from the implementation side is that the command line tool uses exactly the same models that we also use in Kube.
    Akonadi2::Query query;
    query.resources << res.toLatin1();
    auto model = loadModel(type, query);
    QObject::connect(model.data(), &QAbstractItemModel::rowsInserted,
                     [model](const QModelIndex &index, int start, int end) {
        for (int i = start; i <= end; i++) {
            std::cout << "\tRow " << model->rowCount() << ":\t ";
            std::cout << "\t" << model->data(model->index(i, 0, index), Akonadi2::Store::DomainObjectBaseRole).value<Akonadi2::ApplicationDomain::ApplicationDomainType::Ptr>()->identifier().toStdString() << "\t";
            for (int col = 0; col < model->columnCount(QModelIndex()); col++) {
                std::cout << "\t|" << model->data(model->index(i, col, index)).toString().toStdString();
            }
            std::cout << std::endl;
        }
    });
    QObject::connect(model.data(), &QAbstractItemModel::dataChanged,
                     [model, &app](const QModelIndex &, const QModelIndex &, const QVector<int> &roles) {
        if (roles.contains(Akonadi2::Store::ChildrenFetchedRole)) {
            app.quit();
        }
    });
    if (!model->data(QModelIndex(), Akonadi2::Store::ChildrenFetchedRole).toBool()) {
        return app.exec();
    }

This is possible because we're using QAbstractItemModel as an asynchronous result set. While one could argue whether that is the best API for an application that is essentially synchronous, it still shows that the API is useful for a variety of applications. And last but not least, since I figured out how to record animated gifs, here's the above procedure in a live demo ;-)
Posted almost 7 years ago by seigo
When you start with Kontact you have to wait until the first sync of your mails with the IMAP or Kolab server is done. This is very annoying, because the first impression is that Kontact is slow. So why not start this first sync with a script, so that the data is already available when the user starts Kontact for the first time?

1. Set up akonadi & kontact

We need to add the required config files to a new user home. This is simply copying config files to the new user home; we just need to replace the username, email address and the password. Okay, that sounds quite easy, doesn't it? Oh wait: the password must be stored inside KWallet. KWallet can be accessed from the command line with kwalletcli. Unfortunately we can only use kwallet files not encrypted with a password, because there is no way to enter the password with kwalletcli. Maybe pam-kwallet would be a solution; for Plasma 5 there is an official component for this, kwallet-pam, but I haven't tested it yet. As an alternative to copying files around, we could have used the Kiosk system from KDE. With that you are able to pre-seed the configuration files for a user, and you additionally have the possibility to roll out changes, for example if the server address changes. But for a smaller setup this is kind of overkill.

2. Start needed services

For starting a sync, we first need Akonadi running, and Akonadi depends on a running D-Bus and kwalletd. KWallet refuses to start without a running X server and is not happy with just Xvfb.
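The config pre-seeding in step 1 is essentially template substitution over copied files. A minimal sketch of that idea; the target directory layout, file names and placeholder names ($name, $email, …) are hypothetical, not the actual Kontact/Akonadi rc file contents:

```python
import os
from string import Template

def seed_config(template_dir, home, values):
    """Copy config templates into a new user home, substituting
    placeholders like $email and $name. Paths and placeholders are
    illustrative only -- the real setup copies Kontact rc files."""
    target = os.path.join(home, ".kde", "share", "config")
    os.makedirs(target, exist_ok=True)
    for name in os.listdir(template_dir):
        with open(os.path.join(template_dir, name)) as f:
            content = Template(f.read()).substitute(values)
        with open(os.path.join(target, name), "w") as f:
            f.write(content)

# seed_config("templates", "/home/doe",
#             {"name": "doe", "email": "doe@example.com"})
```

The password is the one value that cannot be handled this way, which is exactly why KWallet enters the picture above.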
3. Triggering the sync via D-Bus

Akonadi has a great D-Bus interface, so it is quite easy to trigger a sync and track the end of the sync:

    import gobject
    import dbus
    from dbus.mainloop.glib import DBusGMainLoop

    def status(status, msg):
        if status == 0:
            gobject.timeout_add(1, loop.quit)

    DBusGMainLoop(set_as_default=True)
    session_bus = dbus.SessionBus()
    proxy = session_bus.get_object('org.freedesktop.Akonadi.Resource.akonadi_kolab_resource_0', "/")
    proxy.connect_to_signal("status", status, dbus_interface="org.freedesktop.Akonadi.Agent.Status")
    proxy.synchronize(dbus_interface='org.freedesktop.Akonadi.Resource')
    loop = gobject.MainLoop()
    loop.run()

The status function receives all updates, and status == 0 indicates the end of a sync. Other than that, it is just getting the SessionBus, triggering the synchronize method, and waiting until the loop ends.

4. Glue everything together

Having all parts in place, it can be glued into a nice script. As the language I use Python; together with some syntactic sugar it is quite small:

    config.setupConfigDirs(home, fullName, email, name, uid, password)
    with DBusServer():
        logging.info("set kwallet password")
        kwalletbinding.kwallet_put("imap", "akonadi_kolab_resource_0rc", password)
        with akonadi.AkonadiServer(open("akonadi.log", "w"), open("akonadi.err", "w")):
            logging.info("trigger fullSync")
            akonadi.fullSync(akonadi_resource_name)

First we create the config files. Then we need a D-Bus server; if one is not available, it is started (and stopped after leaving the with statement). Now the password is inserted into KWallet and the Akonadi server is started. Once Akonadi is running, the fullSync is triggered. You can find the whole thing at github: hefee/akonadi-initalsync

5. Testing

Having a nice script, the last bit is that we want to test it. To have a fully controlled environment we use docker images for that: one image for the server and one with this script. As a base we use Ubuntu 12.04 and our OBS builds for Kontact.
Because we already started with docker images for other parts of the deployment of Kontact, I added them to the known repository github:/cmollekopf/docker

    ipython ./automatedupdate/build.py  # build kolabclient/percise
    python testenv.py start set1        # start the kolab server (set1)

Start the sync:

    % ipython automatedupdate/run.py
    developer:/work$ cd akonadi-initalsync/
    developer:/work/akonadi-initalsync$ ./test.sh
    export QT_GRAPHICSSYSTEM=native
    export QT_X11_NO_MITSHM=1
    sudo setfacl -m user:developer:rw /dev/dri/card0
    export KDE_DEBUG=1
    USER=doe
    PASSWORD=Welcome2KolabSystems
    sleep 2
    sudo /usr/sbin/mysqld
    151215 14:17:25 [Warning] Using unique option prefix key_buffer instead of key_buffer_size is deprecated and will be removed in a future release. Please use the full name instead.
    151215 14:17:25 [Note] /usr/sbin/mysqld (mysqld 5.5.46-0ubuntu0.12.04.2) starting as process 16 ...
    sudo mysql --defaults-extra-file=/etc/mysql/debian.cnf
    ./initalsync.py 'John Doe' doe@example.com doe Welcome2KolabSystems akonadi_kolab_resource_0
    INFO:root:setup configs
    INFO:DBusServer:starting dbus...
    INFO:root:set kwallet password
    INFO:Akonadi:starting akonadi ...
    INFO:root:trigger fullSync
    INFO:AkonadiSync:fullSync for akonadi_kolab_resource_0 started
    INFO:AkonadiSync:fullSync for akonadi_kolab_resource_0 was successfull.
    INFO:Akonadi:stopping akonadi ...
    INFO:DBusServer:stopping dbus...

To be honest, we need some more quirks, because we need to set up the X11 forwarding into docker. And in this case we also want to run one MySQL server for all users, not a MySQL server per user; that's why we also need to start MySQL by hand and add a database that can be used by Akonadi. The real syncing begins with the line:

    ./initalsync.py 'John Doe' doe@example.com doe Welcome2KolabSystems akonadi_kolab_resource_0
Posted almost 7 years ago by Timotheus Pokorra
First, I want to say that I am glad that there is https://obs.kolabsys.com, the OBS instance maintained and sponsored by Kolab Systems. Jeroen put a lot of work into making that system work. Unfortunately, it seems Jeroen is the only one maintaining it, and that is not a healthy situation, in several regards. On Wednesday, we have seen OBS hang, and Jeroen had to restart it during his holidays. At least that is what I suspect; there has not been any response on the mailing list or IRC to our questions about the downtime. It is not right for any employee to have to do such tasks during his well-deserved holidays. The other point is updating the operating systems: CentOS 7.2 is out, and the current installation of Kolab on CentOS 7 does not work, due to incompatibilities with newer CentOS 7.2 packages. The libcalendaring package would need a rebuild against CentOS 7.2. See for details: http://lists.kolab.org/pipermail/users/2015-December/020317.html

I see several options:
– Kolab Systems hires more sysadmin engineers to maintain the growing complexity of servers and build infrastructure.
– Trusted members of the community get permission to add new operating systems to OBS, and to restart the server. On the other hand, that is a complex installation, and with enterprise customers also using the Kolab Systems OBS, I don't think that is a valid alternative.
– I am developing my own LightBuildServer (aka LBS), which could allow everyone to easily install their own build environment for various operating systems. I am building Kolab packages on LBS, even some private packages for use at TBits.net, patched with our (public) ISP extensions for Kolab. But again this is risky, as long as I am the only one developing it.
– At least for CentOS/Fedora, we could use the Fedora infrastructure provided by RedHat. That gives us the benefit of quick availability of the latest releases of the OS, and of infrastructure that is maintained and used by many people.
It is possible to duplicate a Copr repository if you need to fix something yourself. So I tried to mirror the Kolab packages from the OBS to Copr. My goal is to still maintain the sources of the packages at OBS, so that everyone can benefit from the fixes, but I will take the source rpms and build them for CentOS and Fedora in my Copr repository. It is also split into a Release and an Updates repository. I have documented the process here: https://github.com/TBits/KolabScripts/tree/Kolab3.4/copr#build-instructions

A quick summary of those instructions: I have written a script that will
– download the source rpms from OBS (http://obs.kolabsys.com/repositories/Kolab:/3.4/CentOS_7/src/ and http://obs.kolabsys.com/repositories/Kolab:/3.4:/Updates/CentOS_7/src/)
– process the source rpms and tell you the right order of building the packages, which is something Copr cannot do
– upload the source rpms to my webspace at fedorapeople.org: https://tpokorra.fedorapeople.org/kolab/kolab-3.4/ and https://tpokorra.fedorapeople.org/kolab/kolab-3.4-updates/

Then I build the packages in the prescribed order at https://copr.fedoraproject.org/coprs/tpokorra/Kolab-3.4/ and https://copr.fedoraproject.org/coprs/tpokorra/Kolab-3.4-Updates/

Currently the Fedora 23 packages don't build completely yet; I need to look into this later. The CentOS 6 and CentOS 7 packages should be fine, I just tested them on clean machines! Here are the installation instructions: https://github.com/TBits/KolabScripts/tree/Kolab3.4/copr#installing-kolab-from-the-copr-repositories

At last, I want to mention that I only had to add one missing source rpm, for CentOS 6; see details at https://github.com/TBits/KolabScripts/tree/Kolab3.4/copr#python-pyasn1 The other packages are identical to the ones at OBS, apart from the CentOS 7 packages being built against CentOS 7.2, so that should be a direct improvement over the OBS packages.
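The "right order of building" that the script works out is, in essence, a topological sort over build dependencies. A minimal sketch of that step; the package names and the dependency map below are hypothetical examples (the real script derives the ordering from the source rpms, e.g. their BuildRequires):

```python
def build_order(deps):
    """Topologically sort packages so every package is built after
    its build dependencies. `deps` maps a package name to the set of
    packages it build-requires (hypothetical data, not real specs)."""
    order, done, visiting = [], set(), set()

    def visit(pkg):
        if pkg in done:
            return
        if pkg in visiting:
            raise ValueError("dependency cycle at %s" % pkg)
        visiting.add(pkg)
        for dep in sorted(deps.get(pkg, ())):
            visit(dep)  # build dependencies first
        visiting.discard(pkg)
        done.add(pkg)
        order.append(pkg)

    for pkg in sorted(deps):
        visit(pkg)
    return order

# Hypothetical example: libcalendaring must be built before the
# packages that build-require it.
print(build_order({
    "kolab-syncroton": {"libcalendaring", "roundcubemail"},
    "libcalendaring": set(),
    "roundcubemail": set(),
}))
# prints ['libcalendaring', 'roundcubemail', 'kolab-syncroton']
```

Copr only builds what you submit, in the order you submit it, which is why this ordering has to be computed up front.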