Posted about 11 years ago by Daniel Morlock
Just in time for the official Kolab 3.3 release, our Gentoo packages for Kolab 3.2 became stable and ready to use. This will clear the way for the upcoming release of Kolab 3.3 for Gentoo. Although this release won’t bring any major changes, it prepares the ground for upcoming developments and new features in Kolab 3.3. Further, with Kolab 3.2 we introduced an upgrade path between Kolab releases for Gentoo, and we will try our best to keep updates as consistent and comfortable as possible.
Posted about 11 years ago by Aaron Seigo
I’ve been a long-time fan of Kolab, the free software collaboration and groupware system. I have recommended it, and even helped deploy it a few times, since it launched some ten years ago. I used it back then with KDE’s Kontact, and still do to this day.
Kolab interested me because it had the opportunity to join such key free software products as LibreOffice (then Open Office) and Firefox in terms of importance and usage. Think about it: in a professional setting (business, government or educational) what key software tools are universally required? Certainly among them are tools to read and edit office documents; a world-class web browser; and collaboration software (email, calendaring, contacts, resource booking, notes, task lists, file sharing ...). The first two were increasingly well covered, but that last one? Not so much.
And then Kolab walked on to the stage and held out the promise of completing the trifecta.
However, there were years in between then and now when it was less obvious to me that Kolab had a glowing future. It was an amazing early-stage product that filled a huge gap in the free software stack, but development seemed to slow up and promotion was extremely limited. This felt like a small tragedy.
So when I heard that Kolab Systems was launching back in 2010 as a company centered around Kolab, I was excited: Could this be a vehicle which tows Kolab forward towards success? Could this new company propel Kolab effectively into the market which is currently the domain of proprietary products? Only time would tell ... I knew the founders personally, and figured that if anyone could pull this off it would be them. I also knew that they would work with freedom and upstream communities as priorities.
Four years later and Kolab Systems has indeed been successful in bringing Kolab significantly forward technologically and in adoption. Today Kolab is more reliable and has a spectacular set of features, thanks to the solid engineering team that has come together with the help and support of Kolab Systems.
Their efforts have also resulted in Kolab being used more: Fortune 100 companies are using Kolab, the city of Munich is currently migrating to it, there are educational systems using it and, of course, there is My Kolab, a hosted instance of Kolab that is being used by an ever-growing number of people.
Kolab Systems has also helped the free software it promotes and relies on flourish by investing in it: developers are paid to work on upstream free software such as Roundcube and Kontact in addition to the Kolab server; community facilitation and public promotion are in focus ... there’s a rather nice balance between company and community at play.
There is still a lot to do, however. This is not the end of a success story, perhaps only the end of the beginning. So when the opportunity arose to join Kolab Systems I didn't have to think twice. Starting this month I am joining the Kolab Systems team where I will be engaged in technical efforts (more so in the near term) as well as business and community development. I'm really excited to be joining what is a pretty stellar team of people working on technology I believe in.
Before wrapping up, I’d like to share something that helped convince me about Kolab Systems. I’ve known Georg Greve, Kolab Systems’ CEO and Free Software Foundation Europe founder, for a good number of years. One afternoon during a friendly walk-and-chat in the countryside near his house, he noted that we should not be satisfied with just making software that is free-as-in-freedom; it should also be awesome software, presented as something worth wanting. It is unrealistic to expect everyone to use free software solely because it is ethically the right thing to do (which it is), but we might expect people to choose free software because it is the most desirable option they know of. To phrase it as an aspiration:
Through excellence we can spread freedom.
I’ll probably write more about this philosophy another time, as there are a number of interesting facets to it. I’ll also write from time to time about the interesting things going on in the Kolab world ... but that’s all for another time. Right now I need to get back to making notes-on-emails-sync’d-with-a-kolab-server work well. :)
Posted about 11 years ago by roundcube
PGP encryption is one of the most frequently requested features for Roundcube, and for good reason: more and more people are starting to care about end-to-end encryption in their everyday communication. But unfortunately webmail applications currently can’t fully participate in this game, and doing PGP encryption right in web-based applications isn’t a simple task. Although there are ways and even some basic implementations, all of them have their pros and cons. And yet the ultimate solution is still missing.
Browser extensions to the rescue
In our opinion, the way to go is with a browser extension that does the important work and guards the keys. A crucial point is to keep the encryption component under the user’s full control, which in the browser and HTTP world can only be provided with a native browser plugin. And the good news is, there are working extensions available today. The most prominent one is probably Mailvelope, which detects encrypted message bodies in various webmail applications and also hooks into message composition to send signed and encrypted email messages with your favorite webmail app. Another very promising tool for end-to-end encryption is coming our way: p≡p; a browser extension is at least planned for the longer term. And even Google just started their own project with the recently announced End-to-End Chrome extension.
That’s a good start indeed. However, the encryption capabilities of those extensions only cover the message body and leave out attachments or even PGP/MIME messages, mostly because the extension has limited knowledge about the webmail app and there’s no interaction between the web app and the extension. On the other hand, the webmail app isn’t aware of the encryption features available in the user’s browser and therefore suppresses certain parts of a message, like signatures. A direct interaction between the webmail and the encryption extension could help add the missing pieces, like encrypted attachment upload and message signing. All we need to do is introduce the two components to each other.
From the webmail developer’s perspective
So here’s a loose list of functionality we’d like to see exposed by an encryption browser extension, and which we believe would contribute to an integrated solution for secure emailing; a rough sketch of what such an interface could look like follows the list.
A global (window.encryption-style) object providing functions to:
List of supported encryption technologies (pgp, s/mime)
Switch to manual mode (i.e. disabling automatic detection of webmail containers)
For message display:
Register message content area (jQuery-like selector)
Setters for message headers (e.g. sender, recipient)
Decrypt message content (String) directly
Validate signature (pass signature as argument)
Download and decrypt attachment from a given URL and
a) prompt for saving file
b) return a FileReader object for inline display
Bonus points: support for pgp/mime; implies full support for MIME message structures
For message composition:
Setters for message recipients (or recipient text fields)
Register message compose text area (jQuery-like selector)
… or functions to encrypt and/or sign message contents (String) directly
Query the existence of a public key/certificate for a given recipient address
File selector/upload with transparent encryption
… or an API to encrypt binary data (from a FileReader object into a new FileReader object)
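To make this wish-list a bit more concrete, here is a rough TypeScript sketch of what such a window.encryption-style object could look like. All names and signatures are invented for illustration; no such API exists in Mailvelope, p≡p or Google’s extension today.
// Rough sketch only: these names merely mirror the wish-list above.
interface EncryptionExtension {
  // general
  supportedTechnologies(): Promise<Array<"pgp" | "smime">>;
  setManualMode(enabled: boolean): void;
  // message display
  registerMessageArea(selector: string): void;
  setMessageHeaders(headers: { sender: string; recipients: string[] }): void;
  decryptMessage(content: string): Promise<string>;
  verifySignature(content: string, signature: string): Promise<boolean>;
  decryptAttachment(url: string): Promise<Blob>;
  // message composition
  registerComposeArea(selector: string): void;
  setComposeRecipients(recipients: string[]): void;
  encryptAndSign(content: string, recipients: string[]): Promise<string>;
  hasPublicKey(recipient: string): Promise<boolean>;
  encryptBinary(data: Blob, recipients: string[]): Promise<Blob>;
}
// A webmail client could then probe for the extension roughly like this:
const encryption = (window as any).encryption as EncryptionExtension | undefined;
if (encryption) {
  encryption.registerMessageArea("#messagebody");
  encryption.setComposeRecipients(["alice@example.org"]);
}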
Regarding file upload for attachments to an encrypted message, some extra challenges exist in an asynchronous client-server web application: attachment encryption requires the final recipients to be known before the (encrypted) file is uploaded to the server. If the list of recipients or the encryption settings change, already uploaded attachments become void and need to be re-encrypted and uploaded again.
And presumably that’s just one example of possible pitfalls in this endeavor to add full featured PGP encryption to webmail applications. Thus, dear developers of Mailvelope, p≡p, WebPG and Google, please take the above list as a source of inspiration for your further development. We’d gladly cooperate to add the missing pieces.
Posted about 11 years ago by Timotheus Pokorra
On the Kolab IRC channel we have had some issues with apt-get reporting connection failures and the like.
So I updated the blogpost from last year: http://www.pokorra.de/2013/10/downloading-from-obs-repo-via-php-proxy-file/
The Kolab Systems OBS now listens on port 80, so there is not really a need for a proxy anymore. But perhaps it helps with debugging the apt-get commands.
I have extended the scripts to work with apt-get on Debian/Ubuntu as well; the original script seems to have been for yum only.
I have set up a small PHP script on a server somewhere on the Internet.
In my sample configuration, I use a Debian server with Lighttpd and PHP.
Install:
apt-get install lighttpd spawn-fcgi php5-curl php5-cgi
changes to /etc/lighttpd/lighttpd.conf:
server.modules = (
    [...]
    "mod_fastcgi",
    "mod_rewrite",
)

fastcgi.server = ( ".php" => ((
    "bin-path" => "/usr/bin/php5-cgi",
    "socket" => "/tmp/php.socket",
    "max-procs" => 2,
    "bin-environment" => (
        "PHP_FCGI_CHILDREN" => "16",
        "PHP_FCGI_MAX_REQUESTS" => "10000"
    ),
    "bin-copy-environment" => (
        "PATH", "SHELL", "USER"
    ),
    "broken-scriptfilename" => "enable"
)))

url.rewrite-once = (
    "^/obs\.kolabsys\.com/index.php" => "$0",
    "^/obs\.kolabsys\.com/(.*)" => "/obs.kolabsys.com/index.php?page=$1"
)
and in /var/www/obs.kolabsys.com/index.php:
<?php
$proxyurl="http://kolabproxy2.pokorra.de";
$obsurl="http://obs.kolabsys.com";

// it seems file_get_contents does not return the full page
function curl_get_file_contents($URL)
{
    $c = curl_init();
    curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($c, CURLOPT_URL, str_replace('&amp;', '&', $URL));
    $contents = curl_exec($c);
    curl_close($c);
    if ($contents) return $contents;
    else return FALSE;
}

$page = $_GET['page'];
$filename = basename($page);
debug($page . " " . $filename);
$content = curl_get_file_contents($obsurl . "/" . $page);
if (strpos($content, "Error 404") !== false) {
    header("HTTP/1.0 404 Not Found");
    die();
}
if (substr($page, -strlen("/")) === "/")
{
    # print directory listing, rewriting the links to point at the proxy
    $content = str_replace($obsurl . "/", $proxyurl . "/obs.kolabsys.com/", $content);
    $content = str_replace('href="/', 'href="' . $proxyurl . '/obs.kolabsys.com/', $content);
    echo $content;
}
else if (substr($filename, -strlen(".repo")) === ".repo")
{
    header("Content-Type: text/plain");
    echo str_replace($obsurl . "/", $proxyurl . "/obs.kolabsys.com/", $content);
}
else
{
    #die($filename);
    header("Content-Type: application/octet-stream");
    header('Content-Disposition: attachment; filename="' . $filename . '"');
    header("Content-Transfer-Encoding: binary\n");
    echo curl_get_file_contents($obsurl . "/" . $page);
}

function debug($msg){
    if (is_writeable("/tmp/mylog.log")) {
        $fh = fopen("/tmp/mylog.log", 'a+');
        fputs($fh, "[Log] " . date("d.m.Y H:i:s") . " $msg\n");
        fclose($fh);
    }
}
?>
Now it is possible to download the repo files like this:
cd /etc/yum.repos.d/
wget http://kolabproxy2.pokorra.de/obs.kolabsys.com/repositories/Kolab:/3.3/CentOS_6/Kolab:3.3.repo
wget http://kolabproxy2.pokorra.de/obs.kolabsys.com/repositories/Kolab:/3.3:/Updates/CentOS_6/Kolab:3.3:Updates.repo
yum install kolab
For Ubuntu 14.04:
echo "deb http://kolabproxy2.pokorra.de/obs.kolabsys.com/repositories/Kolab:/3.3/Ubuntu_14.04/ ./" > /etc/apt/sources.list.d/kolab.list
echo "deb http://kolabproxy2.pokorra.de/obs.kolabsys.com/repositories/Kolab:/3.3:/Updates/Ubuntu_14.04/ ./" >> /etc/apt/sources.list.d/kolab.list
apt-get install kolab
This works for all other projects and distributions on obs.kolabsys.com too.
Posted about 11 years ago by Milosz Galazka
I have been using a self-hosted Kolab Groupware server every day for quite a while now.
Therefore the need arose to monitor process activity and system resources using the Monit utility.
Table of contents
Couple of words about monit
Software installation
Initial configuration
Command-line operations
Monitor filesystems
Monitor system resources
Monitor system services
cron
rsyslogd
ntpd
OpenSSH
Monitor Kolab services
MySQL
Apache
Kolab daemon
Kolab saslauthd
Wallace
ClamAV
Freshclam
amavisd-new
The main Directory Server daemon
SpamAssassin
Cyrus IMAP/POP3 daemons
Postfix
Ending notes
Couple of words about monit
monit is a simple and robust utility for monitoring and automatic maintenance, which is supported on Linux, BSD, and OS X.
Software installation
Debian Wheezy currently provides Monit 5.4.
To install it, execute the following command:
$ sudo apt-get install monit
The Monit daemon will be started at boot time. Alternatively, you can use the standard System V init scripts to manage the service.
Initial configuration
Configuration files are located under the /etc/monit/ directory. Default settings are stored in the /etc/monit/monitrc file, which I strongly suggest you read.
Custom configuration will be stored in the /etc/monit/conf.d/ directory.
I will override several important settings using a local.conf file.
Modified settings
Set email address to [email protected]
Slightly change default template
Define mail server as localhost
Set default interval to 120 seconds with initial delay of 180 seconds
Enable local web server to take advantage of the additional functionality (currently commented out)
$ sudo cat /etc/monit/conf.d/local.conf
# define e-mail recipient
set alert [email protected]
# define e-mail template
set mail-format {
from: monit@$HOST
subject: monit alert -- $EVENT $SERVICE
message: $EVENT Service $SERVICE
Date: $DATE
Action: $ACTION
Host: $HOST
Description: $DESCRIPTION
}
# define server
set mailserver localhost
# define interval and initial delay
set daemon 120 with start delay 180
# set web server for local management
# set httpd port 2812 and use the address localhost allow localhost
Please note that enabling the built-in web server in the way shown above will allow every local user to access and perform Monit operations. It should either stay disabled or be secured with a username and password combination.
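For example, a minimal way to restrict it is to bind it to localhost and require credentials; the user name and password below are the placeholders used in the default monitrc.
set httpd port 2812
    use address localhost  # only accept connections from localhost
    allow localhost        # allow localhost to connect to the server
    allow admin:monit      # require user 'admin' with password 'monit'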
Command-line operations
Verify configuration syntax
To check configuration syntax execute the following command.
$ sudo monit -t
Control file syntax OK
Start, Stop, Restart actions
Start all services and enable monitoring for them.
$ sudo monit start all
Start all services in resources group and enable monitoring for them.
$ sudo monit -g resources start
Start rootfs service and enable monitoring for it.
$ sudo monit start rootfs
You can initiate the stop action in the same way, which will stop the service and disable monitoring, or just execute the restart action to stop and start the corresponding services.
Monitor and unmonitor actions
Monitor all services.
$ sudo monit monitor all
Monitor all services in resources group.
$ sudo monit -g resources monitor
Monitor rootfs service.
$ sudo monit monitor rootfs
Use the unmonitor action to disable monitoring for the corresponding services.
Status action
Print service status.
$ sudo monit status
The Monit daemon 5.6 uptime: 27d 0h 47m
System 'server'
status Running
monitoring status Monitored
load average [0.26] [0.43] [0.48]
cpu 12.8%us 2.6%sy 0.0%wa
memory usage 2934772 kB [36.4%]
swap usage 2897376 kB [35.0%]
data collected Mon, 29 Sep 2014 22:47:49
Filesystem 'rootfs'
status Accessible
monitoring status Monitored
permission 660
uid 0
gid 6
filesystem flags 0x1000
block size 4096 B
blocks total 17161862 [67038.5 MB]
blocks free for non superuser 7327797 [28624.2 MB] [42.7%]
blocks free total 8205352 [32052.2 MB] [47.8%]
inodes total 4374528
inodes free 4151728 [94.9%]
data collected Mon, 29 Sep 2014 22:47:49
Summary action
Print short service summary.
$ sudo monit summary
The Monit daemon 5.6 uptime: 27d 0h 48m
System 'server' Running
Filesystem 'rootfs' Accessible
Reload action
Reload configuration and reinitialize Monit daemon.
$ sudo monit reload
Quit action
Terminate Monit daemon.
$ sudo monit quit
monit daemon with pid [5248] killed
Monitor filesystems
Configuration syntax is very consistent and easy to grasp. I will start with a simple example and then proceed to slightly more complex ideas. Just remember to check one thing at a time.
I am using a VPS service due to its easy backup/restore process, so I have only one filesystem on the /dev/root device, which I will monitor as a service named rootfs.
The Monit daemon will generate an alert and send an email if space or inode usage on the rootfs filesystem [stored on the /dev/root device] exceeds 80 percent of the available capacity.
$ sudo cat /etc/monit/conf.d/filesystems.conf
check filesystem rootfs with path /dev/root
group resources
if space usage > 80% then alert
if inode usage > 80% then alert
The above service is placed in resources group for easier management.
Monitor system resources
The following configuration will be stored as a service named server, as it describes resource usage for the whole mail server.
The Monit daemon will check memory usage, and if it exceeds 80% of the available capacity for three subsequent events, it will send an alert email.
A recovery message will be sent after two subsequent events to limit the number of sent messages. The same rules apply to the remaining system resources.
The system I am using has four available processors, so an alert will be generated when the five-minute load average exceeds five.
$ sudo cat /etc/monit/conf.d/resources.conf
check system server
group resources
if memory usage > 80% for 3 cycles then alert
else if succeeded for 2 cycles then alert
if swap usage > 50% for 3 cycles then alert
else if succeeded for 2 cycles then alert
if cpu(wait) > 30% for 3 cycles then alert
else if succeeded for 2 cycles then alert
if cpu(system) > 60% for 3 cycles then alert
else if succeeded for 2 cycles then alert
if cpu(user) > 60% for 3 cycles then alert
else if succeeded for 2 cycles then alert
if loadavg(5min) > 5 then alert
else if succeeded for 2 cycles then alert
The above service is placed in resources group for easier management.
Monitor system services
cron
cron is a daemon used to execute user-specified tasks at scheduled times.
The Monit daemon will use the specified pid file [/var/run/crond.pid] to monitor the [cron] service and restart it if it stops for any reason.
A configuration change will generate an alert message; a permission issue will generate an alert message and disable further monitoring.
A GID of 102 translates to the crontab group.
$ sudo cat /etc/monit/conf.d/cron.conf
check process cron with pidfile /var/run/crond.pid
group system
group scheduled-tasks
start program = "/usr/sbin/service cron start"
stop program = "/usr/sbin/service cron stop"
if 3 restarts within 5 cycles then timeout
depends on cron_bin
depends on cron_rc
depends on cron_rc.d
depends on cron_rc.daily
depends on cron_rc.hourly
depends on cron_rc.monthly
depends on cron_rc.weekly
depends on cron_rc.spool
check file cron_bin with path /usr/sbin/cron
group scheduled-tasks
if failed checksum then unmonitor
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check file cron_rc with path /etc/crontab
group scheduled-tasks
if failed checksum then alert
if failed permission 644 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check directory cron_rc.d with path /etc/cron.d
group scheduled-tasks
if changed timestamp then alert
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check directory cron_rc.daily with path /etc/cron.daily
group scheduled-tasks
if changed timestamp then alert
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check directory cron_rc.hourly with path /etc/cron.hourly
group scheduled-tasks
if changed timestamp then alert
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check directory cron_rc.monthly with path /etc/cron.monthly
group scheduled-tasks
if changed timestamp then alert
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check directory cron_rc.weekly with path /etc/cron.weekly
group scheduled-tasks
if changed timestamp then alert
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check directory cron_rc.spool with path /var/spool/cron/crontabs
group scheduled-tasks
if changed timestamp then alert
if failed permission 1730 then unmonitor
if failed uid root then unmonitor
if failed gid 102 then unmonitor
The above service is placed in system and scheduled-tasks groups for easier management.
rsyslogd
rsyslogd is a message logging service.
$ sudo cat /etc/monit/conf.d/rsyslogd.conf
check process rsyslog with pidfile /var/run/rsyslogd.pid
group system
group logging
start program = "/usr/sbin/service rsyslog start"
stop program = "/usr/sbin/service rsyslog stop"
if 3 restarts within 5 cycles then timeout
depends on rsyslog_bin
depends on rsyslog_rc
depends on rsyslog_rc.d
check file rsyslog_bin with path /usr/sbin/rsyslogd
group logging
if failed checksum then unmonitor
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check file rsyslog_rc with path /etc/rsyslog.conf
group logging
if failed checksum then alert
if failed permission 644 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check directory rsyslog_rc.d with path /etc/rsyslog.d
group logging
if changed timestamp then alert
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
The above service is placed in system and logging groups for easier management.
ntpd
The Network Time Protocol daemon check will be extended with port monitoring.
$ sudo cat /etc/monit/conf.d/ntpd.conf
check process ntp with pidfile /var/run/ntpd.pid
group system
group time
start program = "/usr/sbin/service ntp start"
stop program = "/usr/sbin/service ntp stop"
if failed port 123 type udp then restart
if 3 restarts within 5 cycles then timeout
depends on ntp_bin
depends on ntp_rc
check file ntp_bin with path /usr/sbin/ntpd
group time
if failed checksum then unmonitor
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check file ntp_rc with path /etc/ntp.conf
group time
if failed checksum then alert
if failed permission 644 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
The above service is placed in system and time groups for easier management.
OpenSSH
The OpenSSH service check will be extended with match statements to test the content of the configuration file. I assume it is self-explanatory.
$ sudo cat /etc/monit/conf.d/openssh-server.conf
check process openssh with pidfile /var/run/sshd.pid
group system
group sshd
start program = "/usr/sbin/service ssh start"
stop program = "/usr/sbin/service ssh stop"
if failed port 22 with proto ssh then restart
if 3 restarts within 5 cycles then timeout
depend on openssh_bin
depend on openssh_sftp_bin
depend on openssh_rsa_key
depend on openssh_dsa_key
depend on openssh_rc
check file openssh_bin with path /usr/sbin/sshd
group sshd
if failed checksum then unmonitor
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check file openssh_sftp_bin with path /usr/lib/openssh/sftp-server
group sshd
if failed checksum then unmonitor
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check file openssh_rsa_key with path /etc/ssh/ssh_host_rsa_key
group sshd
if failed checksum then unmonitor
if failed permission 600 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check file openssh_dsa_key with path /etc/ssh/ssh_host_dsa_key
group sshd
if failed checksum then unmonitor
if failed permission 600 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check file openssh_rc with path /etc/ssh/sshd_config
group sshd
if failed checksum then alert
if failed permission 644 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
if not match "^PasswordAuthentication no" then alert
if not match "^PubkeyAuthentication yes" then alert
if not match "^PermitRootLogin no" then alert
The above service is placed in system and sshd groups for easier management.
Monitor Kolab services
MySQL
MySQL is an open-source database server used by a wide range of Kolab services.
A UID of 106 translates to the mysql user; a GID of 106 translates to the mysql group.
This is the first time I have used the unixsocket statement here.
$ sudo cat /etc/monit/conf.d/mysql.conf
check process mysql with pidfile /var/run/mysqld/mysqld.pid
group kolab
group database
start program = "/usr/sbin/service mysql start"
stop program = "/usr/sbin/service mysql stop"
if failed port 3306 protocol mysql then restart
if failed unixsocket /var/run/mysqld/mysqld.sock protocol mysql then restart
if 3 restarts within 5 cycles then timeout
depends on mysql_bin
depends on mysql_rc
depends on mysql_sys_maint
depend on mysql_data
check file mysql_bin with path /usr/sbin/mysqld
group database
if failed checksum then unmonitor
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check file mysql_rc with path /etc/mysql/my.cnf
group database
if failed checksum then alert
if failed permission 644 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check file mysql_sys_maint with path /etc/mysql/debian.cnf
group database
if failed checksum then unmonitor
if failed permission 600 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check directory mysql_data with path /var/lib/mysql
group database
if failed permission 700 then unmonitor
if failed uid 106 then unmonitor
if failed gid 110 then unmonitor
The above service is placed in kolab and database groups for easier management.
Apache
Apache is an open-source HTTP server used to serve the user/admin web interface.
Please notice that I am checking the HTTPS port.
$ sudo cat /etc/monit/conf.d/apache.conf
check process apache with pidfile /var/run/apache2.pid
group kolab
group web-server
start program = "/usr/sbin/service apache2 start"
stop program = "/usr/sbin/service apache2 stop"
if failed port 443 then restart
if 3 restarts within 5 cycles then timeout
depends on apache2_bin
depends on apache2_rc
depends on apache2_rc_mods
depends on apache2_rc_sites
check file apache2_bin with path /usr/sbin/apache2.prefork
group web-server
if failed checksum then unmonitor
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check directory apache2_rc with path /etc/apache2
group web-server
if changed timestamp then alert
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check directory apache2_rc_mods with path /etc/apache2/mods-enabled
group web-server
if changed timestamp then alert
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check directory apache2_rc_sites with path /etc/apache2/sites-enabled
group web-server
if changed timestamp then alert
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
The above service is placed in kolab and web-server groups for easier management.
Kolab daemon
This is the heart of the whole Kolab unified communication and collaboration system, as it is responsible for data synchronization between the different services.
A UID of 413 translates to the kolab-n user; a GID of 412 translates to the kolab group.
$ sudo cat /etc/monit/conf.d/kolab-server.conf
check process kolab-server with pidfile /var/run/kolabd/kolabd.pid
group kolab
group kolab-daemon
start program = "/usr/sbin/service kolab-server start"
stop program = "/usr/sbin/service kolab-server stop"
if 3 restarts within 5 cycles then timeout
depends on kolab-daemon_bin
depends on kolab-daemon_rc
check file kolab-daemon_bin with path /usr/sbin/kolabd
group kolab-daemon
if failed checksum then unmonitor
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check file kolab-daemon_rc with path /etc/kolab/kolab.conf
group kolab-daemon
if failed checksum then alert
if failed permission 640 then unmonitor
if failed uid 413 then unmonitor
if failed gid 412 then unmonitor
The above service is placed in kolab and kolab-daemon groups for easier management.
Kolab saslauthd
Kolab saslauthd is the SASL authentication daemon for multi-domain Kolab deployments.
$ sudo cat /etc/monit/conf.d/kolab-saslauthd.conf
check process kolab-saslauthd with pidfile /var/run/kolab-saslauthd/kolab-saslauthd.pid
group kolab
group kolab-saslauthd
start program = "/usr/sbin/service kolab-saslauthd start"
stop program = "/usr/sbin/service kolab-saslauthd stop"
if 3 restarts within 5 cycles then timeout
depends on kolab-saslauthd_bin
check file kolab-saslauthd_bin with path /usr/sbin/kolab-saslauthd
group kolab-saslauthd
if failed checksum then unmonitor
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
The above service is placed in kolab and kolab-saslauthd groups for easier management.
It can be tempting to monitor the /var/run/saslauthd/mux socket, but just leave it alone for now.
Wallace
Wallace is a content-filtering daemon.
$ sudo cat /etc/monit/conf.d/wallace.conf
check process wallace with pidfile /var/run/wallaced/wallaced.pid
group kolab
group wallace
start program = "/usr/sbin/service wallace start"
stop program = "/usr/sbin/service wallace stop"
#if failed port 10026 then restart
if 3 restarts within 5 cycles then timeout
depends on wallace_bin
check file wallace_bin with path /usr/sbin/wallaced
group wallace
if failed checksum then unmonitor
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
The above service is placed in kolab and wallace groups for easier management.
ClamAV
The ClamAV daemon is open-source, cross-platform antivirus software.
$ sudo cat /etc/monit/conf.d/clamav.conf
check process clamav with pidfile /var/run/clamav/clamd.pid
group system
group antivirus
start program = "/usr/sbin/service clamav-daemon start"
stop program = "/usr/sbin/service clamav-daemon stop"
if 3 restarts within 5 cycles then timeout
#if failed unixsocket /var/run/clamav/clamd.ctl type udp then alert
depends on clamav_bin
depends on clamav_rc
check file clamav_bin with path /usr/sbin/clamd
group antivirus
if failed checksum then unmonitor
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check file clamav_rc with path /etc/clamav/clamd.conf
group antivirus
if failed permission 644 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
The above service is placed in system and antivirus groups for easier management.
Freshclam
Freshclam is a tool used to periodically update the ClamAV virus databases.
$ sudo cat /etc/monit/conf.d/freshclam.conf
check process freshclam with pidfile /var/run/clamav/freshclam.pid
group system
group antivirus-updater
start program = "/usr/sbin/service clamav-freshclam start"
stop program = "/usr/sbin/service clamav-freshclam stop"
if 3 restarts within 5 cycles then timeout
depends on freshclam_bin
depends on freshclam_rc
check file freshclam_bin with path /usr/bin/freshclam
group antivirus-updater
if failed checksum then unmonitor
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check file freshclam_rc with path /etc/clamav/freshclam.conf
group antivirus-updater
if failed permission 444 then unmonitor
if failed uid 110 then unmonitor
if failed gid 4 then unmonitor
The above service is placed in system and antivirus-updater groups for easier management.
amavisd-new
Amavis is a high-performance interface between the Postfix mail server and content-filtering services: SpamAssassin as a spam classifier and ClamAV for antivirus protection.
$ sudo cat /etc/monit/conf.d/amavisd-new.conf
check process amavisd-new with pidfile /var/run/amavis/amavisd.pid
group kolab
group content-filter
start program = "/usr/sbin/service amavis start"
stop program = "/usr/sbin/service amavis stop"
if 3 restarts within 5 cycles then timeout
#if failed port 10024 type tcp then restart
#if failed unixsocket /var/lib/amavis/amavisd.sock type udp then alert
depends on amavisd-new_bin
depends on amavisd-new_rc
check file amavisd-new_bin with path /usr/sbin/amavisd-new
group content-filter
if failed checksum then unmonitor
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check directory amavisd-new_rc with path /etc/amavis/
group content-filter
if changed timestamp then alert
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
The above service is placed in kolab and content-filter groups for easier management.
The main Directory Server daemon
The main Directory Server daemon is the 389 Directory Server, an LDAP server.
$ sudo cat /etc/monit/conf.d/dirsrv.conf
check process dirsrv with pidfile /var/run/dirsrv/slapd-xmail.stats
group kolab
group dirsrv
start program = "/usr/sbin/service dirsrv start"
stop program = "/usr/sbin/service dirsrv stop"
if 3 restarts within 5 cycles then timeout
if failed port 389 type tcp then restart
depends on dirsrv_bin
depends on dirsrv_rc
check file dirsrv_bin with path /usr/sbin/ns-slapd
group dirsrv
if failed checksum then unmonitor
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check directory dirsrv_rc with path /etc/dirsrv/
group dirsrv
if changed timestamp then alert
The above service is placed in kolab and dirsrv groups for easier management.
SpamAssassin
SpamAssassin is a content filter used for spam filtering.
$ sudo cat /etc/monit/conf.d/spamd.conf
check process spamd with pidfile /var/run/spamd.pid
group system
group spamd
start program = "/usr/sbin/service spamassassin start"
stop program = "/usr/sbin/service spamassassin stop"
if 3 restarts within 5 cycles then timeout
#if failed port 783 type tcp then restart
depends on spamd_bin
depends on spamd_rc
check file spamd_bin with path /usr/sbin/spamd
group spamd
if failed checksum then unmonitor
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check directory spamd_rc with path /etc/spamassassin/
group spamd
if changed timestamp then alert
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
The above service is placed in system and spamd groups for easier management.
Cyrus IMAP/POP3 daemons
The cyrus-imapd daemon is responsible for IMAP/POP3 communication.
$ sudo cat /etc/monit/conf.d/cyrus-imapd.conf
check process cyrus-imapd with pidfile /var/run/cyrus-master.pid
group kolab
group cyrus-imapd
start program = "/usr/sbin/service cyrus-imapd start"
stop program = "/usr/sbin/service cyrus-imapd stop"
if 3 restarts within 5 cycles then timeout
if failed port 143 type tcp then restart
if failed port 4190 type tcp then restart
if failed port 993 type tcp then restart
depends on cyrus-imapd_bin
depends on cyrus-imapd_rc
check file cyrus-imapd_bin with path /usr/lib/cyrus-imapd/cyrus-master
group cyrus-imapd
if failed checksum then unmonitor
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check file cyrus-imapd_rc with path /etc/cyrus.conf
group cyrus-imapd
if failed checksum then alert
if failed permission 644 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
The above service is placed in kolab and cyrus-imapd groups for easier management.
Postfix
Postfix is an open-source mail transfer agent used to route and deliver electronic mail.
$ sudo cat /etc/monit/conf.d/postfix.conf
check process postfix with pidfile /var/spool/postfix/pid/master.pid
group kolab
group mta
start program = "/usr/sbin/service postfix start"
stop program = "/usr/sbin/service postfix stop"
if 3 restarts within 5 cycles then timeout
if failed port 25 type tcp then restart
#if failed port 10025 type tcp then restart
#if failed port 10027 type tcp then restart
if failed port 587 type tcp then restart
depends on postfix_bin
depends on postfix_rc
check file postfix_bin with path /usr/lib/postfix/master
group mta
if failed checksum then unmonitor
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
check directory postfix_rc with path /etc/postfix/
group mta
if changed timestamp then alert
if failed permission 755 then unmonitor
if failed uid root then unmonitor
if failed gid root then unmonitor
The above service is placed in kolab and mta groups for easier management.
Ending notes
This blog post is definitely too long already, so I will just mention that a similar configuration can be used to monitor other integrated solutions like ISPConfig, or custom specialized setups.
In my opinion Monit is a great utility which simplifies system and service monitoring. Additionally, it provides interesting proactive features, like restarting a service or executing an arbitrary program when selected tests fail.
Everything is described in the manual page.
$ man monit
Posted about 11 years ago by mollekopf
I have been working on better ways to write asynchronous code. In this post I’m going to analyze one of our current tools, KJob, looking at how it helps us write asynchronous code and what is missing. I’m then going to present my prototype solution to address these problems.
KJob
In KDE we have the KJob class to wrap asynchronous operations. KJob gives us a framework for progress and error reporting, a uniform start method, and by subclassing it we can easily write our own reusable asynchronous operations. Such an asynchronous operation typically takes a couple of arguments and returns a result.
A KJob, in its simplest form, is the asynchronous equivalent of a function call:
int doSomething(int argument) {
    return getNumber(argument);
}

struct DoSomething : public KJob {
    DoSomething(int argument): mArgument(argument) {}

    void start() {
        KJob *job = getNumberAsync(mArgument);
        connect(job, SIGNAL(result(KJob*)), this, SLOT(onJobDone(KJob*)));
        job->start();
    }

    int mResult;
    int mArgument;

private slots:
    void onJobDone(KJob *job) {
        mResult = job->result;
        emitResult();
    }
};
What you’ll notice immediately is that this involves a lot of boilerplate code. It also introduces a lot of complexity into a seemingly trivial task. This is partially because we have to create a class when we actually wanted a function, and partially because we have to use class members to replace the variables on the stack that we don’t have available during an asynchronous operation.
So while KJob gives us a tool to wrap asynchronous operations in a way that they become reusable, it comes at the cost of quite a bit of boilerplate code. It also means that what can be written synchronously in a simple function, requires a class when writing the same code asynchronously.
Inversion of Control
A typical operation is of course slightly more complex than doSomething, and often consists of several (asynchronous) operations itself.
What in imperative code looks like this:
int doSomethingComplex(int argument) {
    return operation2(operation1(argument));
}
…results in an asynchronous operation that is scattered over multiple result handlers somewhat like this:
...
void start() {
    KJob *job = operation1(mArgument);
    connect(job, SIGNAL(result(KJob*)), this, SLOT(onOperation1Done(KJob*)));
    job->start();
}

void onOperation1Done(KJob *operation1Job) {
    KJob *job = operation2(operation1Job->result());
    connect(job, SIGNAL(result(KJob*)), this, SLOT(onOperation2Done(KJob*)));
    job->start();
}

void onOperation2Done(KJob *operation2Job) {
    mResult = operation2Job->result();
    emitResult();
}
...
We are forced to split the code over several functions due to the inversion of control introduced by handler-based asynchronous programming. Unfortunately these additional functions (the handlers), which we are now forced to use, do not help the program structure in any way. This also manifests itself in the rather useless function names that typically follow a pattern such as on”$Operation”Done() or similar. Further, because the code is scattered over functions, values that are available on the stack in a synchronous function have to be stored explicitly as class members, so they are available in the handler where they are required for a further step.
The traditional way to make code easy to comprehend is to split it up into functions that are then called by a higher-level function. This kind of function composition is no longer possible with asynchronous programs using our current tools. All we can do is chain handler after handler. Due to the lack of this higher-level function that composes the functionality, a reader is also forced to read every single line of the code, instead of simply skimming the function names and only drilling deeper if more detailed information about the inner workings is required.
Since we are no longer able to structure the code in a useful way using functions, only classes, and in our case KJobs, are left to structure the code. However, creating subjobs is a lot of work when all you need is a function, and while it helps the structure, it scatters the code even more, making it potentially harder to read and understand. Due to this we also often end up with large and complex job classes.
Last but not least, we lose all the available control structures through the inversion of control. If you write asynchronous code you don’t have the ifs, fors and whiles available that are fundamental to writing code. Well, obviously they are still there, but if you write asynchronous code you can’t use them as usual because you can’t plug a complete asynchronous operation inside an if{}-block. The best you can do is initiate the operation inside the imperative control structures and deal with the results later on in handlers. Because we need control structures to build useful programs, these are usually emulated by building complex state machines where each function depends on the current class state. A typical (anti)pattern of that kind is a for loop creating jobs, with a decreasing counter in the handler to check whether all jobs have been executed. These state machines greatly increase the complexity of the code, are highly error-prone, and make larger classes incomprehensible without drawing complex state diagrams (or simply staring at the screen long enough while tearing your hair out).
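To illustrate, such an emulated loop typically ends up looking roughly like the following sketch; SomeComposite, FetchJob, Item, mItems, mPendingJobs and mFinishedJobs are invented names for illustration, not from any real class.
// start one job per item, count down in the handler until all have finished
void SomeComposite::start() {
    mPendingJobs = mItems.size();
    for (const Item &item : mItems) {
        KJob *job = new FetchJob(item, this);
        connect(job, SIGNAL(result(KJob*)), this, SLOT(onFetchDone(KJob*)));
        job->start();
    }
}

void SomeComposite::onFetchDone(KJob *job) {
    mFinishedJobs << job;        // stash the result for later processing
    if (--mPendingJobs == 0) {   // the emulated loop-termination condition
        emitResult();            // only now is the composite job done
    }
}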
Oh, and before I forget, of course we also no longer get any useful backtraces from gdb, as pretty much every backtrace comes straight from the event loop and we have no clue what was happening before.
As a summary, inversion of control causes:
code is scattered over functions that are not helpful to the structure
composing functions is no longer possible, since what would normally be written in a function is written as a class.
control structures are not usable; a state machine is required to emulate them.
backtraces become mostly useless
As an analogy, your typical asynchronous class is the functional equivalent of a single synchronous function (often over 1000 lines of code!) that uses gotos and some local variables to build control structures. I think it’s obvious that this is a pretty bad way to write code, to say the least.
JobComposer
Fortunately we received a new tool with C++11: lambda functions
Lambda functions allow us to write functions inline with minimal syntactical overhead.
Armed with this I set out to find a better way to write asynchronous code.
A first obvious solution is to simply write the result handler as a lambda function, which would allow us to write code like this:
make_async(operation1(), [] (KJob *job) {
    //Do something after operation1()
    make_async(operation2(job->result()), [] (KJob *job) {
        //Do something after operation2()
        ...
    });
});
It’s a simple and concise solution; however, you can’t really build reusable building blocks (like functions) with it. You’ll get one nested tree of lambdas that depend on each other by accessing the results of the previous jobs. The problem making this solution non-composable is that the lambda function which we pass to make_async starts the asynchronous task, but also extracts results from the previous job. Therefore you couldn’t, for instance, return an async task containing operation2 from a function (because in the same line we extract the result of the previous job).
What we require instead is a way of chaining asynchronous operations together, while keeping the glue code separated from the reusable bits.
JobComposer is my proof of concept to help with this:
class JobComposer : public KJob
{
    Q_OBJECT
public:
    //KJob start function
    void start();

    //This adds a new continuation to the queue
    void add(const std::function<void(JobComposer&, KJob*)> &jobContinuation);

    //This starts the job, and connects to the result signal. Call from continuation.
    void run(KJob*);

    //This starts the job, and connects to the result signal. Call from continuation.
    //Additionally an error case continuation can be provided that is called in case of error,
    //and that can be used to determine whether further continuations should be executed or not.
    void run(KJob*, const std::function<bool(JobComposer&, KJob*)> &errorHandler);

    //...
};
The basic idea is to wrap each step using a lambda-function to issue the asynchronous operation. Each such continuation (the lambda function) receives a pointer to the previous job to extract results.
Here’s an example how this could be used:
auto task = new JobComposer;
task->add([](JobComposer &t, KJob*){
    KJob *op1Job = operation1();
    t.run(op1Job, [](JobComposer &t, KJob *job) {
        kWarning() << "An error occurred: " << job->errorString();
    });
});
task->add([](JobComposer &t, KJob *job){
    KJob *op2Job = operation2(static_cast<Operation1*>(job)->result());
    t.run(op2Job, [](JobComposer &t, KJob *job) {
        kWarning() << "An error occurred: " << job->errorString();
    });
});
task->add([](JobComposer &t, KJob *job){
    kDebug() << "Result: " << static_cast<Operation2*>(job)->result();
});
task->start();
What you see here is the equivalent of:
int tmp = operation1();
int res = operation2(tmp);
kDebug() << res;
There are several important advantages of using this over writing traditional asynchronous code using only KJob:
The code above, which would normally be spread over several functions, can be written within a single function.
Since we can write all code within a single function we can compose functions again. The JobComposer above could be returned from another function and integrated into another JobComposer.
Values that are required for a certain step can either be extracted from the previous job, or simply captured in the lambda functions (no more passing of values as members).
You only have to read the start() function of a job that is written this way to get an idea what is going on. Not the complete class.
A “backtrace” functionality could be built into JobComposer that would allow getting useful information about the state of the program even though we’re in the event loop.
This is of course only a rough prototype, and I’m sure we can craft something better. But at least in my experiments it proved to work very nicely.
What I think would be useful as well are a couple of helper jobs that replace the missing control structures, such as a ForeachJob which triggers a continuation for each result, or a job to execute tasks in parallel (instead of serially, as JobComposer does).
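To make that a bit more concrete, a hypothetical ForeachJob could be used from a continuation roughly like this; this is a pure sketch, and ForeachJob, ListFoldersJob, Folder and fetchFolderAsync do not exist and are only placeholders.
task->add([](JobComposer &t, KJob *job) {
    // start one fetch job per folder returned by the previous step
    auto folders = static_cast<ListFoldersJob*>(job)->folders();
    auto loop = new ForeachJob(folders, [](const Folder &folder) {
        return fetchFolderAsync(folder);   // returns a KJob* per item
    });
    t.run(loop);
});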
As a little showcase I rewrote a job of the imap resource.
You’ll see a bit of function composition, a ParallelCompositeJob that executes jobs in parallel, and you’ll notice that only relevant functions are left and all class members are gone. I find the result a lot better than the original, and the refactoring was trivial and quick.
I’m quite certain that if we build these tools, we can vastly improve our asynchronous code making it easier to write, read, and debug.
And I think it’s past time that we build proper tools.
Posted about 11 years ago by roundcube
We’re proud to announce the next service release to the stable version 1.0. It contains some bug fixes and improvements we considered important for the long-term support branch of Roundcube. It’s considered stable and we recommend updating all production installations of Roundcube with this version. Download it from roundcube.net/download, and see the full changelog here.
Please do a backup before updating!
Posted about 11 years ago by tobru
“CASino is an easy to use Single Sign On (SSO) web application written in Ruby.” It supports different authentication backends, one of which is LDAP. It works very well with the LDAP backend of Kolab. Just put the following configuration snippet into your config/cas.yml:
production:
  authenticators:
    kolab:
      authenticator: 'LDAP'
      options:
        host: 'localhost'
        port: 389
        base: 'ou=People,dc=mydomain,dc=tld'
        username_attribute: 'uid'
        admin_user: 'uid=kolab-service,ou=Special Users,dc=mydomain,dc=tld'
        admin_password: 'mykolabservicepassword'
        extra_attributes:
          email: 'mail'
          fullname: 'uid'
You are now able to sign in using your Kolab uid and manage SSO users with the nice Kolab Webadmin LDAP frontend.
CASino with Kolab LDAP backend was originally published by Tobias Brunner at tobrunet.ch Techblog on September 27, 2014.
Posted about 11 years ago by Timotheus Pokorra
This describes how to install a Docker image of Kolab.
Please note: this is not meant for production use. The main purpose is to provide an easy way to demonstrate features and to validate the product.
This installation has not been tested a lot and could still use some fine-tuning. It is just a demonstration of what could be done with Docker for Kolab.
Preparing for Docker
I am using a Jiffybox provided by DomainFactory for downloading a Docker container for Kolab 3.3 running on CentOS 6.
I have installed Ubuntu 12.04 LTS on a Jiffybox.
I am therefore following Docker Installation instructions for Ubuntu for the installation instructions:
Install a kernel that is required by Docker:
sudo apt-get update
sudo apt-get install linux-image-generic-lts-raring linux-headers-generic-lts-raring
After that, in the admin website of JiffyBox, select the custom kernel Bootmanager 64 Bit (pvgrub64); see also the German JiffyBox FAQ. Then restart your JiffyBox.
After the restart, uname -a should show something like:
Linux j89610.servers.jiffybox.net 3.8.0-37-generic #53~precise1-Ubuntu SMP Wed Feb 19 21:37:54 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
Now install docker:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
sudo sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
sudo apt-get update
sudo apt-get install lxc-docker
Install container
The image for the container is available here:
https://index.docker.io/u/tpokorra/kolab33_centos6/
If you want to know how this image was created, read my other blog post http://www.pokorra.de/2014/09/building-a-docker-container-for-kolab-3-3-on-jiffybox/.
To install this image, you need to type in this command:
docker pull tpokorra/kolab33_centos6
You can create a container from this image and run it:
MYAPP=$(sudo docker run --name centos6_kolab33 -P -h kolab33.test.example.org -d -t -i tpokorra/kolab33_centos6)
You can see all your containers:
docker ps -a
You now have to attach to the container, and inside the container start the services:
docker attach $MYAPP
/root/start.sh
It should be possible to start the services automatically at startup, but I did not get it to work with CMD or ENTRYPOINT.
To stop the container, type exit on the container’s console, or run from outside:
docker stop $MYAPP
To delete the container:
docker rm $MYAPP
You can reach the Kolab Webadmin on this URL:
https://localhost/kolab-webadmin. Login with user: cn=Directory Manager, password: test
The Webmail interface is available here:
https://localhost/roundcubemail.
Posted about 11 years ago by Timotheus Pokorra
This article is an update of the previous post that built a Docker container for Kolab 3.1: Building a Docker container for Kolab on Jiffybox (March 2014)
Preparation
I am using a Jiffybox provided by DomainFactory for building a Docker container for Kolab 3.3 running on CentOS 6.
I have installed Ubuntu 12.04 LTS on a Jiffybox.
I am therefore following Docker Installation instructions for Ubuntu for the installation instructions:
Install a kernel that is required by Docker:
sudo apt-get update
sudo apt-get install linux-image-generic-lts-raring linux-headers-generic-lts-raring
After that, in the admin website of JiffyBox, select the custom kernel Bootmanager 64 Bit (pvgrub64); see also the German JiffyBox FAQ. Then restart your JiffyBox.
After the restart, uname -a should show something like:
Linux j89610.servers.jiffybox.net 3.8.0-37-generic #53~precise1-Ubuntu SMP Wed Feb 19 21:37:54 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
Now install docker:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
sudo sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
sudo apt-get update
sudo apt-get install lxc-docker
Create a Docker image
I realised that if I installed Kolab in one go, the image would become too big to upload to https://index.docker.io.
Therefore I have created a Dockerfile which has several steps for downloading and installing various packages. For a detailed description of a Dockerfile, see the Dockerfile Reference
My Dockerfile is available on Github: https://github.com/TBits/KolabScripts/blob/Kolab3.3/kolab/Dockerfile. You should store it with filename Dockerfile in your current directory.
This command will build a container with the instructions from the Dockerfile in the current directory. When the instructions have been successful, an image with the name tpokorra/kolab33_centos6 will be created, and the container will be deleted:
sudo docker build -t tpokorra/kolab33_centos6 .
You can see all your local images with this command:
sudo docker images
To finish the container, we need to run setup-kolab, this time we define a hostname as a parameter:
MYAPP=$(sudo docker run --name centos6_kolab33 --privileged=true -h kolab33.test.example.org -d -t -i tpokorra/kolab33_centos6 /bin/bash)
docker attach $MYAPP
# run inside the container:
echo `hostname -f` > /proc/sys/kernel/hostname
echo 2 | setup-kolab --default --timezone=Europe/Brussels --directory-manager-pwd=test
./initHttpTunnel.sh
./initSSL.sh test.example.org
/root/stop.sh
exit
Typing exit inside the container will stop the container.
Now you commit this last manual change:
docker commit $MYAPP tpokorra/kolab33_centos6
# delete the container
docker rm $MYAPP
You can push this image to https://index.docker.io:
#create a new account, or login with existing account:
sudo docker login
sudo docker push tpokorra/kolab33_centos6
You can now see the image available here: https://index.docker.io/u/tpokorra/kolab33_centos6/
See this post Installing Demo Version of Kolab 3.3 with Docker about how to install this image on the same or a different machine, for demo and validation purposes.
Current status: there are still some things not working properly, and I have not tested everything.
But this should be a good starting point for others as well, towards a good demo installation of Kolab on Docker.