
News

Posted over 8 years ago by seigo
It is not commonly understood how software development projects, moving forward as fast as they can and sometimes do, can be in direct conflict with their actual rate of adoption, not to mention their commercial delivery to market. This post outlines the differences between short- and long-term commitments, and how the distinction can be used to form complementary facets of a project rather than remain in perpetual conflict.

Generically speaking, a single iteration of an agile development methodology should result in two things:

- a known, established and verified upgrade path from the previous iteration,
- an increment in product value, however slight.

In contrast to the moving target that is the outcome of iterations in projects using the agile methodology, with its virtually inherent unpredictability, businesses sign contracts with promises, including but not limited to:

- a version of the product that is both quality-assured and stable, meaning there are no surprises at the end of consuming a stream, neither pleasant nor unpleasant,
- support for that version for a much longer period of time than the original outcome of any one particular (group of) development iterations,
- incremental updates to support the sustainability of the product, be it bug fixes or incremental enhancements,
- service level agreements guaranteeing turn-around time-lines,
- a competitive value proposition based on the aforementioned parameters,
- services including but not limited to development of customizations, training and consultancy,
- the next version of the product meeting all facets of the current version of the product.

There are a few additional aspects I could go into that need to be omitted for genuine Free Software ISVs, such as licensing and exclusivity, as I consider both to be strong-arming a competitive edge, or bullying.

Short-term Commitments

A short-term commitment is a predictable time-line with an established amount of investment in which the product value can be incrementally increased by a known amount. We can state with a degree of certainty that these commitments are relatively small compared to a contractually obligatory commitment to paying customers to support their deployment for up to 72 months, or 6 years. We can also state that the goal of short-term commitments is an increase in product value, and that the motivation for development partaking in short-term commitments is to prioritize the incremental enhancements, sorted by descending value increment. As such, short-term commitments are made with an eye on the future; research & development prototyping something, for example.

Long-term Commitments

A long-term commitment is a period of time often spanning multiple years, during which promises need to be delivered on. These promises have been made to oneself, to a community at large and to paying customers. They include hard targets (contractually obligatory facets) and soft targets (the next generation of a product stream). They will have to be kept though; as such, long-term commitments are made with an eye on the past, including tomorrow, a.k.a. the future past. We can state with certainty that very few developers are interested in, and even fewer capable of, both developing the next generation of the product and maintaining up to 6 or 7 legacy versions. We can also state that the ability to support and troubleshoot a set of often ill-articulated symptoms remotely is quite a different discipline.

It is not easy to establish ahead of time how much cost is to be associated with long-term commitments, albeit there are models to estimate such costs. One method to reduce such costs (and therefore better predict real costs within a smaller margin of error) is to ramp up quality assurance. It is, proverbially speaking, making sure your customers do not have to pick up the phone at all, thereby reducing the support cost while demonstrably increasing the value of the KSP. Quality assurance however still spells a certain degree of uncertainty in real cost, and is another long-term commitment, except when it is automated and frequently executed, preferably as early as possible.

Customer, Product or Business Value?

If you were to entertain only short-term commitments, the product value would be unjustly biased toward customer value as a means to achieve business value. Customer value however is often perceived rather than real, as customers will think the world of the feature they request be included: an absolute necessity, can't work without it. Ultimately, it morphs the product toward a collection of complex features that are not generically applicable to most or all consumers of the product.

Product value needs to be understood to mean generically applicable functionality, or high-value functionality applicable more selectively. This implies that functionality applicable more generically implicitly multiplies the product value, albeit by an indeterminate factor, and it would therefore be prioritized accordingly. When a project deliberately pursues the implementation of more specific functionality, this should be considered a matter of strategy, such as opening up entire market segments, target audiences and industries.

For the real punchline, let's look at business value. In a genuinely Free Software ISV business, there is little distinction between product value and business value. For all considerations of development, the future and short-term commitments, the business value of the product can only rise if the product value itself rises. In other words, if the product's business value rises, so does the product value itself, and if the product value rises, so does its business value. To put things in perspective, draw a distinction between the business value of the product and the value of the business.

Linking It All Up

The long-term commitments predicate that the results of short-term commitments come with a meaningful degree of certainty and confidence. That is to say, very, very well verified. Provided "very well verified" translates into actionable items, continuous integration testing is used to establish what score such certainty and confidence may translate into, as early on as possible. Allowing for the different stages of a single iteration in development means that this must be seeded correctly in order to acknowledge or decline that the bar for delivery has been surpassed. This strongly suggests the process used by development teams is Test-Driven Development, which would certainly increase the certainty of correct input to the development process earlier than a retrospective-based feedback cycle on having missed the mark, and increases certainty and confidence in the result of an iteration as well. To prevent source code management from containing failing tests as a result, we can bring mandatory code review into play.
Posted over 8 years ago by Aaron Seigo
Erlang is a very nice fit for many of the requirements of various components in Kolab ... perhaps one of these days I'll write something more in detail about why that is. For now, suffice it to say that we've started using Erlang for some of the new server-side components in Kolab. The most common application protocol spoken in Kolab is IMAP. Unfortunately there was no maintained, functional IMAP client written in Erlang that we could find which met our needs. So, apparently the world needed another IMAP client, this time written in Erlang. (Note: when I say "IMAP client" I do not mean a GUI for users, but rather something that implements the client side of the IMAP protocol: connect to a server, authenticate, run commands, etc.) So say hello to eimap.

Usage Overview

eimap is implemented as a finite state machine that is meant to run in its own Erlang process. Each instance of an eimap represents a single connection to an IMAP server which can be used by one or more other processes to connect, authenticate and run commands against the server. The public API of eimap consists mostly of requests that queue commands to be sent to the server. These functions take the process ID (PID) to send the result of the command to, and an optional response token that will accompany the response. Commands in the queue are processed in sequence, and the server responses are parsed into nice normal Erlang terms so one does not need to be concerned with the details of the IMAP message protocols. Details like selecting folders before accessing them or setting up TLS are handled automagically by eimap by inserting the necessary commands into the queue for the user. Here is a short example of using eimap:

    ServerConfig = #eimap_server_config{ host = "192.168.56.101", port = 143, tls = false },
    { ok, Conn } = eimap:start_link(ServerConfig),
    eimap:login(Conn, self(), undefined, "doe", "doe"),
    eimap:get_folder_metadata(Conn, self(), folder_metadata, "*", ["/shared/vendor/kolab/folder-type"]),
    eimap:logout(Conn, self(), undefined),
    eimap:connect(Conn).

It starts an eimap process, queues up a login, getmetadata and logout command, then connects. The connect call could have come first, but it doesn't matter. When the connection is established the command queue is processed. eimap exits automatically when the connection closes, making cleanup nice and easy. You can also see the response routing in each of the command functions, e.g. self(), folder_metadata, which means that the results of that GETMETADATA IMAP command will be sent to this process as { folder_metadata, ParsedResponse } once completed. This is typically handled in a handle_info/2 function for gen_server processes (and similar).

Internally, each IMAP command is implemented in its own module which contains at least a new and a parse function. The new function creates the string to send to the server for a given command, and parse does what it says, returning a tuple that tells eimap whether it is completed, needs to consume more data from the server, or has encountered an error. This allows simple commands to be implemented very quickly, e.g.:

    -module(eimap_command_compress).
    -behavior(eimap_command).
    -export([new/1, parse/2]).

    new(_Args) -> <<"COMPRESS DEFLATE">>.

    parse(Data, Tag) ->
        formulate_response(eimap_utils:check_response_for_failure(Data, Tag)).

    formulate_response(ok) -> compression_active;
    formulate_response({ _, Reason }) -> { error, Reason }.
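Tying this back to the usage example above: a minimal sketch, assuming a plain gen_server as the consuming process, of receiving the routed GETMETADATA result. Only the shape of the { folder_metadata, ParsedResponse } message comes from the example; what the callback does with the response is purely illustrative.

    %% Hypothetical consumer callback; only the incoming tuple shape is taken
    %% from the example above, the rest is illustrative.
    handle_info({ folder_metadata, ParsedResponse }, State) ->
        %% ParsedResponse has already been parsed into Erlang terms by eimap
        io:format("Folder metadata received: ~p~n", [ParsedResponse]),
        { noreply, State };
    handle_info(_Other, State) ->
        { noreply, State }.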
There is also a "passthrough" mode which allows a user to use eimap as a pipe between it and the IMAP server directly, bypassing the whole command queueing mechanism. However, if commands are queued, eimap drops out of passthrough to run those commands and process their responses before returning to passthrough. It is not a complicated design by any means, and that's a virtue. :)

Plans and more plans!

As we write more Erlang code for use with Kolab and IMAP in general, eimap will be increasingly used and useful. The audit trail system for groupware objects needs some very basic IMAP functionality; the Guam IMAP proxy/filter relies heavily on it; and future projects such as a scalable JMAP proxy will also be needing it. So we will have a number of consumers for eimap as time goes on. While the core design is mostly in place, there are quite a few commands that still need to be implemented, which you can see on the eimap workboard. Writing commands is quite straightforward, as each goes into its own module in the src/commands directory and is developed with a corresponding test in the test/ directory (a sketch of such a test follows below); you don't even need an IMAP server, just the lovely (haha) relevant IMAP RFC. Once complete, add a function to eimap itself to queue the command, and eimap handles the rest for you from there. Easy, peasy. I've personally been adding the commands that I have immediate use for, and will generally be adding the rest over time. Participation, feedback and patches are welcome!
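As a rough illustration of the test-per-command workflow mentioned above, here is a minimal EUnit sketch for the COMPRESS command module shown earlier. The test module name and, especially, the server response fed to parse/2 are assumptions for illustration rather than excerpts from the actual eimap test suite.

    -module(eimap_command_compress_test).
    -include_lib("eunit/include/eunit.hrl").

    %% new/1 should produce the literal command string shown in the module above.
    new_test() ->
        ?assertEqual(<<"COMPRESS DEFLATE">>, eimap_command_compress:new(none)).

    %% parse/2 should map a tagged OK response to compression_active; the exact
    %% response text accepted depends on eimap_utils:check_response_for_failure,
    %% so this particular input is an assumption.
    parse_ok_test() ->
        Response = <<"abcd OK DEFLATE active\r\n">>,
        ?assertEqual(compression_active, eimap_command_compress:parse(Response, <<"abcd">>)).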
Posted over 8 years ago by Aaron Seigo
These days, the bulk of my work at Kolab Systems does not involve writing code. I have been spending quite a bit of time on the business side of things (and we have some genuinely exciting things coming in 2016), on customer and partner interactions, as well as on higher-level technical design and consideration. So I get to roll around Roundcube Next, Kube (an Akonadi2-based client for desktop and mobile ... but more on that another time), Kolab server hardware pre-installs ... and that's all good and fun. Still, I do manage to write a bit of code most weeks, and one of the projects I've been working on lately is an IMAP filter/proxy called Guam. I've been wanting to blog about it for a while, and as we are about to roll version 0.4 I figured now is as good a time as any.

The Basics of Guam

Guam provides a simple framework to alter data being passed between an IMAP client and server in real time. This "interference" is done using sets of rules. Each port that Guam listens on has a set of rules with their own order and configuration. Rules start out passive and, based on the data flow, may elect to become active. Once active, a rule gets to peek at the data on the wire and may take whatever action it wishes, including altering that data before it gets sent on. In this way rules may alter client messages as well as server responses; they may also record or perform other out-of-band tasks. The imagination is the limit, really.

Use Cases

The first practical use case Guam is fulfilling is selective hiding of folders from IMAP clients. Kolab stores groupware data such as calendars, notes, tags and more in plain old IMAP folders. Clients that connect over IMAP to a Kolab server and are not aware of this get shown all those folders. I've even heard of users who have seen these folders and deleted them thinking they were not supposed to be there, only to then wonder where the heck their calendars went. ;) So there is a simple rule called filter_groupware_folders that tries to detect whether the client is a Kolab-aware client by looking at the ID string it sends, and if it does not look like a Kolab client it goes about filtering out those groupware folders. Kolab continues on as always, and IMAP clients do as well, but simply do not see those other special folders. Problem solved.

But Guam can be used for much more than this simple, if rather practical, use case. Rules could be written that prevent downloading of attachments from mobile devices, or accessing messages marked as top-secret when being accessed from outside an organization's firewall. Or they could limit message listings to just the most recent or unread ones and provide access to that as a special service on a non-standard port. They could round-robin between IMAP servers, or direct different users to different IMAP servers transparently. And all of these can be chained in whichever order suits you.

The Essential Workings

The two most important things to configure in Guam are the IMAP servers to be accessed and the ports to accept client connections on. Listener configuration includes the interface and port to listen on, TLS settings, which IMAP backend to use and, of course, the rule set to apply to traffic. IMAP server configuration includes the usual host/port and TLS preferences, and the listeners refer to them by name. It's really not very complicated. :)
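To make that concrete, here is a hypothetical sketch of what such a configuration could look like as standard Erlang/OTP application environment terms: one named IMAP backend and one listener that refers to it by name and applies a rule set. The key names here are assumptions for illustration; Guam's actual configuration keys may differ.

    %% Hypothetical sys.config-style sketch; key names are illustrative only.
    [
      { kolab_guam, [
          { imap_servers, [
              { imap_backend, [ { host, "192.168.56.101" }, { port, 993 }, { tls, true } ] }
          ] },
          { listeners, [
              { client_facing, [
                  { interface, "0.0.0.0" },
                  { port, 143 },
                  { imap_server, imap_backend },
                  { rules, [ filter_groupware_folders ] }
              ] }
          ] }
      ] }
    ].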
Rules are implemented in Guam as Erlang modules which implement a simple behavior (Erlangish for "interface"): new/1, applies/3, apply_to_client_message/3, apply_to_server_message/3, and optionally imap_data/3. The name of the module defines the name of the rule in the config: a rule named foobar would be implemented in a module named kolab_guam_rule_foobar (a rough skeleton of such a module follows at the end of this post). ... and for a quick view that's about all there is to it!

Under the hood

I chose to write it in Erlang because the use case is pretty much perfect for it: lots of simultaneous connections that must be kept separate from one another. Failure in any single connection (including a crash of some sort in the code) does not interfere with any other connection; everything is asynchronous while remaining simple (the core application is a bit under 500 lines of code); and Erlang's VM scales very well as you add cores. In other words: stability, efficiency, simplicity.

Behind the scenes, Guam uses an Erlang IMAP client library that I've been working on called eimap. I won't get any awards for creativity in the naming of it, certainly, but "erlang IMAP" does what it says on the box: it's IMAP in Erlang. That code base is rather larger than the Guam one, and is quite interesting in its own right: finite state machines! passthrough modes! commands as independent modules! async and multi-process! ooh! aaaaaah! sparkles! eimap is a very easy project to get your fingers dirty with (new commands can be implemented in well under 6 lines of code) and will be used by a number of applications in future. More on that in the next blog entry, however. In the meantime, if you want to get involved, check out the git repo, read the docs in there and take a look at the Guam workboard.
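Coming back to the rule behavior described above, here is a rough sketch of the shape such a rule module takes. The callback names come from the behavior as listed in this post; the argument names, return values and the pass-through logic are assumptions for illustration, not the behavior's actual specification.

    %% Hypothetical skeleton for a rule named "foobar"; callbacks as named above,
    %% argument and return conventions are illustrative guesses.
    -module(kolab_guam_rule_foobar).
    -behavior(kolab_guam_rule).
    -export([new/1, applies/3, apply_to_client_message/3, apply_to_server_message/3]).

    %% Create the rule's initial (passive) state from its configuration.
    new(Config) ->
        { ok, Config }.

    %% Peek at early traffic and decide whether this rule should become active.
    applies(_ClientData, _ServerData, State) ->
        { true, State }.

    %% Optionally rewrite data flowing from the client to the server.
    apply_to_client_message(Message, _SessionRef, State) ->
        { Message, State }.

    %% Optionally rewrite data flowing from the server to the client.
    apply_to_server_message(Message, _SessionRef, State) ->
        { Message, State }.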
Posted over 8 years ago by seigo
In the past, even before I joined the Kolab community, our product was released on a feature-based schedule. We've changed that to time-based releases, with an aim to be able to provide something new every 6 months or so. We're now changing it again, but not in the way you might think.

Not too long ago, software was developed, tagged, branched, released, tested, fixed, patched, packaged and deployed (not necessarily all of them in that order) at whichever pace the project set for itself, then distributors would pick it up, then environments would install it, often under the guidance of road-maps and milestones with the final result in mind (the "waterfall" methodology), upgrade paths and support life-cycles. Today, we're seeing more and more software that does not actually get released, ever, is perpetually in motion, and shows weekly or even daily incremental progress. A common methodology used here is the agile methodology which, in contrast to the waterfall approach, is an iterative process and rarely ever leads to a final result. The way this works is by regularly gathering the stakeholders to look at actual, functioning software at the end of an iteration (a "sprint" in Scrum) and take in feedback. This meeting at the end of a sprint is called a retrospective in Scrum. Sprints are time-boxed efforts to achieve a number of goals set out at the beginning of each sprint. The larger the window of time in which the effort is boxed, the more incremental changes will have been made, and the more difficult it becomes to genuinely facilitate agility. And this is where the evolution of Kolab development kicks off.

You can imagine that the delivery of software on a daily or even just a weekly basis can amount to quite an effort, but the delivery of the software is a prerequisite for the retrospective. To facilitate the delivery, a few major aspects need to be taken into account. In no particular order:

1. Packaging

Packaging the software solely for the sake of delivery in time for a retrospective doesn't make sense, let alone packaging the software in such a way that it would be acceptable for inclusion into distributors' stock software repositories. If you want to know more about why or how it doesn't, please leave a comment so I know to write another blog post on just that subject.

2. Delivery

The retrospective intends to use whatever means necessary to be able to show off functioning software; as such it has little to do with delivery to market, and the ability to install a package is therefore a void requirement. A priori, you need to prioritize delivery of git master efficiently and effectively, not to say continuously, and not to say automatically.

3. Constraints

The development team participating in a sprint cannot be held back by arbitrary constraints, for they will be labelled "inhibitors", negatively impacting the team's "velocity" (ability to deliver, if you will). This includes constraints such as the version of PHP or NodeJS that is available on a target platform. In fact, the entire concept of maintaining a target platform is out the window. However, you can set constraints on lower-level system abilities; the version of glibc or C comes to mind. It usually does pay off to set a reference platform for implementation. With Kolab, we have set such a reference platform for implementation. Not only did we do that, we also set a reference implementation for the features and environment topology: a single-server deployment, a.k.a. localhost.localdomain.

4. Software Development Process

To be absolutely clear on the implementation of specifications outlined in requirements, so that the implementation of software and the expectation about delivered functionality are on par, to ensure a development team is provided with sufficiently descriptive architecture and design documentation, use-cases, test-cases, mock-ups and work-flow definitions (user stories), to increase the certainty and confidence at the point of delivery, and to avoid the need to fix things just in time for a retrospective to take place, you will want to make sure that what is put into git master during the sprint is tested. This spells out a need for a test-driven development process, and mandatory (peer) code review. Before a retrospective takes place, the developments are reviewed (having been verified through continuous integration), merged, and become a line item on the list of "things to show off" during the retrospective.

5. Containers

Docker containers offer a great opportunity to not only install software that is not packaged, but also to apply a series of operations, fix gotchas and configure ahead of distribution, while allowing for a certain degree of variety in where the image actually gets run, because the ability to distribute to a wider audience is inherited. In the case of Kolab, orchestration allows for many moving parts to move in parallel, and hence we're interested in using containers to address the "works on my system" syndrome, allow vendorizing massive chunks of code as part of a continuous delivery scheme, and increase participation because everyone can run the results of the latest iteration almost immediately.
Posted over 8 years ago by roundcube
We're proud to announce that the beta release of the next major version 1.2 of Roundcube webmail is out now for download and testing. With this milestone we introduce new features primarily focusing on security and PGP encryption:

- PHP 7 compatibility
- PGP encryption
- Drag-n-drop attachments from mail preview to compose window
- Mail message searching with predefined date intervals
- Improved security measures to protect from brute-force attacks

And of course plenty of small improvements and bug fixes.

The PGP encryption support in Roundcube comes with two options:

Mailvelope

The integration of this browser plugin for Firefox and Chrome comes out of the box in Roundcube 1.2 and is enabled if the Mailvelope API is detected in a user's browser. See the Mailvelope documentation for how to enable it for your site. Read more about the Mailvelope integration and what it looks like.

Enigma plugin

This Roundcube plugin adds server-side PGP encryption features to Roundcube. Enabling this means that users need to fully trust the webmail server, as encryption is done on the server (using GnuPG) and private keys are also stored there. In order to activate server-side PGP encryption for all your users, the 'enigma' plugin, which is shipped with this package, has to be enabled in the Roundcube config. See the plugin's README for details. Also read this blog post about the Enigma plugin and how it works.

IMPORTANT: with this version, we finally deprecate some old Roundcube library functions. Please test your plugins thoroughly and look for deprecation warnings in the logs.

See the complete changelog at trac.roundcube.net/wiki/Changelog and download the new packages from roundcube.net/download. Please note that this is a beta release and we recommend testing it in a separate environment. And don't forget to back up your data before installing it.
Posted over 8 years ago by seigo
Today, I’ve turned off our last two CentOS systems, and now we run Red Hat Enterprise Linux 100% (minus the Fedora workstations), because, you know, amateur-hour be gone! Regretfully, they were firewalls, meaning at some point a unicorn somewhere may have felt the network failing over. Twice.  
Posted over 8 years ago by seigo
Somebody mentioned to me that "struggles" may be a big word for such a small problem, and that mentioning Nulecule in the same breath may not be fair at all. That person, Joe Brockmeier, is correct, and I hereby pledge to buy him a beer, as I know Joe loves beer. Does FOSDEM work for you, Joe?

My earlier blog post did not give you any insight into what, how or why I struggled with Nulecule, nor whether the struggle is with Nulecule specifically or with orchestrating too many containers with a few too many micro-services as a whole.

My Nulecule application is Kolab, the full application suite. It depends on a number of other applications which are principally valid Nulecule applications in their own right. Examples include MongoDB, MariaDB, PostgreSQL, HAProxy, 389 Directory Services, FreeIPA, and such and so forth. Some of these have existing Docker images. Nulecule applications are available for some of these, or are easily made into Nulecule applications. Some are slightly harder to encapsulate, however, and I'll take one example to illustrate the point: 389 Directory Server.

Complex and Custom: 389 DS

A default 389 Directory Server installation falls just short of what Kolab requires or desires to become fully and properly functional:

- schema extensions provide additional functionality,
- anonymous binds should not be allowed,
- ACLs should be more restrictive,
- better passwords should be allowed than only 7-bit ones,
- access logging is I/O intensive and less important than audit logging,
- additional indexes on attributes included in the schema extensions need to be created,
- service and administration accounts and roles need to be created.

To facilitate this particular form of setting up 389 Directory Server, we currently ship our own version of /usr/share/data/dirsrv/template.ldif, in which we substitute some values during initial startup. With these specifics, how would I first create a generic Nulecule application for 389 Directory Server and then extend it? This is a philosophical question I think deserves answering, but it probably requires a lot of deliberation.

Multiple Instances of a Nulecule Application

In another area of my application suite, 6 of my micro-services require a database: MariaDB. The challenge here is three-fold.

First, Atomicapp collects the configuration data required for the Nulecule applications by application graph name only, and the position of the application in the graph is discarded. This is to say that applications A and B both requiring application C would have one combined configuration section for application C, which is then utilized for both A and B.

Furthermore, the Nulecule application for MariaDB (the case in point) creates pods and services all named "mariadb", but there can be only one (per namespace). The creation of the second pod or service will fail. I would use 'generateName' for the pod and service name, but that is not currently supported. Therefore, this restriction applies to all Nulecule applications and not just MariaDB. My way to work around this restriction for now is to fork off mariadb-centos7-atomicapp and substitute its id and name parameters.

The third part of the problem is the level of customization that may be recommended for a MariaDB service: the maximum size of allowed packets, the use of one file per table in InnoDB, buffer sizes, cache sizes, etc. An external Nulecule application should come with the best run-time defaults, and allow for a level of customization by the consuming application. Frankly, the same goes for the 389 Directory Server application I talked about earlier; it would just function differently.

Focus on Priorities

I'm relatively new to both the Atomic and Nulecule development communities, despite spending time with a select group of people in Red Hat's Westford offices last year, so I can only say so much. A number of the conversations on mailing lists, on IRC and in meetings today revolve around integrating atomicapp's and atomic's command-line interfaces, and whether or not Nulecule applications should or have to ship atomicapp code themselves. These are not unimportant topics and may very well need to be addressed sooner rather than later, or we risk they become a sink-hole we later need to climb out of. Fair enough.

However, these topics dominate the conversation. It seems a disproportionate amount of resources is being spent on them, whereas we lack persistent volume configuration for Nulecule applications, duplicate specifications, settings and configuration items, and lack proper support to deploy most things more complex than a web service with a database. That is not to say it is completely impossible; it is just needlessly difficult.

I believe the best way to make it easier to develop and deploy Nulecule applications is to walk people through clearly articulated, documented show-cases of example applications, each one slightly more complex than the last. Part of the ramp-up cost is learning about setting up the necessary systems and how to do anything that is not already covered by an example. I will probably get more involved in these topics to support my team of developers when the time is right.
Posted over 8 years ago by seigo
I'm working on enabling continuous deployment with the help of Nulecule, so that I can have my developers' edges as well as a series of central environments entertain deployment based on the same triggers. For those of you outside the inner circle, Nulecule is supposed to abstract deployment to different cloud orchestration providers, whether vanilla Docker, something assisted by Kubernetes, or something as fancy as OpenShift.

In the Kolab Groupware world, orchestration of micro-services is no new challenge. We operate many, many environments that deploy "micro-services" based on (fat) VMs, but the way this works based on (thin) containers is different; there's no running Puppet inside a container, for instance, or if there is, you're doing it wrong. So, aside from the fact we know what our micro-services are across the application suite, the trick is in the orchestration.

I have some 25-30 micro-services that require access to a service account's credentials, because we do not normally allow anonymous binds against LDAP (any exploit anywhere would otherwise open you up to reconnaissance). Currently, setting one container to hold and configure those credentials in a canonical authentication and authorization database, and distributing those credentials to the other 24-29 containers, does not seem possible without the user or administrator installing Kolab using containers specifying the credentials 25-30 times over. On a similar note, it is currently not possible to specify which volumes for a Docker container are actually supposed to be persistent storage.

The point to take from this is to appreciate that Kolab is often at the forefront of technologies that have little to do with bouncing back and forth some emails among people who just so happen to share a calendar; in this case specifically, we're abusing our use-case's requirements with regards to orchestration to direct the conversation about a unified format to describe such requirements.