
News

Posted about 2 years ago
The diffoscope maintainers are pleased to announce the release of diffoscope version 207. This version includes the following changes: * Fix a gnarly regression when comparing directories against non-directories. (Closes: reproducible-builds/diffoscope#292) * Use our assert_diff utility where we can within test_directory.py. You can find out more by visiting the project homepage.
Posted about 2 years ago
SSH private key scanner (keys without passphrase) So for policy reasons, a customer wanted to ensure that every SSH private key in use by a human on their systems has a passphrase set. And asked us to make sure this is the case. There is no way in SSH to check this during connection, so the client side needs to be looked at. Which means looking at actual files on the system. Turns out there are multiple formats for the private keys - and I really do not want to implement something able to deal with all of them on my own. OpenSSH to the rescue: it ships a little tool, ssh-keygen, most commonly known for its ability to generate SSH keys. But it can do much more with keys. One action is interesting for our case: the ability to print out the public key for a given private key. For a key that is unprotected, this will just work. A key with a passphrase instead leads to it asking you for one. So we have our way to check if a key is protected by a passphrase. Now we only need to find all possible keys (note, the requirement is not “keys in .ssh/” but all possible keys, so we need to scan for them). But we do not want to run ssh-keygen on just any file; we would like to do it only when we are halfway sure that it is actually a key. Well, turns out that even though SSH has multiple formats, they all appear to have the string PRIVATE KEY somewhere very early (usually the first line). And they are tiny - even a 16384-bit RSA key is just above 12000 bytes long. Let's find every file that is less than 13000 bytes and has the magic string in it, and throw it at ssh-keygen - if we get a public key back, flag it. Also, we supply a random (oh well, hardcoded) passphrase, to avoid it prompting for one. Scanning the whole system, one will find quite a surprising number of “unprotected” SSH keys. Well, a better description is possibly “unprotected RSA private keys”, so the output does need to be checked by a human. This, of course, can be done in shell quite simply. So I wrote some Rust code instead, as I am still on my task to try and learn more of it. If you are interested, you can find sshprivscanner and play with it, patches/fixes/whatever welcome.
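The same approach can be sketched in plain shell with the tools described above. This is only a minimal illustration of the idea, not the author's sshprivscanner tool; the 13000-byte cutoff and the dummy passphrase follow the post's reasoning, and the scan root and output format are arbitrary:

    #!/bin/sh
    # Scan the filesystem for small files that look like SSH private keys
    # and flag those that ssh-keygen can read without any passphrase.
    find / -xdev -type f -size -13000c 2>/dev/null | while read -r f; do
        # cheap pre-filter: the magic string sits near the top of every key format
        head -c 4096 "$f" 2>/dev/null | grep -q "PRIVATE KEY" || continue
        # -y prints the public key; -P supplies a (wrong) passphrase so we are
        # never prompted interactively. Success therefore means: no passphrase.
        if ssh-keygen -y -P "not-the-passphrase" -f "$f" >/dev/null 2>&1; then
            echo "UNPROTECTED: $f"
        fi
    done

As the post notes, the hits still need a human to look at them before concluding anything.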
Posted about 2 years ago
I bought myself a new keyboard last November, a Logitech G213. True keyboard fans will tell me it’s not a real mechanical keyboard, but it was a lot cheaper and met my requirements of having some backlighting and a few media keys (really all I use are the volume control keys). Oh, and being a proper UK layout. While the G213 isn’t fully independent RGB per key, it does have a set of zones that can be controlled. Also, this has been reverse engineered, so there are tools to do this under Linux. All I really wanted was some basic backlighting to make things a bit nicer in the evenings, but with the ability to control colour I felt I should put it to good use. As previously mentioned I have a personal desktop / work laptop setup combined with a UGREEN USB 3.0 Sharing Switch Box, so the keyboard is shared between both machines. So I configured both machines to set the keyboard colour when the USB device is plugged in, and told them to use different colours. Instant visual indication of which machine I’m currently typing on! Running the script on USB detection is easy: a file in /etc/udev/rules.d/. I called it 99-keyboard-colour.rules:

# Change the keyboard colour when we see it
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="046d", ATTR{idProduct}=="c336", \
    RUN+="/usr/local/sbin/g213-set"

g213-set is a simple bit of Python:

#!/usr/bin/python3

import sys

found = False
devnum = 0

while not found:
    try:
        with open("/sys/class/hidraw/hidraw" + str(devnum) + "/device/uevent") as f:
            for line in f:
                line = line.rstrip()
                if line == 'HID_NAME=Logitech Gaming Keyboard G213':
                    found = True
    except:
        # no more hidraw devices to check
        break
    if not found:
        devnum += 1

if not found:
    print("Could not find keyboard device")
    sys.exit(1)

eventfile = "/dev/hidraw" + str(devnum)

# z r g b
command = [ 0x11, 0xff, 0x0c, 0x3a, 0, 1, 0xff, 0xff, 0x00, 2,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]

with open(eventfile, "wb") as f:
    f.write(bytes(command))

I did wonder about trying to make it turn red when I’m in a root terminal, but that gets a bit more complicated (I’m guessing I need to hook into GNOME Terminal somehow?) and this simple hack gives me a significant win anyway.
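To exercise the rule without physically replugging the keyboard, the usual udev workflow should apply. A small sketch, assuming a standard systemd-udev setup and the file and script names from the post:

    # reload the rules after editing 99-keyboard-colour.rules
    sudo udevadm control --reload
    # replay "add" events for USB devices so the RUN+= handler fires again
    sudo udevadm trigger --subsystem-match=usb --action=add
    # or simply run the script by hand to check the hidraw write itself
    sudo /usr/local/sbin/g213-set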
Posted about 2 years ago
Anarcat's "procmail considered harmful" post convinced me to get my act together and finally migrate my venerable procmail based setup to sieve. My setup was nontrivial, so I migrated with an intermediate step in which sieve scripts would by ... [More] default pipe everything to procmail, which allowed me to slowly move rules from procmailrc to sieve until nothing remained in procmailrc. Here's what I did. Literature review https://brokkr.net/2019/10/31/lets-do-dovecot-slowly-and-properly-part-3-lmtp/ has a guide quite aligned with current Debian, and could be a starting point to get an idea of the work to do. https://wiki.dovecot.org/HowTo/PostfixDovecotLMTP is way more terse, but more aligned with my intentions. Reading the former helped me in understanding the latter. https://datatracker.ietf.org/doc/html/rfc5228 has the full Sieve syntax. https://doc.dovecot.org/configuration_manual/sieve/pigeonhole_sieve_interpreter/ has the list of Sieve features supported by Dovecot. https://doc.dovecot.org/settings/pigeonhole/ has the reference on Dovecot's sieve implementation. https://raw.githubusercontent.com/dovecot/pigeonhole/master/doc/rfc/spec-bosch-sieve-extprograms.txt is the hard to find full reference for the functions introduced by the extprograms plugin. Debugging tools: doveconf to dump dovecot's configuration to see if what it understands matches what I mean sieve-test parses sieve scripts: sieve-test file.sieve /dev/null is a quick and dirty syntax check Backup of all mails processed One thing I did with procmail was to generate a monthly mailbox with all incoming email, with something like this: BACKUP="/srv/backupts/test-`date +%Y-%m-d`.mbox" :0c $BACKUP I did not find an obvious way in sieve to create montly mailboxes, so I redesigned that system using Postfix's always_bcc feature, piping everything to an archive user. I'll then recreate the monthly archiving using a chewmail script that I can simply run via cron. Configure dovecot apt install dovecot-sieve dovecot-lmtpd I added this to the local dovecot configuration: service lmtp { unix_listener /var/spool/postfix/private/dovecot-lmtp { user = postfix group = postfix mode = 0666 } } protocol lmtp { mail_plugins = $mail_plugins sieve } plugin { sieve = file:~/.sieve;active=~/.dovecot.sieve } This makes Dovecot ready to receive mail from Postfix via a lmtp unix socket created in Postfix's private chroot. It also activates the sieve plugin, and uses ~/.sieve as a sieve script. The script can be a file or a directory; if it is a directory, ~/.dovecot.sieve will be a symlink pointing to the .sieve file to run. This is a feature I'm not yet using, but if one day I want to try enabling UIs to edit sieve scripts, that part is ready. 
Delegate to procmail

To make sieve scripts that delegate to procmail, I enabled the sieve_extprograms plugin:

 plugin {
   sieve = file:~/.sieve;active=~/.dovecot.sieve
+  sieve_plugins = sieve_extprograms
+  sieve_extensions = +vnd.dovecot.pipe
+  sieve_pipe_bin_dir = /usr/local/lib/dovecot/sieve-pipe
+  sieve_trace_dir = ~/.sieve-trace
+  sieve_trace_level = matching
+  sieve_trace_debug = yes
 }

and then created a script for it:

mkdir -p /usr/local/lib/dovecot/sieve-pipe/
(echo '#!/bin/sh'; echo 'exec /usr/bin/procmail') > /usr/local/lib/dovecot/sieve-pipe/procmail
chmod 0755 /usr/local/lib/dovecot/sieve-pipe/procmail

And I can have a sieve script that delegates processing to procmail:

require "vnd.dovecot.pipe";
pipe "procmail";

Activate the postfix side

These changes switched local delivery over to Dovecot:

--- a/roles/mailserver/templates/dovecot.conf
+++ b/roles/mailserver/templates/dovecot.conf
@@ -25,6 +25,8 @@
 …
+auth_username_format = %Ln
+
 …

diff --git a/roles/mailserver/templates/main.cf b/roles/mailserver/templates/main.cf
index d2c515a..d35537c 100644
--- a/roles/mailserver/templates/main.cf
+++ b/roles/mailserver/templates/main.cf
@@ -64,8 +64,7 @@
 virtual_alias_domains = …
-mailbox_command = procmail -a "$EXTENSION"
-mailbox_size_limit = 0
+mailbox_transport = lmtp:unix:private/dovecot-lmtp
 …

Without auth_username_format = %Ln dovecot won't be able to understand usernames sent by postfix in my specific setup.

Moving rules over to sieve

This is mostly straightforward, with the luxury of being able to do it a bit at a time. The last tricky bit was how to call spamc from sieve, as in some situations I reduce system load by running the spamfilter only on a prefiltered selection of incoming emails. For this I enabled the filter directive in sieve:

 plugin {
   sieve = file:~/.sieve;active=~/.dovecot.sieve
   sieve_plugins = sieve_extprograms
-  sieve_extensions = +vnd.dovecot.pipe
+  sieve_extensions = +vnd.dovecot.pipe +vnd.dovecot.filter
   sieve_pipe_bin_dir = /usr/local/lib/dovecot/sieve-pipe
+  sieve_filter_bin_dir = /usr/local/lib/dovecot/sieve-filter
   sieve_trace_dir = ~/.sieve-trace
   sieve_trace_level = matching
   sieve_trace_debug = yes
 }

Then I created a filter script:

mkdir -p /usr/local/lib/dovecot/sieve-filter/
(echo '#!/bin/sh'; echo 'exec /usr/bin/spamc') > /usr/local/lib/dovecot/sieve-filter/spamc
chmod 0755 /usr/local/lib/dovecot/sieve-filter/spamc

And now what was previously:

:0 fw
| /usr/bin/spamc

:0
* ^X-Spam-Status: Yes
.spam/

Can become:

require "vnd.dovecot.filter";
require "fileinto";

filter "spamc";

if header :contains "x-spam-level" "**************" {
    discard;
} elsif header :matches "X-Spam-Status" "Yes,*" {
    fileinto "spam";
}

Updates

Ansgar mentioned that it's possible to replicate the monthly mailbox using the variables and date extensions, with a hacky trick from the extensions' RFC:

require "date";
require "variables";

if currentdate :matches "month" "*" { set "month" "${1}"; }
if currentdate :matches "year" "*" { set "year" "${1}"; }

fileinto :create "${month}-${year}";
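As a quick way to confirm that the date-based rule above really creates the per-month folders, the doveadm tool that ships with Dovecot can list a user's mailboxes. A small sketch; the username is a placeholder:

    # mailboxes named like "03-2022" should show up once mail has been delivered
    doveadm mailbox list -u someuser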
Posted about 2 years ago
By far the biggest LVM Cache surprise is just how well it works. Between 2010 and 2020, my single biggest and most consistent headache managing servers at May First has been disk i/o. We run a number of physical hosts with encrypted disks, with each providing a dozen or so sundry KVM guests. And they consume a lot of disk i/o. This problem kept me awake at night and made me want to put my head on the table and cry during the day as I monitored the output of vmstat 1 and watched each disk i/o death spiral unfold. We tried everything. Turned off fscks, turned off RAID monthly checks. Switched to less intensive backup systems. Added solid state drives and tried to strategically distribute them to our database partitions and other read/write heavy services. Added tmpfs file systems where it was possible. But, the sad truth was: we simply did not have the resources to pay for the infrastructure that could support the disk i/o our services demanded. Then, we discovered LVM caching (cue Hallelujah). We started provisioning SSD partitions to cache our busiest spinning disk logical volumes and presto. Ten years of agony gone like a poof of smoke! I don’t know which individuals are responsible for writing the LVM caching code but if you see this: THANK YOU! Your contributions to the world are noticed, appreciated and have had an enormous impact on at least one individual.

Some surprises

Filters

For the last two years, with the exception of one little heart attack, LVM caches have gone very smoothly. Then, last week we upgraded 13 physical servers straight through from stretch to bullseye. It went relatively smoothly for the first half of our servers (the old ones hosting fewer resources). But, after rebooting our first server with lvm caching going on, we noticed that the cached disk wasn’t accessible. No problem, we reasoned. We’ll just uncache it. Except that didn’t work either. We tried every argument we could find on the Internet but lvm insisted that the block device from the SSD volume group (that provides the caching device) was not available. Running pvs showed an “unknown” device and vgs reported similar errors. Now I started to panic a bit. There was a clean shutdown of the server, so surely all the data had been flushed to the disk. But, how can we get that data? We started a restore from backup process because we really thought that data was gone forever. Then we had a really great theory: the caching logical volume comes from the SSD volume group, which gets decrypted after the spinning disk volume group. Maybe there’s a timing issue? When the spinning disk volume group comes online, the caching logical volume is not yet available. So, we booted into busybox, and manually decrypted the SSD volume first, followed by the spinning disk volume. Alas, no dice. Now that we were fully desperate, we decided to restore the lvm configuration file for the entire spinning disk volume group. This felt kinda risky since we might be damaging all the currently working logical volumes, but it seemed like the only option we had. The main problem was that busybox didn’t seem to have the lvm config tool we needed to restore the configuration from our backup (I think it might be there but it was late and we couldn’t figure it out). And, our only readily available live install media was a Debian stretch disk via debirf. Debian stretch is pretty old and we really would have preferred to have the most modern tools available, but we decided to go with what we had.
And, that was a good thing, because as soon as we booted into stretch and decrypted the disks, the lvm volume suddenly appeared, happy as ever. We uncached it and booted into the host system and there it was. We went to bed confused but relieved. The next morning my co-worker figured it out: filtering. During the stretch days we occasionally ran into an annoying problem: the logical volumes from guests would suddenly pop up on the host. This was mostly annoying, but it also made possible some serious mistakes if you accidentally took a volume from a guest and used it on the host. The LVM folks seem to have noticed this problem and introduced a new default filter that tries to only show you the devices that you should be seeing. Unfortunately for us, this new filter removed logical volumes from the list of available physical volumes. That does make sense for most people. But, not for us. It sounds a bit weird, but our setup looks like this:

One volume group derived from the spinning disks
One volume group derived from the SSD disks

Then we carve out logical volumes from each for each guest. Once we discovered LVM caching, we carved out SSD logical volumes to be used as caches for the spinning logical volumes. In retrospect, if we could start over, we would probably do it differently. In any event, once we discovered the problem, we used the handy configuration options in lvm.conf to tweak the filters to include our cache disks and once again, everything is back to working.

Saturated SSDs

The other surprise seems unrelated to the upgrade. We have a physical server that has been suffering from disk i/o problems despite our use of LVM caching. Our answer, of course, was to add more LVM caches to the spinning logical volumes that seemed to be suffering. But somehow this was making things even worse. Then, we finally just removed the LVM caches from all the spinning disks and presto, the disk i/o problems seemed to go away. What? Isn’t that the opposite of what’s supposed to happen? We’re still trying to figure this one out, but it seems that our SSDs are saturated, in which case adding them as a caching volume really is going to make things worse. We’re still not sure why they are saturated when none of the SSDs on our other hosts are saturated, but a few theories include:

They are doing more writing and/or it’s a different kind of writing. I’m still not sure I quite have the right tool to compare this host with other hosts. And, this host is our only MySQL network database server, hosting hundreds of GBs of database - all writing/reading directly onto the SSDs.
They are broken or substandard SSDs (smartctl doesn’t uncover any problems but maybe it’s a bad model?)

I’ll update this post as we learn more but welcome any suggestions in the comments.

Update: 2022-03-07

Two more possible causes:

Our use of the write back feature: LVM cache has a nice feature that caches writes to smooth out writes to the underlying disk. Maybe our disks are simply writing more than can be handled and not using write back is our solution.
This server supports a guest with an unusually large disk. Maybe we haven’t allocated a big enough LVM cache for the given volume so the contents are constantly being ejected?
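The tweak the post alludes to lives in the devices section of lvm.conf. A minimal sketch of what such a change could look like, with entirely hypothetical volume group and device names (the exact regular expressions depend on the local naming scheme; newer LVM also has a scan_lvs switch that controls whether logical volumes are scanned for PV signatures at all):

    # /etc/lvm/lvm.conf (excerpt) - names are placeholders, adjust to the local setup
    devices {
        # accept the SSD-backed LVs that serve as cache PVs and the crypt devices,
        # reject everything else
        filter = [ "a|^/dev/ssd-vg/cache-.*|", "a|^/dev/mapper/.*_crypt$|", "r|.*|" ]
        # alternatively, let LVM scan logical volumes for PV signatures again
        scan_lvs = 1
    }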
Posted about 2 years ago
Note: this post is also available on my website, where it will be updated periodically. When things are difficult – maybe there’s been a disaster, or an invasion (this page is being written in 2022 just after Russia invaded Ukraine), or maybe you’re just backpacking off the grid – there are tools that can help you keep in touch, or move your data around. This page aims to survey some of them, roughly in order from easiest to more complex.

Simple radios

Handheld radios shouldn’t be forgotten. They are cheap, small, and easy to operate. Their range isn’t huge – maybe a couple of miles in rural areas, much less in cities – but they can be a useful place to start. They tend to have no actual encryption features (the “privacy” features really aren’t.) In the USA, options are FRS/GMRS and CB.

Syncthing

With Syncthing, you can share files among your devices or with your friends. Syncthing essentially builds a private mesh for file sharing. Devices will auto-discover each other when on the same LAN or Wifi network, and opportunistically sync. I wrote more about offline uses of Syncthing, and its use with NNCP, in my blog post A simple, delay-tolerant, offline-capable mesh network with Syncthing (+ optional NNCP). Yes, it is a form of a Mesh Network! Homepage: https://syncthing.net/

Briar

Briar is an instant messaging service based around Android. It’s IM with a twist: it can use a mesh of Bluetooth devices. Or, if Internet is available, Tor. It has even been extended to support the use of SD cards and USB sticks to carry your messages. Like some others here, it can relay messages for third parties as well. Homepage: https://briarproject.org/

Manyverse and Scuttlebutt

Manyverse is a client for Scuttlebutt, which is a sort of asynchronous, offline-friendly social network. You can use it to keep in touch with your family and friends, and it supports syncing over Bluetooth and Wifi even in the absence of Internet. Homepages: https://www.manyver.se/ and https://scuttlebutt.nz/

Yggdrasil

Yggdrasil is a self-healing, fully end-to-end Encrypted Mesh Network. It can work among local devices or on the global Internet. It has network services that can egress onto things like Tor, I2P, and the public Internet. Yggdrasil makes a perfect companion to ad-hoc wifi as it has auto peer discovery on the local network. I talked about it in more detail in my blog post Make the Internet Yours Again With an Instant Mesh Network. Homepage: https://yggdrasil-network.github.io/

Ad-Hoc Wifi

Few people know about the ad-hoc wifi mode. Ad-hoc wifi lets devices in range talk to each other without an access point. You just all set your devices to the same network name and password and there you go. However, there often isn’t DHCP, so IP configuration can be a bit of a challenge. Yggdrasil helps here. (A minimal command-line sketch follows at the end of this post.)

NNCP

Moving now to more advanced tools, NNCP lets you assemble a network of peers that can use Asynchronous Communication over sneakernet, USB drives, radios, CD-Rs, Internet, tor, NNCP over Yggdrasil, Syncthing, Dropbox, S3, you name it. NNCP supports multi-hop file transfer and remote execution. It is fully end-to-end encrypted. Think of it as the offline version of ssh. Homepage: https://nncp.mirrors.quux.org/

Meshtastic

Meshtastic uses long-range, low-power LoRa radios to build a long-distance, encrypted, instant messaging system that is a Mesh Network. It requires specialized hardware, about $30, but will tend to get much better range than simple radios, and with very little power.
Homepages: https://meshtastic.org/ and https://meshtastic.letstalkthis.com/

Portable Satellite Communicators

You can get portable satellite communicators that can send SMS from anywhere on earth with a clear view of the sky. The Garmin InReach mini and Zoleo are two credible options. Subscriptions range from about $10 to $40 per month depending on usage. They also have global SOS features.

Telephone Lines

If you have a phone line and a modem, UUCP can get through just about anything. It’s an older protocol that lacks modern security, but will deal with slow and noisy serial lines well. XBee SX radios also have a serial mode that can work well with UUCP.

Additional Suggestions

It is probably useful to have a Linux live USB stick with whatever software you want to use handy. Debian can be installed from the live environment, or you could use a security-focused distribution such as Tails or Qubes.

References

This page originated in my Mastodon thread and incorporates some suggestions I received there. It also formed a post on my blog.
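As a companion to the Ad-Hoc Wifi section above, here is a minimal sketch of joining an open IBSS (ad-hoc) network with standard Linux tools. The interface name, network name, frequency and address are made up for the example; a password-protected ad-hoc network additionally needs wpa_supplicant, and with Yggdrasil running the static-address step can even be skipped:

    # put the wireless interface into ad-hoc (IBSS) mode and join a network
    ip link set wlan0 down
    iw dev wlan0 set type ibss
    ip link set wlan0 up
    iw dev wlan0 ibss join disasternet 2432    # 2432 MHz = channel 5

    # no DHCP on an ad-hoc network: hand out static addresses yourself,
    # a different one per device in the same subnet
    ip addr add 10.42.0.1/24 dev wlan0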
Posted about 2 years ago
About 4 years ago, I posted about making a 3D printed case for my then-new phone. The FP2 was already a few years old when I got one and by now, some spares are unavailable - which is a problem, because I'm terribly hard on hardware. Indeed, that's why I need a very sturdy case for my phone - a case which can be ablative when necessary. With the arrival of my new Fairphone 4, I've updated my case design. Sadly the FP4 doesn't have a notification LED - I guess we're supposed to be glued to the screen, and leaving the phone ignored in a corner unless it lights up is forbidden. But that does at least make the printing simpler, as there's no need for a window for the LED. Source code: https://www.chiark.greenend.org.uk/ucgi/~ianmdlvl/git?p=reprap-play.git;a=blob;f=fairphone4-case.scad;h=1738612c2aafcd4ee4ea6b8d1d14feffeba3b392;hb=629359238b2938366dc6e526d30a2a7ddec5a1b0 And the diagrams (which are part of the source, although I didn't update them for the FP4 changes): https://www.chiark.greenend.org.uk/ucgi/~ianmdlvl/git?p=reprap-diagrams.git;a=tree;f=fairphone-case;h=65f423399cbcfd3cf24265ed3216e6b4c0b26c20;hb=07e1723c88a294d68637bb2ca3eac388d2a0b5d4 (big pictures)
Posted about 2 years ago
TL;DR: procmail is a security liability and has been abandoned upstream for the last two decades. If you are still using it, you should probably drop everything and at least remove its SUID flag. There are plenty of alternatives to choose from, and conversion is a one-time, acceptable trade-off.

Procmail is unmaintained

procmail is unmaintained. The "Final release", according to Wikipedia, dates back to September 10, 2001 (3.22). That release has been shipped in Debian ever since, all the way back to Debian 3.0 "woody", twenty years ago. Debian also ships 25 uploads on top of this, with 3.22-21 shipping the "3.23pre" release that has been rumored since at least November 2001, according to debian/changelog at least:

procmail (3.22-1) unstable; urgency=low

  * New upstream release, which uses the `standard' format for Maildir
    filenames and retries on name collision. It also contains some bug fixes
    from the 3.23pre snapshot dated 2001-09-13.
  * Removed `sendmail' from the Recommends field, since we already have
    `exim' (the default Debian MTA) and `mail-transport-agent'.
  * Removed suidmanager support. Conflicts: suidmanager (<< 0.50).
  * Added support for DEB_BUILD_OPTIONS in the source package.
  * README.Maildir: Do not use locking on the example recipe,
    since it's wrong to do so in this case.

 -- Santiago Vila  Wed, 21 Nov 2001 09:40:20 +0100

All Debian suites from buster onwards ship the 3.22-26 release, although the maintainer just pushed a 3.22-27 release to fix a seven year old null pointer dereference, after this article was drafted. Procmail is also shipped in all major distributions: Fedora and its derivatives, Debian derivatives, Gentoo, Arch, FreeBSD, OpenBSD. We all seem to be ignoring this problem. The upstream website (http://procmail.org/) has been down since about 2015, according to Debian bug #805864, with no change since. In effect, every distribution is currently maintaining its own fork of this dead program. Note that, after filing a bug to keep Debian from shipping procmail in a stable release again, I was told that the Debian maintainer is apparently in contact with the upstream. And, surprise! they still plan to release that fabled 3.23 release, which has now been in "pre-release" for all those twenty years. In fact, it turns out that 3.23 is considered released already, and that the procmail author actually pushed a 3.24 release, codenamed "Two decades of fixes". That amounts to 25 commits since 3.23pre, some of which address serious security issues, but none of which address fundamental issues with the code base.

Procmail is insecure

By default, procmail is installed SUID root:mail in Debian. There's no debconf or pre-seed setting that can change this. There have been two bug reports against the Debian package to make this configurable (298058, 264011), but both were closed to say that, basically, you should use dpkg-statoverride to change the permissions on the binary. So if anything, you should immediately run this command on any host that you have procmail installed on:

dpkg-statoverride --update --add root root 0755 /usr/bin/procmail

Note that this might break email delivery. It might also not work at all, thanks to usrmerge. Not sure. Yes, everything is on fire. This is fine. In my opinion, even assuming we keep procmail in Debian, that default should be reversed. It should be up to people installing procmail to assign it those dangerous permissions, after careful consideration of the risk involved.
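To see where a given host currently stands, and whether the override stuck, something along these lines should do. This is a small sketch using only standard dpkg and coreutils tooling, nothing specific to the author's setup:

    # show the current mode and ownership of the binary (4755 means SUID is set)
    stat -c '%a %U:%G %n' /usr/bin/procmail

    # list any overrides already registered with dpkg
    dpkg-statoverride --list /usr/bin/procmail

    # look for any setuid/setgid files shipped by the procmail package
    dpkg -L procmail | xargs -r ls -ld 2>/dev/null | awk '$1 ~ /s/'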
The last maintainer of procmail explicitly advised us (in that null pointer dereference bug) and other projects (e.g. OpenBSD, in [2]) to stop shipping it, back in 2014. Quote: Executive summary: delete the procmail port; the code is not safe and should not be used as a basis for any further work. I just read some of the code again this morning, after the original author claimed that procmail was active again. It's still littered with bizarre macros like:

#define bit_set(name,which,value) \
  (value?(name[bit_index(which)]|=bit_mask(which)):\
  (name[bit_index(which)]&=~bit_mask(which)))

... from regexp.c, line 66 (yes, that's a custom regex engine). Or this one:

#define jj (aleps.au.sopc)

It uses insecure functions like strcpy extensively. malloc() is thrown around gotos like it's 1984 all over again. (To be fair, it has been feeling like 1984 a lot lately, but that's another matter entirely.) That null pointer deref bug? It's fixed upstream now, in this commit merged a few hours ago, which I presume might be in response to my request to remove procmail from Debian. So while that's nice, this is just the tip of the iceberg. I speculate that one could easily find an exploitable crash in procmail if only by running it through a fuzzer. But I don't need to speculate: procmail had, for years, serious security issues that could possibly lead to root privilege escalation, remotely exploitable if procmail is (as it's designed to do) exposed to the network. Maybe I'm overreacting. Maybe the procmail author will go through the code base and do a proper rewrite. But I don't think that's what is in the cards right now. What I expect will happen next is that people will start fuzzing procmail, throw an uncountable number of bug reports at it which will get fixed in a trickle while never fixing the underlying, serious design flaws behind procmail.

Procmail has better alternatives

The reason this is so frustrating is that there are plenty of modern alternatives to procmail which do not suffer from those problems. Alternatives to procmail(1) itself are typically part of mail servers. For example, Dovecot has its own LDA which implements the standard Sieve language (RFC 5228). (Interestingly, Sieve was published as RFC 3028 in 2001, before procmail was formally abandoned.) Courier also has "maildrop" which has its own filtering mechanism, and there is fdm (2007) which is a fetchmail and procmail replacement. Update: there's also mailprocessing, which is not an LDA, but processes an existing folder. It was, however, specifically designed to replace complex Procmail rules. But procmail, of course, doesn't just ship procmail; that would just be too easy. It ships mailstat(1), which we could probably ignore because it only parses procmail log files. But more importantly, it also ships:

lockfile(1) - conditional semaphore-file creator
formail(1) - mail (re)formatter

lockfile(1) already has a somewhat acceptable replacement in the form of flock(1), part of util-linux (which is Essential, so installed on any normal Debian system). It might not be a direct drop-in replacement, but it should be close enough. formail(1) is similar: the courier maildrop package ships reformail(1) which is, presumably, a rewrite of formail. It's unclear if it's a drop-in replacement, but it should probably be possible to port uses of formail to it easily. Update: the maildrop package ships a SUID root binary (two, even).
So if you want only reformail(1), you might want to disable that with:

dpkg-statoverride --update --add root root 0755 /usr/bin/lockmail.maildrop
dpkg-statoverride --update --add root root 0755 /usr/bin/maildrop

It would perhaps be better to have reformail(1) as a separate package, see bug 1006903 for that discussion. The real challenge is, of course, migrating those old .procmailrc recipes to Sieve (basically). I added a few examples in the appendix below. You might notice the Sieve examples are easier to read, which is a nice added bonus.

Conclusion

There is really, absolutely, no reason to keep procmail in Debian, nor should it be used anywhere at this point. It's a great part of our computing history. May it be kept forever in our museums and historical archives, but not in Debian, and certainly not in an actual release. It's just a bomb waiting to go off. It is irresponsible for distributions to keep shipping obsolete and insecure software like this to unsuspecting users. Note that I am grateful to the author, I really am: I used procmail for decades and it served me well. But now, it's time to move on, not bring it back from the dead.

Appendix

Previous work

It's really weird to have to write this blog post. Back in 2016, I rebuilt my mail setup at home and, to my horror, discovered that procmail had been abandoned for 15 years at that point, thanks to that LWN article from 2010. I would have thought that I was the only weirdo still running procmail after all those years and felt kind of embarrassed to only "now" switch to the more modern (and, honestly, awesome) Sieve language. But no. Since then, Debian shipped three major releases (stretch, buster, and bullseye), all with the same vulnerable procmail release. Then, in early 2022, I found that, at work, we actually had procmail installed everywhere, possibly because userdir-ldap was using it for lockfile until 2019. I sent a patch to fix that and scrambled to get rid of procmail everywhere. That took about a day. But many other sites are now in that situation, possibly not imagining they have this glaring security hole in their infrastructure.

Procmail to Sieve recipes

I'll collect a few Sieve equivalents to procmail recipes here. If you have any additions, do contact me. All Sieve examples below assume you drop the file in ~/.dovecot.sieve.

deliver mail to "plus" extension folder

Say you want to deliver [email protected] to the folder foo. You might write something like this in procmail:

MAILDIR=$HOME/Maildir/
DEFAULT=$MAILDIR
LOGFILE=$HOME/.procmail.log
VERBOSE=off
EXTENSION=$1            # Need to rename it - ?? does not like $1 nor 1

:0
* EXTENSION ?? [a-zA-Z0-9]+
.$EXTENSION/
That, in sieve language, would be:

require ["variables", "envelope", "fileinto", "subaddress"];

########################################################################
# wildcard +extension
# https://doc.dovecot.org/configuration_manual/sieve/examples/#plus-addressed-mail-filtering
if envelope :matches :detail "to" "*" {
    # Save name in ${name} in all lowercase
    set :lower "name" "${1}";
    fileinto "${name}";
    stop;
}

Subject into folder

This would file all mails with a Subject: line having FreshPorts in it into the freshports folder, and mails from alternc.org mailing lists into the alternc folder:

:0
## mailing list freshports
* ^Subject.*FreshPorts.*
.freshports/

:0
## mailing list alternc
* ^List-Post.*mailto:.*@alternc.org.*
.alternc/

Equivalent Sieve:

if header :contains "subject" "FreshPorts" {
    fileinto "freshports";
} elsif header :contains "List-Id" "alternc.org" {
    fileinto "alternc";
}

Mail sent to root to a reports folder

This double rule:

:0
* ^Subject: Cron
* ^From: .*root@
.rapports/

Would look something like this in Sieve:

if header :comparator "i;octet" :contains "Subject" "Cron" {
    if header :regex :comparator "i;octet" "From" ".*root@" {
        fileinto "rapports";
    }
}

Note that this is what the automated converter does (below). It's not very readable, but it works.

Bulk email

I didn't have an equivalent of this in procmail, but that's something I did in Sieve:

if header :contains "Precedence" "bulk" {
    fileinto "bulk";
}

Any mailing list

This is another rule I didn't have in procmail but I found handy and easy to do in Sieve:

if exists "List-Id" {
    fileinto "lists";
}

This or that

I wouldn't remember how to do this in procmail either, but that's an easy one in Sieve:

if anyof (header :contains "from" "example.com",
          header :contains ["to", "cc"] "[email protected]") {
    fileinto "example";
}

You can even pile up a bunch of options together to have one big rule with multiple patterns:

if anyof (exists "X-Cron-Env",
          header :contains ["subject"] ["security run output",
                                        "monthly run output",
                                        "daily run output",
                                        "weekly run output",
                                        "Debian Package Updates",
                                        "Debian package update",
                                        "daily mail stats",
                                        "Anacron job",
                                        "nagios",
                                        "changes report",
                                        "run output",
                                        "[Systraq]",
                                        "Undelivered mail",
                                        "Postfix SMTP server: errors from",
                                        "backupninja",
                                        "DenyHosts report",
                                        "Debian security status",
                                        "apt-listchanges"],
          header :contains "Auto-Submitted" "auto-generated",
          envelope :contains "from" ["nagios@", "logcheck@", "root@"]) {
    fileinto "rapports";
}

Automated script

There is a procmail2sieve.pl script floating around, and mentioned in the dovecot documentation. It didn't work very well for me: I could use it for small things, but I mostly wrote the sieve file from scratch.

Progressive migration

Enrico Zini has progressively migrated his procmail setup to Sieve using a clever way: he hooked procmail inside sieve so that he could deliver to the Dovecot LDA and progressively migrate rules one by one, without having a "flag day". See this explanatory blog post for the details, which also shows how to configure Dovecot as an LMTP server with Postfix.

Other examples

The Dovecot sieve examples are numerous and also quite useful. At the time of writing, they include virus scanning and spam filtering, vacation auto-replies, includes, archival, and flags.

Harmful considered harmful

I am aware that the "considered harmful" title has a long and controversial history, being considered harmful in itself (by some people who are obviously not afraid of contradictions).
I have nevertheless deliberately chosen that title, partly to make sure this article gets maximum visibility, but more specifically because I do not have doubts at this moment that procmail is, clearly, a bad idea at this moment in history.

Developing story

I must also add that, incredibly, this story has changed while writing it. This article is derived from this bug I filed in Debian to, quite frankly, kick procmail out of Debian. But filing the bug had the interesting effect of pushing the upstream into action: as mentioned above, they have apparently made a new release and merged a bunch of patches in a new git repository. This doesn't change much of the above, at this moment. If anything significant comes out of this effort, I will try to update this article to reflect the situation. I am actually happy to retract the claims in this article if it turns out that procmail is a stellar example of defensive programming and survives fuzzing attacks. But at this moment, I'm pretty confident that will not happen, at least not within the scope of the next Debian release cycle.