
News

Posted about 2 years ago
This is a bit of a sneak preview announcement since I'm waiting for the ISC mirror to update before sending the official announcement to the normal channels, but INN 2.6.5 has been released. (The release was finalized a few days ago, and I'm a bit behind in posting it.) This is a bug fix and minor feature release over INN 2.6.4, and the upgrade should be painless. You can download the new release from ftp.isc.org (once it updates) or my personal INN pages. The latter also has links to the full changelog and the other INN documentation. As always, thanks to Julien ÉLIE for preparing this release and doing most of the maintenance work on INN!

Changes in this release:

* A new step in INN development has been achieved with the migration of the INN project to GitHub. We now make use of the features GitHub provides: issue tracker, pull requests, continuous integration, a user-friendly interface to browse the code, etc. Our Subversion repository has therefore been migrated to Git, and our Trac tickets to the GitHub issue tracker.
* An up-to-date nocem.ctl file is provided with this release. You should manually update your nocem.ctl file with the new information recorded about NoCeM issuers, and make sure the right PGP keys are present on your system.
* Up-to-date control.ctl and moderators files are provided with this release. You should manually update them (notably for the fido7.* hierarchy).
* Added stricter validation of article numbers given in NNTP commands so that numbers greater than 2^31 are correctly considered invalid. Thanks to Richard Kettlewell for the patch.
* Added a check in rc.news for the existence of the pathrun directory. INN won't start until this directory is writable. Previously, it bailed out quickly after starting, without clear logs about why it failed.
* Fixed parallel builds using make -j. Thanks to Richard Kettlewell for the patch.
* nnrpd now properly gathers timer statistics when a compression layer is active.
* nnrpd now properly discards data received from a news client after a timeout when a TLS layer is active. It previously tried to read incoming data before closing the socket, leading to decoding errors from an underlying compression or SASL layer.
* innfeed and ovdb_stat now generate status reports in valid HTML syntax.
* Fixed a bug in the buffindexed overview that prevented it from working on several systems, amongst them FreeBSD: unsupported, and useless, permission bits were given to semaphores.
* Fixed the detection of library paths at configure time: multilib directories (lib32 or lib64) are now also used if they exist, even if the system does not use multilib. This notably fixes the detection of the OpenSSL 3.0.0 library.
* The tlscertfile parameter in inn.conf now permits the use of a complete certificate chain, instead of necessarily having to use tlscafile for additional certificates.
* Added support for the new OpenSSL 3.0.0 API, which deprecated a few functions.
* The inn.conf default value for tlsprotocols no longer contains TLS versions 1.0 and 1.1, which have been deprecated by RFC 8996.
* A new inn.conf parameter has been added to tune the length of the queue of pending connections to innd, nnrpd and the ovdb overview storage method: the maxlisten parameter now permits configuring their listen backlog, whose previously hard-coded values were 128 for nnrpd and 25 for the others, which was not high enough for some uses. The default is now 128 for all of them, and configurable in inn.conf (see the sketch after this list). Thanks to Kevin Bowling for the patch.
* The names of seven man pages for routines built into libinn(3) are now prefixed with libinn_ so as not to consume namespace and conflict with other packages (notably, the list(3) and uwildmat(3) man pages are now named libinn_list(3) and libinn_uwildmat(3)).
* Other minor bug fixes and documentation improvements, notably a revised installation checklist and a section summarizing the most used configuration at the beginning of a few complex man pages.
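For illustration, setting the new backlog parameter in inn.conf could look like the following minimal sketch. Only the maxlisten name and its default of 128 come from the release notes above; the value shown is a hypothetical tuning choice, so consult the inn.conf man page for authoritative syntax:

    # Listen backlog for innd, nnrpd and the ovdb overview server;
    # the INN 2.6.5 default is 128, raised here for a busier server
    # (hypothetical value, adjust to taste).
    maxlisten: 256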
Posted about 2 years ago
… that I made my first upload to CRAN, as demonstrated by the very bottom of the ChangeLog file of the RQuantLib package:

    2002-02-25  Dirk Eddelbuettel

            * Initial 0.1.0 release

And quite a few more uploads followed since. (Also see the earlier twenty years ago … post about my initial contributions to the Debian R package I had by then adopted too.)

If you like this or other open-source work I do, you can now sponsor me at GitHub. This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
Posted about 2 years ago
As of this morning, Rcpp stands at 2501 reverse-dependencies on CRAN. The graph on the left depicts the growth of Rcpp usage (as measured by Depends, Imports and LinkingTo, but excluding Suggests) over time. Rcpp was first released in November 2008. It probably cleared 50 packages around three years later in December 2011, 100 packages in January 2013, 200 packages in April 2014, and 300 packages in November 2014. It passed 400 packages in June 2015 (when I tweeted about it), 500 packages in late October 2015, 600 packages in March 2016, 700 packages in July 2016, 800 packages in October 2016, 900 packages early January 2017, 1000 packages in April 2017, 1250 packages in November 2017, 1500 packages in November 2018, 1750 packages in August 2019, 2000 packages in July 2020, and 2250 packages in March of last year.

The chart extends to the very beginning via manually compiled data from CRANberries and checked with crandb. The next part uses manually saved entries. The core (and by far largest) part of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A (manually curated) list of packages using Rcpp is available too.

Also displayed in the graph is the relative proportion of CRAN packages using Rcpp. The four percent hurdle was cleared just before useR! 2014 where I showed a similar graph (as two distinct graphs) in my invited talk. We passed five percent in December of 2014, six percent in July of 2015, seven percent just before Christmas 2015, eight percent in the summer of 2016, nine percent mid-December 2016, cracked ten percent in the summer of 2017, eleven percent in 2018—and passed 12.5 percent (one in every eight CRAN packages depends on Rcpp) along with the 2000-package mark. Truly stunning.

As before, there is more detail in the chart: how CRAN seems to be pushing back more and removing more aggressively (which my CRANberries tracks, but not in as much detail as it could), and how the growth of Rcpp seems to be slowing somewhat outright and even more so as a proportion of CRAN – as one would expect a growth curve to. The Rcpp team continues to aim for keeping Rcpp as performant and reliable as it has been (and see e.g. here for some more details). A really big shoutout and Thank You! to all users and contributors of Rcpp for help, suggestions, bug reports, documentation or, of course, code.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions. This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
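As a rough illustration of how such a count can be reproduced, here is a minimal sketch using base R's tools package. This is not the semi-automatic script mentioned above, and the repository URL is an assumption:

    ## Count CRAN reverse dependencies of Rcpp the way the post measures
    ## them: via Depends, Imports and LinkingTo, but excluding Suggests.
    db <- available.packages(repos = "https://cloud.r-project.org")
    revdeps <- tools::package_dependencies("Rcpp", db = db,
                                           which = c("Depends", "Imports", "LinkingTo"),
                                           reverse = TRUE)[["Rcpp"]]
    length(revdeps)               # about 2501 at the time of the post
    length(revdeps) / nrow(db)    # proportion of CRAN depending on Rcpp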
Posted about 2 years ago
The diffoscope maintainers are pleased to announce the release of diffoscope version 206. This version includes the following changes:

* Also allow "Unicode text, UTF-8 text" as well as "UTF-8 Unicode text" to match for .buildinfo files too.
* Add a test for a recent file(1) issue regarding .changes files. (Re: reproducible-builds/diffoscope#291)
* Drop the "_PATH" suffix from some module-level globals that are not paths.

You can find out more by visiting the project homepage.
Posted about 2 years ago
At the beginning of this year I updated about a hundred media types associated with file name extensions in the file called /etc/mime.types, distributed by the media-types package. Most changes are additions originating from recent submissions to the IANA. Among the themes that caught my attention are telecommunications, computer security, commerce, healthcare and industrial automation. The vast majority of the updates are of Western provenance. Did the rest of the world decide to move ahead without us?
Posted about 2 years ago
Welcome to the 36th post of the really randomly reverberating R, or R4 for short, write-ups. Today's post is about using Redis, and especially RcppRedis, for live or (near) real-time monitoring with R.

There is a saying that "you can take the boy out of the valley, but you cannot take the valley out of the boy", so for those of us who spent a decade or two in finance and on trading floors, having "some" market price information available becomes second nature. And/or sometimes it is just good fun to program this.

A good while back Josh posted a gist on a simple-yet-robust while loop. It (very cleverly) uses his quantmod package to access the SP500 in "real-time". (I use quotes here because at the end of retail broadband one is not getting the same market action as someone co-located in a New Jersey data center. It is however not delayed: as an index, it is not immediately tradeable the way a stock, ETF, or derivative may be, all of which are only disseminated as delayed price information, usually by ten minutes.) I quite enjoyed the gist, used it, and started tinkering with it. For example, it collects data but only saves (i.e. "persists") it after market close, so if for whatever reason one needs to restart, recent history is gone. In any event, I used his code, generalized it a little, and published it about a year ago as function intradayMarketMonitor() in my dang package. (See this blog post announcing it.) The chart on the left shows this in action; the chart is a snapshot from a couple of days ago when the vignettes (more on them below) were written.

As lovely as intradayMarketMonitor() is, it also limits itself to market hours. And sometimes you want to see, say, how the market opens on Sunday (futures usually restart at 17h Chicago time), or how news dissipates during the night, or where markets are pre-open, or …. So I wanted both to complement this with futures, and also to 'cache' the data locally so that, say, one machine might collect data and one (or several others) can visualize it. For such tasks, Redis is unparalleled. (Yet I also always felt Redis could do with another simple, short and sweet introduction stressing its two key features of i) being multi-lingual: write in one language, consume in another, and ii) loose coupling: no linking, as one talks to Redis via standard tcp/ip networking. So I wrote a new intro vignette that is now in RcppRedis. I hope this comes in handy. Comments welcome!)

Our RcppRedis package had long been used for such tasks, and it was easy to set it up. "Standard use" is to loop, fetch some data, push it to Redis, sleep, and start over. Clients do the same: fetch the most recent data, plot or report it, sleep, start over. That works, but it has a dual delay, as the sleeping client may miss the data update! The standard answer to this is called publish/subscribe, or pub/sub. Libraries such as 0mq or zeromq specialise in this. But it turns out Redis already has it. I had some initial difficulty adding it to RcppRedis, so for a trial I tested the marvellous rredis package by Bryan and simply instantiated two Redis clients. Now the data getter simply 'publishes' a new data point in a given channel, by convention named after the security it tracks. Clients register with the Redis server, which does all the actual work of keeping track of who listens to what. The clients now simply 'listen' (which is a blocking operation) and receive data as soon as it comes in. This is quite mesmerizing when you just run two command-line clients (in a byobu session, say).
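To make those two command-line clients concrete, here is a minimal sketch of both sides. The operations (publish, subscribe, blocking listen) follow the description above of what the package gained; the exact method signatures are assumptions, and the market-monitoring vignette mentioned below has the real code:

    ## Publisher: fetch a data point and publish it on a channel
    ## named after the symbol it tracks.
    suppressMessages(library(RcppRedis))
    pub <- new(Redis)
    repeat {
        px <- fetch_price("ES")         # hypothetical data getter
        pub$publish("ES", format(px))   # push to channel "ES"
        Sys.sleep(5)
    }

    ## Subscriber: register for the channel, then block until the
    ## Redis server pushes the next data point.
    sub <- new(Redis)
    sub$subscribe("ES")
    repeat {
        msg <- sub$listen()             # blocking, per the post
        cat("ES:", msg, "\n")
    }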
As soon as the data is written (as shown on the console log) it is consumed. No measurable overhead. Just lovely.

Bryan and I then talked a little, as he may or may not retire rredis. Having implemented the pub/sub logic for both sides once, he took a good hard look at RcppRedis and "just like that" added it there, with some really clever wrinkles such as an (optional) per-symbol callback attached to the instance as a closure. Truly amazeballs. And once we had it in there, the scheme easily generalizes from publishing or subscribing to just one symbol to having one listener collect and publish for multiple symbols, and having one or more clients subscribe and listen to one, several, or even all symbols. All with ease, thanks to Redis. The second chart, also from a few days ago, shows four symbols for four (front-contract) futures: Bitcoin, Crude Oil, SP500, and Gold.

As all this can get a little technical, I wrote a second vignette for RcppRedis on just this: market monitoring. Give it a read if interested; feedback on this one is most welcome too! All the code you need is included in the package—just run a local Redis instance.

Before closing, one sour note. I uploaded all this in a new and much-improved RcppRedis 0.2.0 to CRAN on March 13 – ten days ago. Not only is it still not "there", but CRAN in their most delightful way also refuses to answer any emails of mine. Just lovely. The package exhibited just one compiler warning: a C++ compiler objected to the (embedded) C library hiredis (included as a fallback) for using a C language construct. Yes. A C++ compiler complaining about C. It's a non-issue. Yet it's been ten days and we still have nothing. So irritating and demotivating. Anyway, you can get the package off its GitHub repo.

If you like this or other open-source work I do, you can sponsor me at GitHub. This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
Posted about 2 years ago
Last week I received (finally) my Fairphone 4, supplied with a de-googled operating system, which I had ordered from the E Foundation's shop in December. (I am very hard on hardware and my venerable Fairphone 2 is really on its last legs.)

I expect to have full control over the software on any computing device I own which is as complicated, capable, and therefore, hazardous, as a mobile phone. Unfortunately the Eos image (they prefer to spell it "/e/ os", srsly!) doesn't come with a way to get root without taking fairly serious measures including unlocking the bootloader. Unlocking the bootloader wouldn't be desirable for me but I can't live without root. So.

I started with these helpful instructions: https://forum.xda-developers.com/t/fairphone-4-root.4376421/

I found the whole process a bit of a trial, and I thought I would write down what I did. But it's not straightforward, at least for someone like me who only has a dim understanding of all this Android stuff. Unfortunately, due to the number of missteps and restarts, what I actually did is not really a sensible procedure. So here is a retcon of a process I think will work:

Unlock the bootloader

The E Foundation provide instructions for unlocking the bootloader on a stock FP4, here: https://doc.e.foundation/devices/FP4/install and they seem applicable to the "Murena" phone supplied with Eos pre-installed, too. NB that unlocking the bootloader wipes the phone, so we do it first. So:

1. Power on the phone, with no SIM installed. You get a welcome screen.
2. Skip all things on startup, including wifi.
3. Go to the very end of the settings, tap a gazillion times on the phone's version until you're a developer.
4. In the developer settings, allow usb debugging.
5. In the developer settings, allow oem bootloader unlocking.
6. Connect a computer via a USB cable, say yes on phone to USB debugging.
7. adb reboot bootloader
8. The phone will reboot into a texty kind of screen, the bootloader.
9. fastboot flashing unlock
10. The phone will reboot, back to the welcome screen.
11. Repeat steps 3-9 (maybe not all are necessary).
12. fastboot flashing unlock_critical
13. The phone will reboot, back to the welcome screen.

Note that although you are running fastboot, you must run this command with the phone in "bootloader" mode, not "fastboot" (aka "fastbootd") mode. If you run fastboot flashing unlock from fastboot you just get a "don't know what you're talking about".

I found conflicting instructions on what kind of Vulcan nerve pinches could be used to get into which boot modes, and had poor experiences with those. adb reboot bootloader always worked reliably for me. Some docs say to run fastboot oem unlock; I used flashing. Maybe this depends on the Android tools version.

Initial privacy prep and OTA update

We want to update the supplied phone OS. The build mine shipped with is too buggy to run Magisk, the application we are going to use to root the phone. (With the pre-installed phone OS, Magisk crashes at the "patch boot image" step.) But I didn't want to let the phone talk to Google, even for the push notifications registration.

1. From the welcome screen, skip all things except location, date, time. Notably, do not set up wifi.
2. In settings, "microg" section: turn off cloud messaging; turn off google safetynet; turn off google registration (NB you must do this after the other two, because their sliders become dysfunctional after you turn google registration off); turn off both location modules.
3. In settings, location section, turn off allowed location for browser and magic earth.
4. Now go into settings and enable wifi, giving it your wifi details.
5. Tell the phone to update its operating system. This is a big download.

Install Magisk, the root manager

(As a starting point I used these instructions: https://www.xda-developers.com/how-to-install-magisk/ and a lot of random forum posts.)

You will need the official boot.img. Bizarrely there doesn't seem to be a way to obtain this from the phone. Instead, you must download it. You can find it by starting at https://doc.e.foundation/devices/FP4/install which links to https://images.ecloud.global/stable/FP4/. At the time of writing, the most recent version, whose version number seemed to correspond to the OS update I installed above, was IMG-e-0.21-r-20220123158735-stable-FP4.zip.

1. Download the giant zipfile to your computer.
2. Unzip it to extract boot.img.
3. Copy the file to your phone's "storage". Eg, via adb: with the phone booted into the main operating system, using USB debugging, adb push boot.img /storage/self/primary/Download.
4. On the phone, open the browser, and enter https://f-droid.org. Click on the link to install f-droid. You will need to enable installing apps from the browser (follow the provided flow to the settings, change the setting, and then use Back, and you can do the install). If you wish, you can download the f-droid apk separately on a computer, and verify it with pgp.
5. Using f-droid, install Magisk. You will need to enable installing apps from f-droid. (I installed Magisk from f-droid because 1. I was going to trust f-droid anyway 2. it has a shorter URL than Magisk's.)
6. Open the Magisk app. Tell Magisk to install (Magisk, not the app). There will be only one option: patch boot file. Tell it to patch the boot.img file from before.
7. Transfer the magisk_patched-THING.img back to your computer (eg via adb pull).
8. adb reboot bootloader
9. fastboot boot magisk_patched-THING.img (again, NB, from bootloader mode, not from fastboot mode)
10. In Magisk you'll see it shows as installed. But it's not really; you've just booted from an image with it. Ask to install Magisk with "Direct install".

After you have done all this, I believe that each time you do an over-the-air OS update, you must, between installing the update and rebooting the phone, ask Magisk to "Install to inactive slot (after OTA)". Presumably if you forget you must do the fastboot boot dance again.

After all this, I was able to use tsu in Termux. There's a strange behaviour with the root prompt you get apropos Termux's request for root; I found that it definitely worked if Termux wasn't the foreground app…

You have to leave the bootloader unlocked. However, as I understand it, the phone's encryption will still prevent an attacker from hoovering the data out of your phone. The bootloader lock is to prevent someone tricking you into entering the decryption passkey into a trojaned device.

Other things to change

* Probably, after you're done with this, disable installing apps from the Browser. I will install Signal before doing that, since that's not in f-droid because of mutual distrust between the f-droid and Signal folks. The permission is called "Install unknown apps".
* Turn off "instant apps" aka "open links in apps even if the app is not installed". OMG WTF BBQ.
* Turn off "wifi scanning even if wifi off". WTF.
* I turned off storage manager auto delete, on the grounds that I didn't know what the phone might think of as "having been backed up". I can manage my own space use, thanks very much.

There are probably other things to change. I have not yet transferred my Signal account from my old phone. It is possible that Signal will require me to re-enable the google push notifications, but I hope that having disabled them in microg it will be happy to use its own system, as it does on my old phone.
Posted about 2 years ago
I recently learned about the Zephyr Project, which is a rather neat embedded OS for devices too small to run Linux. This led me to wondering if I could adapt arduino-copilot to target Zephyr, and so be able to program any of the 350+ boards it supports using Haskell.

At the same time I had an opportunity to give a talk at the Houston Functional Programmers group. On February 1st I decided to give that talk, about arduino-copilot. That left 2 weeks to buy some hardware supported by Zephyr and port arduino-copilot to it. The result is zephyr-copilot, and I was able to demo it during my talk.

This example can be used with any of 293 different boards, and will blink an on-board LED:

    module Examples.Blink.Demo where

    import Copilot.Zephyr.Board.Generic

    main :: IO ()
    main = zephyr $ do
            led0 =: blinking
            delay =: MilliSeconds (constant 100)

Doing much more than that needs a board specific module to set up GPIO pins etc. So far I have only written those for a couple of boards I have, but they are fairly easy to write. I'd be happy to help anyone who wants to contribute one.

Due to the time constraints I have not implemented serial port support, or PWM or ADC yet, although all should be fairly easy. Zephyr also has no end of other capabilities, from networking to file systems to sensors, that could perhaps be supported in zephyr-copilot.

My talk has now been published on youtube. I really enjoyed presenting again for the first time in 4 years(!), and to a very nice group of people. Thanks to Claude Rubinson for his persistence in getting me to give a talk.

Development of zephyr-copilot was sponsored by Mark Reidenbach, Erik Bjäreholt, Jake Vosloo, and Graham Spencer on Patreon.
Posted about 2 years ago
This has ended up longer than I expected. I'll write up posts about some of the individual steps with some more details at some point, but this is an overview of the yak shaving I engaged in. The TL;DR is: I wanted to upgrade my internet connection, but:

* My router wasn't fast enough, so:
* I bought a new one and:
* Proceeded to help work on mainline Linux support, and:
* Did some tweaking of my Debian setup to allow for a squashfs root, and:
* Upgraded it to Debian 11 (bullseye) in the process, except:
* It turned out my home automation devices weren't happy, so:
* I dug into some memory issues on my ESP8266 firmware, which:
* Led to diagnosing some TLS interaction issues with the firmware, and:
* I had an interlude into some interrupt affinity issues, but:
* I finally got there.

The desire for a faster connection

When I migrated my home connection to FTTP I kept the same 80M/20M profile I'd had on FTTC. I didn't have a pressing need for faster, and I saved money because I was no longer paying for the phone line portion. I wanted more, but at the time I think the only option was for a 160M/30M profile instead; I didn't need it and it wasn't enough better to convince me.

Time passed and BT rolled out their GigE (really 900M) download option. And again, I didn't need it, but I wanted it. My provider, Aquiss, initially didn't offer this (I think they had up to 330M download options available by this point). So I stayed on 80M/20M. And the only time I really wanted it to be faster was when pushing off-site backups to rsync.net.

Of course, we've had the pandemic, and that's involved 2 adults working from home with plenty of video calls throughout the day. The 80M/20M connection has proved rock solid for this, so again, I didn't feel an upgrade was justified. We got a 4K capable TV last year and while the bandwidth usage for 4K streaming is noticeably higher, again the connection can handle it no problem.

At some point last year I noticed Aquiss had added speed options all the way to 900M down. At the end of the year I accepted a new role, which is fully remote, so I had a bit of an acceptance about the fact that I wasn't going back into an office any time soon. The combination (and the desire for the increased upload speed) finally allowed me to justify the upgrade to myself.

Testing the current setup for bottlenecks

The first thing to do was see whether my internal network could cope with an upgrade. I'm mostly running Cat6 GigE so I wasn't worried about that side of things. However I'm using an RB3011 as my core router, and while it has some coprocessors for routing acceleration they're not supported under mainline Linux (and unlikely to be any time soon). So I had to benchmark what it was capable of routing. I run a handful of VLANs within my home network, with stateful firewalling between them, so I felt that would be a good approximation of the maximum speed to the outside world I might be able to get if I had the external connection upgraded.

I went for the easy approach and fired up iPerf3 on 2 hosts, both connected via ethernet but on separate networks, so routed through the RB3011. That resulted in slightly more than a 300Mb/s throughput. Ok. I confirmed that I could get 900Mb/s+ on 2 hosts both on the same network, just to be sure there wasn't some other issue I was missing. Nope, so unsurprisingly the router was the bottleneck.

So. To upgrade my internet speed I need to upgrade my router.
I could just buy something off the shelf, but I like being able to run Debian (or OpenWRT) on the router rather than some horrible vendor firmware. Luckily MikroTik launched the RB5009 towards the end of last year. RouterOS is probably more than capable, but what really interested me was the fact it's an ARM64 platform based on an Armada 7040, which is pretty well supported in mainline kernels already. There's a 10G connection from the internal switch to the CPU, as well as a 2.5Gb/s ethernet port and a 10G SFP+ cage. All good stuff. I ordered one just before the New Year.

Thankfully the OpenWRT folk had done all of the hard work on getting a mainline kernel booting on the device; Sergey Sergeev and Robert Marko in particular fighting RouterBoot and producing a suitable device tree file to get everything up and running. I ended up soldering a serial console connection up to aid debugging, and lightly patching Rob's u-boot to fix the incorrect RAM size reported by RouterBoot. A few kernel tweaks were necessary to make the networking entirely happy, and at that point it was time to think about actually doing a replacement.

Upgrading to Debian 11 (bullseye)

My RB3011 is currently running Debian 10 (buster); an upgrade has been on my todo list, but with the impending replacement I decided I'd hold off and create a new Debian 11 (bullseye) image for the RB5009. Additionally, I don't actually run off the internal NAND in the RB3011; I have a USB flash drive for the rootfs and just the kernel booting off internal NAND. Originally this was for ease of testing, then a combination of needing to figure out a good read-only root solution and a small enough image to fit in the 120M available.

For the upgrade I decided to finally look at these pieces. I've ended up with a script that will build me a squashfs image, and the initial rootfs takes care of mounting this and then a tmpfs as an overlay fs. That means I can easily see what pieces are being written to. The RB5009 has a total of 1G NAND so I'm not as space constrained, but the squashfs ends up under 50M. I've added some additional pieces to allow me to pre-populate the overlay fs with updates rather than always needing to rebuild the squashfs image.

With that done I decided to try it out on the RB3011; I tweaked the build script to be able to build for armhf (the RB3011) or arm64 (the RB5009) and to deal with some slight differences in configuration between the two (e.g. interface naming). The idea here was to ensure I'd got all the appropriate configuration sorted for the RB5009, in the known-good existing environment. Everything is still on a USB stick at this stage, and the new image has an armhf busybox root, meaning it can be used on either device; the init script detects the architecture to select the appropriate squashfs to mount.

A problem with ESP8266 home automation devices

Everything seemed to work fine - a few niggles with the watchdog, which is overly sensitive on the RB3011, but I got those sorted (and the build script updated) and the device came up and successfully did the PPPoE dance to bring up external connectivity. And then I noticed that my home automation devices were having problems connecting to the mosquitto MQTT server. It turned out it was only the ESP8266 based devices that were failing, and examining the serial debug output on one of my test devices revealed it was hitting an out of memory issue (displaying E:M 280) when establishing the TLS MQTT connection.
I rolled back to the Debian 10 image and set about creating a test environment to look at the ESP8266 issues. My first action was to try and reduce my RAM footprint to try and ensure there was enough spare to establish the connection. I moved a few functions that were still sitting in IRAM into flash. I cleaned up a couple of buffers that are on the stack to be more correctly sized. I tried my new image, and I didn't get the memory issue. Instead I progressed a bit further and got a watchdog reset. Doh!

It was obviously something related to the TLS connection, but I couldn't easily see what the difference was; the same x509 cert was in use, and it looked like the initial handshake was the same (trying with openssl s_client looked pretty similar too). I set about instrumenting the ancient Mbed TLS used in the Espressif SDK and discovered that whatever had changed between buster + bullseye meant the ESP8266 was now trying a TLS-DHE-RSA-WITH-AES-256-CBC-SHA256 handshake instead of a TLS-RSA-WITH-AES-256-CBC-SHA256 handshake, and that was causing enough extra CPU usage that it couldn't complete in time and the watchdog kicked in. So I commented out MBEDTLS_KEY_EXCHANGE_DHE_RSA_ENABLED in the config_esp.h for mbedtls and rebuilt things. Hacky, but I'll go back to trying to improve this generally at some point.

A detour into interrupt load

Now, my testing of the RB3011 image is generally done at weekends, when I have enough time to tear down and rebuild the connection rather than doing it in the evening and having limited time to get things working again in time for work in the morning. So at the point I had an image ready to go I pulled the trigger on the line upgrade. I went with the 500M/75M option rather than the full 900M - I suspect I'd have difficulty actually getting that most of the time and 75M of upload bandwidth seems fairly substantial for now. It only took a couple of days from the order to the point the line was regraded (which involved no real downtime - just a reconnection in the night). Of course this happened just after the weekend I'd discovered the ESP8266 issue.

This provided an opportunity to see just what the RB3011 could actually manage. In the configuration I had, it turned out to be not much more than the 80Mb/s speeds I had previously seen. The upload jumped from a solid 20Mb/s to 75Mb/s, so I knew the regrade had actually happened. Looking at CPU utilisation clearly showed the problem; softirqs were using almost 100% of a CPU core.

Now, the way the hardware is set up on the RB3011 is that there are two separate 5 port switches, each connected back to the CPU via a separate GigE interface. For various reasons I had everything on a single switch, which meant that all traffic was boomeranging in and out of the same CPU interface. The IPQ8064 has dual cores, so I thought I'd try moving the external connection to the other switch. That puts it on its own GigE CPU interface, which then allows binding the interrupts to a different CPU core. That helps; throughput to the outside world hits 140Mb/s+. Still a long way from the expected max, but proof we just need more grunt.

Success

Which brings us to this past weekend, when, having worked out all the other bits, I tried the squashfs root image again on the RB3011. Success! The home automation bits connected to it, the link to the outside world came up, everything seemed happy. So I double checked my bootloader bits on the RB5009, brought it down to the comms room and plugged it in instead.
And, modulo my failing to update the nftables config to allow it to do forwarding, it all came up ok. Some testing with iperf3 internally got a nice 912Mb/s sustained between subnets, and some less scientific testing with wget + speedtest-cli saw speeds of over 460Mb/s to the outside world.

Time from ordering the router until it was in service? Just under 8 weeks…
Posted about 2 years ago
Review: Elder Race, by Adrian Tchaikovsky

Publisher: Tordotcom
Copyright: November 2021
ISBN: 1-250-76871-3
Format: Kindle
Pages: 199

(It's a shame that a lot of people will be reading this novella on a black-and-white ebook reader, since the Emmanuel Shiu cover is absolutely spectacular. There's a larger image without the words at the bottom of that article.)

When reports arrive at the court about demons deep in the forest that are taking over animals and humans and bending them to their will, the queen doesn't care. It's probably some unknown animal, and regardless, the forest kingdom is a rival anyway. Lynesse Fourth Daughter disagrees vehemently, but she has no power at court. Even apart from her lack of seniority, her love of stories and daring and adventures is a source of endless frustration to her mother. That is why this novella opens with her climbing the mountain path to the Tower of Nyrgoth Elder, the last of the ancient wizards, to seek his help.

Nyr Illim Tevitch is an anthropologist second class of Earth's Explorer Corps, part of the second wave of Earth's outward expansion through the galaxy. In the first wave, colonies were seeded on habitable planets, only to be left stranded when Earth's civilization collapsed in an ecological crisis. Nyr was a member of a team of four, sent to make careful and limited contact with one of those lost colonies as part of Earth's second flourishing with more advanced technology. When the team lost contact with Earth, the other three went back while Nyr stayed to keep their field observations going. It's now 291 years of intermittent suspended animation later. Nyr's colleagues never came back, and there have been no messages from Earth.

Elder Race is a Prime Directive anthropology story, a subgenre so long-standing that it has its own conventions and variations. Variations of the theme have been written by everyone from Eleanor Arnason to Iain M. Banks (linking to the book I have in mind is arguably a spoiler). Per the dedication, Tchaikovsky's take is based on Gene Wolfe's story "Trip, Trap," which I have not read but whose plot looks very similar. To that story structure, Tchaikovsky brings two major twists.

First, Nyr is cut off from his advanced civilization, and has considerable reason to believe that civilization no longer exists. Do noninterference rules still have any meaning if Nyr is stranded and the civilization that made the rules is gone? Second, Nyr has already broken those rules rather spectacularly. More than a hundred years previously, he had ridden with Astresse Regent, a warrior queen and Lynesse's ancestor, to defeat a local warlord who had found control codes for abandoned advanced machinery and was using it as weaponry. In the process, he fell in love and made a rash promise to come to the aid of any of her descendants if he were needed. Lynesse has come to collect on the promise.

Elder Race is told in alternating chapters between Nyr and Lynesse's viewpoints: first person for Nyr and tight third person for Lynesse. The core of the story is this doubled perspective, one from a young woman who wants to live in a fantasy novel and one from a deeply depressed anthropologist torn between wanting human contact, wanting to follow the rules of his profession, and wanting to explain to Lynesse that he is not a wizard.
Nyr talks himself into helping with another misuse of advanced technology using the same logic he used a hundred years earlier: he's protecting Lynesse's pre-industrial society from interference rather than causing it. But the demons Lynesse wants him to fight are something entirely unexpected.

This parallel understanding is a great story structure. What worked less for me was Tchaikovsky's reliance on linguistic barriers to prevent shared understanding. Whenever Nyr tries to explain something, Lynesse hears it in terms of magic and high fantasy, and often exactly backwards from how Nyr intended it. This is where my suspension of disbelief failed me, even though I normally don't have suspension of disbelief problems in SF stories. I was unable to map Lynesse's misunderstandings to any realistic linguistic model. Lynesse's language is highly complex (a realistic development within an isolated population), and Nyr complains about his inability to speak it properly given its blizzard of complex modifiers. This is entirely believable. What is far less believable is that Lynesse perceives him as fluent in her language, but often saying the precise opposite of what he's trying to say. One chapter in the middle of the book gives Nyr's intended story side-by-side with Lynesse's understanding. This is a brilliant way to show the divide, but I found the translation errors unbelievable. If Nyr is failing that profoundly to communicate his meaning, he should be making more egregious sentence-level errors, occasionally saying something bizarre or entirely nonsensical, referring to a person as an animal or a baby, or otherwise not fluently telling a coherent story that's fundamentally different than the one he thinks he's telling.

If you can put that aside, though, this is a fun story. Nyr has serious anxiety and depression made worse by his isolation, and copes by using an implanted device called a Dissociative Cognition System that lets him temporarily turn off his emotions at the cost of letting them snowball. He has a wealth of other augments and implants, including horns, which Lynesse sees as evidence that he's a different species of magical being and which he sees as occasionally irritating field equipment with annoying visual menus. The key to writing a story like this is for both perspectives to be correct given their own assumptions, and to offer insight that the other perspective is missing. I thought the linguistic part of that was unsuccessful, but the rest of it works.

One of the best parts of novellas is that they don't wear out their welcome. This is a fun spin on well-trodden ground that tells a complete story in under 200 pages. I wish the ending had been a bit more satisfying and the linguistics had been more believable, but I enjoyed the time I spent in this world.

Content warning for some body horror.

Rating: 7 out of 10