
News

Posted over 3 years ago by Daniel Stenberg
Embroidered and put on the kitchen wall, on a mug or just as words of wisdom to bring with you in life?
Posted over 3 years ago by Jan de Mooij
Introduction We have enabled Warp, a significant update to SpiderMonkey, by default in Firefox 83. SpiderMonkey is the JavaScript engine used in the Firefox web browser. With Warp (also called WarpBuilder) we’re making big changes to our JIT ... [More] (just-in-time) compilers, resulting in improved responsiveness, faster page loads and better memory usage. The new architecture is also more maintainable and unlocks additional SpiderMonkey improvements. This post explains how Warp works and how it made SpiderMonkey faster. How Warp works Multiple JITs The first step when running JavaScript is to parse the source code into bytecode, a lower-level representation. Bytecode can be executed immediately using an interpreter or can be compiled to native code by a just-in-time (JIT) compiler. Modern JavaScript engines have multiple tiered execution engines. JS functions may switch between tiers depending on the expected benefit of switching: Interpreters and baseline JITs have fast compilation times, perform only basic code optimizations (typically based on Inline Caches), and collect profiling data. The Optimizing JIT performs advanced compiler optimizations but has slower compilation times and uses more memory, so is only used for functions that are warm (called many times). The optimizing JIT makes assumptions based on the profiling data collected by the other tiers. If these assumptions turn out to be wrong, the optimized code is discarded. When this happens the function resumes execution in the baseline tiers and has to warm-up again (this is called a bailout). For SpiderMonkey it looks like this (simplified): Profiling data Our previous optimizing JIT, Ion, used two very different systems for gathering profiling information to guide JIT optimizations. The first is Type Inference (TI), which collects global information about the types of objects used in the JS code. The second is CacheIR, a simple linear bytecode format used by the Baseline Interpreter and the Baseline JIT as the fundamental optimization primitive. Ion mostly relied on TI, but occasionally used CacheIR information when TI data was unavailable. With Warp, we’ve changed our optimizing JIT to rely solely on CacheIR data collected by the baseline tiers. Here’s what this looks like: There’s a lot of information here, but the thing to note is that we’ve replaced the IonBuilder frontend (outlined in red) with the simpler WarpBuilder frontend (outlined in green). IonBuilder and WarpBuilder both produce Ion MIR, an intermediate representation used by the optimizing JIT backend. Where IonBuilder used TI data gathered from the whole engine to generate MIR, WarpBuilder generates MIR using the same CacheIR that the Baseline Interpreter and Baseline JIT use to generate Inline Caches (ICs). As we’ll see below, the tighter integration between Warp and the lower tiers has several advantages. How CacheIR works Consider the following JS function: function f(o) { return o.x - 1; } The Baseline Interpreter and Baseline JIT use two Inline Caches for this function: one for the property access (o.x), and one for the subtraction. That’s because we can’t optimize this function without knowing the types of o and o.x. The IC for the property access, o.x, will be invoked with the value of o. It can then attach an IC stub (a small piece of machine code) to optimize this operation. In SpiderMonkey this works by first generating CacheIR (a simple linear bytecode format, you could think of it as an optimization recipe). 
For example, if o is an object and x is a simple data property, we generate this: GuardToObject inputId 0 GuardShape objId 0, shapeOffset 0 LoadFixedSlotResult objId 0, offsetOffset 8 ReturnFromIC Here we first guard the input (o) is an object, then we guard on the object’s shape (which determines the object’s properties and layout), and then we load the value of o.x from the object’s slots. Note that the shape and the property’s index in the slots array are stored in a separate data section, not baked into the CacheIR or IC code itself. The CacheIR refers to the offsets of these fields with shapeOffset and offsetOffset. This allows many different IC stubs to share the same generated code, reducing compilation overhead. The IC then compiles this CacheIR snippet to machine code. Now, the Baseline Interpreter and Baseline JIT can execute this operation quickly without calling into C++ code. The subtraction IC works the same way. If o.x is an int32 value, the subtraction IC will be invoked with two int32 values and the IC will generate the following CacheIR to optimize that case: GuardToInt32 inputId 0 GuardToInt32 inputId 1 Int32SubResult lhsId 0, rhsId 1 ReturnFromIC This means we first guard the left-hand side is an int32 value, then we guard the right-hand side is an int32 value, and we can then perform the int32 subtraction and return the result from the IC stub to the function. The CacheIR instructions capture everything we need to do to optimize an operation. We have a few hundred CacheIR instructions, defined in a YAML file. These are the building blocks for our JIT optimization pipeline. Warp: Transpiling CacheIR to MIR If a JS function gets called many times, we want to compile it with the optimizing compiler. With Warp there are three steps: WarpOracle: runs on the main thread, creates a snapshot that includes the Baseline CacheIR data. WarpBuilder: runs off-thread, builds MIR from the snapshot. Optimizing JIT Backend: also runs off-thread, optimizes the MIR and generates machine code. The WarpOracle phase runs on the main thread and is very fast. The actual MIR building can be done on a background thread. This is an improvement over IonBuilder, where we had to do MIR building on the main thread because it relied on a lot of global data structures for Type Inference. WarpBuilder has a transpiler to transpile CacheIR to MIR. This is a very mechanical process: for each CacheIR instruction, it just generates the corresponding MIR instruction(s). Putting this all together we get the following picture (click for a larger version): We’re very excited about this design: when we make changes to the CacheIR instructions, it automatically affects all of our JIT tiers (see the blue arrows in the picture above). Warp is simply weaving together the function’s bytecode and CacheIR instructions into a single MIR graph. Our old MIR builder (IonBuilder) had a lot of complicated code that we don’t need in WarpBuilder because all the JS semantics are captured by the CacheIR data we also need for ICs. Trial Inlining: type specializing inlined functions Optimizing JavaScript JITs are able to inline JavaScript functions into the caller. With Warp we are taking this a step further: Warp is also able to specialize inlined functions based on the call site. Consider our example function again: function f(o) { return o.x - 1; } This function may be called from multiple places, each passing a different shape of object or different types for o.x. 
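For instance (an illustrative pair of call sites, not taken from the original post), each caller below is monomorphic on its own, yet both shapes flow into the same f:

```js
// The function from the post: its property-access and subtraction ICs
// are shared by every caller.
function f(o) { return o.x - 1; }

function callerA() {
  // Shape A: an object with a single int32 property x.
  return f({ x: 3 });
}

function callerB() {
  // Shape B: a different shape (extra property y) and a double value for x.
  return f({ x: 1.5, y: "extra" });
}

console.log(callerA(), callerB()); // 2 0.5
```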
In this case, the inline caches will have polymorphic CacheIR IC stubs, even if each of the callers only passes a single type. If we inline the function in Warp, we won’t be able to optimize it as well as we want. To solve this problem, we introduced a novel optimization called Trial Inlining. Every function has an ICScript, which stores the CacheIR and IC data for that function. Before we Warp-compile a function, we scan the Baseline ICs in that function to search for calls to inlinable functions. For each inlinable call site, we create a new ICScript for the callee function. Whenever we call the inlining candidate, instead of using the default ICScript for the callee, we pass in the new specialized ICScript. This means that the Baseline Interpreter, Baseline JIT, and Warp will now collect and use information specialized for that call site. Trial inlining is very powerful because it works recursively. For example, consider the following JS code: function callWithArg(fun, x) { return fun(x); } function test(a) { var b = callWithArg(x => x + 1, a); var c = callWithArg(x => x - 1, a); return b + c; } When we perform trial inlining for the test function, we will generate a specialized ICScript for each of the callWithArg calls. Later on, we attempt recursive trial inlining in those caller-specialized callWithArg functions, and we can then specialize the fun call based on the caller. This was not possible in IonBuilder. When it’s time to Warp-compile the test function, we have the caller-specialized CacheIR data and can generate optimal code. This means we build up the inlining graph before functions are Warp-compiled, by (recursively) specializing Baseline IC data at call sites. Warp then just inlines based on that without needing its own inlining heuristics. Optimizing built-in functions IonBuilder was able to inline certain built-in functions directly. This is especially useful for things like Math.abs and Array.prototype.push, because we can implement them with a few machine instructions and that’s a lot faster than calling the function. Because Warp is driven by CacheIR, we decided to generate optimized CacheIR for calls to these functions. This means these built-ins are now also properly optimized with IC stubs in our Baseline Interpreter and JIT. The new design leads us to generate the right CacheIR instructions, which then benefits not just Warp but all of our JIT tiers. For example, let’s look at a Math.pow call with two int32 arguments. We generate the following CacheIR: LoadArgumentFixedSlot resultId 1, slotIndex 3 GuardToObject inputId 1 GuardSpecificFunction funId 1, expectedOffset 0, nargsAndFlagsOffset 8 LoadArgumentFixedSlot resultId 2, slotIndex 1 LoadArgumentFixedSlot resultId 3, slotIndex 0 GuardToInt32 inputId 2 GuardToInt32 inputId 3 Int32PowResult lhsId 2, rhsId 3 ReturnFromIC First, we guard that the callee is the built-in pow function. Then we load the two arguments and guard they are int32 values. Then we perform the pow operation specialized for two int32 arguments and return the result of that from the IC stub. Furthermore, the Int32PowResult CacheIR instruction is also used to optimize the JS exponentiation operator, x ** y. For that operator we might generate: GuardToInt32 inputId 0 GuardToInt32 inputId 1 Int32PowResult lhsId 0, rhsId 1 ReturnFromIC When we added Warp transpiler support for Int32PowResult, Warp was able to optimize both the exponentiation operator and Math.pow without additional changes. 
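To make that sharing concrete, here is a tiny illustrative snippet (not from the original post): with int32 operands, both forms below are optimized through the same Int32PowResult building block once their ICs have observed int32 inputs, so the single transpiler addition covers both.

```js
// Both of these lower to the same int32-specialized power operation.
function viaOperator(x, y) {
  return x ** y;         // JS exponentiation operator
}

function viaBuiltin(x, y) {
  return Math.pow(x, y); // built-in call, guarded on the pow callee
}

console.log(viaOperator(2, 10), viaBuiltin(2, 10)); // 1024 1024
```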
This is a nice example of CacheIR providing building blocks that can be used for optimizing different operations. Results Performance Warp is faster than Ion on many workloads. The picture below shows a couple examples: we had a 20% improvement on Google Docs load time, and we are about 10-12% faster on the Speedometer benchmark: We’ve seen similar page load and responsiveness improvements on other JS-intensive websites such as Reddit and Netflix. Feedback from Nightly users has been positive as well. The improvements are largely because basing Warp on CacheIR lets us remove the code throughout the engine that was required to track the global type inference data used by IonBuilder, resulting in speedups across the engine. The old system required all functions to track type information that was only useful in very hot functions. With Warp, the profiling information (CacheIR) used to optimize Warp is also used to speed up code running in the Baseline Interpreter and Baseline JIT. Warp is also able to do more work off-thread and requires fewer recompilations (the previous design often overspecialized, resulting in many bailouts). Synthetic JS benchmarks Warp is currently slower than Ion on certain synthetic JS benchmarks such as Octane and Kraken. This isn’t too surprising because Warp has to compete with almost a decade of optimization work and tuning for those benchmarks specifically. We believe these benchmarks are not representative of modern JS code (see also the V8 team’s blog post on this) and the regressions are outweighed by the large speedups and other improvements elsewhere. That said, we will continue to optimize Warp the coming months and we expect to see improvements on all of these workloads going forward. Memory usage Removing the global type inference data also means we use less memory. For example the picture below shows JS code in Firefox uses 8% less memory when loading a number of websites (tp6): We expect this number to improve the coming months as we remove the old code and are able to simplify more data structures. Faster GCs The type inference data also added a lot of overhead to garbage collection. We noticed some big improvements in our telemetry data for GC sweeping (one of the phases of our GC) when we enabled Warp by default in Firefox Nightly on September 23: Maintainability and Developer Velocity Because WarpBuilder is a lot more mechanical than IonBuilder, we’ve found the code to be much simpler, more compact, more maintainable and less error-prone. By using CacheIR everywhere, we can add new optimizations with much less code. This makes it easier for the team to improve performance and implement new features. What’s next? With Warp we have replaced the frontend (the MIR building phase) of the IonMonkey JIT. The next step is removing the old code and architecture. This will likely happen in Firefox 85. We expect additional performance and memory usage improvements from that. We will also continue to incrementally simplify and optimize the backend of the IonMonkey JIT. We believe there’s still a lot of room for improvement for JS-intensive workloads. Finally, because all of our JITs are now based on CacheIR data, we are working on a tool to let us (and web developers) explore the CacheIR data for a JS function. We hope this will help developers understand JS performance better. Acknowledgements Most of the work on Warp was done by Caroline Cullen, Iain Ireland, Jan de Mooij, and our amazing contributors André Bargull and Tom Schuster. 
The rest of the SpiderMonkey team provided us with a lot of feedback and ideas. Christian Holler and Gary Kwong reported various fuzz bugs. Thanks to Ted Campbell, Caroline Cullen, Steven DeTar, Matthew Gaudet, Melissa Thermidor, and especially Iain Ireland for their great feedback and suggestions for this post. The post Warp: Improved JS performance in Firefox 83 appeared first on Mozilla Hacks - the Web developer blog. [Less]
Posted over 3 years ago by Gregory Mierzwinski
Using the Mach Perftest Notebook In my previous blog post, I discussed an ETL (Extract-Transform-Load) implementation for doing local data analysis within Mozilla in a standardized way. That work provided us with a straightforward way of consuming data from various sources and standardizing it to conform to the expected structure for pre-built analysis scripts. Today, not only are we using this ETL (called PerftestETL) locally, but also in our CI system! There, we have a tool called Perfherder which ingests data created by our tests in CI so that we can build up visualizations and dashboards like Are We Fast Yet (AWFY). The big benefit that this new ETL system provides here is that it greatly simplifies the path from raw data to a Perfherder-formatted datum. This lets developers ignore the data pre-processing stage, which isn't important to them. All of this is currently available in our new performance testing framework, MozPerftest. One thing that I omitted from the last blog post is how we use this tool locally for analyzing our data. We had two students from Bishop's University (:axew and :yue) work on this tool for course credits during the 2020 Winter semester. Over the summer, they continued hacking with us on this project and finalized the work needed to do local analyses using this tool through MozPerftest. Below you can see a recording of how this all works together when you run tests locally for comparisons. http://blog.mozilla.org/performance/files/2020/10/Copy-of-Using-Mach-Perftest-Notebook.mp4 There is no audio in the video, so here's a short summary of what's happening:
- We start off with a folder containing some historical data from past test iterations that we want to compare the current data to.
- We start the MozPerftest test using the `--notebook-compare-to` option to specify what we want to compare the new data with.
- The test runs for 10 iterations (the video is cut to show only one).
- Once the test ends, PerftestETL runs and standardizes the data, and then we start a server to serve the data to Iodide (our notebook of choice).
- A new browser window/tab opens to Iodide with some custom code already inserted into the editor (through a POST request) for a comparison view that we built.
- Iodide runs, and a table is produced in the report. The report then shows the difference between each historical iteration and the newest run.
With these pre-built analyses, we can help developers who are tracking metrics for a product over time, or simply checking how their changes affected performance, without having to build their own scripts. This ensures that we have an organization-wide standardized approach for analyzing and viewing data locally. Otherwise, developers each start making their own report, which (1) wastes their time and, more importantly, (2) requires consumers of these reports to familiarize themselves with the format. That said, the real beauty in the MozPerftest integration is that we've decoupled the ETL from the notebook-specific code. This lets us expand the notebooks that we can open data in, so we could add the capability to use R-Studio, Google Data Studio, or Jupyter Notebook in the future. If you have any questions, feel free to reach out to us on Riot in #perftest.
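To give a feel for the kind of per-iteration comparison that ends up in the report, here is a toy sketch (this is not PerftestETL or Iodide code, and the data shapes are made up for illustration):

```js
// Toy comparison: percent difference between each historical iteration's
// median and the median of the newest run.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

function compareToHistory(historicalRuns, newestRun) {
  const newMedian = median(newestRun.values);
  return historicalRuns.map((run, iteration) => {
    const oldMedian = median(run.values);
    return {
      iteration,
      oldMedian,
      newMedian,
      deltaPercent: ((newMedian - oldMedian) / oldMedian) * 100,
    };
  });
}

// Hypothetical standardized data, one entry per historical iteration.
const historical = [{ values: [102, 99, 101] }, { values: [110, 108, 112] }];
const newest = { values: [95, 97, 96] };
console.table(compareToHistory(historical, newest));
```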
Posted over 3 years ago by Dave Hunt
In October there were 202 alerts generated, resulting in 25 regression bugs being filed on average 4.4 days after the regressing change landed. Welcome to the second edition of the new format for the performance sheriffing newsletter! In last month’s ... [More] newsletter I shared details of our sheriffing efficiency metrics. If you’re interested in the latest results for these you can find them summarised below, or (if you have access) you can view them in detail on our full dashboard. As sheriffing efficiency is so important to the prevention of shipping performance regressions to users, I will include these metrics in each month’s newsletter. Sheriffing efficiency All alerts were triaged in an average of 1.7 days 75% of alerts were triaged within 3 days Valid regression alerts were associated with a bug in an average of 2 days 95% of valid regression alerts were associated with a bug within 5 days Regressions by release For this edition I’m going to focus on a relatively new metric that we’ve been tracking, which is regressions by release. This metric shows valid regressions by the version of Firefox that they were first identified in, grouped by status. It’s important to note that we are running these performance tests against our Nightly builds, which is where we land new and experimentational changes, and the results cannot be compared with our release builds. In the above chart I have excluded bugs resolved as duplicates, as well as results prior to release 72 and later than 83 as these datasets are incomplete. You’ll notice that the more recent releases have unresolved bugs, which is to be expected if the investigations are ongoing. What’s concerning is that there are many regression bugs for earlier releases that remain unresolved. The following chart highlights this by carrying unresolved regressions into the following release versions. When performance sheriffs open regression bugs, they do their best to identify the commit that caused the regression and open a needinfo for the author. The affected Firefox version is indicated (this is how we’re able to gather these metrics), and appropriate keywords are added to the bug so that it shows up in the performance triage and release management workflows. Whilst the sheriffs do attempt to follow up on regression bugs, it’s clear that there are situations where the bug makes slow progress. We’re already thinking about how we can improve our procedures and policies around performance regressions, but in the meantime please consider taking a look over the open bugs listed below to see if any can be nudged back into life. 2 open regression bugs for Firefox 72 2 open regression bugs for Firefox 73 3 open regression bugs for Firefox 75 6 open regression bugs for Firefox 76 3 open regression bugs for Firefox 77 7 open regression bugs for Firefox 78 6 open regression bugs for Firefox 79 3 open regression bugs for Firefox 80 Summary of alerts Each month I’ll highlight the regressions and improvements found. 😍 19 bugs were associated with improvements 🤐 3 regressions were accepted 🤩 6 regressions were fixed (or backed out) 🤥 3 regressions were invalid 🤗 1 regression is assigned 😨 12 regressions are still open Note that whilst I usually allow one week to pass before generating the report, there are still alerts under investigation for the period covered in this article. This means that whilst I believe these metrics to be accurate at the time of writing, some of them may change over time. 
I would love to hear your feedback on this article, the queries, the dashboard, or anything else related to performance sheriffing or performance testing. You can comment here, or find the team on Matrix in #perftest or #perfsheriffs. The dashboard for October can be found here (for those with access). [Less]
Posted over 3 years ago by Benjamin Bouvier
Over the last year, Mozilla has decided to shut down the IRC network and replace it with a more modern platform. To my greatest delight, the Matrix ecosystem has been selected among all the possible replacements. For those who might not know Matrix ... [More] , it's a modern, decentralized protocol, using plain HTTP JSON-formatted endpoints, well-documented, and it implements both features that are common in recent messaging systems (e.g. file attachments, message edits and deletions), as well as those needed to handle large groups (e.g. moderation tools, private rooms, invite-only rooms). In this post I reflect on my personal history of writing chat bots, and then present a panel of features that the bot has, some user-facing ones, some others that embody what I esteem to be a sane, well-behaved Matrix bot. but first, some history Back in 2014 when I was an intern at Mozilla, I made a silly IRC JavaScript bot that would quote the @horsejs twitter account, when asked to do so. Then a few other useless features were added: "karma" tracking 1, being a karma guardian angel (lowering the karma of people lowering the karma of some predefined people), keeping track of contextless quotes from misc people... Over time, it slowly transformed into an IRC bot framework, with modules you could attach and configure at startup, setting which rooms the bot would join, what should be the cooldowns for message sending (more on this later), and so much more! Hence it was renamed meta-bot. an aside on the morality of bots I find making bots a fun activity, since once you've passed the step of connecting and sending messages, the rest is mostly easy (cough cough regular expressions cough cough) and creative work. And it's unfortunately easy to be reckless too. At this time, I never considered the potentially bad effects of quoting text from a random source, viz. fetching tweets from the @horsejs account. If the source would return a message that was inconsiderate, rude, or even worse, aggressive, then the bot would replicate this behavior. It is a real issue because although the bot doesn't think by itself and doesn't mean any harm, its programmers can do better, and they should try to avoid these issues at all costs. A chat bot replicates the culture of the engineers who made it on one hand, but also contributes to propagating this culture in the chat rooms it participates in, normalizing it to the chat participants. My bot happened to be well-behaved most of the time... until one time where it was not. After noticing the incident and expressing my deepest apologies, I deactivated the module and went through the whole list of modules, to make sure none could cause any harm, in any possible way. I should have known better in the first place! I am really not trying to signal my own virtue, since I failed in a way that should have been predictable. I hope by writing this that other people may reflect about the actions of their bots as well, in case they could be misbehaving like this. the former fleet of mozilla bots There were a few other useful IRC bots (of which I wasn't the author) hanging out in the Mozilla IRC rooms, notably Firebot and mrggigles. The latter probably started as a joke too, to enumerate puns from a list in the JavaScript channel. Then it outgrew its responsibilities by helping with a handful of requests: who can review this or this file in Mozilla's source code? what's the status of the continuous integration trees? can this particular C++ function used in Gecko cause a garbage collection? 
When we moved over to Matrix, the bots unfortunately became outdated, since the communication protocol (IRC) they were using was different. We could have ported them to the Matrix protocol, but the Not-Invented-Here syndrom was strong with this one: I've been making bots for a while, and I was personally interested in the Matrix protocol and trying out the JS facilities offered by the Matrix ecosystem. Botzilla features So I've decided to write Botzilla, a successor in spirit to meta-bot and mrgiggles, written in TypeScript. This is a very unofficial bot, tailored for Mozilla's needs but probably useful in other contexts. I've worked on it informally as a side-project, on my copious spare time. Crafting tools that show useful to other people has been sufficient a reward to motivate me to work on it, so it's been quite fun! Botzilla's logo, courtesy of Nical Let's take a look at all the features that the bot offers, at this point. uuid: Generate unique IDs This was a feature of Firebot, and easy enough to replicate, so this was the test feature for the Matrix bot. When saying !uuid, the bot will automatically generate a unique id (using uuid v4), guaranteed GMO-free and usable in any context that would require it. This was the first module, designed to test the framework. treestatus: Inform about CI tree status Mozilla developers tend to interact a lot with the continuous integration trees, because code is sometimes landed, sometimes backed out (sorry/thank you sheriffs!), sometimes merged across branches. This leads to the integration trees being closed. Before we had the feature to automatically land patch stacks when the trees reopened, it was useful to be able to get the open/close status of a tree. Asking !treestatus will answer with a list of the status of some common trees. It is also possible to request the status of a particular tree, e.g. for the "mozilla-central" tree, by asking !treestatus mozilla-central (or just central, as a handy shortcut). Expand bug status If you have ever interacted with Mozilla's code, there's chances that you've used Bugzilla, and mentioned bug numbers in conversations. The bot caches any message containing bug XXX and will respond with a link to this bug, the nickname of the person assigned to this bug if there's one, and the summary of this bug, if it's public. This is by far the most used and useful module, since it doesn't require a special incantation, but will react automatically to a lot of messages written with no particular intent (see below where it's explained how to not be spammy, though). Who Can Review X? This was a very nice feature that mrgiggles had: ask for potential reviewers for a particular file in the Gecko source tree and get a list of most recent reviewers. Botzilla replicates this, when seeing the trigger: who can review js/src/wasm/WasmJS.cpp?. The list of potential reviewers is extracted from Mercurial logs, looking for the N last reviewers of this particular file. As a bonus, there's no need to pass the full path to the file, if the file's name is unique in the tree's source code. Botzilla will trigger a search in Searchfox, and will use the unique name in the result list, if there's such a unique result. The previous example thus can be shortened to who can review WasmJS.cpp? since the file's name is unique in the whole code base. {Github,Gitlab} {issues,{P,M}Rs} It is possible for a room administrator to "connect" a given Matrix room to a Github repository. 
Later on, any mention of issues or pull requests by their number, e.g. #1234, will make Botzilla react with the summary and a link to the issue/PR at stake. This also works for Gitlab repositories, with slight differences: the administrator has to precise what's the root URL of the Gitlab instance (since Gitlab can be selfhosted). Issues are caught when numbers follows a # sign, while merge requests are caught when the numbers follow a ! sign. !tweet/!toot: Post on Twitter/Mastodon An administrator can configure a room to tie it up to a Twitter (respectively Mastodon) user account, using API tokens. Then, any person with an administrative role can post messages with !tweet something shocking for the bird site(respectively !toot something heartful for the mammoth site). This makes it possible to allow other people to post on these social networks without the need to give them the account's password. Unfortunately, the Twitter module hasn't ever been tested, since when I've tried to create a developer account, Twitter accepted it after a few days but then never displayed the API tokens on the interface. The support also never answered when I asked for help. Thankfully Mastodon can be self-hosted and thus it is easier to test. I'm happy to report that it works quite well! confession and histoire It is quite common in teams to set up regular standup meetings, where everyone in the team announces what they've been working on in the last few days or week. It also strikes me as important for personal recognition, including towards management, to be able to show off (just a bit!) what you've accomplished recently, and to remember this when times are harder (see also Julia Evans' blog post on the topic). There's a Botzilla module for this. Every time someone starts a message with confession:, then everything after the colon will be saved in a database (...wait for it!). Then, all the confessions are displayed on the Histoire 2 website, with one message feed per user. Note it is possible to send confessions privately to Botzilla (that doesn't affect the frontend though, which is open and public to all!), or in a public channel. Public channels somehow equate to team members, so channels also get their own pages on the frontend. Now the fun/cursed part is how all of this works. This was implemented in mrgiggles, and I liked it a lot, since it required no kind of backend or frontend server. How so? By (ab)using Github files as the database and Github pages as the frontend. Sending a confession will trigger a request to a Github endpoint to find a database file segregated by time, then it will trigger another request to create/modify it with the content of the confession. The frontend then uses other requests to public Github APIs to read the confessions before dynamically rendering those. Astute readers will notice that under a lot of confession activity, the bot would be a bit slowed down by Github's API use rates. In this case, there's some exponential backoff behavior before trying to re-send unsaved confessions to Github. Overall it works great, and API limitation rates have never quite been a problem. Intrinsic features: they're good bots, bront In addition to all the user-facing features, the bot has a few other interesting attributes that are more relevant to consider from a framework point of view. Hopefully some of these ideas can be useful for other bot authors! Join All The Rooms! 
Every time the bot is invited to a channel, be it public or private, it will join the channel, making it easy to use in general. It was implemented for free by the JS framework I've been using, and it is a definitive improvement over the IRC version of the bot. Sometimes Matrix rooms are upgraded to a new version of the room. The bot will try to join the upgraded room if it can, keeping all its room settings intact during the transition. Thou shalt not spam To avoid spamming the channel, especially for modules that are reactions to other messages (think: bug numbers, issues/pull requests mentions), the bot has had to learn how to keep quiet. There are two rules triggering the quieting behavior: if the bot has already reacted less than N minutes ago (where N is a configurable amount) in the same room, or if it has already reacted to some entity in a message, and there's been fewer than M messages in between the last reaction and the last message mentioning the same entity in the same room (M is also configurable) If any of these two criteria is met, then the bot will keep quiet and it will not react to another similar message. The combination of these two has proven over time to be quite solid in my experience, based on observing the bot's behavior and public reactions to its behavior. Some similar mechanism is used for the confession module: on a first confession, the bot will answer with a message saying it has seen the confession, including a link to where it is going to be posted, and will add an emoji "eyes" reaction to the message. Posting this long form message could be quite spammy, if there's a lot of confessions around the same time. Under the same criteria, it will just react with an "eyes" emoji to other confessions. Later on, it'll resend the full message, once both criterias aren't blocking it from doing so. Decentralized administration self-service The bot can be administrated, by discussing with it using the !admin command. This can happen in both a private conversation with it, or in public channels, yet it is recommended to do so in private channels. To confirm that an admin action has succeeded, it'll use the thumbs-up emoji on the message doing the particular action. To have a single administrator for the bot would be quite the burden, and it is not resilient to people switching roles, leaving the company, etc. Normally you'd solve this by implementing your own access control lists. Fortunately, Matrix already has a concept of power levels that assigns roles to users, among which there are the administrator and moderator roles. The bot will rely on this to decide to which requests it will answer. Somebody marked as an administrator or a moderator of a room can administrate Botzilla in this particular room, using the !admin commands. There's still a super-admin role, that must be defined in the configuration, in case things go awry. While administrators only have power over the current room, a super-admin can use its super-powers to change anything in any room. This decentralization of the administrative roles makes it easy to have different settings for different rooms, and to rely a bit less on single individuals. Key-value store In general, the bot contains a key-value store implemented in an sqlite database, making it easy to migrate and add context that's preserved across restarts of the bot. This is used to store private information like user repository information and settings for most rooms. 
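As a toy illustration of how such a store can be scoped (an in-memory sketch, not Botzilla's actual sqlite-backed code; the module and setting names are made up), with the per-room and global scopes described just below:

```js
// Settings are scoped by (roomId, module, key); a null roomId stands for
// the global, all-rooms scope, which is used as a fallback.
class ModuleSettings {
  constructor() {
    this.entries = new Map();
  }
  static scope(roomId, moduleName, key) {
    return `${roomId}\u0000${moduleName}\u0000${key}`;
  }
  set(roomId, moduleName, key, value) {
    this.entries.set(ModuleSettings.scope(roomId, moduleName, key), value);
  }
  get(roomId, moduleName, key) {
    // Prefer the room-specific value, fall back to the global one.
    const scoped = this.entries.get(ModuleSettings.scope(roomId, moduleName, key));
    return scoped !== undefined
      ? scoped
      : this.entries.get(ModuleSettings.scope(null, moduleName, key));
  }
}

const settings = new ModuleSettings();
settings.set(null, "bugzilla", "cooldownMinutes", 5);               // global default
settings.set("!abc:mozilla.org", "bugzilla", "cooldownMinutes", 1); // room override
console.log(settings.get("!abc:mozilla.org", "bugzilla", "cooldownMinutes")); // 1
console.log(settings.get("!xyz:mozilla.org", "bugzilla", "cooldownMinutes")); // 5
```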
Conceptually, each pair of room and module has its own key-value store, so that there's no risk of confusion between different rooms and modules. There's also a key-value per-module store that's applicable to all the rooms, to represent global settings. If there's some non-global (per room) settings for a room, these are preferred over the global settings. Self-documentation Each chat module is implemented as a ECMAScript module and must export an help string along the main reaction function. This is then captured and aggregated as part of an !help command, that can be used to request help about usage of the bot. The main help message will display the list of all the enabled modules, and help about a specific module may be queried with e.g. !help uuid. Future work and conclusion If I were to start again, I'd do a few things differently: now that the Rust ecosystem around the Matrix platform has matured a bit, I'd probably write this bot in Rust. Starting from JavaScript and moving to TypeScript has helped me catch a few static issues. I'd expect moving to Rust would help handling Matrix events faster, provide end-to-end encryption support for free, and be quite pleasant to use in general thanks to the awesome Rust tooling. use a real single-page app framework for the Histoire website. Maybe? I mean I'm a big fan of VanillaJS, but using it means re-creating your own Web framework like thing to make it nice and productive to use. despite being a fun hack, using Github as a backend has algorithmic limitations, that can make the web app sluggish. In particular, a combined feed for N users on M eras (think: periods) will trigger NxM Github API requests. Using a plain database with a plain API would probably be simpler at this point. This is mitigated with an in-memory cache so only the first time all the requests happen, but crafting my own requests would be more expressive and efficient, and allow for more features too (like displaying the list of rooms on the start view). provide a (better) commands parser. Regular expressions in this context are a bit feeble and limited. Also right now each module could in theory reuse the same command triggers as another one, etc. implement the chat modules in WebAssembly :-) In fact, I think there's a whole business model which would consist in having the bot framework including a wasm VM, and interacting with different communication platforms (not restricted to Matrix). Developers in such a bot platform could choose which source language to use for developing their own modules. It ought to be possible to define a clear, restricted, WASI-like capabilities-based interface that gets passed to each chat module. In such a sandboxed environment, the responsibility for hosting the bot's code is decoupled from the responsibility of writing modules. So a company could make the platform available, and paying users would develop the modules and host them. Imagine git pushing your chat modules and they get compiled to wasm and deployed on the fly. But I digress! (Please do not forget to credit me with a large $$$ envelope/a nice piece of swag if implementing this at least multi-billion dollars idea.) I'd like to finish by thanking the authors of the previous Mozilla bots, namely sfink and glob: your puppets have been incredible sources of inspiration. Also huge thanks to the people hanging in the matrix-bot-sdk chat room, who've answered questions and provided help in a few occasions. I hope you liked this presentation of Botzilla and its features! 
Of course, all the code is free and open-source, including the bot as well as the histoire frontend. At this point it is addressing most of the needs I had, so I don't have immediate plans to extend it further. I'd happily take contributions, though, so feel free to chime in if you'd like to implement anything! It's also a breeze to run on any machine, thanks to Docker-based deployment. Have fun with it! Karma is an IRC idiosyncrasy, in which users rate up and down other users using their nickname suffixed with ++ or --. Karma tracking consists in keeping scores and displaying those. ↩ Histoire is the French for "history" and "story". Inherited from Steve Fink's very own mrgiggles :-) ↩ [Less]
Posted over 3 years ago by Meridel Walkington
The Firefox UX content team has a new name that better reflects how we work. Co-authored with Betsy Mikel Photo by Brando Makes Branding on Unsplash   Hello. We’re the Firefox Content Design team. We’ve actually met before, but our name then was ... [More] the Firefox Content Strategy team. Why did we change our name to Content Design, you ask? Well, for a few (good) reasons. It better captures what we do We are designers, and our material is content. Content can be words, but it can be other things, too, like layout, hierarchy, iconography, and illustration. Words are one of the foundational elements in our design toolkit — similar to color or typography for visual designers — but it’s not always written words, and words aren’t created in a vacuum. Our practice is informed by research and an understanding of the holistic user journey. The type of content, and how content appears, is something we also create in close partnership with UX designers and researchers. “Then, instead of saying ‘How shall I write this?’, you say, ‘What content will best meet this need?’ The answer might be words, but it might also be other things: pictures, diagrams, charts, links, calendars, a series of questions and answers, videos, addresses, maps […] and many more besides. When your job is to decide which of those, or which combination of several of them, meets the user’s need — that’s content design.” — Sarah Richards defined the content design practice in her seminal work, Content Design It helps others understand how to work with us While content strategy accurately captures the full breadth of what we do, this descriptor is better understood by those doing content strategy work or very familiar with it. And, as we know from writing product copy, accuracy is not synonymous with clarity. Strategy can also sound like something we create on our own and then lob over a fence. In contrast, design is understood as an immersive and collaborative practice, grounded in solving user problems and business goals together. Content design is thus a clearer descriptor for a broader audience. When we collaborate cross-functionally (with product managers, engineers, marketing), it’s important they understand what to expect from our contributions, and how and when to engage us in the process. We often get asked: “When is the right time to bring in content? And the answer is: “The same time you’d bring in a designer.” We’re aligning with the field Content strategy is a job title often used by the much larger field of marketing content strategy or publishing. There are website content strategists, SEO content strategists, and social media content strategists, all who do different types of content-related work. Content design is a job title specific to product and user experience. And, making this change is in keeping with where the field is going. Organizations like Slack, Netflix, Intuit, and IBM also use content design, and practice leaders Shopify and Facebook recently made the change, articulating reasons that we share and echo here. It distinguishes the totality of our work from copywriting Writing interface copy is about 10% of what we do. While we do write words that appear in the product, it’s often at the end of a thoughtful design process that we participate in or lead. We’re still doing all the same things we did as content strategists, and we are still strategic in how we work (and shouldn’t everyone be strategic in how they work, anyway?) 
but we are choosing a title that better captures the unseen but equally important work we do to arrive at the words. It’s the best option for us, but there’s no ‘right’ option Job titles are tricky, especially for an emerging field like content design. The fact that titles are up for debate and actively evolving shows just how new our profession is. While there have been people creating product content experiences for a while, the field is really starting to now professionalize and expand. For example, we just got our first dedicated content design and UX writing conference this year with Button. Content strategy can be a good umbrella term for the activities of content design and UX writing. Larger teams might choose to differentiate more, staffing specialized strategists, content designers, and UX writers. For now, content design is the best option for us, where we are, and the context and organization in which we work. “There’s no ‘correct’ job title or description for this work. There’s not a single way you should contribute to your teams or help others understand what you do.”  — Metts & Welfle, Writing is Designing Words matter We’re documenting our name change publicly because, as our fellow content designers know, words matter. They reflect but also shape reality. We feel a bit self-conscious about this declaration, and maybe that’s because we are the newest guests at the UX party — so new that we are still writing, and rewriting, our name tag. So, hi, it’s nice to see you (again). We’re happy to be here.   Thank you to Michelle Heubusch, Gemma Petrie, and Katie Caldwell for reviewing this post.  [Less]
Posted over 3 years ago by glandium
Please partake in the git-cinnabar survey. Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull from, and push to mercurial remote repositories using git. Get it on github. These release notes are also available on the git-cinnabar wiki.
What's new since 0.5.5?
- Updated git to 2.29.2 for the helper.
- git cinnabar git2hg and git cinnabar hg2git now have a --batch flag.
- Fixed a few issues with experimental support for python 3.
- Fixed compatibility issues with mercurial >= 5.5.
- Avoid downloading unsupported clonebundles.
- Provide more resilience to network problems during bundle download.
- Prebuilt helper for Apple Silicon macOS is now available via git cinnabar download.
Posted over 3 years ago by Karl Dubost
When you are asked about the statement "I believe there are good career opportunities for me at Company X", what do you understand? Some people will associate this directly with climbing the company's hierarchy ladder. I have the feeling the reality is a lot more diverse: "career opportunities" means different things to different people. So I asked around (friends, family, colleagues) what their take on it was, outside of going higher in the hierarchy.
- Change of title (role recognition). This doesn't necessarily imply climbing the hierarchy ladder; it can just mean you are recognized for a certain expertise.
- Change of salary/work status (money/time). Sometimes people don't want to change their status but want better compensation, or a better working schedule (say, working 4 days a week for the same salary).
- Change of responsibilities (the practical work being done in the team). Some will see this as a possibility to learn new tricks, to diversify the work they do on a daily basis.
- Change of team (working on different things inside the company). Working on a different team because you think they do something super interesting there is appealing.
- Being mentored (inside your own company). It's a bit similar to the two previous ones, but here there is a framework inside the company where you are not helping and/or bringing your skills to the project; instead you join another team part-time to learn that team's work. Think of it as a kind of internal internship.
- Change of region/country. Working in a different country, or in a different region of the same country, when it's your own choice. This flexibility is a gem for me. I did it a couple of times. When working at W3C, I moved from the office on the French Riviera to working from home in Montreal (Canada). A bit later, I moved from Montreal to the W3C office in Japan. At Mozilla too (after moving back to Montreal for a couple of years), I moved from Montreal to Japan again. These are career opportunities because they allow you to work in a different setting with different people and communities, and this in itself makes life a lot richer.
- Having dedicated/planned time for conferences or courses. Being able to follow a class on a topic which helps the individual grow is super useful. But for this to be successful, it has to be understood that it requires time: time to follow the course, and time to do the coursework.
I'm pretty sure there are other possibilities. Thanks to everyone who shared their thoughts. Otsukare!
Posted over 3 years ago by Betsy Mikel
Photo by Amador Loureiro on Unsplash. The small bits of copy you see sprinkled throughout apps and websites are called microcopy. As content designers, we think deeply about what each word communicates. Microcopy is the tidiest of UI copy types. ... [More] But do not let its crisp, contained presentation fool you: the process to get to those final, perfect words can be messy. Very messy. Multiple drafts messy, mired with business and technical constraints. Here’s a secret about good writing that no one ever tells you: When you encounter clear UX content, it’s a result of editing and revision. The person who wrote those words likely had a dozen or more versions you’ll never see. They also probably had some business or technical constraints to consider, too. If you’ve ever wondered what all goes into writing microcopy, pour yourself a micro-cup of espresso or a micro-brew and read on! Blaise Pascal, translated from Lettres Provinciales 1. Understand the component and how it behaves. As a content designer, you should try to consider as many cases as possible up front. Work with Design and Engineering to test the limits of the container you’re writing for. What will copy look like when it’s really short? What will it look like when it’s really long? You might be surprised what you discover. Before writing the microcopy for iOS widgets, I needed first to understand the component and how it worked. Apple recently introduced a new component to the iOS ecosystem. You can now add widgets the Home Screen on your iPhone, iPad, or iPod touch. The Firefox widgets allow you to start a search, close your tabs, or open one of your top sites. Before I sat down to write a single word of microcopy, I would need to know the following: Is there a character limit for the widget descriptions? What happens if the copy expands beyond that character limit? Does it truncate? We had three widget sizes. Would this impact the available space for microcopy? Because these widgets didn’t yet exist in the wild for me to interact with, I asked Engineering to help answer my questions. Engineering played with variations of character length in a testing environment to see how the UI might change. Engineering tried variations of copy length in a testing environment. This helped us understand surprising behavior in the template itself. We learned the template behaved in a peculiar way. The widget would shrink to accommodate a longer description. Then, the template would essentially lock to that size. Even if other widgets had shorter descriptions, the widgets themselves would appear teeny. You had to strain your eyes to read any text on the widget itself. Long story short, the descriptions needed to be as concise as possible. This would accommodate for localization and keep the widgets from shrinking. First learning how the widgets behaved was a crucial step to writing effective microcopy. Build relationships with cross-functional peers so you can ask those questions and understand the limitations of the component you need to write for. 2. Spit out your first draft. Then revise, revise, revise. Mark Twain, The Wit and Wisdom of Mark Twain Now that I understood my constraints, I was ready to start typing. I typically work through several versions in a Google Doc, wearing out my delete key as I keep reworking until I get it ‘right.’ I wrote several iterations of the description for this widget to maximize the limited space and make the microcopy as useful as possible. 
Microcopy strives to provide maximum clarity in a limited amount of space. Every word counts and has to work hard. It’s worth the effort to analyze each word and ask yourself if it’s serving you as well as it could. Consider tense, voice, and other words on the screen. 3. Solicit feedback on your work. Before delivering final strings to Engineering, it’s always a good practice to get a second set of eyes from a fellow team member (this could be a content designer, UX designer, or researcher). Someone less familiar with the problem space can help spot confusing language or superfluous words. In many cases, our team also runs copy by our localization team to understand if the language might be too US-centric. Sometimes we will add a note for our localizers to explain the context and intent of the message. We also do a legal review with in-house product counsel. These extra checks give us better confidence in the microcopy we ultimately ship. Wrapping up Magical microcopy doesn’t shoot from our fingertips as we type (though we wish it did)! If we have any trade secrets to share, it’s only that first we seek to understand our constraints, then we revise, tweak, and rethink our words many times over. Ideally we bring in a partner to help us further refine and help us catch blind spots. This is why writing short can take time. If you’re tasked with writing microcopy, first learn as much as you can about the component you are writing for, particularly its constraints. When you finally sit down to write, don’t worry about getting it right the first time. Get your thoughts on paper, reflect on what you can improve, then repeat. You’ll get crisper and cleaner with each draft. Acknowledgements Thank you to my editors Meridel Walkington and Sharon Bautista for your excellent notes and suggestions on this post. Thanks to Emanuela Damiani for the Figma help. This post was originally published on Medium. [Less]