mr. cote

Educational, Investigative, and Absurd Writings by M. R. Côté

A Vision for Engineering Workflow at Mozilla (Part Three)


This is the last post in a three-part series on A Vision for Engineering Workflow at Mozilla. The first post in this series provided some background, while the second introduced the first four points of our nine-point vision.

The Engineering Workflow Vision (continued)

5. Reviews are straightforward and streamlined

The Engineering Workflow team has spent a lot of time over the last few years on review tools, starting with Splinter, moving into MozReview, and now onto Phabricator. In particular, MozReview was a grand experiment; its time may be over, but we learned a lot from the project and are incorporating these lessons not just into our new tools but also into how we work.

There are a lot of aspects to the code-review process. First and foremost is, of course, the tool that is used to actually leave reviews. One important meta-aspect of review-tool choice is that there should only be one. Mozilla has suffered from the problems caused by multiple review tools for quite a long time. Even before MozReview, users had the choice of raw diffs versus Splinter. Admittedly, the difference there is fairly minimal, but if you look at reviews conducted with Splinter, you will see the effect of having two systems: initial reviews are done in Splinter, but follow-ups are almost always done as comments left directly in the bug. The Splinter UI rarely shows any sort of conversation. We didn’t even use this simple tool entirely effectively.

Preferences for features and look and feel in review tools vary widely. One of the few uncontroversial characteristics is that a review tool should be fast—but of course even this is a trade-off, as nothing is faster than commenting directly on a diff and pasting it into Bugzilla. However, at a minimum the chosen tool should not feel slow and cumbersome, regardless of features.

Other aspects that are more difficult but nice to have include

  • Differentiating between intentional changes made by the patch author versus those from the patch being rebased
  • Clear and effective interdiff support
  • Good VCS integration

For the record, while not perfect, we believe Phabricator, our chosen review tool for the foreseeable future, fares pretty well against all of these requirements, while also being relatively intuitive and visually pleasing.

There are other parts of code review that can be automated to ease the whole process. Given that they are fairly specific to the way Mozilla works, they will likely need to be custom solutions, but the work and maintenance involved should easily be paid off in terms of efficiency gains. These include

  • Automated reviews to catch all errors that don’t require human judgement, e.g., linting. Even better would be the tool fixing such errors automatically, which would eliminate an extra review cycle. This feedback should ideally be available both locally and after review submission.
  • Reviewers are intelligently suggested. At the minimum, our module system should be reflected in the tool, but we can do better by calculating metrics based on file history, reviewer load and velocity, and other such markers.
  • Similarly, code owners should be clearly identified and enforced; it should be made clear if the appropriate reviewers have not signed off on a change, and landing should be prevented.
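To make the reviewer-suggestion idea above concrete, here is a minimal sketch. All names are hypothetical: a real implementation would mine VCS history for the review data and fold in module ownership, reviewer load, and velocity rather than use a simple frequency count.

```python
from collections import Counter

def suggest_reviewers(history, touched_files, top_n=3):
    """Rank candidate reviewers for a patch by how often they have
    reviewed past changes to the files it touches.

    `history` maps a file path to the list of people who reviewed
    previous changes to it -- a stand-in for data mined from VCS
    and review-tool history.
    """
    scores = Counter()
    for path in touched_files:
        scores.update(history.get(path, []))
    return [name for name, _ in scores.most_common(top_n)]
```

For example, `suggest_reviewers({"dom/base/nsDocument.cpp": ["alice", "bob", "alice"]}, ["dom/base/nsDocument.cpp"])` would rank alice first. Even this naive metric beats asking patch authors to guess a reviewer from memory.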

This last point segues into the next item in the vision.

6. Code is landed automatically

Mozilla has had an autoland feature as part of MozReview for about 2.5 years now, and we recently launched Lando as our new automatic-landing tool integrated with Phabricator. Lando has incorporated some of the lessons we learned from MozReview (not the least of which is “don’t build your custom tools directly into your third-party tools”), but there is much we can do past our simple click-to-land system.

One feature that will unlock a lot of improvements is purely automatic landings, that is, landings that are initiated automatically after the necessary reviews are granted. This relies on the system understanding which reviews are necessary (see above), but beyond that it needs just a simple checkbox to signal the author’s intent to land (so we avoid accidentally landing patches that are works in progress). Further, as opposed to Try runs for testing, developers don’t tend to care too much about the time to land a completed patch as long as a whole series lands together, so this feature could be used to schedule landings over time to better distribute load on the CI systems.

Automatic landings also provide opportunities to reduce manual involvement in other processes, including backouts, uplifts, and merges. Using a single tool also provides a central place for record-keeping, to both generate metrics and follow how patches move through the trains. More on this in future sections.

7. Bug handling is easy, fast, and friendly

Particularly at Mozilla, bug tracking is a huge topic, even bigger than code review. For better or worse, Bugzilla has been a major part of the central nervous system of Mozilla engineering since its earliest days; indeed, Bugzilla turns 20 in just a couple of months! Discussing Bugzilla’s past, present, and future roles at Mozilla would take many blog posts, if not a book, so I’ll be a bit broad in my comments here.

First, and probably most obviously, Mozilla’s bug tracker should prioritize usability and user experience (yes, they’re different). Mozilla engages not just full-time engineering employees but also a very large community with diverse backgrounds and skill sets. Allowing an engineer to be productive while encouraging users without technical backgrounds to submit bug reports is quite a challenge, and one that most high-tech organizations never have to face.

Another topic that has come up in the past is search functionality. Developers frequently need to find bugs they’ve seen previously, or want to find possible duplicates of recently filed bugs. The ideal search feature would be fast, of course, but also accurate and relevant. I think about these two aspects slightly differently: accuracy pertains to finding a specific bug, whereas relevance is important when searching for a class of bugs matching some given attributes.

Over the past couple of years we have been trying to move certain use cases out of Bugzilla, so that we can focus specifically on engineering. This is part of a grander effort to consolidate workflows, which has a host of benefits ranging from simpler, more intuitive interfaces to reduced maintenance burden. However, this means we need to understand specific use cases within engineering and implement features to support them, in addition to the more general concerns above. A recent example is the refinement of triage processes, which is helped along by specific improvements to Bugzilla.

8. Metrics are comprehensive, discoverable, and understandable

The value of data about one’s products and processes is not something that needs much justification today. Mozilla has already invested heavily in a data-driven approach to developing Firefox and other applications. The Engineering Workflow team is starting to do the same, thanks to infrastructure built for Firefox telemetry.

The list of data we could benefit from collecting is endless, but a few examples include

  • Backout rates and causes
  • Build times
  • Test-run times
  • Patch-review times
  • Tool adoption

We’re already gathering and visualizing some of these stats.

Naturally such data is even more valuable if shared so other teams can analyze it for their benefit.

9. Information on “code flow” is clear and discoverable

This item builds on the previous one. It is the most nebulous, but to me it is one of the most interesting.

Code changes (patches, commits, changesets, whatever you want to call them) have a life cycle:

  1. A developer writes one or more patches to solve a problem. Sometimes the patches are in response to a bug report; sometimes a bug report is filed just for tracking.

  2. The patches are often sent to Try for testing, sometimes multiple times.

  3. The patches are reviewed by one or more developers, sometimes through multiple cycles.

  4. The patches are landed, usually on an integration branch, then merged to mozilla-central.

  5. Occasionally, the patches are backed out, in which case flow returns to step 1.

  6. The patches are periodically merged to the next channel branch, or occasionally uplifted directly to one or more branches.

  7. The patches are included in a specific channel build.

  8. Steps 6 and 7 repeat until the patch ends up in the mozilla-release branch and is included in a Release build.

There’s currently no way to easily follow a code change through these stages, and thus no metrics on how flow is affected by the various aspects of a change (size, area of code, author, reviewer(s), etc.). Further, tracking this information could provide clear indicators of flow problems, such as commits that are ready to land but have merge conflicts, or commits that have been waiting on review for an extended period. Collecting and visualizing this information could help improve various engineering processes, as well as provide the simple thrill of literally watching your change progress to release.
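As a purely illustrative sketch of what such tracking might look like (the stage names and thresholds are hypothetical, not anything our tools implement), a change's flow could be recorded as timestamped stage transitions, from which stuck changes fall out naturally:

```python
from datetime import datetime, timedelta

# Hypothetical life-cycle stages, roughly matching the list above.
STAGES = ["written", "tried", "reviewed", "landed", "merged", "released"]

class ChangeFlow:
    """Timestamped record of a single change's progress."""
    def __init__(self, change_id):
        self.change_id = change_id
        self.entered = {}  # stage name -> datetime it was entered

    def enter(self, stage, when):
        self.entered[stage] = when

    def current_stage(self):
        # The furthest stage reached so far.
        for stage in reversed(STAGES):
            if stage in self.entered:
                return stage
        return None

def stuck(flows, stage, limit, now):
    """Flag changes that have sat in `stage` longer than `limit`,
    e.g. waiting on review for more than a week."""
    return [f.change_id for f in flows
            if f.current_stage() == stage
            and now - f.entered[stage] > limit]
```

A dashboard built on records like these could answer questions such as "which patches have been awaiting review for over a week?" without anyone manually trawling bugs.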

This is a grand idea that needs a lot more thought, but many of the previous items feed naturally into it.


This vision is just a starting point. We’re building a road map for the short-to-medium term while we think about a larger 2-to-3-year plan. Figuring out how to prioritize by a combination of impact, feasibility, risk, and effort is no small feat, and something that we’ll likely have to course-correct over time. Overall, the creation of this vision has been inspiring for my team, as we can now envision a better world for Mozilla engineering and understand our part in it. I hope the window this provides into the work of the Engineering Workflow team is valuable to other teams both within and outside of Mozilla.

A Vision for Engineering Workflow at Mozilla (Part Two)


In my last post I touched on the history and mission of the Engineering Workflow team, and I went into some of the challenges the team faces, which informed the creation of the team’s vision. In this post I’ll go into the vision itself.

First, a bit of a preamble to set context and expectations.

About the Vision

Members of the Engineering Workflow team have had many conversations with Firefox engineers, managers, and leaders across many years. The results of these conversations have led to various product decisions, but generally without a well-defined overarching direction. Over the last year we took a step back to get a more comprehensive understanding of the needs and inefficiencies in Firefox engineering. This enables us to lay out a map of where Engineering Workflow could go over the course of years, rather than our previous short-term approaches.

As I mentioned earlier, I couldn’t find much in the way of examples of tooling strategies to work from. However, there are many projects out there that have developed tooling and automation ecosystems that can provide ideas for us to incorporate into our vision. A notable example is the Chromium project, the open-source core of the Chrome browser. Aspects of their engineering processes and systems have made their way into what follows.

It is very important to understand that this vision, if not vision statements in general, is aspirational. I deliberately crafted it such that it could take many engineer-years to achieve even a large part of it. It should be something we can reference to guide our work for the foreseeable future. To ensure it was as comprehensive as possible, it was constructed without regard to feasibility or, therefore, to the priority of its individual pieces. A road map for how to best approach the implementation of the vision for the most impact is a necessary next step.

The resulting vision is nine points laying out the ideal world from an Engineering Workflow standpoint. I’ll go through them one by one up to point four in this post, with the remaining five to follow.

The Engineering Workflow Vision

1. Checking out the full mozilla-central source is fast

The repository necessary for building and testing Firefox, mozilla-central, is massive. Cloning and updating the repo takes quite a while even for engineers located close to the central servers; the experience for more distant contributors can be much worse. Furthermore, this affects our CI systems, which are constantly cloning the source to execute builds and tests. Thus there is a big benefit to making cloning and updating the Firefox source as fast as possible.

There are various ways to tackle this problem. We are currently working on geo-distributed mirrors of the source code that are at least read-only to minimize the distance the data has to travel to get onto your local machine. There is also work we can do to reduce the amount of data that needs to be fetched, by determining what data is actually required for a given task and using that to allow shallow and/or narrow clones.

There are other issues in the VCS space that hamper the productivity of both product and tooling engineers. One is our approach to branching. The various train, feature, and testing branches are in fact separate repositories altogether, stemming from the early days of the switch to Mercurial. This nonstandard approach is both confusing and inefficient. There are also multiple integration “branches”, in particular autoland and mozilla-inbound, which require regular merging, which in turn complicates history.

Supporting multiple VCSes also has a cost. Although Mercurial is the core VCS for Firefox development, the rise of Git led to the development of git-cinnabar as an alternate avenue for interacting with Firefox source. If not a completely de jure solution, it has enough users to warrant support from our tools, which means extra work. Furthermore, it is still sufficiently different from Git, in terms of installation at least, to trip some contributors up. Ideally, we would have a single VCS in use throughout Firefox engineering, or at least a well-defined pipeline for contributions that allows smooth use of vanilla Git even if the core is still kept in Mercurial.

2. Source code and history are easily navigable

To continue from the previous point, the vast size of the Firefox codebase means that it can be quite tricky for even experienced engineers, let alone new contributors, to find their way around. To reduce this burden, we can both improve the way the source is laid out and support tools to make sense of the whole.

One confusing aspect of mozilla-central is the lack of organization and discoverability of the many third-party libraries and applications that are mirrored into the tree. It is difficult to even figure out what is externally sourced, let alone how and how often our versions are updated. We have started a plan to provide metadata and reorganize the tree to make this more discoverable, with the eventual goal to automate some of the manual processes for updating third-party code.

Mozilla also has not just one but two tools for digging deep into Firefox source code: dxr and searchfox. Neither of these tools is well maintained at the moment. We need to critically examine these, and perhaps other, tools and choose a single solution, again improving discoverability and maintainability.

3. Installing a development environment is fast and easy

Over the years Mozilla engineers have developed solutions to simplify the installation of all the applications and libraries necessary to build Firefox that aren’t bundled into its codebase. Although they work relatively well, there are many improvements that can be made.

The rise of Docker and other container solutions has resulted in an appreciation of the benefits of isolating applications from the underlying system. Especially given the low cost of disk space today, a Firefox build and test environment should be completely isolated from the rest of the host system, preventing unwanted interactions between other versions of dependent apps and libraries that may already be installed on the system, and other such cross-contamination.

We can also continue down the path that was started with mach and encapsulate other common tasks in simple commands. Contributors should not have to be familiar with the intricacies of all of our tools, in-house and third-party, to perform standard actions like building, running tests, submitting code reviews, and landing patches.
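To illustrate the encapsulation idea, here is a toy command dispatcher in the spirit of mach. This is not mach's actual implementation or API, just a sketch of the pattern: common tasks registered under simple names so contributors never need to know the underlying tools.

```python
# Registry of command name -> (handler, help text). In a real tool
# this would be populated from per-module command definitions.
COMMANDS = {}

def command(name, help_text):
    """Decorator registering a function as a named command."""
    def register(func):
        COMMANDS[name] = (func, help_text)
        return func
    return register

@command("build", "Build Firefox")
def build(args):
    # A real handler would invoke the build system here.
    return f"running build with {args!r}"

def dispatch(argv):
    """Look up and run the command named by argv[0]."""
    name, *rest = argv
    if name not in COMMANDS:
        known = ", ".join(sorted(COMMANDS))
        return f"unknown command {name!r}; known commands: {known}"
    func, _help = COMMANDS[name]
    return func(rest)
```

The value of the pattern is that `dispatch(["build", "--jobs=8"])` works the same for everyone, regardless of which build backend or VCS sits underneath.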

4. Building is fast

Building Firefox is a task that individual developers perform all the time, and our CI systems spend a large part of their time doing the same. It should be pretty obvious that reducing the time to build Firefox with a code change anywhere in the tree has a serious impact.

There are myriad ways our builds can be made faster. We have already done a lot of work to abstract build definitions in order to experiment with different build systems, and it looks like tup may allow us to have lightning-fast incremental builds. Also, the strategy we used to isolate platform components written in C++ and Rust from the front-end JavaScript pieces, which dramatically lowered build times for people working on the latter, could similarly be applied to isolate the building of system add-ons, such as devtools, from the rest of Firefox. We should do a comprehensive evaluation of places existing processes can be tightened up and continue to look for where we can make larger changes.

Stay tuned for the final part of this series of posts.

Phabricator and Lando Launched


The Engineering Workflow team at Mozilla is happy to announce that Phabricator and Lando are now ready for use with mozilla-central! This represents about a year of work integrating Phabricator with our systems and building out Lando.

There are more details in my post to the dev.platform list.

A Vision for Engineering Workflow at Mozilla (Part One)


The OED’s second definition of “vision” is “the ability to think about or plan the future with imagination or wisdom.” Thus I felt more than a little trepidation when I was tasked with creating a vision for my team. What should this look like? How do I scope it? What should it cover? The Internet was of surprisingly little help; it seems that either no one thinks about tooling and engineering processes at this level, or (perhaps more likely) they keep it a secret when they do. The best article I found was from Microsoft Research in which they studied how tools are adopted at Microsoft, and their conclusion was essentially that they had no overarching strategy.

Around six months later, I presented a Vision for Engineering Workflow at our fortnightly managers' meeting. But first, some context: a bit about Mozilla’s Engineering Workflow team, and about the challenges we face.

Engineering Workflow

The Engineering Workflow team was created in the Great Reorg of 2017, when, amongst other large changes, its predecessor, the Automation & Tools team (aka the A-Team) was split into two, with the part focussed more on test automation joining the newly formed Product Integrity org. The other half remained in the Engineering Operations org, along with a related team, managed by coop, that worked on the build and version-control systems. In February, these two teams were consolidated into a single team, with kmoir joining the team as a new manager while coop headed off to manage the Taskcluster team.

We named this new team “Engineering Workflow” to reflect that it is focussed on the first stages of the Firefox engineering pipeline, that is, tools and processes that most developers use on a day-to-day basis. Our mission is as follows:

The engineering workflow team exists to improve the quality, clarity, and efficiency of Firefox development through the integration and development of tools and automation.

More specifically, the major pieces of the engineering pipeline that we work on are

  • Tracking issues
  • Reviewing code
  • Landing code
  • Building Firefox

Just as importantly, there are many related systems that we don’t own. These include

  • Tests and test frameworks. As mentioned above, these are the responsibility of the Product Integrity org.
  • Release and update infrastructure. This is the domain of release engineering and release management.
  • Metrics related to product use. Although we are starting to collect our own metrics, data related to Firefox itself is collected and analyzed by the data and product teams.
  • Firefox Developer Experience (aka devtools). I mention this only because they have (or at least had) a similar name. This is the team that works on the developer tools that are shipped as part of Firefox.
  • Low-level tools. These tools are very product focussed, requiring intimate knowledge of the Firefox codebase and C++ development. This team is managed by Anthony Jones and is part of the Runtime Engineering group.
  • Products not built from mozilla-central. To allow us to focus (I seem to really love that word), we prioritize work to help developers working within the mozilla-central codebase. Many of our tools are also used by other teams (including ourselves!) but support requests from them are considered lower priority.

Of course, we can and do work with many of these other teams on joint ventures. Over time I would like to better coordinate our respective road maps to deliver even more impact to engineering at Mozilla.


Challenges

Mozilla is a unique place. Not only are we a nonprofit that works in the open, but we develop a massive application with contributors, both paid and volunteer, located all around the world. This means we also face unique challenges when it comes to figuring out what tools and automation to integrate, build, and/or improve to maximize impact. I’ll touch on a few here.

Diverse workflows and strong preferences

Tales of “religious wars” within software development stretch back decades, so it is no surprise that many Mozillians have strong opinions about the way they prefer to work. What is less common is that Mozilla has generally shied away from defining official (or even officially supported) tools and processes. I won’t get into the merits of this approach, but it does impact tooling teams, who have to either support multiple workflows in their tools or unilaterally decide to prioritize some over others.

A few examples:

  • git versus modern hg versus mq. Not only are developers split across two VCSes, but even within Mercurial users there are differences (though thankfully mq usage seems to be much lower than a few years ago).
  • microcommits and commit series. Some developers tend to create a single patch per bug. A good number create a few patches per bug, sometimes as followups but often as one chunk of work. And there are a small number who, at least at times, create long series of commits, sometimes on the order of 20 to 40. Furthermore, despite the growing popularity of the commit-series philosophy, including at Mozilla, proper support in review tools remains rare.
  • importance of features in code-review and issue-tracking apps. Unsurprisingly, as developers spend much of their days working with bugs and code changes, they tend to get opinionated as to how the tools could be made better. It’s tricky to know which features to prioritize when both improving and migrating tools.

I am happy to report, however, that there is more and more support for consolidating workflows at Mozilla.

Engineering scale

Firefox is a huge application. A full Mercurial clone currently takes up 3.6 GB of disk space. Without Mercurial metadata the codebase, including build, test, and third-party libs and apps, contains over 245 000 files in more than 17 000 directories totalling almost 20 million lines of code. There aren’t too many projects the size of Firefox, open source or otherwise.

Unsurprisingly, since Firefox remains very active, there are a lot of changes going into the codebase: about 180 per day in April 2018. Contrast this with about 25-30 per day going into the Linux kernel. This also doesn’t count pushes to the try server for testing works in progress. April saw 210 compute years in our CI system.

Finally, we have complicated security requirements. Mozilla is open by design, with many tools and processes exposed to the public (and the public Internet!). Our approach to governance does not restrict positions of authority and responsibility to employees. These complexities and subtleties can be seen in BMO’s permission system, which is much more fine-grained than what is built into most issue-tracking and code-review tools.

All these factors create difficult problems when integrating third-party tools into our systems. In addition, although we do this less than we used to, our scale means that there are not always existing solutions out there that meet our needs, requiring us to write custom applications that need to be highly scalable and secure.


A long legacy

Related to the scale of Firefox development is its long legacy. The Mozilla Foundation is 20 years old, with the Netscape code dating back even further. Although Mozilla has grown dramatically as an entity, many workflows persist over the years, including the reviewing of patches in Bugzilla and the use of Mercurial queues. Understandably, when developers have used a certain workflow for many years, they are often skeptical of change. Yet newer contributors are more familiar with modern workflows, so modern tooling can help attract and retain both employees and volunteers, in addition to the various other advantages in terms of ergonomics, usability, and efficiency. Contending with these two perspectives can be difficult.

In addition to legacy workflows, we also have a number of legacy systems. Many of these systems continue to serve us well, and we are constantly making improvements to them. However, large-scale changes can be difficult, not just because of the age of these systems and codebases, but also because over time they have been integrated with many other applications and used in ways we aren’t aware of and sometimes don’t expect. This makes planning challenging and requires a lot of communication.

Decision-making and responsibility

I’m happy to say that this set of challenges has seen the most improvement of the ones I’ve highlighted. I’ll mention them regardless as we can always be improving, and an understanding of our history helps.

Decision-making at Mozilla has been challenging for a number of reasons, mainly due to its history and rapid growth. In particular, there was a common view that we aimed for consensus on all major decisions, which, while well-intentioned, did not scale, and in fact was contradicted by both our management and module systems. This has led both to stalled decisions and sudden decisions that avoided discussions altogether. I’ve previously written about my perspectives and experiences in making decisions at Mozilla. Thankfully, as I also noted, this culture is changing, and making effective, reasoned decisions is getting easier.

Within my own team, or at least its previous incarnation as the A-Team, we experienced our own difficulties making decisions and prioritizing work. There were no clear lines of authority and responsibility when it came to tooling, which also contributed to our team becoming too service-oriented. Again, this is changing for the better with the existence of both Engineering Operations and Product Integrity, whose directors are peers of those of the product-focussed departments.

Thus ends my preamble on the context of developing a Vision for Engineering Workflow. In my next post, I’ll delve into the Vision itself.



Having taken advantage of a Black Friday sale and picked up a Raspberry Pi 3 Model B, I then needed something to do with it. I’m not sure how it popped into my head, but at some point I realized my bar absolutely needed a Times Square zipper-type LED display. I mean, it seems so obvious in retrospect.

At first I thought about making one but then I thought “hahah yeah right no”. Luckily Adafruit caters to people of a similar mindset, so I got myself a superfancy 64x32 RGB LED matrix. The assembly tested my very rusty soldering skills (and also made me realize how my near vision has degraded in the past ten years…), but it powered up just fine!

After that, I thought it would be a neat trick to let visitors draw on the matrix. Of course I don’t know who would actually bother doing that, at least more than once, but I think it would sound cool to mention at a party. Jenn Schiffer’s awesome Make 8-Bit Art! site seemed like a good starting point, so I grabbed the code and started poking around.

The first problem was that I needed to export a pixel-by-pixel version of the drawn image, such that each square maps to a single pixel that in turn will map to a single LED. So I needed a 64x32 image, but the PNGs generated by Make 8-Bit Art are scaled up much larger.

That didn’t end up being very difficult: I just sample one pixel in the middle of each square and build that into a new image. I created a pull request with my modifications, although I don’t know how generally useful this would be.
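The sampling itself is straightforward. Here is a stdlib-only sketch of the idea; the real code operates on the exported PNG via an image library, but the centre-sampling logic is the same:

```python
def downscale(pixels, cell):
    """Collapse an image drawn as uniform cell x cell squares down to
    one pixel per square, by sampling the centre pixel of each square.

    `pixels` is a row-major 2D list of colour values. For a 64x32
    matrix, the input would be (64 * cell) wide and (32 * cell) tall,
    and the output one value per LED.
    """
    half = cell // 2
    return [[pixels[y + half][x + half]
             for x in range(0, len(pixels[0]), cell)]
            for y in range(0, len(pixels), cell)]
```

With a real image library the same loop would read `image.getpixel((x + half, y + half))` and write the result into a new 64x32 image.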

Next was the task of creating a web service to take the resulting image and display it. I followed the Adafruit guide that recommends using the rpi-rgb-led-matrix library by hzeller and got the demo going. The web service itself was simple, as I only needed to support uploading a picture and sending it to the LED matrix, with code for the latter already existing in the rpi-rgb-led-matrix package.

However, writing to the matrix requires root privileges, and I didn’t want to run my web server as root. So instead, I made a very simple daemon that runs as root and takes commands over Redis. The Flask app now handles only file uploads and sending Redis messages to display an image or clear the matrix.
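The shape of that split can be sketched as follows. This is illustrative only: the real daemon listens on Redis and drives the rpi-rgb-led-matrix bindings, whereas here the channel is modelled with a queue.Queue and a stand-in matrix class so the sketch is self-contained.

```python
import queue

class FakeMatrix:
    """Stand-in for the root-only rpi-rgb-led-matrix bindings."""
    def __init__(self):
        self.shown = None
    def display(self, path):
        self.shown = path
    def clear(self):
        self.shown = None

def run_daemon(commands, matrix):
    """Root-side loop: consume commands posted by the unprivileged
    web app. With Redis this would block on a list or a pub/sub
    subscription instead of a queue.Queue."""
    while True:
        msg = commands.get()
        if msg == "clear":
            matrix.clear()
        elif msg.startswith("display "):
            matrix.display(msg.split(" ", 1)[1])
        elif msg == "stop":  # not in the real daemon; lets the sketch exit
            return
```

The Flask side then only handles the upload and posts a message like `"display /uploads/foo.png"` to the channel; it never needs root itself.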

I found out later that the guide I was following was using an out-of-date fork of rpi-rgb-led-matrix. Adafruit had forked it to add Python support; however, it was done a bit hackily, so I had to run my daemon from the rpi-rgb-led-matrix directory. I just noticed that Adafruit has updated their guide to use the latest code in that library, which had more thorough Python support added some time ago, so I’ll be updating mine at some point. Hopefully this will let me run the daemon from its own directory.

I also set up a WSGI server and got it running with systemd and nginx (see the README for details). Unfortunately I had less luck running the daemon automatically. Running it via systemd would result in my Raspberry Pi locking up. Running it from a terminal works fine. I’m guessing the difference is somehow environment related, but as of yet I haven’t been able to figure it out. For now, I run the daemon in a screen session. Maybe this is something else that will be fixed by converting to the latest rpi-rgb-led-matrix code.

Last was hacking up Make 8-Bit Art to use my new web service. I added some code to the pixel-by-pixel feature to use the Fetch API to send the created image up to the Pi and then display it. It’s hacky but works fine! Since this is almost certainly of little use to the upstream repository, I created a branch but haven’t bothered submitting a pull request.

Since getting this working, I’ve actually had a few people play with it, including my daughter. The display is really nice, with very vivid colours, so it’s an interesting combination of retro and modern technologies.

Here’s an example of something simple drawn in the app:

And the resulting image on the matrix:

It’s a really intense matrix, and I suck at photography, so here’s another shot, with the flash on, so you can see the matrix itself a bit better:

My next step is to simplify my fork of Make 8-Bit Art (I don’t need any of the other export options; just “Save” and “Display” buttons would do). After that, I’d like to extend it to allow more than one image to exist on the Pi at a time. Then I could add a feature to my web service to choose from previously uploaded images, and even to create animations.

All in all this has been a fun little project, and as with most, if not all, open-source software, I couldn’t have done it without a lot of help. Thanks to Adafruit, Jenn Schiffer, and hzeller in particular!

Lando Demo

| Comments

Lando is so close now that I can practically smell the tibanna. Israel put together a quick demo of Phabricator/BMO/Lando/hg running on his local system, which is only a few patches away from being a deployed reality.

One caveat: this demo uses Phabricator’s web UI to post the patch. We highly recommend using Arcanist, Phabricator’s command-line tool, to submit patches instead, mainly because it preserves all the relevant changeset metadata.

With that out of the way, fasten your cape and take a look:

More Lessons From MozReview: Mozilla and Microcommits

| Comments

There is a strong argument in modern software engineering that a sequence of smaller changes is preferable to a single large change. This approach facilitates development (easier to debug, quicker to land), testing (less functionality to verify at once), reviews (less code to keep in the reviewer’s head), and archaeology (annotations are easier to follow). Recommended limits are in the realm of 300-400 changed lines of code per patch (see, for example, the great article “How to Do Code Reviews Like a Human (Part Two)”).

400 lines can still be a fairly complex change. Microcommits is the small-patch approach taken to its logical conclusion. The idea is to make changes as small as possible but no smaller, resulting in a series of atomic commits that incrementally implement a fix or new feature. It’s not uncommon for such a series to contain ten or more commits, many changing only 20 or 30 lines. It requires some discipline to keep commits small and cohesive, but it is a skill that improves over time and, in fact, changes how you think about building software.

Former Mozillian Lucas Rocha has a great summary of some of the benefits. Various other Mozillians have espoused their personal beliefs that Firefox engineering would do well to more widely adopt the microcommits approach. I don’t recall ever seeing an organized push towards this philosophy, however; indeed, for better or for worse, Mozilla tends to shy away from this type of pronouncement. This left me with a question: have many individual engineers started working with microcommits? If we do not have a de jure decision to work this way, do we have a de facto one?

We designed MozReview to be repository-based to encourage the microcommit philosophy. Pushing up a series of commits automatically creates one review request per commit, and they are all tied together (albeit through the “parent review request” hack which has understandably caused some amount of confusion). Updating a series, including adding and removing commits, just works. Although we never got around to implementing support for confidential patches (a difficult problem given that VCSs aren’t designed to have a mix of permissions in the same repo), we were pretty proud of the fact that MozReview was unique in its first-class support for publishing and reviewing microcommit series.

While MozReview was never designated the Firefox review tool, through organic growth it is now used to review (and generally land) around 63% of total commits to mozilla-central, looking at stats for bugs in the Core, Firefox, and Toolkit products:

To be honest, I was a little surprised at the numbers. Not only had MozReview grown in popularity over the last year, but much of its growth occurred right around the time its pending retirement was announced. In fact, it continued to grow slightly over the rest of the year.

However, we figured that, owing to MozReview’s support for microcommits, this wasn’t quite a fair comparison. Bugzilla’s attachment system discourages multiple patches per bug, even with command-line tools like bzexport, so we assumed that a fix submitted to MozReview would generally have more parts than a corresponding fix submitted as a traditional BMO patch, and thus that the bug-to-MozReview-request ratio would be lower than the bug-to-patch ratio. We ran a query on the number of MozReview requests per bug over roughly the last seven months. The results yielded further surprises:

About 75% of MozReview commit “series” contain only a single commit. 12% contain only two commits, 5% contain three, and 2.7% contain four. Series with five or more commits make up only 5.3%.

Still, it seems MozReview has perhaps encouraged the splitting up of work per bug to some degree, given that 25% of series had more than one commit. We decided to compare this to traditional patches attached to bugs, which are both more annoying to create and to apply:

Well then. Over approximately the same time period, of bugs with old-style attachments, 76% had a single patch. For bugs with two, three, and four patches, the proportions were 13%, 7%, and 1.5%, respectively. This is extremely close to the MozReview case. The mean is almost equal in both cases, and in fact slightly higher in the old-style-attachment case: 1.65 versus 1.61. The median in both cases is 1.
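For reference, this is the sort of computation behind those summary numbers; the sample counts below are made up for illustration, not the real query results:

```python
# Summarize a patches-per-bug (or commits-per-series) distribution.
from collections import Counter
from statistics import mean, median

def summarize(commit_counts):
    """Return (proportion of bugs per patch count, mean, median)."""
    dist = Counter(commit_counts)
    total = len(commit_counts)
    proportions = {size: round(count / total, 2)
                   for size, count in sorted(dist.items())}
    return proportions, mean(commit_counts), median(commit_counts)

# Hypothetical sample: most bugs have one patch, with a small long tail.
sample = [1] * 15 + [2] * 3 + [3] + [5]
proportions, avg, mid = summarize(sample)
# proportions -> {1: 0.75, 2: 0.15, 3: 0.05, 5: 0.05}; mean 1.45, median 1
```

Note how a handful of large series pulls the mean well above the median, which is why both statistics are worth reporting.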

Okay, maybe the growing popularity of MozReview in 2017 influenced the way people now use BMO. Perhaps a good number of authors use both systems, or reviewers who prefer MozReview are vocal about wanting work split into two or three patches rather than one even when reviewing in BMO. So we looked back at the situation with BMO patches in early 2016:

Huh. One-, two-, three-, and four-patch bugs accounted for 74%, 14%, 5%, and 2.6%, respectively.

For one more piece of evidence, this scatter plot shows that, on average, we’ve been using both BMO and MozReview in about the same way, in terms of discrete code changes per bug, over the last two years:

There are a few other angles we could conceivably consider, but the evidence strongly suggests that developers are (1) creating, in most cases, “series” of only one or two commits in MozReview and (2) working in approximately the same way in both BMO and MozReview, in terms of splitting up work.

I strongly believe we would benefit a great deal from making more of engineering’s assumptions and expectations clearer; this is a foundation of driving effective decisions. We don’t have to be right all the time, but we do have to be conscious, and we have to own up to mistakes. The above data leads me to conclude that the microcommit philosophy has not been widely adopted at Mozilla. We don’t, as a whole, care about using series of carefully structured and small commits to solve a problem. This is not an opinion on how we should work, but a conclusion on how we do work, informed by data. This is, in effect, a decision that has already been made, whether or not we realized it.

Although I am interested in this kind of thing from an academic perspective, it also has a serious impact on my direct responsibilities as an engineering manager. Recognizing such decisions will allow us to better prioritize our improvements to tooling and automation at Mozilla, even if it first precipitates a serious discussion and perhaps a difficult, conscious decision.

I will have more thoughts on why we have neither organically nor structurally adopted the microcommits approach in my next blog post. Spoiler: it may have to do with prevailing trends in open-source development, likely influenced by GitHub.

Phabricator and Lando November Update

| Comments

With work on Phabricator–BMO integration wrapping up, the development team’s focus has switched to the new automatic-landing service that will work with Phabricator. The new system is called “Lando” and functions somewhat similarly to MozReview–Autoland, with the biggest difference being that it is a standalone web application, not tightly integrated with Phabricator. This gives us much more flexibility and allows us to develop more quickly, since working within extension systems is often painful for anything nontrivial.

Lando is split between two services: the landing engine, “lando-api”, which transforms Phabricator revisions into a format suitable for the existing autoland service (called the “transplant server”), and the web interface, “lando-ui”, which displays information about the revisions to land and kicks off jobs. We split these services partly for security reasons and partly so that we could later have other interfaces to Lando, such as command-line tools.
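To make the division of labour concrete, here’s a rough sketch of lando-api’s core job, with an entirely made-up revision shape and job format (the real Phabricator conduit data and transplant API look different):

```python
# Hypothetical sketch: turn a Phabricator revision into a transplant-style
# landing job. Every field name here is illustrative, not the real schema.
def revision_to_transplant_request(revision, tree):
    """Transform a revision dict into a job for the transplant server."""
    return {
        "tree": tree,
        "rev": revision["id"],
        "patch_urls": [diff["uri"] for diff in revision["diffs"]],
        "destination": "autoland",  # the branch landings are pushed to
        "commit_message": "{title}\n\n{summary}".format(
            title=revision["title"], summary=revision["summary"]
        ),
    }
```

With the transformation isolated in lando-api, lando-ui (or a future command-line client) only has to display revision state and submit jobs, which is part of why the split buys flexibility.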

When I last posted an update I included an early screenshot of lando-ui. Since then, we have done some user testing of our prototypes to get early feedback. Using a great article, “Test Your Prototypes: How to Gather Feedback and Maximise Learning”, as a guide, we took our prototype to some interested future users. Refraining from explaining anything about the interface and providing only some context on how a user would get to the application, we encouraged them to think out loud, explaining what the data means to them and what actions they imagine the buttons and widgets would perform. After each session, we used the feedback to update our prototypes.

These sessions proved immensely useful. The feedback on our third prototype was much more positive than on our first prototype. We started out with an interface that made sense to us but was confusing to someone from outside the project, and we ended with one that was clear and intuitive to our users.

For comparison, this is what we started with:

And here is where we ended:

A partial implementation of the third prototype, with a few more small tweaks raised during the last feedback session, is currently deployed. There are some duplicated elements there just to show the various states; this redundant data will of course be removed as we start filling in the template with real data from Phabricator.

Phabricator remains in a pre-release phase, though we have some people now using it for mozilla-central reviews. Our team continues to use it daily, as does the NSS team. Our implementation has been very stable, but we are making a few changes to our original design to ensure it stays rock solid. Lando was scheduled for delivery in October, but due to a few different delays, including being one person down for a while and not wanting to launch a new tool during the flurry of the Firefox 57 launch, we’re now looking at a January launch date. We should have a working minimal version ready for Austin, where we have scheduled a training session for Phabricator and a Lando demo.

Decisions, Decisions, Decisions: Driving Change at Mozilla

| Comments

As the manager responsible for driving the decision and process behind the move to Phabricator at Mozilla, I’ve been asked to write about my recent experiences, including how this decision was implemented, what worked well, and what I might have done differently. I also have a few thoughts about decision making both generally and at Mozilla specifically.

Please note that these thoughts reflect only my personal opinions. They are not a pronouncement of how decision making is or will be done at Mozilla, although I hope that my account and ideas will be useful as we continue to define and shape processes, many of which are still raw years after we became an organization with more than a thousand employees, not to mention the vast number of volunteers.

Mozilla has used Bugzilla as both an issue tracker and a code-review tool since its inception almost twenty years ago. Bugzilla was arguably the first freely available web-powered issue tracker, but since then, many new apps in that space have appeared, both free/open-source and proprietary. A few years ago, Mozilla experimented with a new code-review solution, named (boringly) “MozReview”, which was built around Review Board, a third-party application. However, Mozilla never fully adopted MozReview, leading to code review being split between two tools, a confusing situation for seasoned and new contributors alike.

There were many reasons that MozReview didn’t completely catch on, some of which I’ve mentioned in previous blog and newsgroup posts. One major factor was the absence of a concrete, well-communicated, and, dare I say, enforced decision. The project was started by a small number of people, with no clearly defined scope, no consultations, no real dedicated resources, and no backing from upper management and leadership. In short, it was a recipe for failure, particularly considering how difficult change is even in a perfect world.

Having recognized this failure last year, and with the urging of some people at the director level and above, my team and I embarked on a process to replace both MozReview and the code-review functionality in Bugzilla with a single tool and process. Our scope was made clear: we wanted the tool that offered the best mechanics for code-review at Mozilla specifically. Other bits of automation, such as “push-to-review” support and automatic landings, while providing many benefits, were to be considered separately. This division of concerns helped us focus our efforts and make decisions clearer.

Our first step in the process was to hold a consultation. We deliberately involved only a small number of senior engineers and engineering directors. Past proposals for change have faltered on wide public consultation: by its very nature, it yields virtually every opinion imaginable on how a tool or process should be implemented, which often leads to arguments that are rarely settled, and even when “won” are still dominated by the loudest voices—indeed, the quieter voices rarely even participate for fear of being shouted down. While more proactive moderation may help, using a representative sample of engineers and managers results in a more civil, focussed, and productive discussion.

I would, however, change one aspect of this process: the people involved in the consultation should be more clearly defined, and not an ad-hoc group. Ideally we would have various advisory groups that would provide input on engineering processes. Without such people clearly identified, there will always be lingering questions as to the authority of the decision makers. There is, however, still much value in also having a public consultation, which I’ll get to shortly.

There is another aspect of this consultation process which was not clearly considered early on: what is the honest range of solutions we are considering? There has been a movement across Mozilla, which I fully support, to maximize the impact of our work. For my team, and many others, this means a careful tradeoff of custom, in-house development and third-party applications. We can use entirely custom solutions, we can integrate a few external apps with custom infrastructure, or we can use a full third-party suite. Due to the size and complexity of Firefox engineering, the latter is effectively impossible (also the topic for a series of posts). Due to the size of engineering-tools groups at Mozilla, the first is often ineffective.

Thus, we really already knew that code-review was a very likely candidate for a third-party solution, integrated into our existing processes and tools. Some thorough research into existing solutions would have further tightened the project’s scope, especially given Mozilla’s particular requirements, such as Mercurial support, which are in part due to a combination of scale and history. In the end, there are few realistic solutions. One is Review Board, which we used in MozReview. Admittedly we introduced confusion into the app by tying it too closely to some process-automation concepts, but it also had some design choices that were too much of a departure from traditional Firefox engineering processes.

The other obvious choice was Phabricator. We had considered it some years ago, in fact as part of the MozReview project. MozReview was developed as a monolithic solution with a review tool at its core, so the fact that Phabricator is written in PHP, a language without much presence at Mozilla today, was seen as a pretty big problem. Our new approach, though, in which the code-review tool is seen as just one component of a pipeline, means that we limit customizations largely to integration with the rest of the system. Thus the choice of technology is much less important.

The fact that Phabricator was virtually a default choice should have been more clearly communicated, both during the consultation process and in the early announcements. Regardless, we believe it is in fact a very solid choice, and that our efforts are truly best spent solving the problems unique to Mozilla, and code review is not one of them.

To sum up, small-scale consultations are more effective than open brainstorming, but it’s important to really pay attention to scope and constraints to make the process as effective and empowering as possible.

Lest the above seem otherwise, open consultation does provide an important part of the process, not in conceiving the initial solution but in vetting it. The decision makers cannot be “the community”, at least, not without a very clear process. It certainly can’t be the result of a discussion on a newsgroup. More on this later.

Identifying the decision maker is a problem that Mozilla has been wrestling with for years. Mitchell has previously pointed out that we have a dual system of authority: the module system and a management hierarchy. Decisions around tooling are even less clear, given that the relevant modules are either nonexistent or sweepingly defined. Thus in the absence of other options, it seemed that this should be a decision made by upper management, ultimately the Senior Director of Engineering Operations, Laura Thomson. My role was to define the scope of the change and drive it forward.

Of course since this decision affects every developer working on Firefox, we needed the support of Firefox engineering management. This has been another issue at Mozilla; the directorship was often concerned with the technical aspects of the Firefox product, but there was little input from them on the direction of the many supporting areas, including build, version control, and tooling. Happily I found out that this problem has been rectified. The current directors were more than happy to engage with Laura and me, backing our decision as well as providing some insights into how we could most effectively communicate it.

One suggestion they had was to set up a small hosted test instance and give accounts to a handful of senior engineers. The purpose of this was to both give them a heads up before the general announcement and to determine if there were any major problems with the tool that we might have missed. We got a bit of feedback, but nothing we weren’t already generally aware of.

At this point we were ready for our announcement. It’s worth pointing out again that this decision had effectively already been made, barring any major issues. That might seem disingenuous to some, but it’s worth reiterating two major points: (a) a decision like this, really, any nontrivial decision, can’t be effectively made by a large group of people, and (b) we did have to be honestly open to the idea that we might have missed some big ramification of this decision and be prepared to rethink parts, or maybe even all, of the plan.

This last piece is worth a bit more discussion. Our preparation for the general announcement included several things: a clear understanding of why we believe this change to be necessary and desirable, a list of concerns we anticipated but did not believe were blockers, and a list of areas that we were less clear on that could use some more input. By sorting out our thoughts in this way, we could stay on message. We were able to address the anticipated concerns but not get drawn into a long discussion. Again this can seem dismissive, but if nothing new is brought into the discussion, then there is no benefit to debating it. It is of course important to show that we understand such concerns, but it is equally important to demonstrate that we have considered them and do not see them as critical problems. However, we must also admit when we do not yet have a concrete answer to a problem, along with why we don’t think it needs an answer at this point—for example, how we will archive past reviews performed in MozReview. We were open to input on these issues, but also did not want to get sidetracked at this time.

All of this was greatly aided by having some members of Firefox and Mozilla leadership provide input into the exact wording of the announcement. I was also lucky to have lots of great input from Mardi Douglass, this area (internal communications) being her specialty. Although no amount of wordsmithing will ensure a smooth process, the end result was a much clearer explanation of the problem and the reasons behind our specific solution.

Indeed, there were some negative reactions to this announcement, although I have to admit that they were fewer than I had feared there would be. We endeavoured to keep the discussion focussed, employing the above approach. There were a few objections we hadn’t fully considered, and we publicly admitted so and tweaked our plans accordingly. None of the issues raised were deemed to be show-stoppers.

There were also a very small number of messages that crossed a line of civility. This line is difficult to determine, although we have often been too lenient in the past, alienating employees and volunteers alike. We drew the line in this discussion at posts that were disrespectful, in particular those that brought little of value while questioning our motives, abilities, and/or intentions. Mozilla has been getting better at policing discussions for toxic behaviour, and I was glad to see a couple people, notably Mike Hoye, step in when things took a turn for the worse.

There is also a point at which a conversation can start to go in circles, and in the discussion around Phabricator (in fact in response to a progress update a few months after the initial announcement) this ended up being about the authority of the decision makers, that is, Laura and myself. At this point I requested that a Firefox engineering director, in this case Joe Hildebrand, get involved to explain his perspective and voice his support for the project. I wish I hadn’t had to, but I did feel it was necessary to establish a certain amount of credibility by showing that Firefox leadership was both involved with and behind this decision.

Although disheartening, it is also not surprising that the issue of authority came up, since as I mentioned above, decision making has been a very nebulous topic at Mozilla. There is a tendency to invoke terms like “open” and “transparent” without in any way defining them, evoking an expectation that everyone shares an understanding of how we ought to make decisions, or even how we used to make decisions in some long-ago time in Mozilla’s history. I strongly believe we need to lay out a decision-making framework that values openness and transparency but also sets clear expectations of how these concepts fit into the overall process. The most egregious argument along these lines that I’ve heard is that we are a “consensus-based organization”. Even if consensus were possible in a decision that affects literally hundreds, if not thousands, of people, we are demonstrably not consensus-driven by having both module and management systems. We do ourselves a disservice by invoking consensus when trying to drive change at Mozilla.

On a final note, I thought it was quite interesting that the topic of decision making, in the sense of product design, came up in the recent CNET article on Firefox 57. To quote Chris Beard, “If you try to make everyone happy, you’re not making anyone happy. Large organizations with hundreds of millions of users get defensive and try to keep everybody happy. Ultimately you end up with a mediocre product and experience.” I would in fact extend that to trying to make all Mozillians happy with our internal tools and processes. It’s a scary responsibility to drive innovative change at Mozilla, to see where we could have a greater impact and to know that there will be resistance, but if Mozilla can do it in its consumer products, we have no excuse for not also doing so internally.

Phabricator Update

| Comments


  • Development Phabricator instance is up, authenticated via bugzilla-dev
  • Development, read-only UI for Lando (the new automatic-landing service) has been deployed.
  • Work is proceeding on matching viewing restrictions on Phabricator revisions (review requests) to associated confidential bugs.
  • Work is proceeding on the internals of Lando to land Phabricator revisions to the autoland Mercurial branch.
  • Pre-release of Phabricator, without Lando, targeted for mid-August.
  • General release of Phabricator and Lando targeted for late September or early October.
  • MozReview and Splinter turned off in early December.

Work on Phabricator@Mozilla has been progressing well for the last couple months. Work has been split into two areas: Phabricator–Bugzilla integration and automatic landings.

Let me start with what’s live today:

Our Phabricator development instance is up. We’ve completed and deployed a Phabricator extension to use Bugzilla for authentication and identity; on our instance, this is tied to bugzilla-dev. If you would like to poke around our development instance, please be our guest! Note that it is a development server, so we make no guarantees as to functionality, data preservation, and such, as with bugzilla-dev. Also, if you haven’t used bugzilla-dev in the last year or two (or ever), you’ll either need to log in with GitHub or get an admin to reset your password, since email is disabled on this server. Ping mcote or holler in #bmo on IRC. I’ll have a follow-up post on exactly what’s involved in using Bugzilla as an authentication and identity provider and how it affects you.

The skeleton of our new automatic-landing service, called Lando, is also deployed to development servers. While it doesn’t actually do any landings yet, the UI has been fleshed out. It pulls the current status of a “revision” (which is Phabricator’s term for a review request) and displays relevant details, currently using live data from our development Phabricator instance. This is what it looks like at the moment, although we will continue to iterate on it:

What we’re working on now:

The other part of Bugzilla integration is ensuring that we can support confidential revisions (review requests) in Phabricator tied to confidential bugs in a seamless way. The goal is to have the set of people who can view a confidential bug in Bugzilla be equal to the set of people who can view any Phabricator revisions associated with that bug. We knew that matching any third-party tool to Bugzilla’s fine-grained authorization system would not be easy, but Phabricator has proven even trickier to integrate than we anticipated. We have implemented the code that sets the visibility appropriately for a new revision, and we have the skeleton code for keeping it in sync, but there are some holes in our implementation that we need to plug. We’re continuing to dig into this and have set a goal to have a solid plan within two weeks, with implementation to follow immediately.
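The invariant is easy to state even though enforcing it inside Phabricator’s policy system is the hard part. Here is a toy model of the sync logic, with all names hypothetical:

```python
# Toy model of keeping a Phabricator revision's viewers in sync with the
# set of people who can see the associated confidential bug in Bugzilla.
# Function and field names are illustrative, not the real implementation.
def sync_policy(bug_viewers, phab_policy_members):
    """Return the membership changes needed so that the Phabricator
    custom policy exactly matches who can view the bug in BMO."""
    bug = set(bug_viewers)
    phab = set(phab_policy_members)
    return {"add": sorted(bug - phab), "remove": sorted(phab - bug)}
```

The real difficulty lies not in computing this delta but in applying it reliably: both systems can change independently, so the sync has to run on every relevant event on either side.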

In parallel, within Lando we are working on the logic to take a diff from a Phabricator revision, verify the lander’s credentials and permissions, and turn it into a commit on the autoland branch. We have much of the first point done now, are consulting with IT on the best solution for the second, and will be starting work on the third shortly (which is actually the easiest, since we’re leveraging pieces of MozReview’s Autoland service).

Launch plans:

At the point that we have completed the Bugzilla-integration work described above, we’ll have what we need for a production Phabricator environment integrated with Bugzilla. This is planned for mid-August. We are calling this our pre-release launch, as Lando will not be complete, but we will be inviting some teams to try out Phabricator to catch issues and frustrations before going to general release. Lando and the general rollout of Phabricator to all Firefox engineering will follow in late September or early October. We’ll have some brownbags to introduce Phabricator and our integrations, and we will ensure documentation is available and discoverable, both for general Phabricator usage and for our customizations, including automatic landings.

Due to the importance of the Firefox 57 release, Splinter and MozReview will remain functional but will be considered deprecated. New contributors should be directed to Phabricator to avoid the frustration of having to switch processes. Splinter will be turned off and MozReview will be moved to a read-only mode in early December.

More updates to follow!