Scaling software development without monorepos

Google, Twitter and Facebook all famously use monorepos for the bulk of their development. A monorepo is a single version control repository holding all of an organisation's code. That's not to say that a monorepo should be just an unstructured mess of code in a single repository; that would be chaos. It's usually a collection of components - apps, services, libraries and so on - all stored alongside each other in a single conceptual codebase.

A monorepo obeys two rules:

  • Whenever you build, test, run or release software, all the code used comes from the same version of the whole repo.
  • Code can only be pushed to the repo if all tests it could possibly affect have passed. Conceptually the entire repo is always passing all tests and is ready to release at any time.

Crucially, a monorepo removes any flexibility to pick and choose the versions of the libraries your code depends on. This is seen as the big advantage of monorepos: avoiding dependency hell.

A typical dependency hell situation is this:

  • You depend on two libraries A and B.
  • The latest A depends on C version 1.1.
  • The latest B depends on C version 1.2.
  • You therefore can't use the latest A and latest B.

There may be a solution - downgrade A, or B, or both - or there may not.
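
In Python packaging terms the conflict looks something like this (package names and pins are hypothetical):

```python
# A's packaging metadata declares:
install_requires=["C==1.1"]

# B's packaging metadata declares:
install_requires=["C==1.2"]

# Your application needs both A and B, but no single installed
# version of C can satisfy C==1.1 and C==1.2 at once.
```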

Dependency hell is a real problem, and one that gets much worse as the organisation scales and the number of libraries increases. Monorepos avoid dependency hell by enforcing that A and B always depend on the same version of C - the latest version.

Another advantage is that monorepos can help avoid using stale code. Once you get your code into the monorepo, any future release of any product will be using that code. Porting code to newer versions of its dependencies takes the same effort either way, but in a monorepo that work has to be done before the new version can be pushed, rather than deferred indefinitely.

However, monorepos are not without huge downsides.

Even if you can calculate which tests could possibly be affected, you can find yourself rerunning huge swathes of the organisation's tests to guarantee the codebase is always ready to release. In response, organisations disregard extravagances such as integration tests and mandate fast-running unit tests only.

Making breaking changes to an API in a monorepo is hard, because to push code into the repo it already has to be passing all (unit) tests. There are several responses that drop out, all sensible but undesirable:

  • Only make backwards-compatible changes - bad, because we accrue debt and cruft, shims and hacks
  • Introduce feature flags - bad because we introduce codepaths that may mean combinations of flags run in production that haven't been tested
  • Take heroic steps to try to patch all the code in the organisation - bad, because this involves developers changing other teams' code, which can lead to mistakes that may slip through code review

It can be extremely hard to utilise third-party libraries in a monorepo, because code that was developed with the assumption of versioned library releases is completely oblivious to the breaking-changes issue.

Also, if things break that aren't caught by unit tests, finding what changed can be hard - everything is changing all the time.

Put simply, monorepos neglect how valuable it is to have fully tested, versioned releases of library code accompanied by CHANGELOGs describing what changed.

Versioned releases mean a developer using a library is decoupled from any breaking changes to a library. There can be multiple branches in development at once, say a 1.6 maintenance release and a 2.0, letting developers upgrade as time allows.

An alternative

I believe a better alternative to monorepos can be found using traditional component versioning and releasing.

Let's go back to the two problems we were trying to solve:

  1. We want to help solve dependency hell.
  2. We want to drive developers towards using up-to-date versions of libraries.

Rather than going to the effort of building a monorepo system (tooling for these isn't readily available off the shelf), could we build tooling that tackles these two problems directly, under the standard assumption that libraries are released independently, their code fully tested?

Driving users to upgrade

Ensuring that developers work towards staying current with the latest versions of libraries is perhaps the easier problem.

If developers release their libraries with semantic versions, we can build a system to keep track of which versions are supported.

I envisage this looking very much like requires.io (random example page), a system that lets GitHub users see whether the open-source libraries they depend on are up-to-date.

Conceived as an internal release management tool, the system would simply let library maintainers set the status of each released version to one of:

  • Up-to-date - green
  • Out-of-date - amber - prefer not to release against this
  • End-of-life - red - only release against this as a last resort. You could have special red statuses for "insecure" and "buggy"

The system should be able to calculate, for any build of any library, whether it is up-to-date.
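
A minimal sketch of that bookkeeping (library names, versions and the API here are invented for illustration):

```python
from enum import Enum

class Status(Enum):
    UP_TO_DATE = "green"
    OUT_OF_DATE = "amber"    # prefer not to release against this
    END_OF_LIFE = "red"      # only release against this as a last resort

# Maintainer-set status for each released version of each library.
STATUSES = {
    ("libfoo", "1.6.2"): Status.UP_TO_DATE,
    ("libfoo", "1.5.0"): Status.OUT_OF_DATE,
    ("libfoo", "1.4.9"): Status.END_OF_LIFE,
}

SEVERITY = [Status.UP_TO_DATE, Status.OUT_OF_DATE, Status.END_OF_LIFE]

def build_status(dependencies):
    """A build is only as up-to-date as its stalest pinned dependency."""
    return max((STATUSES[dep] for dep in dependencies), key=SEVERITY.index)

# build_status([("libfoo", "1.6.2"), ("libfoo", "1.5.0")])
# -> Status.OUT_OF_DATE
```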

Something like semantic versioning would of course be recommended; in principle it would make it possible to automatically mark older versions as out-of-date.
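
For instance, a sketch using the packaging library, under the simplifying assumption that only the newest release in each major series counts as up-to-date:

```python
from packaging.version import Version

def auto_statuses(released):
    """Mark the newest release in each major series up-to-date,
    everything older in that series out-of-date."""
    versions = sorted(Version(v) for v in released)
    newest = {}
    for v in versions:        # ascending order, so later writes win
        newest[v.major] = v
    return {str(v): "up-to-date" if v == newest[v.major] else "out-of-date"
            for v in versions}

# auto_statuses(["1.0", "1.1", "2.0"])
# -> {'1.0': 'out-of-date', '1.1': 'up-to-date', '2.0': 'up-to-date'}
```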

With this system we can easily communicate to developers when they need to take action, without making it painful. Maintainers could quickly kill a buggy patch release by marking it "end-of-life".

Solving dependency hell

Dependency hell can be relieved by being more agnostic about the versions of the libraries we support. This is much easier in dynamic languages such as Python, whose strong introspection capabilities let a single piece of code stay compatible with a range of versions of a library.
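
A sketch of the idea, assuming (hypothetically) that library C's 1.2 release renamed frobnicate() to frob():

```python
# compat.py - one import that works across C 1.0 through 1.2.
import c_lib  # hypothetical package providing library C

if hasattr(c_lib, "frob"):        # C >= 1.2
    frobnicate = c_lib.frob
else:                             # C 1.0 and 1.1
    frobnicate = c_lib.frobnicate
```

With that kind of flexibility, the earlier scenario resolves: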

  • You depend on two libraries A and B.
  • The latest A depends on C version 1.0 to 1.1.
  • The latest B depends on C version 1.1 to 1.2.
  • You can therefore use the latest A and the latest B with C version 1.1.
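
That resolution can be computed mechanically; here's a sketch using the packaging library (the same machinery pip builds on for version specifiers):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

a_needs = SpecifierSet(">=1.0,<=1.1")    # A's constraint on C
b_needs = SpecifierSet(">=1.1,<=1.2")    # B's constraint on C

both = a_needs & b_needs                 # combined constraint
releases_of_c = [Version(v) for v in ("1.0", "1.1", "1.2")]
print([v for v in releases_of_c if v in both])   # [<Version('1.1')>]
```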

This is so innate to Python that even pip, Python's package installer, doesn't currently fully resolve dependency graph conflicts - for each dependency, the first version specification encountered as pip walks the dependency graph is the only one guaranteed to be satisfied.

This kind of flexibility is not impossible in other languages, however. In C and C++ it is sometimes achievable through preprocessor directives. It's a little harder in Java and C# - mostly you'd have to explicitly expose compatible interfaces - but that's something we often do anyway.

Even without this flexibility, you could perhaps create a point release of a library to add compatibility with current versions of dependencies.

Here's my suggestion for our build tool:

  1. Libraries should be flexible about the release versions of dependencies they build against. (On the other hand, applications - the leaves of the dependency graph, which nothing else depends on - should pin very specific versions of dependencies.)
  2. If we're not running hordes of unit tests on every single push, but instead running a full test suite only on a release, we can use some of those test farm resources to "try out" combinations of library dependencies, as sketched below. Even if it doesn't find solutions, it can give developers information on what breaks in different scenarios, before developers come to need it.
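
A sketch of that precomputation (library names, versions and the test hook are all hypothetical):

```python
import itertools

# Hypothetical released versions of each library in the graph.
RELEASES = {
    "A": ["1.9", "2.0"],
    "B": ["2.9", "3.0"],
    "C": ["1.0", "1.1", "1.2"],
}

def suite_passes(pins):
    # Stand-in for the real work: create a clean environment, install
    # exactly these pins, run the full test suite.  Faked here with the
    # constraint from the earlier example: everything passes if and
    # only if C is at 1.1.
    return pins["C"] == "1.1"

# Enumerate every combination of versions and record the known-good
# ones ahead of time, before any developer comes to need them.
known_good = [
    pins
    for pins in (dict(zip(RELEASES, chosen))
                 for chosen in itertools.product(*RELEASES.values()))
    if suite_passes(pins)
]
print(known_good)   # four solutions, all with C pinned to 1.1
```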

In short, we should encourage solutions to dependency hell problems to exist, and then precompute those solutions.

The build tool itself would effectively write the requirements.txt that describes what works with what.
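
The output might be nothing more exotic than a generated, annotated pins file (contents hypothetical):

```
# requirements.txt - generated by the build tool; do not edit by hand.
A==2.0
B==3.0
C==1.1    # newest C satisfying both A (>=1.0,<=1.1) and B (>=1.1,<=1.2)
```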

Combining these

These ideas come together nicely into a single view for reasoning about versioned releases of code. With it, you can:

  • See what versions of dependencies are available.
  • Query the system for dependency hell solutions.
  • See whether those solutions push you into the territory of having to use out-of-date code.
  • See where effort needs to be spent to add compatibility with, or port to, newer library versions.

Maybe this system could show changelog information as well, giving better visibility of what is causing version conflicts and test failures.

I can't say for sure whether this system would work, because as far as I know it has never been built. But given the pain I've felt as a Python developer in organisations embracing monorepos, I long for the comfort and convenience of open-source Python development, which suffers none of it. I hope we can work towards doing that kind of development at scale inside large organisations.
