A place to be (re)educated in Newspeak

Monday, January 06, 2020

The Build is Always Broken

Programmers are always talking about broken builds: "The build is broken", "I broke the build", etc. However, the real problem is that the very concept of the build is broken. The idea that every time an application is modified, it needs to be reconstructed from scratch is fundamentally flawed.

A practical problem with the concept is that it induces a very long and painful feedback loop during development. Some systems address this by being wicked fast. They argue that even if they compile the world when you make a change, it's not an issue, since it'll be done before you know it. One problem is that some systems get so large that this doesn't work anymore. A deeper problem is that even if you rebuild instantly, you find that you need to restart your application on every change, and somehow get it back to the state you were in when you found a problem and decided to make a change. In other words, builds are antithetical to live programming. The feedback loop will always be too long.

Fundamentally, one does not recreate the universe every time one changes something. You don't tear down and reconstruct a skyscraper every time you need to replace a light bulb. A build, no matter how optimized, will never give us true liveness. It follows that tools like make and its ilk can never provide a solution. Besides, these tools have a host of other problems. For example, they force us to replicate dependency information that is already embedded in our code: things like imports, includes, uses or extern declarations give us the same file/module-level information that we manually enter into build tools. This replication is tedious and error-prone. It is also too coarse-grained, done at the granularity of files. A compiler can manage these dependencies more precisely, tracking, for example, what functions are used where.

Caveat: some tools, like GN, can be fed dependency files created by cooperating compilers. That is still too coarse-grained, though.
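The point that dependency information is already embedded in the code can be made concrete. Here is a minimal sketch (in Python, purely for illustration): the module-level dependencies that a build file would restate by hand can be recovered mechanically from the source itself, using the standard `ast` module.

```python
import ast

def module_dependencies(source: str) -> set[str]:
    """Collect the top-level modules a piece of Python source imports."""
    deps = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            deps.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            deps.add(node.module.split(".")[0])
    return deps

src = """
import os
from json import dumps
from collections import OrderedDict
"""
print(sorted(module_dependencies(src)))  # ['collections', 'json', 'os']
```

A compiler, of course, can go much finer than this file-level view, down to which functions are used where; the sketch only shows that even the coarse information build tools demand is redundant.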
In addition, the languages these tools provide have poor abstraction mechanisms (compare make to your favorite programming language) and poor tooling support (what kind of debugger does your build tool provide?). The traditional response to the ills of make is to introduce additional layers of tooling of a similar nature, like CMake. Enough!

A better response is to produce a better DSL for builds. Internal DSLs, based on a real programming language, are one way to improve matters. Examples are rake and scons, which use Ruby and Python respectively. These tools make defining builds easier - but they are still defining builds, which is the root problem I am concerned with here.

So, if we aren't going to use traditional build systems to manage our dependencies, what are we to do? We can start by realizing that many of our dependencies are not fundamental: things like executables, shared libraries, object files and binaries of whatever kind. The only thing one really needs to "build" is source code. After all, when you use an interpreter, you can write just the source you need to get started, and then incrementally edit and grow it.

Using interpreters allows us to avoid the problems of building binary artifacts. The cost is performance. Compilation is an optimization, albeit an important, often essential, one. Compilation relies on a more global analysis than an interpreter does, and on pre-computing the conclusions so that we need not repeat work during execution. In a sense, the compiler is memoizing some of the work of the interpreter. This is literally the case for many dynamic JITs, but it is fundamentally true for static compilation as well - you just memoize in advance. Seen in this light, builds are a form of staged execution, and the binary artifacts that we are constantly building are just caches.

One can address the performance difficulties of interpreters by mixing interpretation with compilation. Many systems with JIT compilers do exactly that. One advantage is that we don't have to wait for the optimization before starting our application. Another is that we can make changes and have them take effect immediately, by reverting to interpretation while re-optimizing. Of course, not all JITs do that; but it has been done for decades, in, e.g., Smalltalk VMs.
One of the many beauties of working in Smalltalk is that you rarely confront the ugliness of builds. And yet, even assuming you have an engine with a JIT that incrementally (re)optimizes code as it evolves, you may still be confronted with barriers to live development, barriers that seem to require a build.

Types. What if your code is inconsistent, say, due to type errors? Again, there is no need for a build step to detect this. An incremental typechecker should catch these problems the moment inconsistent code is saved. Of course, incremental typecheckers have traditionally been very rare; it is not a coincidence that live systems have historically been developed using dynamically typed languages. Nevertheless, there is no fundamental reason why statically typed languages cannot support incremental development. The techniques go back at least as far as Cecil; see this paper on Scala.js for an excellent discussion of incremental compilation in a statically typed language.

Tests. Often the build process incorporates tests, and the broken build is due to a logical error in the application detected by the tests. However, tests are not themselves part of the build, and need not rely on one - the build is just one way to obtain an updated application. In a live system, the updated application immediately reflects the source code. In such an environment, tests can be run on each update, but the developer need not wait for them.

Resources. An application may incorporate resources of various kinds - media, documentation, data (source files or binaries, tables, machine learning models etc.). Some of these resources may require computation of their own (say, producing PDF or HTML from documentation sources like TeX or markdown), adding stages that are seldom live or incremental. Even if the resources are ready to consume, we can induce problems through gratuitous reliance on file system structure.
The resources are typically represented as files. The deployed structure may differ from the source repository, so editing components in the source repo won't change them in the built structure. It isn't easy to correct these problems, and software engineers usually don't even try. Instead, they lean on the build process more and more. It doesn't have to be that way. We can treat the resources as cached objects and generate them on demand. When we deploy the application, we ensure that all the resources are precomputed and cached at locations that are fixed relative to the application - and these should be the same relative locations where the application will place them during development in case of a cache miss. The software should always be able to tell where it was installed, and therefore where cached resources stored at application-relative locations can be found.

The line of reasoning above makes sense when the resource is accessed via application logic. What about resources that are not used by the application, but made available to the user? In some cases, documentation, sample code and attached resources might fall under this category. The handling of such resources is not part of the application proper, and so it is not a build issue, but a deployment issue. That said, deployment is simply the computation of a suitable object to be serialized to a given location, and should be viewed in much the same way as the build; maybe I'll elaborate on that in a separate post.

Dealing with Multiple Languages. Once we are dealing with multiple languages, we may be pushed into using a build system because some of the languages do not support incremental development. Assuming that the heart of our application is in a live language, we should treat the other languages as resources: their binaries are resources to be dynamically computed during development and cached.
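The resources-as-cached-objects idea can be sketched briefly. In the following Python fragment (all names, paths and the `render_html` helper are illustrative assumptions, not a prescribed design), a resource lives at a fixed application-relative location and is regenerated from its source on a cache miss:

```python
import tempfile
from pathlib import Path

# Stand-in for the application's install directory; the application can
# always tell where it was installed, so these relative paths are stable.
APP_ROOT = Path(tempfile.mkdtemp())
CACHE = APP_ROOT / "cache"

def render_html(markdown: str) -> str:
    return f"<p>{markdown}</p>"          # toy "documentation build" stage

def resource(name: str, source: str) -> str:
    cached = CACHE / name
    if not cached.exists():              # cache miss: compute and store
        cached.parent.mkdir(parents=True, exist_ok=True)
        cached.write_text(render_html(source))
    return cached.read_text()

print(resource("guide.html", "Hello"))   # computed on first access
print(resource("guide.html", "Hello"))   # served from the cache
```

Deployment then amounts to pre-warming this cache at the same relative locations, so the deployed and development structures never diverge.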


  • Builds kill liveness.
  • Compilation artifacts are a form of cached resource, the result of staged execution.
  • To achieve liveness in industrial settings, we need to structure our development environments so that any staging is strictly an optimization.
    • Staged results should be cached and invalidated automatically when the underlying basis for the cached value is out of date.
    • This applies regardless of whether the staged value is a resource, a shared library/binary or anything else. 
    • The data necessary to compute the cached value, and to determine the cache's validity, must be kept at a fixed location, relative to the application. 
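The bullet points above can be drawn together in one sketch (Python, with illustrative names throughout): a staged value is cached at an application-relative location alongside a fingerprint of the inputs it was computed from, and is restaged automatically whenever that basis changes.

```python
import hashlib
import json
import tempfile
from pathlib import Path

APP_ROOT = Path(tempfile.mkdtemp())      # stand-in for the install dir

def fingerprint(inputs: dict) -> str:
    """Hash of the data the staged value was computed from."""
    return hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest()

def staged(name: str, inputs: dict, compute):
    value_file = APP_ROOT / name               # the cached staged value
    stamp_file = APP_ROOT / (name + ".inputs") # validity data, kept alongside it
    stamp = fingerprint(inputs)
    if not (value_file.exists() and stamp_file.exists()) or stamp_file.read_text() != stamp:
        value_file.write_text(compute(inputs)) # basis out of date: restage
        stamp_file.write_text(stamp)
    return value_file.read_text()

v1 = staged("lib", {"src": "f(x)=1"}, lambda i: "binary:" + i["src"])
v2 = staged("lib", {"src": "f(x)=2"}, lambda i: "binary:" + i["src"])
print(v1, v2)  # binary:f(x)=1 binary:f(x)=2
```

Whether the staged value is a rendered document, a shared library or anything else, the user never asks for a build; the cache simply refills itself when its basis changes.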

It's high time we build a new, brave, build-free world.


Shalabh said...

Great write-up! I agree, the idea of 'building' our programs partitions the workflow into two phases - write, then re-run. Rather, we should always be in a single working phase where we tweak a part of the 'source' and, like a spreadsheet, the affected parts of the system are updated live.

Another problem with building these binaries is granularity agglutination. You modify one small function, but you now have a brand new binary, and all 1000 tests on it will be rerun, even if only 1 executes the modified code path. This is even more fun with container-sized black boxes. It would be nice if the system really tracked the fine-grained dependencies from end to end.

Relatedly, I've been thinking we don't need just a single machine runtime+JIT to unify the phases in one place. Rather we need a distributed runtime since we want to be able to compose and run the source artifacts in different ways on multiple machines.

Gilad Bracha said...

Thanks for the insightful comments. I completely agree.

Dan said...

Having full and strictly enforced correctness in specifying and detecting dependencies was one of the linchpins in the design of a build tool that I helped maintain a while back. You may want to read about it:


George said...

Gilad, Shalabh - if you haven't seen it already, you might be interested in Unison. It does away with builds - but in a rather different way to Smalltalk - and addresses Shalabh's frustrations re "You modify one small function, but you now have a brand new binary and all 1000 tests on it will be rerun, even if only 1 executes the modified code path."


Mario T. Lanza said...
This comment has been removed by the author.
Mario T. Lanza said...

At work I use JavaScript. Modules that are in development and unstable are written in unprocessed JavaScript. No build pipeline whatsoever. These modules are loaded on demand with RequireJS.

Once new types and concepts stabilize, I move said constructs into one of our standard libraries. The standard libraries are written in modern JavaScript and pass only through rollupjs (i.e., a one-step build).

The crux is that I differentiate between high- and low-iteration areas of development to avoid impeding the build, and it pretty well achieves what you're after.

Gilad Bracha said...

Dan: thanks, I'll take a look.

Gilad Bracha said...


Others have also mentioned Unison. I have looked at it in the past. I think it's an interesting approach, and it's great that they recognize many of the issues and eschew files, traditional builds etc. That said:

(a) As long as you are dealing with a single language, you can implement a clean solution. Smalltalk did that decades ago.
(b) I am not sure how Unison's approach works when you try to update an executing program. This is crucial, as liveness is one of my key goals.

I'll take another look and see what, if anything, I missed.

Gilad Bracha said...


As I noted in the comment above, working with one language is relatively easy. JavaScript is a dynamic language and as such tends to have fewer of these issues: to a certain degree it follows the JIT-based architecture I discuss in the post. However, it is far from providing a delightful live experience like Smalltalk; and it's a nasty language, which is why I see it as part of the problem rather than the solution.

Unknown said...

How about, instead of references to files, dealing with GUIDs referring to objects in an immutable object store, with a layer that manages both versioning and 'caching', swapping out the interpreted version for a compiled bytecode blob? Each 'code object' would just be a tuple of sourceGUID, binaryGUID, and since it's all immutable, the problems of dependencies, 'cache invalidation/recompilation', etc. could at least be tractably managed. Swizzle in some software transactional memory and you could really get cooking with something interesting.

Gilad Bracha said...

Yes. On the other hand, in some ways what you describe isn't hugely different from what a Smalltalk system does. The objects are not immutable, it's true. But because lookups are dynamic, dependencies are less of an issue (and in Self and Newspeak, where everything is a dynamic lookup, it's even better). The dependencies are more of an issue for static stuff like types. The fact that Unison uses hashes instead of names is an implementation detail.

Then there are things like Gemstone, or Alex Warth's work on Worlds, which have other properties that are relevant.

The main point is that the user is never concerned with builds and the related dependencies. Until everyone designs their tools this way (whether they are PLs or documentation tools or something else) we need to manage the problem.