Monday, January 06, 2020

The Build is Always Broken

Programmers are always talking about broken builds: "the build is broken", "I broke the build", etc. The real problem, however, is that the very concept of the build is broken. The idea that every time an application is modified it must be reconstructed from scratch is fundamentally flawed.

A practical problem with the concept is that it induces a long and painful feedback loop during development. Some systems address this by being wicked fast: even if they compile the world when you make a change, they argue, it's not an issue, since it'll be done before you know it. One problem is that some systems grow so large that this no longer works. A deeper problem is that even if you rebuild instantly, you still have to restart your application on every change, and somehow get it back to the state it was in when you found the problem and decided to make the change. In other words, builds are antithetical to live programming; the feedback loop will always be too long.

Fundamentally, one does not recreate the universe every time one changes something. You don't tear down and reconstruct a skyscraper every time you need to replace a light bulb. A build, no matter how optimized, will never give us true liveness.

It follows that tools like make and its ilk can never provide a solution. Besides, these tools have a host of other problems. For example, they force us to replicate dependency information that is already embedded in our code: imports, includes, uses or extern declarations give us the same file- and module-level information that we manually enter into build tools. This replication is tedious and error-prone. It is also too coarse-grained, being done at the granularity of files; a compiler can manage these dependencies more precisely, tracking, for example, which functions are used where. (Caveat: some tools, like GN, can be fed dependency files created by cooperating compilers. That is still too coarse-grained, though.)
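To make the replication point concrete: in Python, for instance, the module-level dependency information that a Makefile would force us to restate by hand is already present in the source, and the standard `ast` module can recover it. A minimal sketch (the sample source string is made up for illustration):

```python
import ast

def module_dependencies(source: str) -> set[str]:
    """Extract the top-level module names imported by a piece of source.

    This is exactly the file-level dependency information that build
    tools make us restate manually; here it is read from the code itself.
    """
    deps = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                deps.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            if node.module and node.level == 0:  # skip relative imports
                deps.add(node.module.split(".")[0])
    return deps

sample = "import os.path\nfrom json import dumps\nimport sys"
print(sorted(module_dependencies(sample)))  # → ['json', 'os', 'sys']
```

A compiler, of course, can go further still, tracking which individual functions are used where rather than stopping at the module level.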
In addition, the languages these tools provide have poor abstraction mechanisms (compare make to your favorite programming language) and poor tooling support (what kind of debugger does your build tool provide?). The traditional response to the ills of make is to add further layers of tooling of a similar nature, such as CMake. Enough!

A better response is to provide a better DSL for builds. Internal DSLs, based on a real programming language, are one way to improve matters; examples are Rake and SCons, which use Ruby and Python respectively. These tools make defining builds easier - but they are still defining builds, which is the root problem I am concerned with here.

So, if we aren't going to use traditional build systems to manage our dependencies, what are we to do? We can start by realizing that many of our dependencies are not fundamental: things like executables, shared libraries, object files and binaries of whatever kind. The only thing one really needs to "build" is source code. After all, when you use an interpreter, you can create only the source you need to get started, and then incrementally edit and grow it.

Using interpreters allows us to avoid the problems of building binary artifacts. The cost is performance. Compilation is an optimization - an important, often essential one, but an optimization nonetheless. Compilation relies on a more global analysis than interpretation, and on pre-computing its conclusions so that work need not be repeated during execution. In a sense, the compiler is memoizing some of the work of the interpreter. This is literally the case for many dynamic JITs, but it is fundamentally true for static compilation as well - you just memoize in advance. Seen in this light, builds are a form of staged execution, and the binary artifacts that we are constantly building are just caches.

One can address the performance difficulties of interpreters by mixing interpretation with compilation, and many systems with JIT compilers do exactly that. One advantage is that we don't have to wait for the optimization before starting our application. Another is that we can make changes and have them take effect immediately, by reverting to interpretation while re-optimizing. Of course, not all JITs do this; but it has been done for decades in, e.g., Smalltalk VMs.
One of the many beauties of working in Smalltalk is that you rarely confront the ugliness of builds. And yet, even assuming you have an engine with a JIT that incrementally (re)optimizes code as it evolves, you may still be confronted with barriers to live development - barriers that seem to require a build.

Types. What if your code is inconsistent, say, due to type errors? Again, there is no need for a build step to detect this: an incremental typechecker should catch such problems the moment inconsistent code is saved. Of course, incremental typecheckers have traditionally been very rare; it is no coincidence that live systems have historically been developed in dynamically typed languages. Nevertheless, there is no fundamental reason why statically typed languages cannot support incremental development. The techniques go back at least as far as Cecil; see this paper on Scala.js for an excellent discussion of incremental compilation in a statically typed language.

Tests. Often the build process incorporates tests, and the broken build is due to a logical error in the application detected by the tests. However, tests are not themselves part of the build, and need not rely on one - the build is just one way to obtain an updated application. In a live system, the updated application immediately reflects the source code. In such an environment, tests can be run on each update, but the developer need not wait for them.

Resources. An application may incorporate resources of various kinds - media, documentation, data of varying kinds (source files or binaries, tables, machine learning models, etc.). Some of these resources may require computation of their own (say, producing PDF or HTML from documentation sources like TeX or markdown), adding stages that are seldom live or incremental. Even if the resources are ready to consume, we can induce problems through gratuitous reliance on file system structure.
The resources are typically represented as files. The deployed structure may differ from that of the source repository, so editing components in the source repo won't change them in the built structure. It isn't easy to correct these problems, and software engineers usually don't even try; instead, they lean on the build process more and more. It doesn't have to be that way. We can treat the resources as cached objects and generate them on demand. When we deploy the application, we ensure that all the resources are precomputed and cached at locations that are fixed relative to the application - the same relative locations where the application would place them during development in case of a cache miss. The software should always be able to tell where it was installed, and therefore where cached resources stored at application-relative locations can be found.

The line of reasoning above makes sense when a resource is accessed via application logic. What about resources that are not used by the application, but made available to the user? Documentation, sample code and attached resources might fall under this category. The handling of such resources is not part of the application proper, so it is not a build issue but a deployment issue. That said, deployment is simply the computation of a suitable object to be serialized to a given location, and should be viewed in much the same way as the build; maybe I'll elaborate on that in a separate post.

Dealing with Multiple Languages. Once we are dealing with multiple languages, we may be pushed into using a build system because some of the languages do not support incremental development. Assuming that the heart of our application is in a live language, we should treat the other languages as resources: their binaries are resources to be dynamically computed during development and cached.
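The resource-as-cache scheme described above might look like the following Python sketch. The function and file names are hypothetical stand-ins: the application asks for a resource at a location fixed relative to an application root, and on a miss - or when the source is newer than the cached copy - regenerates it via whatever computation the resource requires:

```python
import os
import tempfile

def cached_resource(app_root: str, name: str, source_path: str, render) -> str:
    """Return the path of a derived resource, recomputing it on demand.

    The cache lives at a fixed location relative to the application
    root - the same place in development and after deployment, so
    deploying just means pre-filling this cache.
    """
    cached = os.path.join(app_root, "cache", name)
    stale = (not os.path.exists(cached) or
             os.path.getmtime(cached) < os.path.getmtime(source_path))
    if stale:  # cache miss, or the source changed: regenerate
        os.makedirs(os.path.dirname(cached), exist_ok=True)
        with open(source_path) as src, open(cached, "w") as out:
            out.write(render(src.read()))
    return cached

# Demo with a throwaway "application root" and a trivial renderer.
root = tempfile.mkdtemp()
doc = os.path.join(root, "README.md")
with open(doc, "w") as f:
    f.write("# hello")
path = cached_resource(root, "README.html", doc, str.upper)
print(open(path).read())  # → # HELLO
```

Under this scheme, a deployment step amounts to calling the resource computation for every resource ahead of time, so the deployed tree is nothing more than a warm cache.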

Summary


  • Builds kill liveness.
  • Compilation artifacts are a form of cached resource, the result of staged execution.
  • To achieve liveness in industrial settings, we need to structure our development environments so that any staging is strictly an optimization:
    • Staged results should be cached and invalidated automatically when the underlying basis for the cached value is out of date.
    • This applies regardless of whether the staged value is a resource, a shared library/binary or anything else. 
    • The data necessary to compute the cached value, and to determine the cache's validity, must be kept at a fixed location, relative to the application. 

It's high time we build a new, brave, build-free world.