Analysis of scaling problems in build systems
<p>My post <a href="http://esr.ibiblio.org/?p=3089">SCons is full of win today</a> triggered some interesting feedback on scaling problems in SCons. In response to anecdotal assertions that SCons is unusably slow on large projects, I argued that build systems in general <a href="http://esr.ibiblio.org/?p=3089#comment-302953">must scale poorly if they are to enforce correctness</a>. Subsequently, I received a pointer to a very well executed <a href="http://esr.ibiblio.org/?p=3089#comment-303503">empirical study of SCons performance</a> to which I replied in the same fashion.</p>
<p>In this post, I intend to conduct a more detailed analysis of algorithmic requirements and complexity in an idealized build system, and demonstrate the implied scaling laws more rigorously. I will also investigate tradeoffs between correctness and performance using the same explanatory framework.</p>
<p><span id="more-3108"></span></p>
<p>An idealized build system begins work with four inputs: a set of <dfn>objects</dfn>, a <dfn>rule forest</dfn>, a set of <dfn>rule generators</dfn>, and a <dfn>build state</dfn>. I will describe each in turn.</p>
<p>An <dfn>object</dfn> is any input to or product of the build &#8211; typically a source or object file in a compiled language, but also quite possibly a document master in some markup format, or a rendered version of such a document, or even the content of a timestamped entry in some database. Each object has two interesting properties: its unique name and a version stamp. The version stamp may be a timestamp or an implied content hash.</p>
<p>A <dfn>rule forest</dfn> is a set of rules that connect objects to each other. Each rule enumerates a set of source objects, a set of target objects, and a procedure for generating the latter from the former. A build system begins with a set of explicit rules (which may be empty).</p>
<p>A <dfn>rule generator</dfn> is a procedure for inspecting objects to add rules to the forest. For example, we typically want to inspect C source files and add for each one a rule making the corresponding compiled object dependent on the C source and any header files it includes that are in the set of source objects.</p>
<p>A <dfn>build state</dfn> is a boolean relation on the set of objects. The value of the relation is true if the right-hand object is out-of-date with respect to the left-hand object. In some systems, the build state is implied by comparing object timestamps. In others, the build system records a mapping of source-object names to version stamps for each derived object, and considers a derived object to be out of date with respect to any source name whose current hash fails to match the recorded one.</p>
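<p>To make these four inputs concrete, here is a minimal Python sketch. The names (<code>Rule</code>, <code>recorded_stamps</code>, and so on) are my own illustration rather than any particular build tool&#8217;s internals, and I use a content hash as the version stamp:</p>
<pre><code>import hashlib
from dataclasses import dataclass

def content_hash(path):
    """Version stamp: a hash of the object's current content."""
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()

@dataclass(frozen=True)
class Rule:
    sources: tuple   # names of the source objects
    targets: tuple   # names of the target objects
    action: object   # callable that regenerates the targets from the sources

# The rule forest is just a collection of rules; rule generators are
# procedures that inspect objects and append further rules to it.
rule_forest = []
rule_generators = []

# Build state, recorded rather than implied: for each derived object, the
# version stamps its sources had when it was last built.  A source whose
# current stamp no longer matches the recorded one makes the derived
# object out-of-date with respect to it.
recorded_stamps = {}   # target name -> {source name: stamp}

def out_of_date(target, source):
    recorded = recorded_stamps.get(target, {}).get(source)
    return recorded is None or recorded != content_hash(source)
</code></pre>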
<p>We say that a build system <dfn>guarantees correctness</dfn> if it guarantees that the build state will contain no &#8216;true&#8217; entries on termination of the build. It <dfn>guarantees efficiency</dfn> if it never does excess work &#8211; that is, if no rule fires on the downstream side of a DAG edge for which the build state is false.</p>
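<p>Read as code, the correctness guarantee says that a check like the following &#8211; my notation, taking the dependency DAG as a map from each object to its immediate ancestors plus a staleness predicate &#8211; must find nothing stale when the build terminates; the efficiency guarantee says no rule fired on an edge this check would already have reported clean:</p>
<pre><code>def build_is_correct(dag, is_stale):
    """True if no object is still out-of-date with respect to any of its
    immediate ancestors (sources) once the build has terminated."""
    return not any(is_stale(x, y)
                   for x, ancestors in dag.items()
                   for y in ancestors)
</code></pre>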
<p>The stages of an idealized build look like this:</p>
<p>1. Scan the object set to generate implied rules.</p>
<p>2. Stitch the rule forest into a DAG expressing the entire dependency structure of the system. Each DAG node is identified by and with an object name.</p>
<p>3. For each selected build target, recursively build it. The recursion looks like this: scan the set of immediate ancestor nodes A(x) of the target x; if the set is empty, you&#8217;re done. Otherwise, recursively build each node y in A(x), then check the build state to see whether x is out-of-date with respect to any y; if so, rebuild x. The dependency graph must be acyclic for this recursion to have a well-defined termination state.</p>
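<p>Here is a hedged sketch of step 3, assuming the DAG is a dict mapping each object name to its immediate ancestors and that <code>rebuild(x)</code> reruns the rule producing x; this illustrates the recursion rather than reproducing SCons&#8217;s actual traversal:</p>
<pre><code>def build(x, dag, is_stale, rebuild, visited=None):
    """Recursively bring target x up to date.

    dag:      {object name: list of immediate ancestor (source) names}
    is_stale: predicate(target, source), true if target is out-of-date
              with respect to source
    rebuild:  procedure that reruns the rule producing an object
    """
    if visited is None:
        visited = set()
    if x in visited:              # the graph must be acyclic; guard anyway
        return
    visited.add(x)
    ancestors = dag.get(x, ())
    if not ancestors:             # a pure source object: nothing to do
        return
    for y in ancestors:           # bring every ancestor up to date first
        build(y, dag, is_stale, rebuild, visited)
    if any(is_stale(x, y) for y in ancestors):
        rebuild(x)
</code></pre>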
<p>Now that we understand the sequence of events, let&#8217;s consider the algorithmic complexity of the steps in the process.</p>
<p>Scanning to detect and record implied rules will be O(n) in the total size of the objects. Each object also needs a name lookup against the object set for every reference to another object (such as an #include) that it contains; with logarithmic-time lookups that makes the pass O(n log n) in the number of objects, though a naive implementation could be O(n**2).</p>
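<p>For illustration, a minimal rule generator for C sources along these lines, reusing the <code>Rule</code> class from the earlier sketch (the .c-to-.o naming convention is an assumption of mine); keeping the known object names in a hash set makes each reference lookup cheap:</p>
<pre><code>import re

INCLUDE_RE = re.compile(r'^\s*#\s*include\s*"([^"]+)"', re.MULTILINE)

def generate_c_rules(c_sources, all_object_names):
    """For each .c file, add a rule making the corresponding .o depend on
    the .c file plus any included headers that are in the object set."""
    known = set(all_object_names)          # hash set: cheap name lookups
    rules = []
    for src in c_sources:
        with open(src) as f:
            headers = [h for h in INCLUDE_RE.findall(f.read()) if h in known]
        rules.append(Rule(sources=(src, *headers),
                          targets=(src[:-2] + ".o",),
                          action=None))    # real action omitted in the sketch
    return rules
</code></pre>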
<p>Stitching the rule forest into a DAG will also be minimally O(n log n) in the number of rules, because every object name occurring as a source will need to be checked against every object name occurring as a target to see if that target should be added to the source&#8217;s ancestor list. In typical builds where most source files are C sources and thus have one dependent which is a .o file, this implies O(n log n) in the number of source files. (Note: I originally estimated this as O(n**2), thinking of the naive algorithm.)</p>
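<p>And a sketch of the stitching step itself, again over the <code>Rule</code> shape from the earlier sketches: indexing by object name in a dict resolves each source-against-target match with a single lookup instead of a scan over every other rule, which is the difference between the naive quadratic pass and the estimate above:</p>
<pre><code>def stitch(rule_forest):
    """Stitch the rule forest into a dependency DAG.

    Returns {object name: set of immediate ancestor (source) names},
    the shape the build recursion sketched above expects.
    """
    ancestors = {}
    produced_by = {}                        # target name -> producing rule
    for rule in rule_forest:
        for t in rule.targets:
            if t in produced_by:            # two rules claim the same target
                raise ValueError("duplicate rule for %r" % t)
            produced_by[t] = rule
            ancestors.setdefault(t, set()).update(rule.sources)
    # Sources that are nobody's target are leaf objects; give them empty
    # entries so the traversal can treat every node uniformly.
    for rule in rule_forest:
        for s in rule.sources:
            ancestors.setdefault(s, set())
    return ancestors
</code></pre>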
<p>The recursive build may have a slightly tricky order but as a graph traversal should be expected to be O(n) in the number of DAG nodes.</p>
<p>We notice two things immediately. First, the dominating cost term is that of assembling the dependency DAG from the rule forest. Second &#8211; and perhaps a bit counterintuitively &#8211; the build system overhead will be relatively insensitive to whether we&#8217;re doing a clean build or many up-to-date derived objects already exist. (Total build time will be shorter in the latter case, of course.)</p>
<p>Now we have a cost model for an SCons-like build system that guarantees build correctness. When I originally posted this, I thought that a complexity of O(n**2) in the number of objects was empirically confirmed by <a href="http://www.electric-cloud.com/blog/2010/07/21/a-second-look-at-scons-performance/">Eric Melski&#8217;s performance plot</a> &#8211; but, as it turns out, this curve could fit O(n log n) as well.</p>
<p>But Melski also shows that other build systems &#8211; in particular those using bare makefiles and two-phase systems like autotools that use makefile generators &#8211; achieve O(n) performance. What does this mean?</p>
<p>Our complexity analysis shows us two things: If you want to pull build overhead below O(n log n), you need to not incur the cost of stitching up the dependency DAG on each build, <em>and</em> you need to also not pay for implicit-dependency scanning on each build.</p>
<p>Handcrafted makefiles don&#8217;t do implicit-dependency scanning, avoiding that O(n log n) overhead. They do have to stitch up the entire DAG, but on projects large enough for this to be an issue much of the overhead is dodged by partitioning into recursive makefiles. The build process stitches up a separate DAG for the rules in each makefile, so the cost is a sum of superlinear terms over much smaller values of n.</p>
<p>The problems with bare makefiles and recursive makefiles are well understood. You get performance, but you trade that for much higher odds that your rule forest is failing to describe the actual dependencies correctly. This is especially true when dependencies cross boundaries between makefiles. The symptoms include excess work during actual builds and (much more dangerously) failure to correctly rebuild stale dependents.</p>
<p>Two-phase build systems such as autotools and CMake attempt to recover correctness by bringing back implicit-dependency scanning, but also keep performance by segregating the O(n log n) cost of implicit-dependency scanning into a configuration pre-phase that generates makefiles. This sharply reduces the likelihood of error by reducing the number of dependencies that have to be hand-maintained. But it is still possible for a build to be incorrect if (for example) a source change introduces a new implicit dependency or deletes an old one.</p>
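<p>As a toy illustration of the two-phase split (the file names and fragment layout are mine, not autotools&#8217; or CMake&#8217;s actual output): a configure step pays for the #include scan once and freezes the result into a generated makefile fragment, which is exactly the thing that goes stale if a later edit adds or drops an include:</p>
<pre><code>import re

INCLUDE_RE = re.compile(r'^\s*#\s*include\s*"([^"]+)"', re.MULTILINE)

def write_depend_fragment(c_sources, out_path="deps.mk"):
    """Configure phase: scan #includes once and emit make dependency
    lines; the per-build phase is then a plain, fast make run."""
    with open(out_path, "w") as out:
        for src in c_sources:
            with open(src) as f:
                headers = INCLUDE_RE.findall(f.read())
            out.write("%s: %s %s\n" % (src[:-2] + ".o", src, " ".join(headers)))
</code></pre>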
<p>Another well-known problem with two-phase systems is that build recipes are difficult to debug. When make throws an error, you get a message with context in the generated makefile, not whatever master description it was made from.</p>
<p>My major conclusion is that it is not possible to design a build system with better than O(n log n) performance in the number of objects without sacrificing correctness. If the build system does not assemble a complete dependency DAG, some dependencies may exist that are never checked during traversal. Belt-and-suspenders techniques to avoid this (for example in recursive makefiles) tend to force redundant builds of interior objects such as libraries, sacrificing efficiency.</p>
<p>A minor conclusion, but interesting considering the case that drove me to think about this, is this: SCons is just as bad as, <em>but not necessarily worse</em> than, it has to be.</p>
<p>UPDATE: I originally misestimated the cost of DAG building as O(n**2). This weakens the minor conclusion slightly; it is possible that SCons is worse than it has to be, if there is a naive quadratic-time algorithm being used for lookup somewhere. Since it&#8217;s implemented in Python, however, it is almost certainly the case that the lookup is done through a Python hash with sub-quadratic cost.</p>
<p>Objections to the above analysis focused on exploiting parallelism are intelligent but don&#8217;t address quite the same case I was after. SCons and other build systems on its historical backtrail (such as waf and autotools) will <em>all</em> try to parallelize if you ask them to; unless there are <em>very</em> large differences in how well they do task partitioning, Amdahl&#8217;s-Law-like constraints pretty much guarantee that this can&#8217;t change relative performance much. And in cases where the dependency notation tends to underconstrain the build (I&#8217;m looking at <em>you</em>, makefiles!), attempting to parallelize is quite dangerous to build correctness.</p>