If RCS can stand it, why can’t your system?
<p>I&#8217;ve written software for a lot of different reasons besides pure utility in the past. Sometimes I&#8217;ve been making an aesthetic statement, sometimes I&#8217;ve hacked to perpetuate a tribal in-joke, and at least once I have written a substantial piece of code exactly because the domain experts solemnly swore that job was impossible to automate (<a href="http://www.catb.org/~esr/doclifter/">wrong, bwahahaha</a>).</p>
<p>Here&#8217;s a new one. Today I released a program that is ugly and only marginally useful, but specifically designed to shame other hackers into doing the right thing.</p>
<p><span id="more-2762"></span></p>
<p>Those of you who have been following the saga of <a href="http://www.catb.org/~esr/reposurgeon/">reposurgeon</a> on this blog will be aware that it relies on the rise of git fast-import streams as a universal history-interchange format for VCSes (version-control systems). </p>
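<p>For concreteness, a minimal stream (one blob and one commit adding a single file) looks like this; each <code>data</code> length is an exact byte count, and marks like <code>:1</code> are stream-local handles:</p>
<pre>
blob
mark :1
data 5
hello
commit refs/heads/master
mark :2
committer Eric S. Raymond &lt;esr@thyrsus.com&gt; 1290654000 -0500
data 11
Release 1.0
M 100644 :1 README
</pre>
<p>That&#8217;s the whole vocabulary a basic exporter needs: blobs, commits, and filemodify lines tying them together.</p>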
<p>Alas, support for this is still spotty. On git, where the format was invented, it&#8217;s effectively perfect. bzr comes closest elsewhere to getting it right, with official import and export plugins but a weird <a href="http://esr.ibiblio.org/?p=2727">asymmetry</a> between export and import. Mercurial has a fairly solid third-party exporter, but its third-party importer is crap. The situation is worse everywhere else; Subversion is typical in that it has a proof-of-concept third-party exporter that loses some metadata, and no importer.</p>
<p>And this is ridiculous, actually. It&#8217;s already generally understood that writing exporters to the stream format is dead easy &#8211; the problem there seems political, in that VCS devteams are perhaps a bit reluctant to support tools that make migration off their systems easier. But having written one myself for reposurgeon, I know that a high-quality importer (which encourages migration <em>towards</em> your VCS) is not all that difficult either. Thus, there&#8217;s no excuse, either technical or political, for any self-respecting VCS not to have an importer.</p>
<p>I decided to prove this point with code. So I dusted off the oldest, cruftiest version-control system still in anything resembling general use &#8211; Walter Tichy&#8217;s Revision Control System (RCS). And I wrote a lossless importer for it. Took me less than a week, mostly repurposing code I&#8217;d written for reposurgeon. The hardest part was mapping import-stream branches to RCS&#8217;s screwy branch-numbering system.</p>
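<p>The screwy part of that numbering scheme is easy to state: RCS names the Nth branch rooted at revision X.Y as X.Y.N, and the commits on that branch as X.Y.N.1, X.Y.N.2, and so on. A toy allocator (a sketch of the scheme only, not reposurgeon&#8217;s or rcs-fast-import&#8217;s actual code) looks like this:</p>

```python
def branch_number(base_rev, existing_branches):
    """Allocate the next RCS branch number rooted at base_rev.

    RCS names the Nth branch off revision X.Y as X.Y.N; commits on
    that branch are then X.Y.N.1, X.Y.N.2, ...  We count how many
    branches already hang off base_rev (one more dotted component
    than the base) and take the next slot.
    """
    n = 1 + sum(1 for b in existing_branches
                if b.startswith(base_rev + ".")
                and b.count(".") == base_rev.count(".") + 1)
    return "%s.%d" % (base_rev, n)
```

<p>So the first branch off trunk revision 1.2 is 1.2.1, the second is 1.2.2, and revisions like 1.2.1.1 on an existing branch don&#8217;t consume a branch slot.</p>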
<p>To appreciate how silly this was on any practical level, you need to know that RCS doesn&#8217;t have changesets (the ability to associate the same comment and metadata with a change to multiple files). I cheat by embedding changeset-oriented data as RFC822 headers in RCS comment fields. An exporter could be written to invert this and completely recover the contents of the import stream, and I&#8217;ve been communicating with the author of <a href="http://git.oblomov.eu/rcs-fast-export">rcs-fast-export.rb</a>; it may actually do this soon.</p>
<p>There is one circumstance under which rcs-fast-import might be useful; if you wanted to break a project repo into multiple pieces, blowing it apart into constituent RCS files and re-exporting separately from the cliques might be a way to do it. But mainly I wrote this as a proof of principle. If crufty old <em>RCS</em> can bear the semantic weight of an import stream, there is simply no excuse left for VCSes that claim to be modern production-quality tools to be behindhand on this. <em>None.</em></p>
<p>At this point there is an inevitable question burning in the minds of those of you who are moderately clued in about ancient VCSes. And that is: &#8220;What about SCCS?&#8221;</p>
<p>Ah, yes. The only VCS even cruftier and more ancient than RCS. Those of you <em>really</em> clued in about ancient version-control systems will have guessed the answer: they&#8217;re so similar that making rcs-fast-import speak SCCS, if anyone ever wants that, would be pretty trivial (in particular they have the same semantics of branching, which is the hard part). Actually the code is already factored to support this; out of 841 lines only 36 are the plugin class that exercises the RCS command set, and an SCCS plugin wouldn&#8217;t be more than a few lines longer.</p>
<p>But I targeted RCS partly because it&#8217;s still in actual use; some wiki engines employ it as a page-versioning backend because it&#8217;s fast, lightweight, and they neither want nor need changesets. In truth, if you have a directory full of documents each one of which you <em>want</em> to treat as an atomic unit, RCS still has utility (I use it that way). SCCS, on the other hand, survives if at all in a handful of creakingly ancient legacy installations.</p>
<p>(What made the difference? RCS was open-source from the get-go. SCCS wasn&#8217;t. We all know how that dance goes.)</p>
<p>Yes, 841 lines. 574 of them (68%) were stripped out of reposurgeon. Less than a week of work, crammed in around corners while I was busy with other things. It&#8217;s not complicated or tricky code. The trick is in having the insight that it&#8217;s <em>possible</em>. And it&#8217;s a living rebuke to every modern VCS that hasn&#8217;t gotten its fast-import act together yet. </p>
<p>One entertaining side-effect of this project is that I figured out, in detail, how CVS could have been written to not suck.</p>
<p>Those of you into VCS archeology will know that CVS was a layer over a RCS file store, a layer that tried to provide changesets. It was notoriously failure-prone in some important corner cases. This is what eventually motivated the development of Subversion by a group of former CVS maintainers. </p>
<p>Well&#8230;here I am, writing rcs-fast-import to make RCS hold the data for losslessly reconstructing import-stream changesets&#8230;and at some point I found myself doing a double-take because I had realized I had <em>solved CVS&#8217;s problems</em>. Here&#8217;s how I explained it to Giuseppe Bilotta, the author of rcs-fast-export:</p>
<blockquote><p>
Incidentally, a side effect of writing the importer was that I figured<br />
out how CVS should have been written so it wouldn&#8217;t have sucked :-) It<br />
had a tendency to break horribly near deletes, renames and copies;<br />
this is because the naive way to implement these (which they used)<br />
involved deleting, copying, and renaming RCS master files.</p>
<p>In fact, I figured out last night, while building my importer so it<br />
would round-trip, that you can have reliable behavior from changesets<br />
layered over RCS only if you *never delete a master*, not even to<br />
rename it. I know what the right other rules are, but it&#8217;s nearly<br />
twenty years too late for that to matter.</p>
<p>Sigh. If I had looked at this problem in 1985 I could have saved<br />
the world a lot of grief.
</p></blockquote>
<p>Giuseppe said:</p>
<blockquote><p>
> I&#8217;m very curious to hear about your solution about tracking these<br />
> operations. I mean, git doesn&#8217;t track them because it only cares about<br />
> contents and trees, but how would you do that in a file-based vcs?
</p></blockquote>
<p>Here&#8217;s how I explained it:</p>
<blockquote><p>
On delete, don&#8217;t delete the master. Enter a commit that&#8217;s an empty file<br />
and set the state to &#8220;Deleted&#8221;. (The second part isn&#8217;t strictly necessary,<br />
but will be handy if you need to do forensics because other parts of<br />
your repo metadata have been corrupted.)</p>
<p>On rename, don&#8217;t delete the master. Copy it to the new name, so the<br />
renamed file has the history from before the rename, but also *leave<br />
that history in the original*. Give the original a new empty commit<br />
as you did for delete, with a state of &#8216;Renamed&#8217;. If your RCS has<br />
a properties extension, give that commit a rename-target property<br />
naming what it was renamed to. Give the same commit on the master copy a renamed-from property referencing the source.</p>
<p>On copy, check out the file to be copied and start a new master with<br />
no history. If your RCS has a properties extension, give that commit<br />
a copied-target property naming what it was copied to, and give the<br />
initial commit of the copy a copied-from property referencing the<br />
source.</p>
<p>On every commit, write a record to a journal file that looks like a<br />
git-fast-import commit record, except that the &lt;ref&gt; parts are RCS<br />
revision numbers.</p>
<p>You&#8217;re done. It may take you a bit to think through all the<br />
retrieval cases, but they&#8217;re all covered by one indirection through<br />
the journal file. </p>
<p>Don&#8217;t like the journal file? No problem, you just write a<br />
sequence-numbered tag to all files for each commit. This would be<br />
slower, though.</p>
<p>There are optimizations possible. Strictly speaking, if you have<br />
an intact chain of rename properties you can get away with not<br />
copying history to the target of a rename. </p>
<p>The key point is that once a revision has been appended to a specific<br />
master, you *never delete it*. Ever. That simple rule and some pointer<br />
chasing gives you a revision store that is robust in the presence of D,<br />
R, and C operations. Nothing else does.</p>
<p>Not by coincidence, modern DVCSes use the same strategy.
</p></blockquote>
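<p>The whole scheme fits in a page of code. Here is a toy in-memory model of the rules quoted above, with each path mapping to an append-only revision list; this is a sketch of the invariants, not rcs-fast-import&#8217;s actual implementation:</p>

```python
class MasterStore:
    """Toy model of changesets-over-RCS: delete, rename, and copy
    never discard a master, so per-file history is append-only."""

    def __init__(self):
        self.masters = {}  # path -> list of (content, state) revisions

    def commit(self, path, content):
        self.masters.setdefault(path, []).append((content, "Exp"))

    def delete(self, path):
        # On delete, keep the master; append an empty revision with
        # state "Deleted" as a forensic marker.
        self.masters[path].append(("", "Deleted"))

    def rename(self, old, new):
        # On rename, the new master inherits the pre-rename history,
        # but that history also stays in the original, capped by an
        # empty "Renamed" revision.
        self.masters[new] = list(self.masters[old])
        self.masters[old].append(("", "Renamed"))

    def copy(self, src, dst):
        # On copy, start a fresh master from the current head
        # content, with no inherited history.
        head_content = self.masters[src][-1][0]
        self.masters[dst] = [(head_content, "Exp")]
```

<p>Once you add the journal file mapping each changeset to per-file revision numbers, every retrieval case reduces to one indirection through the journal, exactly as described above.</p>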
<p>He did raise one interesting point:</p>
<blockquote><p>
> Considering the debates still going on the &#8220;proper&#8221; way to handle<br />
> renames and copies, I&#8217;m not sure it would have been accepted ;-)
</p></blockquote>
<p>But it turns out that has an answer, too:</p>
<blockquote><p>
The above scheme gives you git-like semantics. For bzr-like semantics,<br />
add one more wrinkle; a unique identity cookie assigned at master-creation<br />
time that follows it through renames. Your storage management doesn&#8217;t<br />
care about this cookie; only the merge algorithm cares.</p>
<p>All this is so simple [and actually implemented in rcs-fast-import] that<br />
I&#8217;m now quite surprised the CVS people got it wrong. I could ship a<br />
daemon that implemented these rules in a week, and could have done<br />
so in 1985 if I&#8217;d known it was important.
</p></blockquote>
<p>Sigh&#8230;sometimes the hardest part is knowing what to spend your thinking time on. I console myself with the thought that, after all, I have gotten this right at some of the times it mattered.</p>