The pragmatics of webscraping
<p>Here&#8217;s an amplification of my previous post, <a href="http://esr.ibiblio.org/?p=1387">Structure Is Not Meaning</a>. It&#8217;s an excerpt from the <a href="http://home.gna.org/forgeplucker/">ForgePlucker</a> HOWTO on writing code to web-scrape project data out of forge systems.</p>
<blockquote><p>
Your handler class&#8217;s job is to extract project data. If you are lucky, your target forge already has an export feature that will dump everything to you in clean XML or JSON; in that case, you have a fairly trivial exercise using BeautifulStoneSoup or the Python-library JSON parser and can skip the rest of this section.</p>
<p>Usually, however, you&#8217;re going to need to extract the data from the same pages that humans use. This is a problem, because these pages are cluttered with all kinds of presentation-level markup, headers, footers, sidebars, and site-navigation gorp &#8212; any of which is highly likely to mutate any time the UI gets tweaked.</p>
<p>Here are the tactics we use to try to stay out of trouble:</p>
<p>1. When you don&#8217;t see what you expect, use the framework&#8217;s self.error() call to abort with a message. And put in <b>lots</b> of expect checks; it&#8217;s better for a handler to break loudly and soon than to return bad data. Fixing the handler to track a page mutation won&#8217;t usually be hard once you know you need to &#8211; and knowing you need to is why we have regression tests.</p>
<p>2. Use peephole analysis with regexps (as opposed to HTML parsing of the whole page) as much as possible. Every time you get away with matching on strictly local patterns, like special URLs, you avoid a dependency on larger areas of page structure which can mutate.</p>
<p>3. Throw away as many irrelevant parts of the page as you can before attempting either regexp matching or HTML parsing. (The most mutation-prone parts of pages are headers, footers, and sidebars; that&#8217;s where the decorative elements and navigation stuff tend to cluster.) If you can identify fixed end strings for headers or fixed start strings for footers, use those to trim (and error out if they&#8217;re not there); that way you&#8217;ll be safe even if the headers and footers mutate. This is what the narrow() method in the framework code is for.</p>
<p>4. Rely on forms. You can assume you&#8217;ll be logged in with authentication and permissions to modify project data, which means the forge will display forms for editing things like issue data and project-member permissions. Use the forms structure, as it is much less likely to be casually mutated than the page decorations.</p>
<p>5. When you must parse HTML, <a href="http://www.crummy.com/software/BeautifulSoup/documentation.html">BeautifulSoup</a> is available to handler classes. Use it, rather than hand-rolling a parser, unless you have to cope with markup so badly malformed that BeautifulSoup itself cannot handle it.</p>
</blockquote>
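<p>To make these tactics concrete, here are a few illustrative sketches. None of them are ForgePlucker&#8217;s actual code; the URLs, landmark strings, and field names are invented for the examples. The easy case first: when the forge exports clean JSON, the whole extraction step collapses to a fetch and a parse.</p>
<pre>
import json
import urllib2

# Hypothetical export URL; a forge that offers one documents its own.
EXPORT_URL = "http://forge.example.org/export/project.json"

def fetch_project_data(url=EXPORT_URL):
    # One HTTP GET and one parse; no dependence on page structure at all.
    stream = urllib2.urlopen(url)
    try:
        return json.load(stream)
    finally:
        stream.close()
</pre>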
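<p>Tactic 1 in miniature. The self.error() call is the framework&#8217;s; the handler class and the landmark string here are invented stand-ins:</p>
<pre>
class IssueHandler:
    "Skeleton handler; only the expect-check pattern is the point here."
    def error(self, msg):
        # Stand-in for the framework's self.error(), which aborts the run.
        raise RuntimeError(msg)

    def parse_issue_page(self, page):
        # Check for a landmark string before trusting anything else on the
        # page; better to break loudly now than to return bad data later.
        if "Detailed issue report" not in page:
            self.error("issue page has mutated: landmark string missing")
        # ...extraction proper would follow here...
</pre>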
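<p>Tactic 2, peephole matching. The URL shape below is made up, but it shows the idea: key on a strictly local pattern and ignore the rest of the page entirely.</p>
<pre>
import re

# Match an (invented) issue-detail URL wherever it appears on the page.
ISSUE_LINK = re.compile(r'href="[^"]*/issues/detail\?id=(\d+)"')

def issue_ids(page):
    "Return the IDs of all issues linked from a page, ignoring all else."
    return [int(m) for m in ISSUE_LINK.findall(page)]
</pre>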
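<p>Tactic 3. The framework&#8217;s narrow() method does this job; here is a simplified sketch of the same idea, with the landmark strings invented:</p>
<pre>
def narrow(page, header_end, footer_start):
    "Trim boilerplate, erroring out if the expected landmarks are missing."
    start = page.find(header_end)
    end = page.find(footer_start)
    if start == -1 or end == -1:
        raise ValueError("page has mutated: header/footer landmark missing")
    # Keep only the body between the end of the header and the footer.
    return page[start + len(header_end):end]

# Usage, with made-up landmark text:
# body = narrow(page, "End of site header", "Site navigation footer")
</pre>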
<p>Actual field experience shows that throwing out the portions of a page that are most susceptible to mutation is a valuable tactic. Also, think about where in the site a page lives. Entry pages and other highly visible ones tend to get tweaked most often, so the tradeoffs push you towards peephole methods and away from relying on DOM structure. Deeper in the site, especially on pages that are heavily tabular and consist mostly of one big form, relying on DOM structure is less risky.</p>
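<p>And for the form-heavy pages where DOM structure is trustworthy (tactics 4 and 5), here is a sketch of harvesting the current values out of an edit form with BeautifulSoup; the assumption that the first form on the page is the edit form is, again, just for illustration.</p>
<pre>
from BeautifulSoup import BeautifulSoup

def form_defaults(page):
    """Collect the current values of the first form's input fields.

    Form structure changes far less often than page decoration, so
    keying on input names is relatively mutation-proof.
    """
    soup = BeautifulSoup(page)
    form = soup.find("form")
    if form is None:
        raise ValueError("page has mutated: no form found")
    values = {}
    for field in form.findAll("input"):
        name = field.get("name")
        if name:
            values[name] = field.get("value", "")
    return values
</pre>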