Responding to Eliezer Yudkowsky’s “Overcoming Bias”
<p>This was originally an email to Eliezer Yudkowsky about his excellent and thought-provoking blog <a href="http://www.overcomingbias.com/">Overcoming Bias</a> (actually, the links here are to the <s>older</s> newer <a href="http://lesswrong.com/">Less Wrong</a>, <s>which has been re-indexed at Overcoming Bias</s>). Eliezer encouraged me to publish this commentary; I have provided HTML markup and fixed some typos. Warning: the following may be heavy sledding if you are not philosophically literate.</p>
<p><span id="more-1068"></span></p>
<p>Comments on OB posts up to #252. I&#8217;d have done the whole set, but I need to go fight a fire on one of my coding projects.</p>
<p>These are going to sound far more negative about your writing and thinking than I actually am, because I generally won&#8217;t need to comment on the majority of stuff I agree on unless I can say something funny or illuminating about it. Often this is not the case.</p>
<p><a href="http://yudkowsky.net/rational/the-simple-truth">The Simple Truth</a> is funny, but takes an awful lot of time and effort to work its way around to a position equivalent to the Peircean fallibilist formulation of operationalism. I think you should have chosen a more direct route to avoid fatiguing the reader.</p>
<p>I&#8217;ve noticed before, e.g. in your expositions of Bayesian theory, that you have a tendency to run too long and beat the point to death. Alas, this is still true of <a href="http://yudkowsky.net/rational/bayes">An Intuitive Explanation of Bayes&#8217; Theorem</a> upon rereading. It is formally correct but a pedagogical disaster.</p>
<p><a href="http://yudkowsky.net/rational/virtues">The Twelve Virtues of Rationality</a> was still damn impressive the second time I read it. Possibly your best piece of writing.</p>
<p>In <a href="http://lesswrong.com/lw/go/why_truth_and/">Why truth? And&#8230;</a> you overcomplicate the exposition. All the motives you cite can be explained in a simple and unified way: we are truth-seeking animals because we are prediction-seeking animals because we are control-seeking animals because we are goal-seeking animals. Or, to go at it from the other end: we want things, so we seek to control our environment, which we can only do by correctly predicting what it will do if we kick it. The conscious feeling of curiosity and the belief that truth is morally important are just the normal operating noises of our adaptive machinery.</p>
<p>In <a href="http://lesswrong.com/lw/gp/whats_a_bias_again/">&#8230;What&#8217;s a bias, again?</a>, you appear to be missing an important perspective about <em>why</em> we have cognitive biases. Normally, &#8220;bias&#8221; generally turns out to be an evaluative shortcut that was useful in the environment of ancestral adaptation, but is now misapplied by brains trying to cope with far more complex and varied challenges.</p>
<p>In <a href="http://lesswrong.com/lw/gq/the_proper_use_of_humility/">The Proper Use of Humility</a> you write: &#8220;The temptation is always to claim the most points with the least effort. The temptation is to carefully integrate all incoming news in a way that lets us change our beliefs, and above all our actions, as little as possible.&#8221; Alas, you are so busy knocking over that bad social-status reasons for belief inertia that you slight a good one: Changing beliefs is not costless, and may commit you to a decision procedure that is too heavyweight to be worth some very marginal gain in utility. Physics example: if I am doing ballistics under conditions normal near the Earth&#8217;s surface, it is instrumentally rational for me to believe Newton&#8217;s laws rather than Einstein&#8217;s.</p>
<p>Here, and elsewhere, I observe in your thinking a kind of predisposition that is common in analytical philosophers. You underweight the degree to which computational costs are a factor in theory formation and selection; more seriously, you also underweight the role of motivation in theory-building. It is not that you are unaware of these factors; it is that you tend to miss predictive hypotheses that lean heavily on them in favor of more elaborate constructions that are no more predictive.</p>
<p>In <a href="http://lesswrong.com/lw/gu/some_claims_are_just_too_extraordinary/">Some Claims Are Just Too Extraordinary</a> you write: &#8220;What about the claim that 2 + 2 = 5? What about journals that claim to publish replicated reports of ESP?&#8221;. Bad move. These claims are epistemically of two very different kinds. You should have omitted the first.</p>
<p>In <a href="http://lesswrong.com/lw/hg/inductive_bias/">Inductive Bias</a>, you write: &#8220;A more general view of inductive bias would identify it with a Bayesian&#8217;s prior over sequences of observations&#8230;&#8221; Ding! You should be this pithy more often.</p>
<p>In <a href="http://lesswrong.com/lw/hn/your_rationality_is_my_business/">Your Rationality is My Business</a>, you write &#8216;The syllogism we desire to avoid runs: &#8220;I think Susie said a bad thing, therefore, Susie should be set on fire.&#8221;&#8216; Many years ago, I observed an implicit premise in the way many humans reason which I call the &#8220;Pressure Principle&#8221;. It goes like this: &#8220;The truth of the claim &#8216;I have a duty to do X&#8217; justifies others in using force to coerce me to do X.&#8221; I reject this principle. Sometimes, pointing out that it is the implicit ground of an argument can persuade people to reject that argument</p>
<p>In <a href="http://lesswrong.com/lw/hq/universal_fire/">Universal Fire</a>, you write &#8220;If a match stops working, so do you. You can&#8217;t change just one thing.&#8221; Indeed. My favorite example of this isn&#8217;t deCamp&#8217;s match, but E.E. &#8220;Doc&#8221; Smith&#8217;s &#8220;inertialess drive&#8221;. Instant death&#8230;</p>
<p>In <a href="http://lesswrong.com/lw/hs/think_like_reality/">Think Like Reality</a>: &#8220;You have the absolutely bizarre idea that reality ought to consist of little billiard balls bopping around, when in fact reality is a perfectly normal cloud of complex amplitude in configuration space. This is your problem, not reality&#8217;s, and you are the one who needs to change.&#8221; Thanks for that; I haven&#8217;t laughed so hard in weeks</p>
<p>Ibid., &#8220;Surprise exists in the map, not in the territory.&#8221; Yes. Many years ago I published a similar maxim: &#8220;Paradoxes only exist in language, not reality.&#8221;</p>
<p><a href="http://lesswrong.com/lw/hu/the_third_alternative/">The Third Alternative</a>: &#8220;The last thing a Santa-ist wants to hear is that praise works better than bribes, or that spaceships can be as inspiring as flying reindeer.&#8221; &#8230; or that the result of attempting to &#8216;correct&#8217; market failure with political intervention is almost always a worse failure.</p>
<p>In <a href="http://lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/">Making Beliefs Pay Rent (in Anticipated Experiences)</a> you choose an amazingly circuitous and vague way of getting to a conclusion that is as correct as possible while being seriously misleading. The correct answer to the question &#8220;If a tree falls in a forest and no one hears it, does it make a sound?&#8221; is another question: &#8220;Why are you asking?&#8221; That is: what kind of prediction does your goal-seeking require?</p>
<p>You get the prediction part more or less right, but because your gut instinct is to think of theories as cathedrals motivated only by the desire for Pure Truth (choose one per physical system, perfect correspondence required) you miss the fact that the appropriate microtheory of what &#8220;sound&#8221; is may <em>differ depending on the goals of the theorizer</em>. Instead, you hare off into a lot of fairly unnecessary pseudo-ontology and a random bash at postmodernism.</p>
<p>Note carefully that I am not arguing subjectivism or some sort of Feyerabendian conceptual anarchism here. The different microtheories of &#8220;sound&#8221; are commensurable, and can readily be fit into a consilient macrotheory; but any clear account of why we might choose either (an answer to the parable) requires an account of the chooser&#8217;s motivation.</p>
<p>In <a href="http://lesswrong.com/lw/i6/professing_and_cheering/">Professing and Cheering</a>. you write &#8220;That&#8217;s why it mattered to her that what she was saying was beyond ridiculous. If she&#8217;d tried to make it sound more plausible, it would have been like putting on clothes.&#8221; I found it extremely odd that you did not fully understand what you were seeing, but perhaps that is only because I am a neopagan myself and used to pulling similar maneuvers. Or maybe you have borderline Aspergers or something and are poorly equipped to process some kinds of neurotypical interaction, including this one (that&#8217;s my wife Cathy&#8217;s guess, and not a hostile one; she rather likes you).</p>
<p>Your lady panelist was performing a mindfuck. The intent of her speech acts was not to persuade anyone that she believed the Norse creation myth, it was to hold up a funhouse mirror to the religious cognitive style. The question her provocation was implicitly posing to the audience is &#8220;If you reject this as absurd, on what basis do you maintain your own equally poetic and absurd creation myth?&#8221;</p>
<p>I speak from the authority of direct personal experience here, as I have done the same sort of thing for the same reasons in pretty much the same way.</p>
<p>In <a href="http://lesswrong.com/lw/i8/religions_claim_to_be_nondisprovable/">Religion&#8217;s Claim to be Non-Disprovable</a> you write: &#8220;Back in the old days, saying the local religion &#8216;could not be proven&#8217; would have gotten you burned at the stake. This is not true in general. Actually, excepting political special cases like the Roman state cult of the Emperor as Sol Invictus, it is weakly true only of monotheisms and strongly true only of one family of monotheisms descended from or strongly influenced by Zoroastrianism (notably including Judaism, Christianity, and Islam).</p>
<p>Most religions outside this group don&#8217;t give a flying crap about the state of your beliefs or &#8220;proof&#8221; or objective correlatives as long as you maintain ritual cleanliness and do what is socially expected &#8211; religion as attire, in your terms.</p>
<p>In <a href="http://lesswrong.com/lw/is/fake_causality/">Fake Causality</a> you write about phlogiston as a paradigmatic example. Here&#8217;s a story about that:</p>
<p>In 1992 I was an invited speaker at the Institute for Advanced Study. Yes, this was five years before I was famous; what I was doing there was a seminar on advanced Emacsing. My sponsor, the astrophysicist Piet Hut, took me around to meet a number of the stellar eminences at the Institute.</p>
<p>One of them was a cosmologist whose name I don&#8217;t remember. We chatted for a while &#8211; he was doing interesting work on the apparent quantization of red-shift distributions. Then I said to him: &#8220;Oh, by the way, I know what dark matter is made from.&#8221;</p>
<p>Eying me dubiously, he said &#8220;What?&#8221;</p>
<p>I said &#8220;Phlogiston.&#8221;</p>
<p>He damn near fell out of his chair laughing.</p>
<p>I disagree with <a href="http://lesswrong.com/lw/iv/the_futility_of_emergence/">The Futility of Emergence</a>. There is a semantic difference between saying &#8220;Human intelligence is an emergent product of neurons firing&#8221; and &#8220;Human intelligence is a product of neurons firing.&#8221; The word &#8220;emergent&#8221; is a signal that we believe a very specific thing about the relationship between &#8220;neurons firing&#8221; and &#8220;intelligence&#8221;, which is that there is no possible account of intelligence in which the only explanatory units are neurons or subsystems of neurons.</p>
<p><a href="http://lesswrong.com/lw/jg/planning_fallacy/">Planning Fallacy</a> is the first post in which you have showed me something (reliable predictive superiority of the &#8220;outside view&#8221;) that I had no clue about before I read it. Well done.</p>
<p>In <a href="http://lesswrong.com/lw/jo/einsteins_arrogance/">Einstein&#8217;s Arrogance</a>, you write: &#8220;But from a Bayesian perspective, you need an amount of evidence roughly equivalent to the complexity of the hypothesis just to locate the hypothesis in theory-space.&#8221; Wow. Now that I think of it, there&#8217;s probably a law here: no theory can be confirmed (even in the loose, semi-definite fallibilist sense) by a communication with less Kolmgorov complexity than the theory itself.</p>
<p>In <a href="http://lesswrong.com/lw/k2/a_priori/">A Priori</a> you wander around the mulberry bush discussing justifications of Occam&#8217;s Razor, and inexplicably miss the obvious and simple one: computation has a cost. Whenever we multiply entities beyond necessity. we commit ourselves to a decision procedure that is wasteful and will be outcompeted by actors who spend the resources we waste on <em>solving other problems</em>. You continue to have a very curious blind spot towards analysis like this.</p>
<p>In <a href="http://lesswrong.com/lw/k4/do_we_believe_everything_were_told/">Do We Believe Everything We&#8217;re Told?</a>&#8230;sure. I had never actually encountered Spinoza&#8217;s account before, but it now seems obvious to me that it must be true. For, how else can we evaluate a proposition other than plugging it into the rest of our prediction generators and running them forward to see if there are inconsistencies? Descartes&#8217;s neutral &#8220;consider&#8221; is a classic mysterious answer to a mysterious question; he never unpacks it.</p>
<p>In <a href="http://lesswrong.com/lw/kf/selfanchoring/">Self-Anchoring</a>, you write &#8220;We can put our feet in other minds&#8217; shoes, but we keep our own socks on.&#8221; It would be astonishing only if this were not true. We solve the other-minds problem by mirroring our own; really, how else <em>could</em> we do it?</p>
<p>In <a href="http://lesswrong.com/lw/kg/expecting_short_inferential_distances/">Expecting Short Inferential Distances</a> you replicate part of my own thinking about the EEA basis of cognitive bias.</p>
<p><a href="http://lesswrong.com/lw/kv/beware_of_stephen_j_gould/">Beware of Stephen J. Gould</a>. I think it is relevant that Gould seems to have been a believing Marxist who took some pains not to bruit about that fact (the evidence is not entirely unequivocal but pretty strong). At least part of what he was doing with his dishonesty was waging a kulturkampf (virtuous in Marxist terms) against hereditarian thinking.</p>
<p>In <a href="http://lesswrong.com/lw/lf/purpose_and_pragmatism/">Purpose and Pragmatism</a> you write: &#8220;You find yourself in an unheard-falling-tree dilemma, only when you become curious about a question with no pragmatic use, and no predictive consequences. Which suggests that you may be playing loose with your purposes.&#8221; Well, yeah. Why didn&#8217;t you get here the first time you analyzed this parable?</p>
<p>In <a href="http://lesswrong.com/lw/lh/evaluability_and_cheap_holiday_shopping/">Evaluability (And Cheap Holiday Shopping)</a> you write: &#8220;If you have a fixed amount of money to spend &#8211; and your goal is to display your friendship, rather than to actually help the recipient &#8211; you&#8217;ll be better off deliberately not shopping for value. Decide how much money you want to spend on impressing the recipient, then find the most worthless object which costs that amount. The cheaper the class of objects, the more expensive a particular object will appear, given that you spend a fixed amount.&#8221; How delightfully evil. Now, the interesting utility-maximization question, given that we do our holiday shopping together, is: should I tell my wife this heuristic or not? The answer is not obvious&#8230;</p>
<p>In <a href="http://lesswrong.com/lw/lo/uncritical_supercriticality/">Uncritical Supercriticality</a>, you write &#8220;Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever.&#8221; I think there is a good general rule, but it is vulnerable to misinterpretation. Buzz Aldrin was right: the correct response to the person who gravely insulted him was to smack the sorry little fucker a good one, <em>even though the insult was superficially framed as an argument</em>. Similarly, the correct response to a person who says &#8220;You do not own yourself, but are owned by society (or the state), and I am society (or the state) speaking.&#8221; is to injure him as gravely as you think you can get away with &#8212; because though this is formally framed as an argument, it is an assertion of a right to control you that is properly met with violence. (In fact, I think if you do <em>not</em> do violence in that situation you are failing in a significant ethical duty.)</p>
<p>Re <a href="http://lesswrong.com/lw/ls/when_none_dare_urge_restraint/">When None Dare Urge Restraint</a>: I am among those who fear (yes, &#8220;fear&#8221; is the correct and carefully chosen word) that the U.S. response to 9/11 was not nearly as violent and brutal as it needed to be. To prevent future acts of this kind, it is probably necessary that those who consider them should shit their pants with fear at the mere thought of the U.S.&#8217;s reaction. We did not achieve this, and I fear we are likely to pay for that failure in otherwise preventable mass deaths.</p>
<p>In <a href="http://lesswrong.com/lw/m1/guardians_of_ayn_rand/">Guardians of Ayn Rand</a>. you write &#8220;Actually, I think Shermer&#8217;s falling prey to correspondence bias by supposing that there&#8217;s any particular correlation between Rand&#8217;s philosophy and the way her followers formed a cult.&#8221; I don&#8217;t agree. There are specific features of the awful mess called &#8220;Randian epistemology&#8221; that are conducive to map/territory confusion, specifically the notion that the Law of the Excluded Middle is ontologically fundamental rather than a premise valid only for certain classes of reasoning.</p>
<p><a href="http://lesswrong.com/lw/m2/the_litany_against_gurus/">The Litany Against Gurus</a> reminds me of this:</p>
<blockquote><p>
To follow the path:<br />
look to the master,<br />
follow the master,<br />
walk with the master,<br />
see through the master,<br />
become the master.
</p></blockquote>
<p>From &#8220;How To Become A Hacker&#8221;. (Yes, I wrote it.)</p>
<p><a href="http://lesswrong.com/lw/m4/two_cult_koans/">Two Cult Koans</a>: Dang, you&#8217;re <em>good</em> at the Zen-pastiche thing. Have you read <a href="http://catb.org/esr/writings/unix-koans/">these?</a></p>
<p>In <a href="http://lesswrong.com/lw/m7/zen_and_the_art_of_rationality/">Zen and the Art of Rationality</a> you write &#8220;And yet it oftimes seems to me that my thoughts are expressed in conceptual language that owes a great deal to the inspiration of Eastern philosophy.&#8221; Well, sure: and the reason you&#8217;re attracted to that is because the good parts of Eastern philosophy that we&#8217;ve imported are all about something that is very important to you, but about which Western traditions lack language that is quite as precise and evocative: namely <em>directed change in the style of consciousness</em>.</p>
<p>In <a href="http://lesswrong.com/lw/mc/to_lead_you_must_stand_up/">To Lead, You Must Stand Up</a>, you write: &#8216;I briefly thought to myself: &#8220;I bet most people would be experiencing &#8216;stage fright&#8217; about now. But that wouldn&#8217;t be helpful, so I&#8217;m not going to go there.&#8221;&#8217; Yup. Me too. But I think your exhortations here are nearly useless. Experience I&#8217;ve collected over the last ten years suggests to me that the kind of immunity to stage fright you and I have is a function of basic personality type at the neurotransmitter-balance level, and not really learnable by most people.</p>
<p>In <a href="http://lesswrong.com/lw/nh/extensions_and_intensions/">Extensions and Intensions</a> &#8211; er, I hope you are aware that you are restating basic General Semantics here.</p>
<p>In <a href="http://lesswrong.com/lw/nm/disguised_queries/">Disguised Queries</a>, you write: &#8220;The question &#8220;Is this object a blegg?&#8221; may stand in for different queries on different occasions.&#8221; Or, to put it more precisely, the correct microtheory depends on the motivation of the theorizer. We&#8217;ve been here before.</p>
<p>To be continued&#8230;</p>