
Friday, August 5, 2011

Essential Tools for Post Normal Science

We are in an era in which the old dichotomies of knowledge and ignorance, and of facts and values, are being transcended.

Science is no longer seen as the inevitable advance in certainty of our knowledge and control of the natural world. Science must now be judged by its capacity to anticipate uncertainty and emergence.

The new science must be based on assumptions of unpredictability, imperfect data and incomplete understanding. Uncertainty must not be banished but cultivated.

The new science must adopt an orientation toward posing prior and posterior probabilities. The new science subjects established fact to the power of new evidence.

The resurgence of Bayes's theorem offers an invaluable tool for Post Normal Science.

And if you are not thinking like a Bayesian, perhaps you should be.

John Allen Paulos's review of Sharon McGrayne's book on Bayes's theorem is elegant, and it should make us more hopeful about grappling with a world where the reductionist, analytic worldview is yielding to systemic, recursive and humanistic approaches.

Here is the review.


The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines and Emerged Triumphant From Two Centuries of Controversy

By Sharon Bertsch McGrayne

320 pp. Yale University Press. $27.50.

Published: August 07, 2011, New York Times

Sharon Bertsch McGrayne introduces Bayes's theorem in her new book with a remark by John Maynard Keynes: "When the facts change, I change my opinion. What do you do, sir?"

Bayes's theorem, named after the 18th-century Presbyterian minister Thomas Bayes, addresses this selfsame essential task: How should we modify our beliefs in the light of additional information? Do we cling to old assumptions long after they've become untenable, or abandon them too readily at the first whisper of doubt? Bayesian reasoning promises to bring our views gradually into line with reality and so has become an invaluable tool for scientists of all sorts and, indeed, for anyone who wants, putting it grandiloquently, to sync up with the universe. If you are not thinking like a Bayesian, perhaps you should be.

At its core, Bayes's theorem depends upon an ingenious turnabout: If you want to assess the strength of your hypothesis given the evidence, you must also assess the strength of the evidence given your hypothesis. In the face of uncertainty, a Bayesian asks three questions: How confident am I in the truth of my initial belief? On the assumption that my original belief is true, how confident am I that the new evidence is accurate? And whether or not my original belief is true, how confident am I that the new evidence is accurate? One proto-Bayesian, David Hume, underlined the importance of considering evidentiary probability properly when he questioned the authority of religious hearsay: one shouldn't trust the supposed evidence for a miracle, he argued, unless it would be even more miraculous if the report were untrue.

The theorem has a long and surprisingly convoluted history, and McGrayne chronicles it in detail. It was Bayes's friend Richard Price, an amateur mathematician, who developed Bayes's ideas and probably deserves the glory that would have resulted from a Bayes-Price theorem. After Price, however, Bayes's theorem lapsed into obscurity until the illustrious French mathematician Pierre Simon Laplace extended and applied it in clever, nontrivial ways in the early 19th century. Thereafter it went in and out of fashion, was applied in one field after another only to be later condemned for being vague, subjective or unscientific, and became a bone of contention between rival camps of mathematicians before enjoying a revival in recent years.

The theorem itself can be stated simply. Beginning with a provisional hypothesis about the world (there are, of course, no other kinds), we assign to it an initial probability called the prior probability or simply the prior. After actively collecting or happening upon some potentially relevant evidence, we use Bayes's theorem to recalculate the probability of the hypothesis in light of the new evidence. This revised probability is called the posterior probability or simply the posterior. Specifically Bayes's theorem states (trumpets sound here) that the posterior probability of a hypothesis is equal to the product of (a) the prior probability of the hypothesis and (b) the conditional probability of the evidence given the hypothesis, divided by (c) the probability of the new evidence.

Consider a concrete example. Assume that you're presented with three coins, two of them fair and the other a counterfeit that always lands heads. If you randomly pick one of the three coins, the probability that it's the counterfeit is 1 in 3. This is the prior probability of the hypothesis that the coin is counterfeit. Now after picking the coin, you flip it three times and observe that it lands heads each time. Seeing this new evidence that your chosen coin has landed heads three times in a row, you want to know the revised posterior probability that it is the counterfeit. The answer to this question, found using Bayes's theorem (calculation mercifully omitted), is 4 in 5. You thus revise your probability estimate of the coin's being counterfeit upward from 1 in 3 to 4 in 5.
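The calculation the review mercifully omits can be sketched in a few lines of Python, using exact fractions so no rounding creeps in (the variable names are mine, not the review's):

```python
from fractions import Fraction

# Hypotheses: the chosen coin is counterfeit (always heads) or fair.
prior_counterfeit = Fraction(1, 3)   # 1 of the 3 coins
prior_fair = Fraction(2, 3)          # the other 2 coins

# Likelihood of the evidence (three heads in a row) under each hypothesis.
likelihood_counterfeit = Fraction(1, 1)     # a two-headed coin always lands heads
likelihood_fair = Fraction(1, 2) ** 3       # 1/8 for a fair coin

# Bayes's theorem: posterior = prior * likelihood / total probability of the evidence.
evidence = (prior_counterfeit * likelihood_counterfeit
            + prior_fair * likelihood_fair)
posterior = prior_counterfeit * likelihood_counterfeit / evidence

print(posterior)  # 4/5
```

The denominator, term (c) in the review's statement of the theorem, is just the probability of seeing three heads averaged over both hypotheses; dividing by it is what turns the prior of 1/3 into the posterior of 4/5.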

A serious problem arises, however, when you apply Bayes's theorem to real life: it's often unclear what initial probability to assign to a hypothesis. Our intuitions are embedded in countless narratives and arguments, and so new evidence can be filtered and factored into the Bayes probability revision machine in many idiosyncratic and incommensurable ways. The question is how to assign prior probabilities and evaluate evidence in situations much more complicated than the tossing of coins, situations like global warming or autism. In the latter case, for example, some might have assigned a high prior probability to the hypothesis that the thimerosal in vaccines causes autism. But then came new evidence - studies showing that permanent removal of the compound from these vaccines did not lead to a decline in autism. The conditional probability of this evidence given the thimerosal hypothesis is tiny at best and thus a convincing reason to drastically lower the posterior probability of the hypothesis. Of course, people wedded to their priors can always try to rescue them from the evidence by introducing all sorts of dodges. Witness die-hard birthers and truthers, for example.

McGrayne devotes much of her book to Bayes's theorem's many remarkable contributions to history: she discusses how it was used to search for nuclear weapons, devise actuarial tables, demonstrate that a document seemingly incriminating Colonel Dreyfus was most likely a forgery, improve low-resolution computer images, judge the authorship of the disputed Federalist papers and determine the false positive rate of mammograms. She also tells the story of Alan Turing and others whose pivotal cryptanalytic work unscrambling German codes may have helped shorten World War II.

Statistics is an imperialist discipline that can be applied to almost any area of science or life, and this litany of applications is intended to be the unifying thread that sews the book into a coherent whole. It does so, but at the cost of giving it a list-like, formulaic feel. More successful are McGrayne's vivifying sketches of the statisticians who devoted themselves to Bayesian polemics and counterpolemics. As McGrayne amply shows, orthodox Bayesians have long been opposed, sometimes vehemently, by so-called frequentists, who have objected to their tolerance for subjectivity. The nub of the differences between them is that for Bayesians the prior can be a subjective expression of the degree of belief in a hypothesis, even one about a unique event or one that has as yet never occurred. For frequentists the prior must have a more objective foundation; ideally that is the relative frequency of events in repeatable, well-defined experiments. McGrayne's statisticians exhibit many differences, and she cites the quip that you can nevertheless always tell them apart by their posteriors, a good word on which to end.

John Allen Paulos, a professor of mathematics at Temple University, is the author of several books, including "Innumeracy" and, most recently, "Irreligion."


