I'm ten days late in posting on a great short take in The Economist ("The Perils of Prediction"), reviewing Nassim Nicholas Taleb's book, "The Black Swan: The Impact of the Highly Improbable".
...almost all forecasters work within the parameters of the Gaussian bell curve, which ignores large deviations and thus fails to take account of “Black Swans”. Mr Taleb defines a Black Swan as an event that is unexpected, has an extreme impact and is made to seem predictable by explanations concocted afterwards.
Taleb is correct: people like to create (and gravitate to) ex post facto explanations. We usually think of those as being of very little use (unless we're in the business of publishing sensationalist books). By definition, such explanations don't help us anticipate or deal well with the next unprecedented thing--even though people would like to think that they will. Unprecedented means, well... unprecedented. There will never be another 9-11, even though there will probably be other big, nasty surprises that kill lots of people and that we will analogize to 9-11.
That's all by way of background to explain why we use a simple trick in our scenario workshops called "simulated hindsight". Short take: tap people's natural hunger for ex post facto sense-making stories explaining unexpected developments... using hypothetical future 'facts'.
Disoriented yet? :)
Here's the slightly longer take: Assume it's 2012. Put away this morning's newspaper, as well as all of your assumptions and prognostications about what might happen in July and August, and in 2008 and 2009, and so on. That's all part of 'history' now. It's your job to explain it.
Be in 2012. Assume that the world has already turned out in a certain way that I've spelled out in a tightly-crafted one-page document we call an 'endstate'. Put on your historian 'hat' (or your research analyst 'hat' or reporter/news-anchor 'hat'). Your job is to tell us how the world got to be the way it is now in 2012. What were the major turning points since 2007? What developments led to what other ones? What things didn't happen that some people thought were virtually certain? What were some of the pivotal surprises that drove things to turn out the way they did?
Using a set of 150 discrete, short descriptions of things that might or might not have happened between June, 2007 and "now" (2012)--we call them 'events'--construct a story of how we got "here". Several other teams will do the same with their own separate 'endstates'. Pay no attention to them. They'll get to tell their 'history' stories too and then we'll compare and contrast them and look for the key points of intersection and divergence.
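For the software-minded, here's a toy sketch of the moving parts of that exercise. The names (Event, Endstate, backcast, intersections) are my own illustrative inventions, not anything we hand to participants, and in the room the selecting and story-building is done by people arguing around a table, not by random sampling.

```
# A toy data model of the 'simulated hindsight' exercise (illustrative only).
from dataclasses import dataclass
import random

@dataclass(frozen=True)
class Event:
    """One short description of something that might (or might not) have happened."""
    id: int
    description: str

@dataclass
class Endstate:
    """A one-page description of how the world has turned out by the horizon year."""
    name: str
    summary: str

def backcast(endstate: Endstate, catalogue: list[Event], picks: int = 12) -> list[Event]:
    """Stand-in for a team drafting its 'history': choose events from the shared
    catalogue to weave into a story that explains the endstate."""
    return random.sample(catalogue, k=min(picks, len(catalogue)))

def intersections(story_a: list[Event], story_b: list[Event]) -> set[int]:
    """Points of intersection: events both teams leaned on to explain their worlds."""
    return {e.id for e in story_a} & {e.id for e in story_b}

catalogue = [Event(i, f"event #{i}") for i in range(150)]   # the shared ~150 'events'
team_a = backcast(Endstate("A", "one version of 2012"), catalogue)
team_b = backcast(Endstate("B", "a different 2012"), catalogue)
print("shared turning points:", intersections(team_a, team_b))
```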
That's simulated hindsight, and I've seen it work powerfully to unlock the thinking of hundreds of executives (probably thousands by now, come to think of it), enabling them to contemplate how various kinds of surprises might change their business.
...humans have an uncontrollable urge to be precise, for better or (all too often) worse. That is a fine quality in a watch-repair man or a brain surgeon, but counter-productive when dealing with uncertainty... Why, [Taleb] asks, do we take absence of proof to be proof of absence? ...Mr Taleb argues convincingly that the spectacular collapse in 1998 of Long-Term Capital Management was caused by the inability of the hedge fund's managers to see a world that lay outside their flawed models. And yet those models are still widely used today...
...corporate “scenario planners” are better than they used to be at thinking about Black Swan-type events... [Taleb] suggests concentrating on the consequences of Black Swans, which can be known, rather than on the probability that they will occur, which can't (think of earthquakes). But he never makes professional predictions because it is better to be “broadly right rather than precisely wrong”.
All of which helps to explain why--despite their unquestionable value in an organization--financial executives, engineers, and others with a talent for managing the details of day-to-day operations tend to be much less comfortable confronting the broad implications of longer-term uncertainties or dealing with the imprecision inherent in potentially sudden, unprecedented change. For my part, I'm not that good at the other stuff. Driving across the Golden Gate Bridge recently, I gave thanks that we've got our domains of complementary expertise.
Perhaps the most difficult pitfall to avoid is when the predictor fails to take into account a factor that may not even exist at the time of making the prediction.
This take on the special problems of forecasting technology evolution:
In 1967, the 100-year-old company Keuffel & Esser was commissioned to study the future. A major failure of its analysis was not seeing that its own flagship product would become obsolete in just a few years. K&E was the country's leading slide-rule manufacturer, and it was blindsided by the product it failed to see, the electronic calculator.
George Mason University's Stephen Fuller called up a PowerPoint slide predicting that in 2057, the average annual household income for the region would be $1,307,000. Whoo hoo! That sounded great. Then he pointed out that in 50 years, the average Washington-area house would cost a whopping $14,061,000.
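Those figures stop sounding quite so other-worldly once you remember what 50 years of steady compounding does to any number. The starting values and growth rates below are my own back-of-the-envelope assumptions, not Fuller's actual inputs; the point is just the arithmetic.

```
# Back-of-the-envelope compounding over 50 years. The base values and growth
# rates are illustrative assumptions, not Stephen Fuller's model inputs.
def future_value(present: float, annual_rate: float, years: int) -> float:
    return present * (1 + annual_rate) ** years

print(round(future_value(90_000, 0.055, 50)))    # a $90k income at ~5.5%/yr -> roughly $1.3 million
print(round(future_value(550_000, 0.067, 50)))   # a $550k house at ~6.7%/yr -> roughly $14 million
```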
This description of the "butterfly effect" in perhaps its most classic form:
A little discrepancy in the pattern of air flowing more than 4,000 miles away had made the difference between an accurate forecast and a bust. The change in the winds in Alaska had displaced storms in the southeast by several hundreds of miles--endangering people living near Orlando, not New Orleans.
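To see that mechanism in miniature, the snippet below iterates a standard chaotic toy model (the logistic map, which has nothing to do with the actual weather models in the story) from two starting points that differ by one part in a million; within a couple of dozen steps the two runs bear no resemblance to each other.

```
# Sensitive dependence on initial conditions with the chaotic logistic map
# x -> r*x*(1-x), r = 4.0. A generic textbook demo, not a weather model.
r = 4.0
x1, x2 = 0.400000, 0.400001   # initial states differing by one part in a million

for step in range(1, 41):
    x1, x2 = r * x1 * (1 - x1), r * x2 * (1 - x2)
    if step % 10 == 0:
        print(f"step {step:2d}: {x1:.6f} vs {x2:.6f}  (gap {abs(x1 - x2):.6f})")
```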
This semi-prescient August, 1998 assessment of economic models designed to predict currency crashes (right on the cusp of LTCM's almost world-cataclysmic implosion):
Investment banks and academic economists are building complicated models to predict currency crashes. Don't expect them to work.
And finally, this amusing if cautionary catalogue of bad predictions by Cynthia Crossen in the WSJ last January 8th (subscribers only):
"The giant airplane of 300- to 400-passenger capacity, while technically possible," wrote a U.S. aviation official in 1944, "appears to offer little economic advantage and to involve a great sacrifice of convenience for the traveler, owing to the inevitable reduction in scheduling frequency which results from using such large units."
In 1937, Hadley Cantril, a psychology professor at Princeton University, studied the relative prophetic ability of various types of people. He sent a questionnaire asking for predictions about world affairs to several hundred people... The two groups who were most confident that their predictions were correct were the bankers and the Communists.