The butterfly effect is a deceptively simple insight extracted from a complex modern field. As a low-profile assistant professor in MIT's department of meteorology in 1961, [the late Edward] Lorenz created an early computer program to simulate weather. One day he changed one of a dozen numbers representing atmospheric conditions, from .506127 to .506. That tiny alteration utterly transformed his long-term forecast, a point Lorenz amplified in his 1972 paper, "Predictability: Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?"
In the paper, Lorenz claimed the large effects of tiny atmospheric events pose both a practical problem, by limiting long-term weather forecasts, and a philosophical one, by preventing us from isolating specific causes of later conditions... It is extremely hard to calculate such things with certainty... Realistically, we can't know. "It's impossible for humans to measure everything infinitely accurately," says Robert Devaney, a mathematics professor at Boston University. "And if you're off at all, the behavior of the solution could be completely off." When small imprecisions matter greatly, the world is radically unpredictable.
Lorenz also discovered stricter limits on our knowledge, proving that even models of physical systems with only a few precisely known variables, like a heated gas swirling in a box, can produce endlessly unpredictable and non-repeating behavior.
"Lorenz went beyond the butterfly," says Kerry Emanuel, a professor in the department of earth, atmospheric, and planetary sciences at MIT. "To say that certain systems are not predictable, no matter how precise you make the initial conditions, is a profound statement." Instead of a vision of science in which any prediction is possible, as long as we have enough information, Lorenz's work suggested that our ability to analyze and predict the workings of the world is inherently limited.
What few articles on this subject touch on, but what I find endlessly fascinating, is how this undeniable, well-grounded scientific insight interacts with human nature -- specifically our ingrained need/desire to feel knowledgeable and in control. (I use 'our' in both the individual and the corporate sense here.)
It has been my experience that few individuals and very few (if any) organizations ever err on the side of humility in this regard. Failures of perception seldom run in the direction of assuming less predictability than actually exists. More often, the response is something like this:
"Yes, I know about the butterfly effect, chaos theory and all that... but it's kinda academic... and you don't know my boss... you see, I/he/she/they/we need to predict XYZ anyway... because my/our future depends on it... so just give us a number... give us your best guess about the probability... please?"
People fall back on the prediction/probability paradigm either because it's what they know (or what their planning tool-set can address), or because, even knowing that approach is flawed, they find it more comfortable... and comforting (or politically expedient in a particular organizational culture) to pretend that what is fundamentally uncertain is predictable after all. The chance of rain is 62.5%...