I venture back into blogging after a long hiatus that was, thankfully, busy, productive and stimulating. Why? Because of this piece by Richard Fernandez over at the Belmont Club, which references an anonymously authored August 7th Economist article entitled 'Confessions of a Risk Manager' (written by a 'risk manager at a large global bank').
Fernandez writes [emphasis added]:
The signs were there, but they couldn’t interpret them, even when strange and unusual things were happening. Part of the problem with grasping the significance of the anomalies was that the risk hunters were prisoners of their analytic models. What cues they picked up were interpreted in the context of their mental frameworks. They “saw” through the prism of their models. And their models did not account for the existence of the monster that was now closing in on them...
The scenario work I've done with clients for years is rooted in precisely the precept Fernandez begins with: that the mental models or frameworks that people build up over a lifetime, a career and/or a role strongly color what they believe is possible, and therefore the kinds of potential problems and opportunities they are willing to consider. I mean literally color. Fernandez' analogies to invisible spectra are apt. It's not that the spectra aren't there. It's just that I'm not equipped to see ultraviolet or infrared, or to hear a dog whistle.
I should note that, while the Economist article is about 'seeing' risk, the concept applies equally well to innovation. More on that in future posts. It's where I've been spending most of my time and energy recently.
Mental models are seldom examined. You and I may talk about our views on particular future events, but it takes a lot of time and effort (plus a common 'language' for discussing it) to talk in terms of entire models. And mental models that are not examined and described -- at any level -- tend to lead to decisions that don't fit reality. (Think of a blindfolded child attempting to hit the party piñata and hitting grandma instead.)
Mental models are also fractal. My own colors the decisions I make, but I also go about my day constrained, to a greater or lesser degree, by the mental models of the groups to which I belong (family, company, church, country, etc.).
I'm reluctant to conclude that individuals' mental models are always able to change faster than those of groups. (We all know some stubborn stick-in-the-mud; perhaps he's the one staring at us in the mirror). Anecdotally, however, I know of more individuals who have had a 'Damascus-road' 180-degree 'a-ha!', 'boy I've been stupid, haven't I?' moment in some part of their lives than I do larger groups or companies that have done the same. (When that does happen, e.g., Bill Gates' famous 'Internet' memo ten or so years ago, it tends to be just a powerful individual who changed his mind, not evidence of the group turning quickly together.)
Finally, mental models that are not challenged systematically and fundamentally on a proactive basis (e.g., with scenarios) tend to change only defensively and slowly (in response to hard, external truths; often at great cost). That's painful. As Fernandez writes:
It is remarkable how the advent of the current financial crisis structurally resembles the intelligence failures leading up to 9/11: the same misinterpretation of warning signs, the same blindness to threats now evident in retrospect. The same belief in a rapid resolution and a belief that the normal would soon be back.
In other words, we all sense that things have changed fundamentally (in financial markets and geopolitics) but we probably won't be able to develop our new mental models (at least collectively, agreeing on which ones are 'real') for many years -- not without a lot of deliberate thought-work, that is. Nor will we be able to perceive ongoing change properly (assessing both direction and magnitude) or develop said models in a way that enables us to act with precision, resolve and clarity. For years. That's a bit scary. It's also humbling (at least it should be).
Here's another thought that I should probably 'unpack' more fully in another post. Mental models are reflected in, but less often challenged by, analytical tools. Oh sure, insights can most definitely be had with them, but I'm talking about much bigger constructs. Sophisticated computer models, for the most part, tend only to amplify the force and reach of our brains. They do not, as a general rule, make us better able to 'think different(ly)' and develop new mental models for seeing the world -- for assessing the big risks in truly complex adaptive systems or for finding innovations.