this post is about measuring ignorance.
yes, that sounds weird, but in systems engineering it’s a very big deal now. as our complex engineered systems become ever more reliable, we are trying as hard as possible to anticipate all the ways they could still fail. essentially, every day we try to imagine the most outlandish environmental/geopolitical/economic/technical disasters that could happen, pile them all on at the same time, and determine whether our systems can withstand that.
naturally, one expects our military to do this as a matter of course. but your humble civil engineer? not sure that’s crossed too many folks’ minds, at least not until Fukushima, Deepwater Horizon, and now Hurricane Sandy. now everyone is wondering how in the world we allowed our systems to perform so poorly under those circumstances.
the problem is not necessarily the safety or reliability of the systems: how often do you and i in the US plan our day around the scheduled load shedding at just about coffee hour? or purchase bottled water to shower in because the water’s too dirty to touch? even on the DC Metro or MARC trains, the [relatively] frequent delays are not an important consideration in my daily planning.
the problem is generally that we couldn’t possibly anticipate the things that cause our systems to deviate from intended performance. and short of prophetic revelation, there’s not a good way to do that.
there are, however, cool ways to explore possibilities at the edge of the realm of possibility.
i have in mind things like robust decision making, info-gap analysis, portfolio evaluation, and modeling to generate alternatives. some of these tools are older than i am (modeling to generate alternatives) but are only recently finding wider application thanks to the explosion of bio-inspired computing (genetic algorithms, particle swarm optimization, ant colony optimization, genetic programming, and the like), while others are becoming established tools in the lexicon of risk and systems analysis even as we speak.
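to make the modeling-to-generate-alternatives idea concrete, here’s a toy sketch in python: take the “optimal” plan, then hunt for plans that are nearly as good but look as different as possible. the cost function, the 10% slack on the optimum, and the crude random search are all invented for illustration, not any particular published MGA formulation.

```python
# toy modeling-to-generate-alternatives (MGA) sketch: keep near-optimal plans
# that are maximally different from the ones already in hand. all numbers and
# the cost model are hypothetical.
import random

def cost(x):
    # hypothetical planning objective (minimum at x = [4, 6], cost 20)
    return (x[0] - 4) ** 2 + (x[1] - 6) ** 2 + 20.0

def feasible():
    # sample a random feasible decision (two levers, each between 0 and 10)
    return [random.uniform(0, 10), random.uniform(0, 10)]

def distance(x, others):
    # how different a candidate is from the alternatives found so far
    return min(sum((a - b) ** 2 for a, b in zip(x, o)) ** 0.5 for o in others)

def generate_alternatives(x_opt, slack=0.10, n_alts=3, n_samples=5000):
    """keep near-optimal solutions (within `slack` of the optimum) that are
    maximally different from those already collected."""
    budget = cost(x_opt) * (1 + slack)
    alts = [x_opt]
    for _ in range(n_alts):
        candidates = [x for x in (feasible() for _ in range(n_samples))
                      if cost(x) <= budget]
        alts.append(max(candidates, key=lambda x: distance(x, alts)))
    return alts

if __name__ == "__main__":
    for alt in generate_alternatives([4.0, 6.0]):
        print([round(v, 2) for v in alt], "cost:", round(cost(alt), 2))
```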
for example, info-gap analysis doesn’t restrict us to decision contexts in which externally valid models can be identified to predict the future. instead, info-gap computes the robustness and opportuneness of a strategy in light of one’s wildest dreams [or nightmares] about the future: robustness asks how far the future can wander from our best estimate before a strategy fails to meet its minimum requirements, while opportuneness asks how little it would have to wander for the strategy to deliver a windfall. in this way, one can be surprised not only by how bad things might turn out, but also by how good they might be.
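a back-of-the-envelope version of that calculation, just to show the mechanics: grow a “horizon of uncertainty” alpha around the best-estimate demand, then check how large it can get before a plan misses its target (robustness) and how small it needs to be before the plan could hit a windfall (opportuneness). the reward model and every number below are made up for the example.

```python
# back-of-the-envelope info-gap sketch. the reward model, nominal demand, and
# both performance thresholds are invented; only the structure (expand the
# horizon of uncertainty and check worst-/best-case performance) follows the
# info-gap recipe described above.

def reward(capacity, demand):
    # hypothetical net benefit of building `capacity` against realized demand
    return 10.0 * min(capacity, demand) - 2.0 * capacity

def grid(lo, hi, n=201):
    # evenly spaced sample points over [lo, hi]
    return [lo + (hi - lo) * i / (n - 1) for i in range(n)]

def robustness(capacity, nominal, critical, alphas):
    """largest alpha such that even the worst demand in
    [nominal*(1-alpha), nominal*(1+alpha)] still meets the critical reward."""
    best_alpha = 0.0
    for alpha in alphas:
        worst = min(reward(capacity, d)
                    for d in grid(nominal * (1 - alpha), nominal * (1 + alpha)))
        if worst < critical:
            break
        best_alpha = alpha
    return best_alpha

def opportuneness(capacity, nominal, windfall, alphas):
    """smallest alpha at which some demand in the uncertainty set could
    deliver a windfall -- the 'wildest dreams' side of the analysis."""
    for alpha in alphas:
        best = max(reward(capacity, d)
                   for d in grid(nominal * (1 - alpha), nominal * (1 + alpha)))
        if best >= windfall:
            return alpha
    return float("inf")

if __name__ == "__main__":
    alphas = [i / 100 for i in range(101)]
    print("robustness:   ", robustness(120, 100, critical=595, alphas=alphas))
    print("opportuneness:", opportuneness(120, 100, windfall=905, alphas=alphas))
```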
i personally am more partial to robust decision making, as it uses mathematics and terminology i am a bit more familiar with. robust decision making lets us work in circumstances where either we can agree on an externally valid model of the future or we would like to entertain a range of competing interpretations and assumptions. one starts with a set of candidate strategies and stress-tests them across the many plausible futures they might face. once that set of futures has been explored, the regions of the future in which each strategy is vulnerable can be mapped out. portfolio evaluation shares many similarities with robust decision making, in that stakeholders bring differing interpretations that may imply competing priorities for the decision context.
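here is roughly what that loop looks like in code, with everything invented for the sake of the example: the two strategies, the two uncertain factors, and the reliability threshold are all hypothetical, and real RDM exercises use a proper simulation model and formal scenario discovery (e.g., PRIM) rather than this crude filtering.

```python
# toy robust-decision-making loop: propose strategies, stress them over many
# plausible futures, then look at where each one breaks down. all quantities
# are hypothetical placeholders.
import random

STRATEGIES = {"build_big": 1.4, "build_modest": 1.1}  # capacity buildout factors

def performance(buildout, demand_growth, climate_stress):
    # hypothetical reliability score of a water system under one future
    supply = 100.0 * buildout * (1 - 0.3 * climate_stress)
    demand = 100.0 * demand_growth
    return supply / demand

def sample_futures(n=2000, seed=1):
    # plausible futures: a demand growth factor and a 0-1 climate stress index
    rng = random.Random(seed)
    return [(rng.uniform(1.0, 1.6), rng.uniform(0.0, 1.0)) for _ in range(n)]

def vulnerable_futures(buildout, futures, threshold=1.0):
    # futures in which the strategy fails to meet the reliability target
    return [f for f in futures if performance(buildout, *f) < threshold]

if __name__ == "__main__":
    futures = sample_futures()
    for name, buildout in STRATEGIES.items():
        bad = vulnerable_futures(buildout, futures)
        print(f"{name}: fails in {len(bad) / len(futures):.0%} of sampled futures")
        if bad:
            # crude stand-in for scenario discovery: bound the failing region
            print("  every failure has demand growth >= "
                  f"{min(g for g, _ in bad):.2f} "
                  f"and climate stress >= {min(c for _, c in bad):.2f}")
```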
while all of these techniques share the premise that one of the major weaknesses of existing risk and decision theory is its reliance on probability models to represent possible futures, i can’t give up my addiction to Bayesian statistics. at least when i decide i need rehab, there seems to be a sound selection of medications to choose from.