Leaving aside the issue of Biblical prophecy, ‘Act of God’ events are notoriously difficult to forecast - almost by definition. In much of my corporate scenario planning work, we've explicitly excluded such events on the theory that what cannot be anticipated or acted upon is of little practical use in planning. But as the property and casualty insurance and reinsurance industries know only too well, there can be immense value in marginally better methods. In one of my more recent business continuity planning (BCP) engagements, we tempered that exclusionary view somewhat, looking at classes of failure modes (aka 'Acts of God') rather than individual events. That let us circumvent the paralysis of considering thousands of highly specific, individually unlikely events that were nonetheless highly probable in aggregate.
The difficulty of forecasting such events has made them fertile ground for experiments with market-based knowledge aggregation methods (e.g., prediction markets). Prediction markets have been used to forecast the weather as well as some kinds of natural disasters - NewsFutures, for example, has run markets for earthquakes and hurricanes. Their record during the 2004 hurricane season seemed reasonable, but it's not clear that they did much more than re-process news coming out of the National Weather Service.
During the 2003 Policy Analysis Market kerfuffle, there was much discussion about using them to anticipate terrorist activity – not exactly an act of God, but sufficiently surprising and impactful to inhabit the same class of forecasting problems.
What got me thinking further in this vein was watching two shows on the Discovery Channel the other night - both of them on floods. The Johnstown, Pennsylvania floods (one horrific one in the 19th century and two others in the 20th) were relatively sudden - transpiring over just a few hours, and doing most of their killing and property damage within minutes: not unlike the December 2004 tsunami, which I wrote about here and here last January. In Johnstown, post-mortem analyses made clear that, collectively, a few scattered individuals possessed enough pertinent information to have forecast the disasters - had they had a mechanism for aggregating their hunches. One commentator noted that floods in Johnstown tend to recur every 40 years or so - something he ascribed not to weather fluctuations but to inter-generational amnesia: the failure to study and incorporate lessons from recent history.
The epic North Dakota floods of April 1997 were different in pace if not in effect. Accurate prediction in that case was also hampered by ineffective data aggregation - data came in from a variety of sources (too few, in hindsight) - and by data interpretation (narrowly held and fatally myopic, in hindsight). But those floods transpired in slow motion - over the course of several months, with the most critical events happening over a few weeks. That's just the kind of pace that seems to suit prediction markets rather well: not so drawn out as to drive up opportunity costs for scarce capital during long periods of minimal trading activity, but not so sudden as to short-circuit the process of drawing in marginal knowledge from an array of unexpected sources. (I touched on this issue of implausibly fast markets in a short review of Marc Stiegler's Earthweb that I wrote back in January.)
I don't know of a prediction market for floods - aside from the indirect information thrown off by traditional commodity markets for regionally intensive crops in flood-prone areas. Given that the Northern Hemisphere is just now entering flood season, and that flooding is one of the most damaging types of natural disasters, this might be a timely topic for one of the public prediction market outfits to run with. Thoughts?