ON AUG. 10, 1996, a single power line in western Oregon brushed a tree and shorted out, triggering a massive cascade of power outages that spread across the western United States. Frantic engineers watched helplessly as the crisis unfolded, leaving nearly 10 million people without electricity. Even after power was restored, they were unable to explain adequately why it had happened, or how they could prevent a similar cascade from happening again - which it did, in the Northeast on Aug. 14, 2003.

Over the past year we have experienced something similar in the financial system: a dramatic and unpredictable cascade of events that has produced the economic equivalent of a global blackout. As governments struggle to fix the crisis, experts have weighed in on the causes of the meltdown, from excess leverage, to lax oversight, to the way executives are paid.

Although these explanations can help account for how individual banks, insurers, and so on got themselves into trouble, they gloss over a larger question: how these institutions collectively managed to put trillions of dollars at risk without being detected. Ultimately, therefore, they fail to address the all-important issue of what can be done to avoid a repeat disaster.

Answering these questions properly requires us to grapple with what is called "systemic risk." Much like the power grid, the financial system is a series of complex, interlocking contingencies. And in such a system, the biggest risk of all - that the system as a whole might fail - is not related in any simple way to the risk profiles of its individual parts. Like a downed tree, the failure of one part of the system can trigger an unpredictable cascade that can propagate throughout the entire system.

Traditionally, banks and other financial institutions have succeeded by managing risk, not avoiding it. But as the world has become increasingly connected, their task has become exponentially more difficult. To see why, it's helpful to think about power grids again: engineers can reliably assess the risk that any single power line or generator will fail under a given set of conditions; but once a cascade starts, it's difficult to know what those conditions will be - because they can change suddenly and dramatically depending on what else happens in the system. Correspondingly, in financial systems, risk managers are able to assess their own institutions' exposure, but only on the assumption that the rest of the world obeys certain conditions. In a crisis, it is precisely these conditions that change in unpredictable ways.

No one, for example, anticipated that an investment bank as old and prestigious as Lehman Brothers could collapse as suddenly as it did, so nobody had that contingency built into their risk models. And once it did fail, just as the failure of a single power line increases the stress on other parts of the system, leading to further "knock-on" failures, so too did Lehman's unlikely collapse render other previously unlikely failures suddenly much more likely.

Risk managers have started to pay more attention to systemic risk of late, but unfortunately they haven't made nearly enough progress. A 2006 report co-sponsored by the Federal Reserve Bank of New York and the National Academy of Sciences concluded that even defining systemic risk was beyond the scope of any existing economic theory. Actually managing such a thing would be harder still, if only because the number of contingencies that a systemic risk model must anticipate grows exponentially with the connectivity of the system.

So if the complexity of our financial systems exceeds that of even the most sophisticated risk models, how can government regulators hope to manage the problem?

Rather than waiting until the next cascade is imminent, and then following the usual modus operandi of propping up the handful of firms that seem to pose the greatest threat, it may be time for a new approach: preventing the system from becoming overly complex in the first place.

To understand why such preventive measures might be useful, it helps to take a step back and notice a general trend toward building ever larger and more complex networks. In recent years, hundreds of millions of people have rushed to join online social networks, while billions more rely on e-mail and cellphones to stay connected to friends and coworkers all day, every day. Technologists wax lyrical about "Metcalfe's Law," which posits that a network's "value" increases in proportion to the square of the number of people or devices in it. And system designers revel in the ability of networks to improve a system's overall efficiency by dynamically distributing computer-processing load, power generation, or financial risk, as the case may be.

In all the excitement, however, we tend to overlook a fact that should be obvious - that once everything is connected, problems can spread as easily as solutions, sometimes more so. Thanks to globally connected transportation systems, epidemics of disease like SARS, avian influenza, and swine flu can spread farther and faster than ever before. Thanks to the Internet, e-mail viruses, nasty rumors, and embarrassing truths can spread to colleagues, loved ones, or even around the world before remedial action can be taken to stop them. And thanks to globally connected financial markets, a drop in real-estate prices in California can hurt the retirement benefits of civil servants in the UK.

It may be true, in fact, that complex networks such as financial systems face an inescapable trade-off - between size and efficiency on one hand, and global stability on the other. Once they have been assembled, in other words, globally interconnected and integrated financial networks just may be too complex to prevent crises like the current one from recurring.
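The knock-on dynamic described above - one failure shedding stress onto its neighbors until they, too, give way - can be sketched as a toy simulation. The model below is purely illustrative, not a real grid or market: a ring of identical nodes with made-up loads and capacities, where a failed node's load is dumped equally onto its surviving neighbors.

```python
# Toy cascade model: each node carries a load; when a node fails, its
# load is shed equally onto its surviving neighbors, which may push
# them past their own capacity and fail in turn ("knock-on" failures).
# All parameters are invented for illustration.

import random

def simulate_cascade(n_nodes, n_neighbors, load, capacity, seed=0):
    """Fail one random node, propagate load-shedding, return total failures."""
    rng = random.Random(seed)
    # ring lattice: each node linked to n_neighbors on each side
    neighbors = {
        i: [(i + d) % n_nodes
            for d in range(-n_neighbors, n_neighbors + 1) if d != 0]
        for i in range(n_nodes)
    }
    loads = {i: load for i in range(n_nodes)}
    failed = set()
    frontier = [rng.randrange(n_nodes)]  # the initial "downed tree"
    while frontier:
        node = frontier.pop()
        if node in failed:
            continue
        failed.add(node)
        alive = [m for m in neighbors[node] if m not in failed]
        if not alive:
            continue
        share = loads[node] / len(alive)  # shed load onto live neighbors
        for m in alive:
            loads[m] += share
            if loads[m] > capacity:       # neighbor overloaded: it fails too
                frontier.append(m)
    return len(failed)
```

The point of the sketch is the threshold behavior: with generous headroom (say, `capacity=3.0` against a load of 1.0), the initial failure stays local; with capacity barely above load (`capacity=1.2`), the very same initial failure sweeps the entire ring. The system-wide outcome depends on conditions across the whole network, not on the riskiness of any single node - which is why assessing each component in isolation says so little about systemic risk.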
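The claim that the number of contingencies grows exponentially with connectivity can be made concrete with one simple (and deliberately crude) counting argument: if each of n interdependent institutions can be either sound or failed, a model that conditions on the state of the rest of the system faces 2^n joint scenarios.

```python
# Illustrative counting argument only: the joint sound/failed scenarios
# a systemic risk model would face for n interdependent institutions.

def n_scenarios(n_institutions: int) -> int:
    # each institution is either sound or failed -> 2**n combinations
    return 2 ** n_institutions

print(n_scenarios(10))  # 1024 scenarios for just 10 institutions
print(n_scenarios(50))  # over a quadrillion for 50
```

Ten institutions already yield more than a thousand joint scenarios; fifty yield over a quadrillion - far beyond what any risk model can enumerate, let alone estimate.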