(I haven't read the Gordon or Krugman articles yet, but coincidentally, links to them showed up in my RSS reader on Slashdot just a few minutes ago and I printed both of them out.)

I've been thinking about this in another context on and off for a long time (years), and the recent article here on The Great Filter and the Fermi Paradox

http://io9.com/5970501/the-great-filter-theory-suggests-humans-have-already-conquered-the-threat-of-extinction

set me off again. I wonder if my professional interests in technological scalability and my dilettante interests in economics and the Great Silence might be all connected.

In my professional work, particularly with large distributed systems and supercomputing, I frequently see issues with scalability. Often it becomes difficult to scale up performance with problem size. Cloud providers like Google and Amazon.com have addressed many problems that we thought were intractable in the past, as has the application of massively parallel processing to many traditional supercomputer applications. But the ugly truth is that cloud/MPP really only solves problems that are "embarrassingly parallel", that is, that naturally break up into many mostly independent parts.
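To make "embarrassingly parallel" concrete, here's a minimal Python sketch (the squaring function and worker count are just placeholders): because each task depends only on its own input, the work splits cleanly across processes with no coordination, which is exactly the shape of problem that cloud/MPP handles well.

```python
from multiprocessing import Pool

def square(x):
    # each task depends only on its own input - no shared state,
    # no communication between workers
    return x * x

if __name__ == "__main__":
    # the map splits cleanly across worker processes precisely
    # because the tasks are independent of one another
    with Pool(4) as pool:
        print(pool.map(square, range(8)))  # prints [0, 1, 4, 9, 16, 25, 36, 49]
```

Problems that are *not* embarrassingly parallel - say, a simulation where every element interacts with every other element at each time step - don't decompose this way, and that's where scaling gets hard.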

(I've written at length about this problem in

http://coverclock.blogspot.com/2009/11/post-modern-deck-construction.html

which likely falls under the tl;dr category.)

Many problems will remain intractable because they are NP-hard. (NP technically stands for "nondeterministic polynomial time", not "non-polynomial", but the practical upshot is the same: for the hardest problems in the class, the only known exact algorithms scale exponentially with problem size.) There are lots of problems like this. Lucky for all of us, encrypting a message takes only polynomial time, while cryptographic code breaking appears (so far) to require exponential time. True, codes become easier to break as processing power increases, but adding a few more bits to the key increases the work necessary to crack them exponentially.
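Here's a toy back-of-the-envelope sketch of that asymmetry, assuming a brute-force attacker who must try every possible key (real cryptanalysis is more subtle, but the exponential shape is the point):

```python
def brute_force_work(key_bits):
    # worst case, the attacker must try every possible key,
    # so the work grows as 2^k in the key length k
    return 2 ** key_bits

# each added bit of key doubles the attacker's work
assert brute_force_work(129) == 2 * brute_force_work(128)

# going from a 56-bit key (old DES) to a 128-bit key (AES)
# multiplies the brute-force work by a factor of 2^72
print(brute_force_work(128) // brute_force_work(56))  # prints 2**72
```

Meanwhile the legitimate user's cost of encrypting grows only modestly with key length, which is why the defender wins this particular arms race - so far.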

I've been thinking about what problems in economics are in fact NP-hard. For example, it could be that the strategies necessary to manage an economy more or less optimally are fundamentally intractable. This is one of the reasons that pro-free-market people give for free markets: market forces encourage people to "do the right thing" independent of any central management. It really is a kind of crowd-sourced economic management system.
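For flavor, here's a toy example of the kind of intractability I have in mind (the numbers below are made up): allocating a fixed budget across projects to maximize total value is the knapsack problem, which is NP-hard, and the obvious exact algorithm has to examine all 2^n subsets of projects.

```python
from itertools import combinations

def best_allocation(values, costs, budget):
    # brute force: examine every subset of projects - 2^n of them -
    # and keep the best one that fits within the budget
    n = len(values)
    best = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            cost = sum(costs[i] for i in subset)
            if cost <= budget:
                best = max(best, sum(values[i] for i in subset))
    return best

# three projects with (value, cost) of (6,1), (10,2), (12,3) and budget 5:
# the best choice is the last two projects, for a total value of 22
print(best_allocation([6, 10, 12], [1, 2, 3], 5))  # prints 22
```

Clever algorithms and approximations help, but with thousands of interdependent actors instead of three projects, exact central optimization blows up - which is roughly the argument for letting the market crowd-source the search.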

But suppose that there's a limit - in both a computational and a time sense - to how well an economy can work as a function of the number of actors (people, companies) in the economy relative to its resources. Maybe there's some fundamental threshold beyond which, if a civilization hasn't yet achieved interstellar travel, it becomes impossible for it ever to do so. It's like the story of the Pacific islanders who got stuck on their island when they cut down the last tree: no more big ocean-going canoes.

(Libertarian economist Tyler Cowen has argued in his book THE GREAT STAGNATION that the U.S. has historically taken advantage of "low-hanging fruit", like cheap energy and education, to grow its economy, and that those days may be over.)

Henry Kissinger once famously said "Every civilization that has ever existed has ultimately collapsed." I wonder if this is the result of fundamental non-scalable economic principles, and is in part the explanation for the Fermi Paradox.