Posted 21 December 2002 - 01:47 AM

OmniDo wrote:

In my previous post, my intent was to outline that the creation of something greater than the sum of its parts is a logical contradiction and warrants being called "nearly impossible." Perhaps "objective impossibility" is too strong a term, but nonetheless the argument is logically valid. Let me attempt to explain...

On the contrary, a computer cannot do that which we cannot; it can merely do what we can at a greater speed. Speed is the issue here.

While indeed humans fashioned a computer that defeated the world's greatest ... didn't "beat" him at all. The computer merely "out-thought" him in terms of speed.

There is no evidence that any machine could ever acquire "superior-than-human intelligence", insofar as a human being could endow themselves with the same degree of intelligence as a machine. Granted, it would take far longer, but if we are talking about efficiency and "superiority", then the human has the machine beat, hands down.

Stop for a moment and think about how much energy it takes to sustain a human, indeed to cultivate and educate one, versus a machine. Machines at present require orders of magnitude more power to operate than a human mind, but they also yield orders of magnitude more calculations.

Once humans begin to augment themselves with artificial systems, organic bio-chips or whatever sci-fi equivalent, we too will become faster than our former selves, with increased abilities and capacities. Still, all we will have done is make ourselves faster, not "Superior".

Perhaps "superior" needs to be better defined. If superior equates to speed, then yes, it is possible to build superior intelligence. But judging from the attitudes of many posts, the perceived intent has been that "superior" means "beyond human cognition" or "beyond human capacity for understanding", which is, in my opinion, totally bunk.

As Mind and Mangala have pointed out above, humanity has already created many devices which extend and amplify our preexisting methods of information gathering, storage, retrieval, analysis, and so on. Evolution has consistently produced better biological designs and features throughout history, slowly adding layers of specialized cognitive and anatomical machinery to produce the wealth and diversity of life we see today; all this, when at the beginning our solar system was nothing more than an interstellar dust cloud. Are you sure you don't mean something else here?

How about future computers designed to integrate organically with the human brain? How about cerebral implants with information-processing structures as complex and effective as neural tissue, allowing human beings to intuitively understand a wider range of patterns, retain more memory, immediately notice salient details in a huge project or society, quickly power their way through long chains of sophisticated reasoning, etc.? It can be tempting to hastily classify human intelligence as having crossed some sacred threshold which will forever keep us on par with even the greatest of future superintelligences, but that temptation exists because humans don't focus on or demand superintelligently-challenging tasks of each other or themselves. We can look at a really smart human and say "hey, she's impressive", but that's because the adaptations responsible for our subjective psychological reaction are specifically tuned to living in a human society. All of human society is built around humanity's characteristic level of intelligence, and it's such a self-reinforcing memeplex that it's hard to see the profound qualities that all humans lack.
The difference in brilliance between an idiot and Einstein can be isolated to microscopic neurological and genetic differences, and investigating the causes that produce these differences and attempting to exploit them will likely create a fascinating new branch of research just prior to the Singularity. The components that make up our "intelligence" are just a jumble of evolved, content-rich neurological adaptations with a dose of self-rewiring plasticity thrown in (also an adaptation), and there's nothing preventing us from eventually adding in additional functionality to create individuals who think qualitatively better than even the most intelligent humans. Such transhuman individuals could potentially walk into a broad range of laboratories or research institutions, point out the obvious, leave, and repeat indefinitely, contributing more to technological progress than a thousand humans ever could. An expert in a field (whose "expertness" can be narrowed down to a few differences in the interneuronal connection map) can solve a problem that a thousand newbies could never solve, and a transhuman expert could think thoughts entirely outside the human sphere of experience and solve a much wider range of more complex problems immediately, in the same sense that Homo sapiens thinks thoughts outside Homo habilis's sphere of experience. The difference is not simply in speed, but in qualitatively better observation skills, analysis, innovation, creativity - whatever the human skill, there are neural processes responsible for it, and these processes will be analyzed, enhanced, and run as computer code, opening the door for further cycles of enhancement.

True; since chess is a game that emerged specifically in human culture, and doing well in it requires the broad range of content-rich human cognitive and perceptive mechanisms, humans still excel at chess over software programs. But Deep Blue and friends aren't AIs, just glorified search trees.
The algorithmic complexity of the human brain far exceeds that of these chess-playing programs, so that should be factored into our metric of superiority as well. Projecting and analyzing combinatorially explosive games requires lots of specialized cognitive machinery, and we don't yet know quite enough about ours to build a machine that thinks at the same smartness level as we do. But when we do, the impact will be quite huge...

What's our definition of "intelligence", anyway? It truly does have something to do with speed - responding quickly to a wider range of threats is one of the main reasons that intelligence evolved in the first place. You can point at a human's brain, looking very closely at all the machinery, and say: "Why is this so impressive? This organism simply uses the same biochemical and physical laws as a common slug; it's just using a lot in one place." The human brain is based on fundamental evolutionary principles that originally evolved many millions of years ago, but the continued layering of heuristics and plasticity eventually produced beings that can ponder the universe in a qualitatively different way than software programs or chimps. This isn't a popularity contest between software programs and human beings, but a matter of fundamental facts about the way minds work.

But right now their calculations are very simple and of limited use. This is mostly because computer programming is an activity completely outside the usual activities that humans are ancestrally familiar with and specialize in, so we aren't very good at programming anything besides relatively simple code structures. (Especially relative to a hypothetical human with evolved modules specialized for programming computers.) Evolution, having a load of time to work with, has done better so far.
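For what it's worth, the "glorified search tree" remark can be made concrete. Below is a minimal sketch, in Python, of the minimax procedure at the heart of chess programs like Deep Blue; the toy game tree and its evaluation scores are entirely made up for illustration, and real programs layer alpha-beta pruning, opening books, and hand-tuned evaluation functions on top of this skeleton.

```python
# Minimal minimax sketch. A leaf is a number (a static evaluation of a
# position); an internal node is a list of child positions. The player
# to move alternates between maximizing and minimizing the score.

def minimax(node, maximizing):
    """Return the best score guaranteed from this node with optimal play."""
    if isinstance(node, (int, float)):   # leaf: static position evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A tiny two-ply tree: our move, then the opponent's best reply.
tree = [
    [3, 5],   # after move A, the opponent can hold us to 3
    [2, 9],   # after move B, the opponent can hold us to 2
    [4, 6],   # after move C, the opponent can hold us to 4
]
print(minimax(tree, True))   # prints 4: move C is the safest choice
```

The "combinatorially explosive" part is simply that in real chess this tree has an effective branching factor of roughly 30-40 moves per position, which is why these programs win by raw search speed rather than by anything resembling human understanding.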
But instead of comparing present-day computers to present-day humans, why can't we compare humans in general (whose complexity and intelligence are effectively static and don't improve much within the life of the individual) with a serious artificial intelligence or upload (with complete self-understanding and self-access, the ability to make arbitrary mental revisions and improvements, instantaneous access to all the information on the Internet, automated cognitive processes for compiling and supercompiling, the ability to split ver consciousness into multiple processing streams, the ability to copy verself at will, freedom from distraction or rationalization, and so on, and so on, and so on...)? If you define intelligence as "the capability to solve complex problems in a complex environment", then these future entities will far outscore humans in their ability to handle societal, emotional, cognitive, and environmental complexity, among others.

So you're saying that humans, a random consequence of a blind design process, will always be as intelligent as anything carefully engineered and optimized, or designed by better blind processes, or adapted to a wider range of thinking styles, or endowed with augmentive thinking processes, or any other massive insights that anyone might come up with? I don't buy it. If you were plopped into the middle of a transhuman society as a new social agent, you could easily feel very incompetent at every task they considered important, being completely incapable of understanding or appreciating their art, science, culture, or whatever analogous pursuits these transhumans engage in. If they experienced distaste towards you because of this, and their way of expressing it was something recognizable to you, then you might attract a lot of social ridicule living in such a community.
(Although it's unlikely that real transhumans would operate socially the same way we do, with ridicule of less competent individuals or other such behavior that screams "evolved!")

Could you understand the aesthetic meaning of a 50-dimensional alien art exhibit, intuit the cumulative behavioral patterns of an animal with quadrillions of moving parts, or participate suavely in a "party" with augmented humans who are enhanced such that they can execute and comprehend a whole new range of body language and facial expression previously unavailable to baseline humans?