What are these?

We all have basic assumptions about the world, human nature, and the relationship between the two. We are taught certain perspectives as children, and this received wisdom forms the common ground for communication. Ultimately, when we see the whole picture, our major disagreements are squabbles over details. Should gays be allowed to marry? We assume here a common understanding of what “marriage” means. Should we raise or lower taxes? We assume the legitimacy of government, and of taxes at all!

What happens when the disagreement occurs at an even more basic level? Like, whether or not our civilization is even a good thing?

The case is complex, but in truth no more complex than our “common ground” of unexamined, received wisdom. In many cases, it is much less complex. But it is different. Since forming these ideas, I have faced an increasing obstacle in communication. Unspoken, differing assumptions force me routinely to return to the same arguments again and again. So I resolved some time ago to crystallize my philosophy into a single, comprehensive work, which could form a base for further communication.

There have been several failed attempts at this, the most recent being “The Anthropik Canon.” The Thirty Theses recycles much of my previous work, but extends and elaborates on all of it, as well. This is my latest attempt to develop a comprehensive treatment of my core philosophy, reduced to thirty pronouncements which I individually defend.

You are also watching the writing of an “open source” book in real time. These entries will become the rough drafts of a final book version that will be published by the Tribe of Anthropik and distributed online, including through this website. Your comments, criticisms and questions about these entries will be addressed and incorporated into the final work.

Jason Godesky

Technoshaman, Tribe of Anthropik

28 July 2005

Thesis #1: Diversity is the primary good.

Humans are social animals, and also capable of abstract, independent thought. The combination requires some form of social standards. Bees think with a single hive mind, and solitary animals do not encounter one another often enough to require a rigid system of morality and ethics. Without social norms, however, human society would break down. We have evolved in such societies, and require other humans to live. A single human, on his own, has little chance of survival.

Some rules are nearly universal, such as the injunction against murder. Society cannot long endure if everyone is murdering one another. Other taboos are less common; the taboo against theft, for example, is generally found only in those societies where resources are limited in some regard. Rules of morality and ethics vary widely from culture to culture, adapted to given circumstances. Our ethics and morality are another means we have of adapting to new and different environments.

Basic rules of behavior are required for our survival, and conscience is an adaptation we have evolved to continue our existence. Such a conscience must at once be deeply felt, and culturally constructed. It must be adapted to those rules, taboos, and guidelines a given society requires in a given place and time, but be too deeply felt to be ignored. The human brain is incredibly malleable, made to be adapted to the cultural context it finds itself in. Enculturation is a powerful process which should never be underestimated. What you learn as a child can never be completely shaken; it becomes an inextricable part of who you are, as intrinsic to your being as your DNA.

As necessary as ethics may be, that does not make them correct. Nor does the depth of our conviction. I, like most Westerners, feel a very strong revulsion at the thought of pedophilia, for example. Yet, in the cultural context of the Etoro, the Marind-anim, and 10–20% of all Melanesian tribes, it is the only acceptable form of sex. While I cringe at the thought, I have no argument that it is “wrong” beyond my gut feeling of disgust — a result of my enculturation. As much as I prefer monogamous, heterosexual relationships, it was monogamous heterosexuals who committed the Holocaust. There is no similar act in Melanesian history.

The arbitrary nature of such ethical rules led many of our early ancestors to posit the final authority for such decrees with divine will. This is good and that is not because the gods said so, end of story. This made things nice, neat and easy. In the early days of polytheism, this worked nicely. Worshippers of Apollo and Ra alike could live in peace with one another. Most polytheists were willing to accept the gods of another as equally real as their own pantheon. Religious wars and intolerance were quite uncommon; after all, what’s one more god? Early religion was inextricably bound to politics, and so ancient states would enforce worship of the state gods — often including the emperor or king — alongside one’s own gods. Usually, this was not a problem; again, what’s one more god? Even monolatry — the worship of a single god, amidst the acknowledgement of many — was not much of a problem. Ra is my god and Apollo is yours, but we’re both worshipping the sun. I worship the ocean, and you worship the harvest, but both are equally real.

It was the emergence of monotheism that first posed a serious challenge. If only one god exists, then all other gods are false. If this is also combined with a charitable disposition towards the rest of mankind, crusades, missionaries, and other attempts to save the heathens from their error ensue. In a world where morality is determined by the will of the gods, such a conflict comes to a head.

If morality follows from divine will, are there no ethics for atheists? And what of the heathens? Yet these individuals still have pangs of conscience as acute as their monotheistic cousins’, and sometimes more so. This led many philosophers to seek some other basis for ethics besides divine will. Such philosophies generally come in one of three types.

The first harks back to the old days of the divine will; deontological ethics focuses on duties we are required to either fulfill or refrain from. The seminal figure of this school is Immanuel Kant, who formulated the categorical imperative. Kant argued that an act is ethical if it could be done by everyone without breaking down society. This was later refined by Sir David Ross with his prima facie values — things that simply are good without question. Individual acts can then be judged by how well they comply with those values. The past fifty years have seen the re-emergence of “virtues,” as found in ancient philosophy. The four Stoic virtues of temperance, fortitude, justice and prudence work in a manner similar to Ross’s values — acts may be judged by how well they cling to these virtues.

Both of these systems share the same flaw as the ancient systems of ethics; they cannot exist apart from divine revelation. Even if there is such a god handing down such ethical systems, how can we ever be sure which of us has the “true” revelation? Every culture has different values, virtues, morals and ethics. Each believes that its way is the right way. Simply reiterating that position is not sufficient, and all claims to the superiority of one’s own scripture require one to first accept the superiority of one’s own scripture.

Unlike the foregoing systems, however, consequentialist ethics like John Stuart Mill’s theory of Utilitarianism do the best job of creating an ethical system independent of divine powers. Utilitarianism tries to maximize the utility — roughly, the “happiness” — of all parties involved. An action is “right” insofar as it makes everyone more satisfied, more happy, than they were before. This is not simple hedonism, as the welfare of all must be considered — your family, your friends, your society. Sitting at home tripping on acid is not an ethical action in Utilitarianism, for as much as it may raise your own utility, it carries with it a slight negative impact on everyone in the form of your support for a global network of drug dealers and smugglers connected to various forms of crime, oppression and terrorism.

Utilitarianism is often disparaged in philosophical circles with counter-examples such as the following. Take a thousand people, and some magical means of measuring utility numerically. One of them is extremely annoying. Killing him would drop his own utility from its current “100” to zero, while raising everyone else’s from “100” to “101.” The overall effect on utility would be 999 − 100 = +899. Ergo, killing annoying people is a very good thing!
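The toy arithmetic above can be made explicit in a short sketch. The numbers are those of the thought experiment itself, not any real measure of utility:

```python
# Toy model of the classic anti-utilitarian counter-example:
# 1,000 people, each starting at utility 100.
population = [100] * 1000

before = sum(population)

# Kill the one "annoying" person (his utility drops to 0)...
population[0] = 0
# ...and everyone else becomes slightly happier (+1 each).
population[1:] = [u + 1 for u in population[1:]]

after = sum(population)

# Net change: 999 people gain 1, one person loses 100.
print(after - before)  # → 899
```

By the naive summing of utility, total happiness rises, which is precisely the problem the counter-example means to expose.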

Obviously, Utilitarianism needs some other goal than mere “happiness,” but what? Once again, we run up against the wall of needing to decipher the divine will. Everyone has their own ideas, beliefs, dogmas and scriptures. How can we possibly know what the gods desire of us?

Perhaps one good start is to stop poring over the texts they supposedly inspired, and instead look to the only thing we know for certain came from them (if they exist at all): the world around us. It turns out the universe has been screaming a single, consistent value at us from the beginning of time.

From a single, undifferentiated point of energy, the universe unfolded into hundreds of elements, millions of compounds, swirling galaxies and complexity beyond human comprehension. The universe has not simply become more complex; complexity is a side-effect of its drive towards greater diversity.

So, too, with evolution. We often speak of evolution couched in terms of progress and increasing complexity. There is, however, a baseline of simplicity. From there, diversity moves in all directions. If evolution favored complexity as such, then all life would be multi-celled organisms of far greater complexity than our own. Instead, most organisms are one-celled, simple bacteria — yet staggeringly diverse. As organisms become more complex, they become less common. The graph is not a line moving upwards — it is a point expanding in all directions save one, where it is confined to a baseline of simplicity. From our perspective, we can mistake it for “progress” towards some complex goal, but this is an illusion. Evolution is about diversity.

Physics and biology speak in unison on this point; if there are gods, then the one thing they have always, consistently created is diversity. No two galaxies quite alike; no two stars in those galaxies quite alike; no two worlds orbiting those stars quite alike; no two species on those worlds quite alike; no two individuals in those species quite alike; no two cells in those individuals quite alike; no two molecules in those cells quite alike; no two atoms in those molecules quite alike. That is the pre-eminent truth of our world. That is the one bit of divine will that cannot be argued, because it is not mediated by any human author. It is all around us, etched in every living thing, every atom of our universe. The primacy of diversity is undeniable.

With that, we can suppose another form of consequentialist ethics, like Mill’s Utilitarianism, but with a different measure of “good.” It is not happiness, but diversity that should be our measure. Diversity of life, of thought, of action.

So, killing the annoying person becomes “bad”; as annoying as he is, he adds diversity to the group. Nor does this give license to everything under the cause of increasing diversity. Our own civilization is a unique data point, but its existence requires the expansion of its markets and influence. It gobbles up other cultures to create new customers. Though it is itself another point of diversity, it requires many other points to be sacrificed. Its overall effect, like sitting at home on acid, is profoundly negative.

Thesis #2: Evolution is the result of diversity.

The concept of progress is actually rather new. Most prehistoric and ancient peoples saw history as a constantly repeating cycle, incompatible with any notion of advancement or degradation. The first conceptions of linear time are found only in the historical era. Confucius, the Greeks and the Jews all believed that the world was, in fact, becoming worse. In this, they did conceive of history as linear, but as the opposite of progress. The Greeks held that the first, “Golden Age” had been the best era, with each succeeding age diminished from its predecessor’s glory. In Judaism, the “Fall of Man” in Genesis paints humanity in a fallen, exiled state. Later Jewish prophets outlined a messianic and eschatological timeline which extended this into an on-going societal free-fall that would end only by divine intervention with the Messianic Age. This final hope of the Messianic Age sowed the first seeds of the idea of progress.

In many ways, we can thank Christianity for the concept. In reconciling their belief in Jesus as the messiah with the very obviously unfulfilled predictions of the Eschaton and the Messianic Age, Christians began to develop a more progressive concept of history. Their Christology immediately separates history into “before Christ” and “after Christ.” They mark the passage of years as Anno Domini, the “Year of Our Lord.” Since the New Covenant is, in the Christian mind, self-evidently superior to the Old — as Paul argues in his Letter to the Galatians — we already have fitted all of history into a broad sweep of progress. The condition of mankind was improved by the life of Christ. History has progressed.

The concept proved adaptable to changing memetic environments. The Enlightenment was a response to the superstitious worldview that preceded it, and like so many philosophical responses, was prone to attempts to counter-balance its opponents by going equally far in the opposite direction. The Enlightenment defined humanity as unique for its faculty of Reason, and celebrated that Reason as the seat of mankind’s “redemption” from its state of ignorance and savagery. The Enlightenment promised an optimistic future, where humanity triumphed over every obstacle in its way thanks to the unstoppable power of Reason. As E.O. Wilson described it in Consilience:

Inevitable progress is an idea that has survived Condorcet and the Enlightenment. It has exerted, at different times and variously for good and evil, a powerful influence to the present day. In the final chapter of the Sketch [for a Historical Picture of the Progress of the Human Mind], “The Tenth Stage: The Future Progress of the Human Mind,” Condorcet becomes giddily optimistic about its prospect. He assures the reader that the glorious process is underway: All will be well. His vision for human progress makes little concession to the stubbornly negative qualities of human nature. When all humanity has attained a higher level of civilization, we are told, nations will be equal, and within each nation citizens will also be equal. Science will flourish and lead the way. Art will be freed to grow in power and beauty. Crime, poverty, racism and sexual discrimination will decline. The human lifespan, through scientifically based medicine, will lengthen indefinitely.

Though the Enlightenment placed its faith in Science, rather than in deities, this belief in progress remains no less a leap of faith for it. The idea of progress — particularly of humanity’s constant self-improvement through the application of Reason — became as fundamental a belief for the secular humanists as the redeeming power of Christ was for the Christians who preceded them. The beliefs fulfilled similar needs, as well, by promising similar outcomes — even if brought about by entirely different processes. Both comforted their believers with the promise that the current misery was only temporary, and that a new, better day was waiting on the horizon for those who soldiered on.

Little wonder, then, that when Darwin challenged the conceit of our species’ superiority by suggesting we were mere animals, those that did not reject the evidence entirely instead comforted themselves with the myth of progress. In the popular mind, the word “evolution” became nearly a synonym for “progress,” the process by which species “improve” themselves. In fact, evolution has nothing to do with “progress” at all.

Evolution, technically defined, is merely a change in allele frequency in a population over time. In one generation, 15% have a given gene; in the next, it is only 14.8%. Iterated over generations, this may lead to the complete extinction of the allele. The idea of evolution predates Darwin, as such change is immediately observable and undeniable. Darwin made two contributions: the first was identifying a mechanism for evolution in the process of natural selection; the second was his contention that such evolution satisfactorily explains the origin of species.
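That technical definition can be illustrated with a minimal simulation. The 15% starting frequency echoes the example above; the population size and selection parameter are arbitrary assumptions for illustration:

```python
import random

random.seed(42)

def next_generation(freq, pop_size=10_000, fitness_advantage=0.0):
    """Sample one generation of a two-allele population.

    freq: current frequency of allele A.
    fitness_advantage: relative reproductive edge of A's carriers
    (0.0 means pure genetic drift, with no selection at all).
    """
    # Weight the chance of passing on allele A by its fitness...
    weighted = freq * (1 + fitness_advantage)
    p = weighted / (weighted + (1 - freq))  # ...and renormalize.
    # Each member of the next generation draws an allele at random.
    carriers = sum(random.random() < p for _ in range(pop_size))
    return carriers / pop_size

# Start at 15%, as in the text, and drift for a few generations.
freq = 0.15
for gen in range(5):
    freq = next_generation(freq)
    # Each generation's frequency wobbles around the last.
    # Evolution, in the technical sense, is exactly this change.
```

With `fitness_advantage` left at zero, the frequency simply drifts; give the allele even a small positive advantage and, iterated over enough generations, it sweeps the population — which is all natural selection claims.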

Since the Neolithic, herders have practiced artificial selection with their livestock. If a given cow produces more milk than the others, or is more docile and easy to control, then you simply give that cow more time with the bulls, so that she will have more children. The next generation of the herd will have more docile cows that produce more milk. The herder has artificially selected for traits he desires. Over enough generations, this could lead to the entire herd being docile and producing more milk.

Darwin’s concept of natural selection merely suggests that this can also happen without the conscious guidance of a herder. A giraffe with a slightly longer neck may be able to reach foliage in trees more easily. He will be better and more easily fed, giving him more time to dally with the ladies and conceive young, who are also more likely to have slightly longer necks. Over enough generations, this could easily explain the modern state of the giraffe, the same as artificial selection sufficiently explains the state of the modern cow herd. The difference being, no single entity was consciously guiding the giraffes to that end.

The seeds of these thoughts were planted during Darwin’s time aboard the Beagle. During this voyage, he visited the Galapagos Islands, and noted both the similarities and differences between birds on those islands and birds on the mainland. The similarities suggested they had once been a single species; the differences were adaptations to the Galapagos’ unique ecology. Darwin allowed the implications of his natural selection to play out. If two populations of a given species are separated, each will continue changing with each generation, but now separated, their changes will diverge. Over sufficient generations, the two groups will become too divergent to interbreed any longer. Two new species will have formed.

In its truest essence, then, evolution is nearly irrefutable. “Survival of the fittest” is a fair shorthand, if we understand “fittest” to refer to the ability to produce young, and restrict the measure to a given locale. In this case, it becomes a tautology: if a creature possesses some trait that makes it more likely to have young, then it is more likely to have young. The controversy comes from the implication of this statement. If true (and how could it not be?), then all the diversity of life can be accounted for in a natural fashion. Gods can still be invoked if one insists — evolution could be seen as G-d’s paintbrush, or Genesis as a poetic account of evolution, as all but the most hardline, fundamentalist Christians believe — but they are not necessary. The existence of life itself is no longer a proof for the existence of G-d.

Evolution, then, is simply a consequence of diversity. All organisms are subject to “dumb luck,” and untold heritages of the world were pre-emptively snuffed out by rocks falling at the most inopportune moments. Yet the diversity of populations of organisms played with the probability of that dumb luck. Falling stones did not kill the swift and the slow in equal measure. Trees with flame-retardant seeds inherited the earth after enough forest fires had gone through. Evolution happens, as the inevitable consequence of a diverse world. As Dawkins abstracted it in The Selfish Gene, the diversity of possible chemical reactions meant that, eventually, a reaction would occur that reproduced itself. Such a reaction would have a higher probability of occurring again, as it was no longer relying on pure chance to do so. Anything that reproduces itself — even ideas — is subject to natural selection and evolution.

What, then, is the “goal” of evolution, if we can speak of such a thing? The marriage of evolution and progress has left many with the notion that evolution is driving towards some endpoint, that we are progressing ever closer to some perfect state. Usually, this is formulated as evolution’s drive towards greater complexity. Such a “drive” towards complexity, however, is ultimately a mirage, an illusion created by the unique myopia of our scale.

There is a certain baseline of simplicity for all things. No atom can be simpler than hydrogen, for example. There is a baseline for DNA where, if it were any simpler, it would not be able to reproduce itself, and thus would no longer be DNA. There is a baseline, somewhere around the complexity of the virus — whether above or below is a matter of some debate — where any more simplicity would yield something no longer alive. From this baseline, there is nowhere to go but up. Diversity spreads out in all possible directions. There is infinite diversity in the space that is equally simple, hugging close to the baseline. Diversity also moves upward, towards greater complexity. If we were to graph such dispersion, it would not look like an arrow shooting up into the stratosphere of complexity; it would be a hemisphere against a solid floor, with its radius constantly growing.
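The “hemisphere against a floor” picture can be sketched as a toy random walk in complexity, with a hard floor at the baseline. Every number here is an arbitrary illustration, not a measurement:

```python
import random

random.seed(0)

BASELINE = 0  # minimal viable complexity; nothing survives below it

def evolve_lineage(steps=1000):
    """Random walk in 'complexity': each step is equally likely to
    drift up or down, but is reflected at the baseline of simplicity."""
    c = BASELINE
    for _ in range(steps):
        c += random.choice([-1, 1])
        c = max(c, BASELINE)  # the floor: can't be simpler than viable
    return c

# Let many independent lineages wander for the same number of steps.
lineages = [evolve_lineage() for _ in range(2000)]

# Most lineages end up near the simple baseline; ever fewer are found
# at each higher level of complexity -- a hemisphere, not an arrow.
near_floor = sum(c <= 20 for c in lineages)
far_up = sum(c > 60 for c in lineages)
print(near_floor > far_up)  # → True
```

Even with no downward bias at all, the floor alone produces the observed distribution: a crowd of simple forms thinning out towards the complex, with no “drive” upward required.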

The evidence for this view is clear and intuitive. If evolution drives ever greater complexity, rather than simply diversity, why then is the vast majority of life on earth single celled? Instead, this distribution of life — with almost all of it existing at lower orders of complexity, and the numbers of species diminishing as we climb into greater levels of complexity — is exactly the hemisphere of diversity. Nowhere do we see the straight line of “progress,” unless we track only our own, specific evolutionary path, and ignore everything else. If we stare at the radius pointing straight up and ignore the rest of the hemisphere, then, and only then, can we convince ourselves that evolution is about “progress.”

Consider the case of the Neandertal. Neandertals were larger, stronger and faster than modern humans, and our success (and their failure) was once attributed to their inferior intellect. In fact, their brains were noticeably larger than our own. While this may simply be a matter of innervating muscle tissue, it means their physical faculties were at least the equal of our own, if not superior. Culturally, the only evidence of adaptation to changing stimuli we have in the Paleolithic is the Châtelperronian toolset, an ingenious integration of Mousterian and Aurignacian technology. It is not found associated with “modern” humans, however, but with Neandertals. With the case for their intellectual inferiority in doubt, many turned to Bergmann’s Rule to explain their demise: Neandertals were cold-adapted, and could not survive in the changing climate of the end of the Pleistocene. However, Neandertals have been found throughout the Middle East in areas which, while once colder than they are now, were never so cold as to justify the idea that Neandertals were doomed by their cold adaptation.

There is yet no account of the Neandertals’ extinction, besides sheer dumb luck, that does not present a host of problems. It seems, regardless of which attribute we value most, Neandertals were at least our equals, and perhaps even our betters. Their extinction, and our success, may be a case of evolution picking the worse candidate; it may simply be a random choice between two equally qualified candidates. What it quite clearly does not represent is a case of “progress.” Instead, it is simply change.

This highlights one of the last important traits of evolution: its ambivalence. A friend of Darwin’s once tried to develop a system of ethics based on the conviction that, while evolution is inevitable, it is also a monstrous process, and that which helps it along is itself immoral. I argue that evolution can, indeed, be monstrous, but is not always so. Like everything else, good and evil are matters of proximity. Evolution sometimes makes things better; sometimes, it makes them worse. Evolution is driven by diversity, and in general creates even more diversity, but it is also blind and unconscious. It operates on immediate results, leaving long-term errors to be resolved by time. It is a process of continual trial and error, as it allows long-term mistakes to correct themselves with self-destruction. Thus, at any given point, we must be careful about declaring anything an evolutionary “success” merely because it currently survives — as it may just as easily be a terrible mistake in the midst of eliminating itself.

Thesis #3: Humans are products of evolution.

As we saw in the second thesis, natural selection is a tautology: anything that possesses some trait that makes it more likely to propagate itself is more likely to propagate itself. Played out over a sufficiently long timeline, this can easily explain the origin of species. It was an explosive idea; not because it was theoretically lacking, nor even for lack of evidence. It was not even explosive for what it ruled out. Rather, it was explosive for what it allowed: namely, a world with no intelligent designer. The opposition came primarily from the most fundamentalist of religious organizations. Evolution does not preclude the existence of G-d, but neither does it require it. It was this that made it “evil,” because it removed the existence of life itself as a proof for the existence of G-d.

Yet it is not evolution in general that bothers these religious zealots. Many are even willing to concede “microevolution,” or change within a species over time. The laser-like focus of their ire has always been human evolution in particular.

This is not without reason, of course. These same religions teach a myth of humanity as a higher, nobler order of creation. Jews, Christians and Muslims all share the Genesis account, where humanity was the crown of creation — something made in G-d’s own image. “Then God said, ‘Let us make man in our image, in our likeness, and let them rule over the fish of the sea and the birds of the air, over the livestock, over all the earth, and over all the creatures that move along the ground.’” (Genesis 1:26) In Islam (7:11–18) — as well as in Christian folklore and exegesis — Lucifer and his angels are cast from heaven because they refuse to bow to humanity, and accept their primacy as the greatest of G-d’s creation, superior even to the angels.

Such beliefs are widespread, if not universal. In Iroquois belief, humans were descended from the superhuman, utopian Sky People, while mere beasts already existed in the world. The Australian Aborigines believed humans were the children of the Morning Star and the Moon. The Sun Mother “made them superior to the animals because they had part of her mind and would never want to change their shape.” The Ju/’hoansi also make humanity special: first in our ability to master fire, and then in the fear that fire inspired in other animals, separating us from the rest of creation.

Ultimately, such stories are merely another iteration of ethnocentrism and tribalism, writ large. Rather than simply suggesting that one’s own group is superior to all others, they suggest that one’s own species is superior to all others. Such sentiments serve the same evolutionary function: they help maintain group cohesion. Enlightened self-interest and intolerable arrogance both serve equally well to keep individuals from straying off and dying alone in the wilderness. Social life is not always easy, and interpersonal problems arise even in the most idyllic of societies. When these things happen, a personal commitment to the group becomes necessary. Ethnocentrism is a universal among all human cultures; it helps keep them together as a culture. That said, its evolutionary usefulness says nothing about the sentiment’s basis in reality. It is a useful belief to hold, but is it true?

Starting with the Renaissance, our mythology of self-importance took a series of hard blows. First, Copernicus published his On the Revolutions of the Heavenly Spheres posthumously, shattering the geocentric theory that the earth lay at the center of the universe. Copernicus’ heliocentric theory has been heralded as the beginning of the scientific revolution; indeed, it is from the title of his book that the term “revolution” took on its current meaning of an overthrow of established ways, ideas and governments. Galileo proved that not all heavenly bodies orbit the earth when he observed the four largest of Jupiter’s moons — known now as the Galilean moons. He was placed on trial for heresy; under the possible threat of torture and execution, Galileo recanted, though legend says that he whispered under his breath, “E pur si muove!” — “But it does move!”

Just as we began to accept that the planet made for us was not the center of the universe, Darwin closed the vise even more, facing us with the idea that we were animals like any other, no better and no worse. Neither gods nor kings, angels nor demons, not the children of Sky People or the Divine Sun, but mere beasts as any other. Darwin challenged our dominion by suggesting that we were products of evolution, rather than the crown of creation. Ultimately, this is the root of the argument over evolution: are humans mere animals, or are we something better?

We’ve grasped at a lot of straws to prove that we’re special. The first was the soul. Of course, we can’t even prove we have souls, much less that other animals don’t, so the modern, scientific mind has locked onto a related concept: intelligence. The problem is that this supposedly unique human trait is not uniquely human. We’ve found significant intelligence among nearly all the great apes, dolphins, parrots, and crows. This intelligence even extends to tool use and communication, other traits we have variously used to define our unique status as “higher than the animals.”

Perhaps, then, we can find the key to our uniqueness in culture? If we define culture tautologically, then yes, of course, only humans have culture. But if we choose not to define “culture” as “what humans do,” but instead as “things we learn,” then suddenly we see quite a few animal cultures. We know there are orangutan cultures and chimpanzee cultures, and even though he can’t prove it, George Dyson just can’t shake the notion of interspecies co-evolution of languages on the Northwest Coast.

During the years I spent kayaking along the coast of British Columbia and Southeast Alaska, I observed that the local raven populations spoke in distinct dialects, corresponding surprisingly closely to the geographic divisions between the indigenous human language groups. Ravens from Kwakiutl, Tsimshian, Haida, or Tlingit territory sounded different, especially in their characteristic “tok” and “tlik.”

Which brings us to communication. Surely humans are unique in language? Again, it all depends on how narrowly we define the word. It makes sense to consider only verbal communication, and so eliminate the complexity of bees’ dances and the pheromone waltz of ant colonies, but we routinely understate the complexity and nuance of chimpanzee calls, bird song, and other animal communication in order to elevate our own achievements. We denigrate these means of communication by insisting on the difference of our particular languages’ use of discrete elements and grammar, or by pointing out that chimpanzees do not use the same range of sounds humans do (though no language uses the full range of possible human sounds, either). These criteria of “language” are selected specifically to dance around the fact that other animals also have very complicated means of communication — sufficiently complicated to bear comparison to a crude, simple human language.

In each of these regards — intelligence, culture and language — humans have achieved a degree of nuance and sophistication that surpasses everything else in the animal kingdom. We are not the only intelligent creatures in the world, but we are certainly the most intelligent. We are not alone in possessing culture, but our cultures are the most far-reaching. All animals communicate, but ours is more nuanced and complex than any other. These are differences of degree, not kind. We are not unique in our possession of these traits, only in how much we have of them.

Every species is unique in some regard. They must be, in order to be species. If there were no trait that differentiated us from chimpanzees, then we would not be humans — we would be chimpanzees. That does not mean that any one of our unique traits is unique in the entire universe. Nor do these unique traits make us a different order of being, any more than the unique attributes of chimpanzees make them a different order of being.

The evidence for human evolution is incontrovertible. It is easy to see how small insectivorous mammals simply moved their eye sockets forward to gain binocular vision and the depth perception to climb trees and exploit the insect colonies there. It is easy to see the changes in their physiology as some of them adapted to eat fruit. It is simple to trace the development of the great apes as they adapted to life in small communities, the rise of Australopithecus as a grasslands scavenger, and the development of our own genus as we came to rely on hunting. Darwin despaired of a “missing link,” a phrase still exploited by creationists. That link is no longer missing — we have an entire fossil continuum clearly outlining the descent of man.

Humans are quite clearly the products of evolution, like every other organism on this planet. Each of us is heir to a genetic heritage stretching back to the dawn of life billions of years ago. We are not gods or kings enthroned by a despotic, short-sighted deity, separated from our domain by the insulation of superiority. We are not damned to an icy tower under the burden of rulership, cut off from all life. We are part of this world, through and through. In a very real sense, all living things are siblings to one another, descended from the first self-replicating molecule. We are bound to one another in mutual dependence in complex networks and feedback systems, a system screaming with life. We are not apart from this. We can partake fully in what it means to live — and all it will cost is our illusion of dominion.

Thesis #4: Human population is a function of food supply.

Thomas Malthus was one of the most influential thinkers of all time. His father knew Hume and Rousseau, and his own paper — An Essay on the Principle of Population — forever changed the way we think about populations and food supplies. It has informed food security policies worldwide, and provided the basic underpinnings of our modern concern with overpopulation. In The Origin of Species, Darwin called his theory of natural selection an application of the doctrines of Malthus in an area without the complicating factor of human intelligence. Malthus’ work, in short, has underpinned and influenced nearly everything since. It’s a shame he was so incredibly wrong.

Malthus’ case is simple: population grows “geometrically” (exponentially), but food supply only grows arithmetically. So Malthus warned of a coming crisis where we would not be able to feed our burgeoning population — the “Malthusian catastrophe.” Of course, the failure of such a catastrophe to come to pass took a lot of wind out of Malthus’ sails. Malthusianism was declared dead after the 1960s and 1970s saw the greatest increases in human population ever seen, accompanied with higher calories per capita, thanks to the abundance of the Green Revolution. Cornucopians rejoiced as they saw the evidence come in that increasing population meant increasing prosperity for all: the realization of Jeremy Bentham’s credo, “the greatest good for the greatest number.”
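Malthus’ two growth patterns can be sketched in a few lines of code. The starting values and rates below are assumed purely for illustration; the point is only the shape of the two curves.

```python
# Malthus' claim in miniature: a population doubling each generation
# (geometric growth) against a food supply gaining a fixed increment
# each generation (arithmetic growth). All numbers are illustrative.
population = 1.0  # doubles every generation
food = 1.0        # gains one unit every generation

for generation in range(1, 7):
    population *= 2
    food += 1
    print(f"gen {generation}: population={population:.0f}, food={food:.0f}")

# However modest the rates, geometric growth eventually outruns any
# arithmetic increase: the heart of the predicted catastrophe.
```

Within six generations, the illustrative population has outgrown the food supply many times over, whatever specific rates one plugs in.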

If it seems too good to be true, that’s because it is. Even Bentham knew that the two factors needed to be balanced against one another, and that increasing one necessarily meant decreasing the other. Garrett Hardin refuted the cornucopian position in his classic article, “The Tragedy of the Commons”:

A finite world can support only a finite population; therefore, population growth must eventually equal zero. (The case of perpetual wide fluctuations above and below zero is a trivial variant that need not be discussed.) When this condition is met, what will be the situation of mankind? Specifically, can Bentham’s goal of “the greatest good for the greatest number” be realized? No — for two reasons, each sufficient by itself. The first is a theoretical one. It is not mathematically possible to maximize for two (or more) variables at the same time. This was clearly stated by von Neumann and Morgenstern, but the principle is implicit in the theory of partial differential equations, dating back at least to D’Alembert (1717–1783). The second reason springs directly from biological facts. To live, any organism must have a source of energy (for example, food). This energy is utilized for two purposes: mere maintenance and work. For man maintenance of life requires about 1600 kilocalories a day (“maintenance calories”). Anything that he does over and above merely staying alive will be defined as work, and is supported by “work calories” which he takes in. Work calories are used not only for what we call work in common speech; they are also required for all forms of enjoyment, from swimming and automobile racing to playing music and writing poetry. If our goal is to maximize population it is obvious what we must do: We must make the work calories per person approach as close to zero as possible. No gourmet meals, no vacations, no sports, no music, no literature, no art...I think that everyone will grant, without argument or proof, that maximizing population does not maximize goods. Bentham’s goal is impossible.

So why were the Cornucopians so right, and Malthus so wrong? Because Malthus got the entire problem almost completely backwards — and it has remained backwards ever since.

Science has never been as unbiased as it would like to be — how could it? Skewing results is easily noticed, and rightfully condemned — as happened with such forgeries as Piltdown Man. Much more insidious is a lack of curiosity. We do not question received wisdom, and what we do not question we cannot understand. From Genesis 1:28 to the present day, we’ve viewed population growth as an inherent property of human nature. It has gone unquestioned. Certainly an Anglican country parson like Malthus would not question it. Malthus’ problem was how to feed so many people — a problem that could only be solved by misery, vice (i.e., contraception) or moral restraint (i.e., abstinence). The country parson, naturally, favored the same kind of abstinence programs favored by the United States’ current conservative regime.

This is entirely backwards. What are all these people made of, fairy dust and happy thoughts? No, they are made of proteins — of food! Without a sufficient food supply, such a population cannot be achieved. We understand this as a basic biological fact for every other species on this planet, that population is a function of food supply. Yet we continue to believe that the magic of free will exempts us from such basic biological laws.

The usual counter-argument goes something like this: humans are different from other animals. We can think. We can rationally observe the situation, and decide for ourselves how many children to have. While this is certainly true of individuals, groups are governed by much more deterministic criteria. For every individual who decides to be responsible and have only 2.1 children, another will take advantage of the space that individual has opened by having seven. The variation in values, thought patterns, beliefs and feelings of social responsibility ensures that the fertility rate of a group will rise to meet the carrying capacity, regardless of the intelligent, responsible choices of some in the community. Charles Galton Darwin, the grandson of that Charles Darwin, said, “It may well be that it would take hundreds of generations for the progenitive instinct to develop in this way, but if it should do so, nature would have taken her revenge, and the variety Homo contracipiens would become extinct and would be replaced by the variety Homo progenitivus.”

Education is often proposed as a solution, but Garrett Hardin already offered the best counter-argument to that strategy, again in “The Tragedy of the Commons”:

The long-term disadvantage of an appeal to conscience should be enough to condemn it; but it has serious short-term disadvantages as well. If we ask a man who is exploiting a commons to desist “in the name of conscience,” what are we saying to him? What does he hear? — not only at the moment but also in the wee small hours of the night when, half asleep, he remembers not merely the words we used but also the nonverbal communication cues we gave him unawares? Sooner or later, consciously or subconsciously, he senses that he has received two communications, and that they are contradictory: 1. (intended communication) “If you don’t do as we ask, we will openly condemn you for not acting like a responsible citizen”; 2. (the unintended communication) “If you do behave as we ask, we will secretly condemn you for a simpleton who can be shamed into standing aside while the rest of us exploit the commons.” Every man then is caught in what Bateson has called a “double bind.” Bateson and his co-workers have made a plausible case for viewing the double bind as an important causative factor in the genesis of schizophrenia. The double bind may not always be so damaging, but it always endangers the mental health of anyone to whom it is applied. “A bad conscience,” said Nietzsche, “is a kind of illness.”

We can see this problem of overpopulation and education as a case of the Prisoner’s Dilemma. The best-case scenario is cooperation: if neither prisoner confesses, both receive only light sentences. If we are all responsible, then we can save ourselves from self-destruction. But this is not what usually happens. The fear of betrayal prompts players to betray one another pre-emptively. The question becomes a simple one of game theory, and stopping overpopulation through education becomes a contradiction of human nature.
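The game-theoretic logic can be made concrete with a minimal payoff table. The sentence lengths below are assumed for illustration (lower is better for the prisoner); the structure, not the numbers, is what matters.

```python
# A minimal Prisoner's Dilemma payoff table. Entries are years served as
# (my_years, their_years); the specific sentences are illustrative.
payoff = {
    ("cooperate", "cooperate"): (1, 1),   # both stay silent: light sentences
    ("cooperate", "defect"):    (10, 0),  # the silent one takes the full term
    ("defect",    "cooperate"): (0, 10),
    ("defect",    "defect"):    (5, 5),   # mutual betrayal
}

def best_response(their_move):
    # Whichever move the other prisoner makes, defecting serves fewer years.
    return min(("cooperate", "defect"),
               key=lambda my_move: payoff[(my_move, their_move)][0])

print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
```

Defection is the best response to either move the other player might make, even though mutual cooperation beats mutual defection — which is precisely why appeals to individual responsibility fail at the level of the group.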

All of this, however, is theoretical. This hypothesis is easy to test: calculate carrying capacity, and compare it to actual human population numbers. This is precisely what Russell Hopfenberg of Duke University did in his 2003 study, “Human Carrying Capacity is Determined by Food Availability.” As you might imagine from such a title, he found that the numbers lined up almost perfectly.

There is a significant complication in this, however, which critics of this stance are eager to point out. The First World is facing a decline in population growth — the world’s richest nations are growing by the smallest percentages. Italy has been very concerned with its low growth rate, only 0.11% according to a 2003 estimate. Italy has the 201st highest population growth rate, and the 100th highest agricultural growth. Meanwhile, Singapore has the sixth highest population growth rate, and the 147th highest agricultural growth rate — out of 147.

If population is a function of food supply, why is the most significant growth taking place in those areas producing the least food?

The answer, I think, lies in globalization. How much of what you ate today came from your own bioregion? Unless you do a significant amount of your grocery shopping at Farmers’ Markets or eat only USDA-certified organic food, probably not a lot. In 1980, the average piece of American fresh produce was estimated to have traveled 1,500 miles before it was consumed. Interestingly, those same countries which produce so much food but don’t see it translate into their population are also the heaviest exporters, and the impoverished countries with significantly rising growth rates are often the recipients. When the First World rushes in with food and humanitarian aid to a desert area in the midst of a famine, we serve to prop up an unsustainable population. That drives a population boom in an area that already cannot support its existing population. The result is a huge population dependent on outside intervention that itself cannot be indefinitely sustained. Eventually, that population will crash once outside help is no longer possible — and the years of aid will only make that crash even more severe. In the same way that the United States’ longstanding policy of putting out all forest fires led to an even worse situation in its forests, our benevolence and good intentions have paved the way to a Malthusian hell.

Another part of the answer lies in our ecological footprint. In the passage above, Garrett Hardin made the distinction between the calories it takes to maintain a human body, and the “work calories” humans use to do anything else. While it is certainly true that population is a function of food supply, standard of living — how many work calories we receive, in addition to mere maintenance — is an important factor in that equation: not only how much food is available, but how much food each individual demands. The dwindling First World has the largest ecological footprint; the growing Third World has the smallest. Italy comes in at #25 with 5.51 hectares per person (1996); Somalia is #114 with 0.97.

This is ultimately why education appears to have an effect on population: higher education raises the standard of living, increasing the ecological footprint so that fewer people can live off the same amount of food, reducing the population. However, the problem we face is not one of Malthusian catastrophe. If we could not feed our population, we would not have such a population in the first place. The problem is the ecological consequences of such resource exploitation. Expanding ecological footprints do nothing to lessen this. Also, this trend can only continue so far, because the First World needs the Third. Our prosperity comes from the triumph of the corporate model, but the corporation itself runs on externalized costs. Our economy could never function if we had to pay the full and total cost for the luxuries we enjoy. Consider simply our oil costs — never mind the way it is built into, say, our food. The Arab population oppressed under Saudi rule pays the balance for our cheap oil. Low prices at WalMart are made possible by cheap Third World labor. It is a grim economic reality that, given ten apples and ten people, for one person to have nine apples, the other nine must split one between them. In the conclusion to their 1996 study on the ecological footprint, Wackernagel and Rees stated, “If everybody lived like today’s North Americans, it would take at least two additional planet Earths to produce the resources, absorb the wastes, and otherwise maintain life-support.” Since we have but one earth, this conclusion can also be spun around: each of us essentially has three slaves whose existence is one of constant misery for our benefit.

Intelligence does not exempt us from basic biological laws — just as it has not exempted dolphins, crows or chimpanzees. Groups reproduce to the best of their ability, and the carrying capacity — their food supply — creates the ceiling of that ability. Populations will rise to their carrying capacity, and no further — even human populations. So Malthus has the problem entirely backwards. The problem is not how to feed so many people; of course we have the means to feed them, because if we didn’t, the population would not exist. The problem is the implications of so many people.

Every year, a certain amount of energy is generated by the sun. This energy radiates in all directions, so only a small percentage of it falls on the earth. The total amount of solar energy available to our planet per unit of time has a hard limit — what is called the photosynthetic capacity of the planet. This energy can be used in any number of ways. Plants turn solar energy into sugar; animals turn plant sugar into kinetic energy. Animals can eat other animals, and obtain the energy stored in their bodies, which they obtained from plants, which they obtained from the sun. But none of these conversions is perfect, and some energy is lost in each one; this is why an animal that eats other predators is almost unheard of. Each individual has also used some of the energy before it is taken by the next link in the chain. As animals, we are always at least one step removed from the sun — and as omnivores, we’re just as often two steps removed. And we’re only one of millions, if not billions, of species, all sharing the same, set amount of energy from the sun.
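The losses at each step of the chain are often approximated by the ecological “ten percent rule” of thumb. A sketch, with the starting figure and the 10% efficiency assumed purely for illustration:

```python
# Energy transfer up a food chain, using the textbook "ten percent rule":
# roughly 90% of the energy is lost at each conversion. The starting
# figure and the exact efficiency are illustrative assumptions.
solar_energy = 1_000_000   # arbitrary units of solar energy fixed by plants

efficiency = 0.10
plants     = solar_energy              # primary producers
herbivores = plants * efficiency       # one step removed from the sun
carnivores = herbivores * efficiency   # two steps removed

print(plants, herbivores, carnivores)
# Each step discards most of the energy, which is why animals that
# prey on other predators are so rare.
```

Two steps up the chain, only about one percent of the original energy remains — the arithmetic behind why omnivores like us sit so expensively in the food web.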

With the agricultural revolution, we found a way to convert biomass into human flesh, by reducing biodiversity in favor of our own foods. We increased the percentage of the planet’s photosynthetic capacity that we received. Solar energy that fell on an acre of forest would be divided amongst all the creatures, plant, animal and otherwise, that lived there. Solar energy that fell on an acre of wheat would go exclusively to humans. Our carrying capacity increased; not just in that we had more food, but in more abstract terms, we were helping ourselves to more energy. Our population increased, so we cultivated more land. We had more people, so obviously we needed more food. We cultivated more land, and occasionally improved our technology to increase our yields per acre, but more food simply led to more people, who required more food: the Food Race. But lurking high above our heads was an absolute limit: photosynthetic capacity.

In the 1960s, we saw the latest, greatest “win” in the Food Race: the Green Revolution applied the potential of petroleum to farming, allowing for vastly increased yields. We found a bit of a “cheat” to the natural order in fossil fuels. Now, we can burn through decades of solar energy every day to escape the limits of photosynthetic capacity. Essentially, we burn our past and take credit against our future in order to ensure our continued, exponential growth.

The Green Revolution set our carrying capacity to — well, whatever we wanted it to be. The population responded accordingly, with a huge initial jump, slowing as it approaches its asymptote. The scientists say that asymptote lies at 9 billion, and who am I to disagree? It seems like a perfectly reasonable figure. The population growth curve fits exactly what you would expect from a population adjusting to a suddenly raised carrying capacity: a huge jump, peaking relatively early, and leveling off as it approaches the new stable value.
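That adjustment curve is the familiar logistic. A minimal sketch of a population approaching a suddenly raised carrying capacity; the growth rate and starting population are assumed purely for illustration, and the 9-billion ceiling is the asymptote cited above.

```python
# A minimal discrete logistic-growth sketch: growth slows to zero as the
# population nears the carrying capacity. Rate and starting value are
# illustrative assumptions.
def step(population, rate, capacity):
    return population + rate * population * (1 - population / capacity)

population = 3.0  # billions, roughly the mid-century starting point
capacity = 9.0    # billions, the projected asymptote

for year in range(100):
    population = step(population, rate=0.05, capacity=capacity)

# The curve jumps early, then levels off as it approaches (but never
# exceeds) the new carrying capacity.
print(round(population, 2))
```

Whatever modest rate one assumes, the trajectory is the same: a steep early climb that flattens against the ceiling, which is exactly the shape the real population curve has traced.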

Of course, it’s unlikely that this will remain the case for long. The Food Race goes on. 9 billion people will leave millions — billions, even — starving. Those people need to be fed. We need another “win” in the Food Race!

But 9 billion people is not sustainable. 6.4 billion is not sustainable. There is no sustainable solution for so many people. Only the Green Revolution can feed that many, and the Green Revolution is inherently unsustainable, because it relies on the consumption of a non-renewable resource.

The human race currently consumes some 40% of the earth’s photosynthetic capacity. This monopoly on the earth’s resources is having a devastating effect. We are seeing the extinction of some 140 species every day, a rate thousands of times higher than the normal background rate. Today, right now, we are seeing extinction rates unparalleled in the history of the earth. We are undeniably in the midst of the sixth mass extinction event in the history of the earth — the Holocene Extinction. Unlike previous extinction events, however, this one is driven by a single species.

This is the true danger of overpopulation, not our inability to feed a growing population. As much as we would deny it, we depend on the earth to live. Dwindling biodiversity threatens the very survival of our species. We are literally cutting the ground out from under our feet.

Increasing food production only increases the population; our current attitudes about food security have locked us into what Daniel Quinn called a “Food Race,” by comparison to the Arms Race of the Cold War. Garrett Hardin began his famous article with this dilemma, and I’ll close with his assessment:

In our day (though not in earlier times) technical solutions are always welcome. Because of previous failures in prophecy, it takes courage to assert that a desired technical solution is not possible. Wiesner and York exhibited this courage; publishing in a science journal, they insisted that the solution to the problem was not to be found in the natural sciences. They cautiously qualified their statement with the phrase, “It is our considered professional judgment....” Whether they were right or not is not the concern of the present article. Rather, the concern here is with the important concept of a class of human problems which can be called “no technical solution problems,” and more specifically, with the identification and discussion of one of these. It is easy to show that the class is not a null class. Recall the game of tick-tack-toe. Consider the problem, “How can I win the game of tick-tack-toe?” It is well known that I cannot, if I assume (in keeping with the conventions of game theory) that my opponent understands the game perfectly. Put another way, there is no “technical solution” to the problem. I can win only by giving a radical meaning to the word “win.” I can hit my opponent over the head; or I can falsify the records. Every way in which I “win” involves, in some sense, an abandonment of the game, as we intuitively understand it. (I can also, of course, openly abandon the game — refuse to play it. This is what most adults do.) The class of “no technical solution problems” has members. My thesis is that the “population problem,” as conventionally conceived, is a member of this class. How it is conventionally conceived needs some comment. It is fair to say that most people who anguish over the population problem are trying to find a way to avoid the evils of overpopulation without relinquishing any of the privileges they now enjoy. They think that farming the seas or developing new strains of wheat will solve the problem — technologically. 
I try to show here that the solution they seek cannot be found. The population problem cannot be solved in a technical way, any more than can the problem of winning the game of tick-tack-toe.

Thesis #5: Humans are neither good nor evil.

Are humans essentially good, or essentially evil? This is one of the most basic, perennial questions in philosophy. Many identify our individual answers to this question as determining our place on the political spectrum — conservatives believe humans are inherently evil, and require strict rules to make them good, while liberals believe humans are inherently good, and must simply be free to act on that goodness. Both positions are unrealistic. Humans are products of evolution, and evolution is unconcerned with such abstractions as “good” or “evil.” As Aristotle said, humans are social animals. We are neither “good” nor “evil.” We are only inherently social.

From the beginning of our civilization, our vision of ourselves has suffered from a sort of schizophrenia, pulled between these two unrealistic poles of good and evil. Plato posited that we each had an angelic spirit in our mind, and a bestial demon in our belly, with all our actions, emotions, and passions torn between them. This foreshadows Descartes’ dualism, which remains a powerful idiom today. Though I doubt it was a conscious modelling, it would be a mistake to overlook the obvious philosophical heritage this provides to Freud’s formulation of the id, ego and superego. This dichotomy was only made more severe by the influence of Zoroastrianism. Once adopted by Judaism prior to the splintering of Christianity, and later Islam, this vision of the universe at war between good and evil was combined with the ancient Greek concept of macrocosm and microcosm to further this “bizarre superstition.” Even Jesus makes reference to this idea in the gospels: “The spirit is willing, but the flesh is weak.” (Matthew 26:41) In this vision, humanity itself is neither good nor evil, but only because each individual human is a spiritual battleground between the two. It is a vision of human nature that is not inherently good, nor inherently evil, but inherently schizophrenic. Though widely accepted, it is a rather crude attempt to reconcile “the better angels of our nature” with the ugly facts of our history. Descartes’ dualism, once fundamental to the early practice of medical science, has since become an impediment. Neurology, psychiatry and biopsychology have all highlighted how closely knit the mind and the body are. In fact, any separation is now recognized as utterly lacking any basis in reality.

Another concept, equally ancient, dismisses such ambivalence by simply claiming that humans are inherently evil. Perhaps the earliest formulation of this came from Plato, who argued that men act ethically only for fear of punishment. This sits well with the concept of “original sin” we find in the Abrahamic traditions. In Christianity, the inherent sinfulness of humanity necessitated the sacrifice of Christ, and subsequently, obedience to Holy Mother Church. On the secular side, it is argued that altruism is an illusion, because every seemingly altruistic act is motivated by some selfish desire, even if it is only a desire for a feeling of self-fulfillment. Dawkins’ central thesis in The Selfish Gene grounds this concept in biology: altruism arises as a gene’s strategy for propagating itself.

This vision of humanity found its ultimate fulfillment in the work of Thomas Hobbes. “Bellum omnium contra omnes” — Hobbes’ “war of all against all” — was the first word on the “state of nature.” It was a hypothetical then, a possible time when humans may have existed without government. Philosophers were only beginning to consider the possibility of the scientific method, and Hobbes was a strong proponent of the superiority of philosophical thought experiments. Anthropological data was only beginning to be gathered, and what little there was generally took the form of imperial apologia, describing the horror of barbaric pagan ways, and how desperately they needed the salvation of Christendom and European civilization. Hobbes’ “state of nature” owed much to the Christian concept of the inherent sinfulness of humanity, and much to the trauma of his own childhood. His mother went into labor prematurely when she became panic-stricken at news of the Spanish Armada’s approach, leading Hobbes to later remark, “Fear and I were born twins.” The individual human in the “state of nature” was, in Hobbes’ philosophy, a solitary predator whose cruelty was matched only by his cowardice. The result of such “anarchy,” in the traditional, pejorative sense of the word, was a life that was “solitary, poor, nasty, brutish, and short.”

This idea of human nature is more often associated with the right side of the political spectrum. It argues that humanity is inherently evil, and that a just society is only possible when humans are compelled to act justly by the threat of force. This idea underlies our concepts of law, justice, and punishment at a very basic level. One might consider the rhetoric of “deterrence” a euphemism for this philosophy of terrorizing others into compliance. Hobbes is a powerful underlying current in the philosophy of the neoconservatives.

In counterpoint to this is the view that humans are inherently good. We might find faint echoes of this in Abrahamic mythology of humanity as the “crown of creation,” but Christianity has traditionally emphasized the fallen nature of humanity, over its exalted nature. The concept that human nature is essentially good is much more modern, finding its roots primarily in the changing strategies of colonial apologia in the 1600s and 1700s.

Where Hobbes’ “state of nature” was supported by tales of cruel heathens and their primitive ways, with the obvious call to colonize those lands and save the savages by giving them Christ’s redemption and civilization’s benefits, by the time of Jean-Jacques Rousseau, imperial apologists had turned to a different strategy. Evoking the imagery of an Edenic existence, they wove a myth of the “Noble Savage.” The term “noble savage” first appeared in English with John Dryden in 1672, though it originated earlier, in 1609, with Lescarbot’s Histoire de la Nouvelle France. Lescarbot noted that among the Mi’kmaq, everyone was allowed to hunt — an activity enjoyed only by Europe’s nobility. This led Lescarbot to remark that “the Savages are truly noble,” referring to nobility of birth, rather than nobility of character. However, to trace the etymology of a popular phrase is a very different problem from tracing the history of the idea it expresses. In this new form of apologia, indigenous peoples are presented as innocent, unspoiled by civilization: honest, healthy, moral people living in harmony with nature and one another. The savage is like the child, innocent of the “real world” and all its concomitant iniquities. And just as children must be protected by their parents, so too must these innocent savages be protected by more mature, worldly European powers.

In The Myth of the Noble Savage, Ter Ellingson argues that the myth of the noble savage was never widely believed — a straw man made to be universally debunked. He points to the racist work of John Crawfurd, who popularized the concept in 1859, attributing it to Rousseau to give it intellectual weight. I haven’t read Ellingson’s account, so I can’t speak much to it, except that it seems to contradict the entire body of Romantic thought. Though Crawfurd may have been the first to join the “Noble Savage” myth to the racist notion of “ein Volk, ein Land,” the two ideas became inextricably linked in Romantic philosophy, and this combination became a primary basis for Nazi ideology in the 1920s.

Yet, these ideas contradict Rousseau’s own argument in many ways. The myth of the “Noble Savage” states that savages are innately good because of their race. Rousseau argues that all humans are innately good, regardless of race, and that we are “corrupted” by civilization.

This myth has been thoroughly debunked by writers, philosophers and anthropologists, who highlight the darker side of “savage” life. In War Before Civilization, Lawrence Keeley highlights the violence of Neolithic and horticultural “primitives,” and shows that, per capita, they experience more violent casualties from war than civilizations do. Another favorite criticism is the “overkill theory,” but this particular argument is deeply flawed: though humans were no doubt involved in the extinction of the megafauna, our contribution was likely no greater than any other alpha predator would have made. Tribal societies suffer from the same ethnocentrism as all other human societies. Tribal societies are not idyllic utopias, and their members are not angels. In the “state of nature,” humans are not always and invariably “good.” These arguments are sufficient to prove Rousseau wrong about the essential nature of our species.

If, then, Hobbes is wrong to project his own fear to the entire species, and Rousseau is wrong to project his idealism the same way, where does that leave the truth of who we are? If we are neither good nor evil, what are we? What manner of creature has evolution created in us?

In my study, I have identified several characteristics that I would call the essential hallmarks of “human nature.” If I had to sum them up into a single, pithy slogan, I would take Aristotle’s: humans are social animals.

Society. Humans are social animals. In rare and extraordinary circumstances, in areas barely fit for human habitation, even the simplest forager societies have collapsed, as among the Ik. That is an exceptional extreme of social collapse; in general, humans need some sort of society to survive.

Culture. Culture is not unique to humans, but we have certainly emphasized it to an unprecedented degree. Our brains are hard-wired to receive culture, and the acculturation process can stir us as powerfully as genetic impulses, as the old (and useless) debate over “nature versus nurture” highlights. To consider an analogue from the world of technology: Herbert Simon helped write the General Problem Solver (GPS) in 1957. Prior to this, programs were written to solve specific problems. The GPS was perhaps the first instance of a more generalized approach: it could be fed information on specific problems, and then solve them. It is the difference between a machine that is hard-wired to do a specific task, and a machine that can be programmed to do any number of tasks. This is the difference culture makes; it allows for another layer of adaptation, and gives humans an adaptive edge. It also means that we have much less of an essential “nature” than other animals, since we more closely resemble Rousseau’s “tabula rasa.”

Egalitarianism. There are ambiguously gendered humans; that fact alone points to a degree of sexual dimorphism among the lowest in the entire animal kingdom. Human males are not significantly larger than females, and morphological differences are minimal, particularly when compared to many of our closest primate cousins: male baboons are three times the size of females, and mandrill males sport distinctive coloring that makes them look almost like an entirely different species. Throughout the animal kingdom, low sexual dimorphism is correlated with gender equality. Emperor penguins have as little sexual dimorphism as we do, and they split child-rearing responsibilities evenly. This physical evidence strongly suggests that gender equality is part of human nature. Egalitarianism in general is supported by a total lack of evidence for any form of hierarchy in our species except in cases of exceptional abundance and surplus (that is, after the Neolithic, save for such singular exceptions as the Kwakiutl and the burial sites of Sungir). This is further corroborated by the universality of egalitarianism among modern foragers. Even in hierarchical societies, in all times and places, there is a universal aspiration towards more egalitarian forms of society, even where population pressure and complexity will not allow for them. Thus, it seems that we should consider egalitarianism part of human nature.

Technology. The genus Homo suffers from one of the most ridiculous distinctions in all of biology, thanks to the powerful force of anthropocentrism: we are defined by our tool use. Though other primitivist writers define themselves by a rejection of technology, even the most primitive societies use tools of some kind. Tool use, though, is a very different proposition from an almost messianic belief in the power of technology to save us from all problems. Technology is morally ambivalent, capable of good or evil depending on how it is used. Yet the creation and use of tools of some kind is a universal human trait, and one that figures prominently in our evolution. The creation of the first stone tools is strongly correlated with the exponential increase in cranial capacity that distinguishes Homo habilis from Australopithecus afarensis. It is also strongly correlated with handedness (a rare quirk in the animal kingdom), and with another crucial aspect of human nature: language.

Language. Though humans are not unique in their use of an advanced and nuanced communication system, there is little that can compare to the complexity of human language. Much of the human brain is hard-wired to use some kind of language. There is a “universal grammar” born instinctively in every human child, and all human societies have some kind of language. The implications of this are far-reaching, from abstract thought to Wittgenstein’s philosophies.

Story-telling. Australopithecines were almost certainly scavengers, competing in the African savanna, an environment where the emergence of “super-predators” had given rise to one of the most competitive ecosystems in the history of the planet. They could hardly compete with some of the other scavengers, such as hyenas and vultures, and so developed tools to get to a kill site first, grab the meat, and get out before other scavengers arrived. As tool use became more sophisticated, early humans began to hunt for themselves. This innovation required a range of skills, including storytelling. Tracking has a great deal to do with weaving a story: tracks, scat and other signs are, in themselves, meaningless unless one can weave that evidence into a narrative of the animal’s state, size and progression. This combines with humans’ capacity for language and abstract thought to create a creature that tells stories. Scientific explanations of the Big Bang and evolution are as much stories as ancient myths and legends; any narrative that links elements in a linear, causal line is a story. This article is a story.

What does this say to the essential question of whether humans are “good” or “evil”? Nothing. Humans are neither. We are not good, we are not evil, and we are not torn between the two. There are characteristics of human nature, but none of those characteristics can truly be called “good” or “evil.” We are what we are, and nothing more. We live more easily, and more fully, when we work with that rather than against it. That nature, though, is neither “good” nor “evil” — it simply is.

Thesis #6: Humans are still Pleistocene animals.

In 1833, Charles Lyell introduced the concept of our current geological epoch, the “Recent,” stretching back only 10 or 12 thousand years; the name “Holocene,” or “wholly recent,” was proposed later by Paul Gervais. This makes the Holocene an incredibly young geological epoch, the shortest by far. The International Geological Congress in Bologna adopted the term in 1885, and it has been the accepted terminology ever since. The preceding geological epoch was the “last ice age,” the Pleistocene. It lasted for two million years, and while it was marked by significantly advanced glaciation, this was not the unremitting state of affairs. The Pleistocene had regular interglacial periods, during which the weather would turn warmer and the glaciers would temporarily recede, just like today. These interglacials typically lasted 10 to 20 thousand years, just like ours. In short, the “Holocene” is not a new geological epoch, as much as we might think that the grandeur of human civilization’s appearance should be reflected in the ages of the earth. It is a perfectly typical interglacial. The Pleistocene, the “last ice age,” never ended. We’re still in it; we’re simply in a bit of a warm spell.

If anything, our current interglacial is most remarkable for its brevity. If it ended this week and the glaciers returned (and, while The Day After Tomorrow certainly pressed the point too far, these things do happen very suddenly), it would still fall on the shorter side of normal. In fact, it would have ended some 5,000 years ago, making for an interglacial of just 5 to 7 thousand years, were it not for the ecological devastation of the Agricultural Revolution. The first farmers were responsible for massive deforestation, and raised huge herds of livestock that polluted the atmosphere with incredible amounts of methane, enough to hold the glaciers in check. For 5,000 years, our civilization has lived on borrowed time, extending our “Holocene” by balancing the earth’s natural cooling trend against our reckless environmental abuse. The Industrial Revolution was not a change in kind, but in scale: a significant increase in our ability to harm the earth’s ecosystems, destroying what semblance of balance our previous rampages had so precariously struck.

Amazingly, much of the reporting on Ruddiman’s findings, like the FuturePundit entry cited above, argues that this is evidence that humans should try to engineer the planet’s climate. It is true that our agricultural civilization is utterly dependent on the peculiar climate of the Holocene interglacial; it is a unique product of that climate, and if that climate ends, so will it. In the same fashion, humans are children of the Pleistocene. It is our home, through and through. We have changed far too little in the past 10,000 years to be well-adapted to the epochal changes in our lifestyle that we have seen. We are maladapted to our cultural context. The ecological damage we have done over these past millennia has only extended this state of affairs. Civilization may not be able to survive the end of the Holocene interglacial, but humanity certainly can. We are Pleistocene animals.

The Pleistocene was preceded by the Pliocene, an epoch cooler and drier than the preceding Miocene. Temperatures and rainfall were similar to those of today; in most regions, this meant a colder, drier climate. This was the case in Africa, where jungles shrank and grasslands took their place. Our ancestors were those primates who did not retreat with the jungle, but instead attempted to make their living in the wide, open grasslands. It was in this new challenge that our ancestors, the australopithecines, first defined themselves: by walking upright.

Habitual bipedality is unique among primates, though certainly not across the animal kingdom. Australopithecine anatomy shifted to accommodate a vertical, rather than horizontal, alignment. Greater height gave australopithecines the ability to see farther over the grasses, and it gave them a new mode of locomotion in walking.

Walking has unique advantages. It is not by any means the fastest mode of transport; most animals can run faster than humans. However, such running is powered primarily by muscle, which means those animals tire quickly. A cheetah can run at over 110 km/h (70 mph), but it cannot sustain this speed for very long: most cheetahs will stalk their prey closely, and the final chase will rarely last more than one minute. Walking is very different. Walking does not rely on muscle, but on bone. Walking is a controlled fall, which shifts the body’s weight onto the leg bones, thanks to the locked knee. This means that each individual step a bipedal human takes costs less energy than the steps of most quadrupedal animals. Humans may not move as quickly, but they can move more often. The result is an animal that won’t run as quickly, but at the end of the day can cover much more ground.

This tells us something about the changing diet of Australopithecus. Many other apes are opportunistic scavengers, and sometimes even hunters, but this is rarely their primary sustenance. The innovation of walking suggests that australopithecines were relying more on meat than their ancestors had.

The superpredators of Africa had created a harsh Darwinian niche for scavengers, leading to powerful packs of hyenas and flocks of vultures that could easily overpower australopithecines. Instead, australopithecines adopted a strategy of finding a kill site first, grabbing their meat, and retreating before other, more powerful scavengers showed up. Walking upright allowed them to see farther across the grasslands, but a kill site could be anywhere. Scavengers don’t necessarily need to be fast (the dead rarely outrun them); they just need to keep moving as long as possible and cover as large a range as possible, for the larger their daily range, the higher their chances of stumbling upon a kill site. That is precisely what walking allows, and australopithecine anatomy was built for nothing quite so perfectly as walking.

We retain those traits even today, which is precisely what makes walking such an important activity. Thomas Jefferson remarked, “Walking is the best possible exercise. Habituate yourself to walk very far.” For more than 99% of our history, humans have been foragers, which meant, more than anything else, walking. While foragers work markedly less than we do, that work consisted almost exclusively of walking: up to four hours every day. The rise of the automobile in the 1950s not only gave us dating; it also destroyed our communities. Resources were no longer grouped together, walking from place to place became impossible, and automobiles became a requirement for existence. Face-to-face interaction died off, and so did the habit of walking, resulting in our current obesity crisis. This doesn’t mean that cars and dating are bad; what it means is that we now live in a context to which we are not adapted.

* * *

Two million years ago, the Pliocene became colder and drier still, as the Pleistocene began. The last of these walking australopithecines, Australopithecus afarensis, was nearly identical to the first member of our own genus, Homo habilis, save in one crucial regard: Homo habilis’s skull was twice the size of that of Australopithecus.

Thanks mostly to anthropocentrism, our genus, Homo, suffers from what may well be the single most ridiculous defining criterion in all of science: we use tools. Of course, we have found tool use in other animals (as we touched on in thesis #3), and it is entirely likely that various australopithecines used wooden tools at least as complicated as those fashioned by modern-day chimpanzees or crows. Chimpanzees have even been observed with the rare stone tool. But the primary reason that this distinction is so laughable as the basis of a biological genus is that it is entirely behavioral, and utterly divorced from biology!

That is not to say that our tool use isn’t important. Quite the opposite. The explosion in cranial capacity that separates the two contemporary hominid genera seems quite significant. It is very clearly tied to tool use, for while australopithecines may well have fashioned any manner of wooden tools, we only find stone tools associated with Homo habilis.

The Oldowan tool set is the oldest set of technology we know of. It emerged 2.4 million years ago, as the long cooling of the Pliocene — the era of the australopithecines — gave way to the deeper cold of the Pleistocene — the era of our own genus. The making of these stone tools required changes in Homo habilis’s brain structures. We find the first evidence for handedness among these earliest members of our genus. We have also learned that handedness, tool use, and language are all linked functions in the human brain. Even if Homo habilis could not speak, the neurological foundations for it were laid with tool use.

These tools made Homo habilis a more efficient scavenger. With choppers and other stone tools, Homo habilis could butcher a dead animal more quickly, allowing them to clear out of the kill site sooner and giving them an evolutionary edge. The greater amounts of meat this afforded provided the protein for the explosion in cranial capacity that marked the separation of the hominid genera. Yet for all its importance, the Oldowan tool kit changed little in the million years that it was used by Homo habilis and the myriad species thrown together into the waste-basket called “Homo erectus.”

One of the various “Homo erectus” species developed the Acheulean tool set; others learned how to use and control fire. Hominids became better scavengers. Now they might have used their weapons to scare off other scavengers, rather than butchering quickly and fleeing the site. They may have begun to prey upon that gray area every carnivore treads: no predator will pass up a perfectly good, recent kill, and many scavengers are more than willing to finish off a wounded animal, or, with sufficient coordination and weaponry, a hale and healthy one. It was in the “Homo erectus” period that hominids transitioned from scavengers to hunters.

Through the 1960s and 1970s, the “Man the Hunter” theory dominated thinking on this topic, explaining human evolution in terms of hunting practices. It was closely linked to thinking on “killer apes,” and a generally Hobbesian view of human nature, painting humans as inherently violent killers. It drew ire from feminists who charged that it neglected the role of females in evolution, while other researchers hoped for evidence to distance human nature from such a grim, violent picture. That theory has declined in recent years, largely due to political correctness.

The feminist critique is rather weak. Since every female has a father, any strong natural selection exerted on one gender will easily cause changes throughout the species, in both genders: any strong natural selection exerted on women will show up in the male population as well. There is a much stronger criticism in the analyses of forager diets showing that they rely much more on plants than animals. Richard Lee showed that foragers relied more on plant matter than meat, leading some to refer to “gatherer-hunters” rather than “hunter-gatherers.” However, critics of Lee highlighted his complete reliance on the Ju/’Hoansi, who have an atypical love affair with the mongongo nut. More cross-cultural studies found that forager diets correlated with latitude: foragers closer to the equator ate more plants, foragers closer to the poles ate more meat. They also found significantly more meat than Lee did: near 100% for such polar extremes as the Inuit, with only 14% of forager cultures in total getting even half of their diet from plants. Despite this solid refutation, much is still made of Lee’s findings. An emerging consensus supports the “gatherer-hunter” model, though nearly all arguments for it are based on political correctness.

For the opponents of the “Man the Hunter” theory, to concede that hunting was an important part of human evolution is to normalize and excuse violence. That position rests on an idea that is very old in Hinduism and Buddhism, and which has only in recent decades informed vegetarian thought in the West: the idea of meat-eating as an inherently violent act. The presumption of this argument is that violence is only violence if enacted upon animals; that one cannot be violent towards plants. There is an assumption here that while animals are alive, plants really aren’t. This, too, is a very old idea. The name “animal” derives from the Latin animus, or spirit, because animals are animated, moved by a spirit, while plants are not. Even in shamanic and animistic schemes, animal life is often elevated above plant life.

The underpinnings of this belief have little basis in fact. As fellow animals, other animals are closer to us, and thus enjoy some special concern from us for their proximity. At its base, this is simply one more concentric circle in the widening ripples of anthropocentrism. As Giulianna Lamanna highlighted in her article, “The Hypocrisy of Vegetarianism,” there are even some intriguing indications that plants may feel in some strange way. Violence against a carrot is every bit as much violence as violence against a cow.

Yet the proponents of “Man the Hunter” have predicated it upon an inherently evil and violent human nature; its detractors have predicated it upon an inherently good and gentle human nature. Both are idealized and misguided. We do not think of other predators as evil or violent, do we? Do we conceive of lions, or sharks, or bears, or spiders in such ways? Predators are important parts of the natural world. The return of the wolves to Yellowstone restored the park’s ecology, which had been thrown out of balance by the predators’ departure.

We have already seen that both views of humans as good and humans as evil are overly simplistic (thesis #5). The issue of humanity and hunting is a fine example of such an issue that cuts both ways. Tracking requires careful observation, but even that alone is insufficient. Careful observation yields only an assemblage of data points. The tracker must assemble those points into a narrative, to weave a story around that data that not only says where the animal was and what it did, but predicts where it is going, as well. The needs of the tracker provide the natural selective pressure for human cognition as we know it.

But hunting is never a sure thing. Sometimes you bag yourself a big, juicy kill, and sometimes you come home empty-handed. Skill has a lot to do with it, but so does luck. Among foragers, it’s been calculated that on any given hunt, a hunter has only a 25% chance of making a kill. Yet our ancestors not only derived most of their protein from meat, they derived most of their daily energy from meat as well. How did they do this, if a lone hunter would eat only one day out of four? While the probability that one hunter will fail on a given day is 0.75, the probability that four hunters who all go out on the same day will all fail is 0.75⁴ ≈ 0.316. In other words, if four hunters agree to share whatever they kill between them, then there is generally a 68% chance that all four of them will eat that day, where alone, each hunter’s chances drop to 25%.
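The arithmetic above can be checked in a few lines. A minimal sketch, assuming only the 25% success rate cited in the text and that each hunter’s luck is independent of the others’:

```python
# Each hunter makes a kill on a given day with probability 0.25 (the
# figure cited above). All four hunters fail only when each of four
# independent hunts fails, i.e. with probability 0.75 ** 4.
p_success = 0.25
p_all_fail = (1 - p_success) ** 4  # 0.75^4, about 0.316
p_band_eats = 1 - p_all_fail       # about 0.684, the "68% chance" above

print(round(p_all_fail, 3))   # 0.316
print(round(p_band_eats, 3))  # 0.684
```

Independence is the assumption doing the work here: if all four hunters fail together whenever game is scarce, the benefit of sharing shrinks accordingly.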

The risks involved in hunting made cooperation an important human strategy. Unlike other primates, we formed small, open, cooperative, egalitarian groups. Adopting this kind of society to mitigate hunting risks emphasized that any hunter could be the one bringing home dinner that night, and ultimately instilled the conviction that everyone has value to the group. Sharing evolved not as a virtue, but as a necessity. In forager groups today, sharing is not considered “nice”; it is simply expected as a social baseline, and as a requirement for survival.

Hunting inhabits a morally ambiguous position, then. The act itself is violent, yet its risks gave us the very notion of society and its attendant virtues of sharing, cooperation, and compassion — the very same virtues vegetarians seek to promote by denying that very thing that created them. The risks of hunting instilled in our ancestors their first sense of wonder and reverence. They saw the animals they killed not as trophies as we might, but as sacrifices necessary for survival. They worshipped the animals they consumed, using the narrative cognition tracking bestowed upon them to yield the first philosophy and religion humans would ever have. As shamans charted the expanses of human consciousness, art, music and science followed. The first hominids made their lives as communal scavengers, but as they learned to hunt, they became human.

* * *

But man does not live by meat alone, but by every nut, berry, tuber and leafy green that comes from the hand of woman. While the supposition that foragers were “gatherer-hunters” is little more than political correctness projecting itself back into our evolutionary history, neither can we ignore the importance of gathered foodstuffs. Foragers did divide labor roughly along gender lines, with males usually taking up most of the hunting, for obvious, biological reasons. Even though it was hunting that provided not only the protein our bodies required, but also most of the energy we used, it would be a mistake to discount the role of women.

Besides energy and protein, our bodies require smaller amounts of vital micronutrients. We do not need them in large quantities, but we do very much need them. Without sufficient vitamin A, children go blind. Insufficient vitamin D leads to rickets. If you don’t get enough vitamin C, you’ll come down with a case of scurvy. Wild edible plants provided these in abundances our modern domesticates cannot hope to match. Two cups of dandelion leaves contain more vitamin C than four glasses of orange juice; dandelions have more beta carotene than carrots, and more potassium than potatoes or spinach — alongside healthy doses of iron and copper. You’ll find wild edibles replete with quantities of vitamins, minerals, omega 3 fatty acids and all manner of other nutrients that float in our public consciousness precisely because our modern diet so clearly lacks them.

The line between food and medicine was not so clear, either. Common broadleaf plantain is, along with dandelion, probably one of the most nutritious plants in the world, but plantain is also a powerful pain-killer, as well as having anti-toxic, anti-microbial, and anti-inflammatory properties. When ingested, it is a demulcent, a diuretic, and an expectorant. By the same token, dandelions can be used as a general tonic that helps strengthen the liver, gall bladder, pancreas, spleen, stomach, and intestines. They improve bile flow and reduce inflammation in cases of hepatitis and cirrhosis.

Women did not simply gather side dishes crucial to nutrition and survival; they provided medicines that not only cured sickness, but improved health, as well. Where male hunters cultivated spatial perception and risk-sharing strategies, could it have been the needs of female gatherers that gave us much of our abilities for memory and memorization?

* * *

As Paleolithic foragers, humans were beginning to develop a new strategy to survive the Pleistocene. Many animals learn a great deal, and use this to supplement their instincts. Orangutans have identifiable cultures, and similar observations have been made of chimpanzees. Humans took this to an extreme, with very few inborn instincts. Instead, our brain became hard-wired not for any specific behavior set, but for receiving culture. In the acculturation process, we learn the rules and taboos of the culture we are born into, and incorporate them on a very deep level. Things that disgust us, for example — particularly food and sex taboos — are usually very arbitrary, yet we feel them so deeply that they are often mistaken for natural, universal truths.

We might think of this innovation in similar terms to the early history of computing. Early computers were fixed-program machines, built to perform a specific task. The innovations of von Neumann, Simon and others led to general-purpose computers that could run arbitrary programs. Most animals have a much larger repository of instincts than we do, and learn much less. This leads to species-wide behavior patterns. Humans, on the other hand, owe much more of their behavior to culture than instinct. This means that culture can provide another layer of adaptation that can change much more quickly than evolution. It gives humans a competitive edge, by allowing us to adapt to any new environment with incredible speed and ease. When combined with our omnivorism opening a much wider array of possible foods, humans have thus become very possibly the most adaptable species on the planet.
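The contrast the analogy draws on can be made concrete with a loose illustration (none of this is from the source; the function names are invented for the example): a hard-wired routine does exactly one thing, while a general-purpose one takes its behavior as data, the way a brain takes its behavior from culture.

```python
# Hard-wired: this function can only ever perform its one built-in task,
# like an early fixed-program machine (or an animal's instinct).
def hardwired_double(x):
    return x * 2

# General-purpose: the behavior arrives from outside as a "program,"
# like a stored-program computer (or a brain receiving culture).
def run_program(program, x):
    for step in program:
        x = step(x)
    return x

print(hardwired_double(5))                                  # 10
print(run_program([lambda v: v * 2, lambda v: v + 1], 5))   # 11
```

The same `run_program` machinery can be handed an entirely different list of steps without rebuilding anything, which is the faster-than-evolution adaptive layer the paragraph describes.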

Most animals, when confronted by fire, have a natural instinct to run away. At some point, long ago in our history, that instinct was stalled by our acculturation, and rather than run from it, some human actually went towards it, and brought it back under her own control. In time, we even learned how to start our own fires, yet the turning point of that first human to run towards the fire remains one of the most pivotal moments in our history. The Greeks immortalized that event in the myth of Prometheus, and the mythology of the San points to it as the turning point of our species:

Kaang gathered all the people and animals about him. He instructed them to live together peacefully. Then he turned to the men and women and warned them not to build any fires or a great evil would befall them. They gave their word and Kaang left to where he could watch his world secretly. As evening approached the sun began to sink beneath the horizon. The people and animals stood watching this phenomenon, but when the sun disappeared fear entered the hearts of the people. They could no longer see each other as they lacked the eyes of the animals which were capable of seeing in the dark. They lacked the warm fur of the animals also and soon grew cold. In desperation one man suggested that they build a fire to keep warm. Forgetting Kaang’s warning they disobeyed him. They soon grew warm and were once again able to see each other. However the fire frightened the animals. They fled to the caves and mountains and ever since the people broke Kaang’s command people have not been able to communicate with animals. Now fear has replaced the seat friendship once held between the two groups.

Humans spread out of Africa, into Asia and Europe. The ice age lowered the water levels, revealing the Bering Land Bridge, which humans followed into the Americas. The lower water levels made the islands of Indonesia and Micronesia larger, and the water between them smaller. Humans hopped from island to island in ancient canoes, until eventually they reached Australia. In these new environments, humans often relied more heavily on meat, at least at first, as they learned the new flora of these strange lands, what was safe to eat, and what was poisonous.

Until recently, the term “Holocene Extinction” referred to a rather minor spate of extinction that took place at the beginning of the Holocene, with the end of the megafauna: woolly mammoths, North American horses, sabertooth cats, and other large mammals. It occurred as humans were first moving into many new environments, like the Americas and Australia. This has led to a long-standing debate between “overkill” and “overchill.” Were the megafauna wiped out by climate change? Or by rapacious, brutal bands of overhunting human foragers? Both sides have their evidence, of course.

Nor is this merely an academic argument without repercussion for the present. The “overkill” theory is routinely cited by some groups as if it were already a proven fact, and used as evidence that humans are an inherently destructive species. So we needn’t worry ourselves with the environmental destruction we wreak. We can’t help it. It’s our nature.

As you might expect, the truth lies somewhere between overkill and overchill. Human populations were almost certainly too small to wreak such havoc all by themselves, and the same climate changes that opened the way for humans into Australia and the Americas also had to affect the other large mammals living across the globe. Even more instructive, however, is the modern case of the wolves of Yellowstone. Alpha predators, like wolves and like humans, play important, keystone roles in any ecology. The introduction of a new alpha predator can have dramatic effects, even causing cascades of extinction. This is not necessarily because the alpha predators overhunt or are in the least bit maladaptive; it is simply the nature of alpha predators and how they relate to any given ecology. When humans came to Australia and the Americas, they were no more harmful than wolves, lions, or any other big mammalian predator, yet their presence caused cascades of changes throughout the ecosystem. Given that it was also a period of major climate change, a great number of species already under stress adapting to the new climate were tipped over the edge into extinction by the further ecological changes created by the arrival of a new alpha predator. Our ancestors were hardly noble savages; but neither were they bloodthirsty killers bent on the destruction of all life on earth. They were animals, like any other.

* * *

In the Upper Paleolithic, we see a “revolution” leading to what paleoanthropologists sometimes refer to as “behavioral modernity.” There is a good deal of misinformation all around on this point, so let me first address this concept of “modernity.” Like the waste-basket of Homo erectus, paleoanthropologists have shoe-horned many different species into the category of “anatomically modern Homo sapiens” not based on fossil evidence, but because of their age. The alternative would be to recognize that human evolution was not a process of unilineal evolution — that it was not a tree, but a “bush.” Though this conclusion has become inescapable to most paleoanthropologists today, the categorizations of their predecessors who were not so enlightened often remain.

This has led to some consternation among paleoanthropologists, as we see “anatomically modern” humans who evince no sign of the things we define ourselves by: art, religion, philosophy, etc. So, many have split “modernity” into anatomical and behavioral aspects. This is a false dilemma born not only of the rough shoe-horning of evidence already discussed, but also of a “revolution” idea rooted in Eurocentrism.

In Europe, the Upper Paleolithic truly is a “revolution.” We have cave art, sculptures, musical instruments, and evidence of arithmetic and astronomy all appearing at once. This led many paleoanthropologists to think that “modern behavior” was a package deal — that there was some kind of genetic switch that allowed it all to flower at once.

In Africa, however, we see each of these elements accrue over time. They do not appear all at once, as in Europe. The conclusion is simple and straightforward: “behaviorally modern” humans came out of Africa. This is the same “out of Africa” hypothesis that has won almost unanimous support over the multiregional hypothesis that has so long been the bulwark of racists and pseudo-scientists. If we look only at the European evidence, then, we have a “revolution” — but only because these new, African tribes arrived at a given time, practicing all of their culture at once.

Yet, all of these cultural phenomena that we define ourselves by do have a common origin, in shamanism. David Lewis-Williams is at his most convincing when he shows the underpinnings of shamanism in human neurology and psychology, and how rock art is an expression of that. Michael Winkelman has written a great deal on the evolutionary adaptations of shamanism. Both show how important shamanism was as an adaptation to the Pleistocene environment we evolved in, not only to reconcile the workings of our inner worlds to the world we live in, but also as a touchstone of community life and social function, an integrative function for the psychologically aberrant, and a healing function for the individual and the community.

Shamans most often induced altered states of consciousness through repetitive sound and motion — song and dance. Their visions provided the philosophy and world-view of their tribes, giving rise to the first religion and philosophy. Often, shamanic rituals were tied to the motions of the celestial bodies — and the first evidence we have of arithmetic is a “counting stick” notched in sets of 28, most likely tracking the phases of the moon. Shamans were ethnobotanists of the highest order, willing to experiment even with the spirit world, so in some sense we might trace the first glimmerings of science to them, as well. “Behavioral modernity” goes back to the Upper Paleolithic, a gift from the shaman — that unique adaptation to the Pleistocene that first tried to map the uni