(Things related to metis, Polanyi, and one another, some more obviously than others, I suppose. Much longer unfinished/not-to-be finished piece at the end, full of claims I needed a more compact essay to prove but which are still relevant enough to the original two reviews to be here. All of this is supplemental to the Seeing Like a State review and the Great Transformation review.)

1. Newton and the Sun

When people say that Newton studied astrology and alchemy, the implication is often that this was a quirk, or a sign of the times, or [similar]. Rarely is it interpreted the way that Newton himself understood this study: as necessary scaffolding for the development of the physical sciences. Newton found himself preoccupied with perception and hallucination. Before pronouncing anything about the world, we need to be certain that we can actually grasp it. He was unsure whether he was really perceiving anything out there or whether his imagination had simply painted a false world before him. If the latter, then the sciences must fall.

Two famous experiments were directly related, both about sight. Anticipating Chalmers by some three hundred years, Newton focused on colors. He slipped a bodkin needle between his eye and the socket bone and wiggled it around. When he shook the needle, dark spots clouded his vision. When he was still, no such orbs appeared. So far so good: that his sight could be confounded by a physical object did imply that it was relying on something mechanical.

Newton then stared at the sun for some period of time before turning to a blank sheet of paper. He saw, of course, a vague imprint of the star. So far so good, but the next stage is a doozy. Newton stared at the blank piece of paper later and tried very hard to imagine what he had seen previously. Again, he saw that phantom sun, and recorded this observation in his notebook: “Whence I gather that my fantasy and the Sun had the same operation upon the spirits in my optic nerve.” That is to say: some ghost lurks in the machine. But can you trust it?

Newton had clearly begun moving towards mind-body connections through mechanics, given that he studied effects of pressure, thought, and sight. But that failed to provide the grounding he needed, the essential confirmation that one isn’t misled by a demon, that the experiment could proceed.

The necessary proof he found in hermetical texts. The line “as above, so below” is famous, and we only know that from Newton’s insistence on it. For him, the phrase assumed an exaggerated importance: if true, it meant that one could proceed with experiment. After all, the below (us) contains the same operations as the above (outside us). They admit of the same basic certainty, and even the same type of proof.

The pithy phrase is not the full passage. Only recently did I learn that Newton actually translated the Hermetic corpus himself. Here’s his translation of the aphorism, taken from Hermes Trismegistus’s “Emerald Tablet”:

2) That wch is below is like that wch is above & that wch is above is like yt wch is below to do ye miracles of one only thing.

The same problems play out for us, of course, although we’re less aware of them than Newton was, and we’ve lost faith in Tomes of Pseudo-Egyptian Origin. And still we ought to recall that final, unquoted line: to do ye miracles of one only thing.

2. From the Burned House Horizon

I

I’ve been reading David W. Anthony’s The Horse, the Wheel, and Language, which is a) fantastic but b) not the object of discussion. It’s about the linguistic and archaeological crossover on the hunt for the Proto-Indo-European origin as well as the subsequent migration patterns. I now know more about horse teeth than I ever thought I would (important for analyzing bit wear and domestication patterns).

My interest in history has almost always been culture, in the sense of “what was daily life and belief like for people from [period].” The book doesn’t disappoint as regards Eneolithic steppe rituals, but I was surprised that the most jarring tidbit was about 19th century Denmark. Anthony, 123:

In 1807 the kingdom of Denmark was unsure of its prospects for survival. Defeated by Britain, threatened by Sweden, and soon to be abandoned by Norway, it looked to its glorious past to reassure its citizens of their greatness. Plans for a National Museum of Antiquities, the first of its type in Europe, were developed and promoted. The Royal Cabinet of Antiquities quickly acquired vast collections of artifacts that had been plowed or dug from the ground under a newly expanded agricultural policy. Amateur collectors among the country gentry, and quarrymen or ditch diggers among the common folk, brought in glimmering hoards of bronze and boxes of flint tools and bones. In 1816, with dusty specimens piling up in the back room of the Royal Library, the Royal Commission for the Preservation of Danish Antiquities selected Christian J. Thomsen, a twenty-seven-year-old without a university degree but known for his practicality and industry, to decide how to arrange this overwhelming trove of strange and unknown objects in some kind of order for its first display. After a year of cataloging and thinking, Thomsen elected to put the artifacts in three great halls. One would be for the stone artifacts, which seemed to come from graves or sediments belonging to a Stone Age, lacking any metals at all; one for the bronze axes, trumpets, and spears of the Bronze Age, which seemed to come from sites that lacked iron; and the last for the iron tools and weapons, made during an Iron Age that continued into the era of the earliest written references to Scandinavian history. The exhibit opened in 1819 and was a triumphant success. It inspired an animated discussion among European intellectuals about whether these three ages truly existed in this chronological order, how old they were, and whether a science of archaeology, like the new science of linguistics, was possible.
Jens Worsaae, originally an assistant to Thomsen, proved, through careful excavation, that the Three Ages indeed existed as distinct prehistoric eras, with some qualifications. But to do this he had to dig more carefully than the ditch diggers, borrowing stratigraphic methods from geology. Thus professional field archaeology was born to solve a problem, not to acquire things. It was no longer possible, after Thomsen’s exhibit, for an educated person to regard the prehistoric past as a single undifferentiated era into which mammoth bones and iron swords could be thrown together. Forever after time was to be divided, a peculiarly satisfying task for mortals, who now had a way to triumph over their most implacable foe.

II

Here’s a claim from early in the book (15): “But the proto-lexicon contains much more, suggesting that the speakers of PIE… recognized two senses of the sacred (“that which is imbued with holiness” and “that which is forbidden”).”

That actually destroys several works I’ve read about primordial and political notions of the sacred (looking at you, Agamben). More important is the fact that, rooting around for proof of Anthony’s claim, I found this fascinating article on Proto-Indo-European religious vocabulary.

Here’s the most interesting part:

3.1 Binary homophones. Another kind of binary opposition we ought to keep in mind is one which often appears among IE radical homophones. These are of two kinds: (1) complementaries, and (2) direct oppositions. Among the latter, we might consider *leuk- (‘light, brightness’) and *leug- (‘dark’), *kar- (‘to extol’) and *kar- (‘to slander’), *mii- (‘good’) and *mii- (‘to deceive’), *wei- (‘vital force’) and *wei- (‘to wither’), *yeu- (‘to join’) and *yeu- (‘to separate’), *le(i)- (‘to get’) and *le(i)- (‘to let go’). These homophonic antinomies are not a universal feature of the proto-language, but one which occurs often enough to be indicative of the possible IE tendency toward polarized perception and among which, beside a basic positive-negative duality, we may locate divine-asurian oppositions as well. The complementaries are also homophonic but as balancing counterparts rather than conflicting antitheses. Two examples might be *ais- (‘to wish’)/ *ais- (‘to honor’) and *meldh- (‘to speak words to a deity’)/ *meldh- (‘lightning’, i.e., ?’response of a deity’): Pokorny (1959:16, 722).

3. Is Metis Possible?

Haidt’s famous experiment is simple. N=2000+ Americans are asked to fill out a short, two-part questionnaire. The first part asks which factors figure into your determinations of right and wrong, on a scale of 0 (lol, who cares?) to 5 (meltdown). The second part presents a series of propositions for you to judge, scored again from 0 (strongly disagree) to 5 (strongly agree). One third of the time, they fill it out for themselves; another third, they’re to respond as a “typical liberal”; the last third of the time they do so as a “typical conservative.” Finally, they’re asked for their personal political identification.

Conservatives and moderates were noticeably better at responding as a “typical liberal” than liberals were as a “typical conservative”. Moderates fared better than conservatives on group moral concerns, but conservatives were most accurate on individual concerns. Expanding that from three positions to seven, Haidt says the following: “Extreme liberals exaggerated the moral political differences the most, and moderate conservatives did so the least.” In sum, and looking at partisans: conservatives (taken as GOP and already-conservative trending “moderates”) understand the opposition better than liberals.
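Structurally, the design above reduces to a distance measure: how far role-played ratings land from the target group’s real self-ratings. Here is a minimal sketch of that scoring logic – my reconstruction with made-up toy numbers, not Haidt’s actual code, items, or data:

```python
from statistics import mean

def group_means(responses):
    """Mean rating per moral item, among people answering as themselves."""
    items = responses[0]["ratings"].keys()
    return {item: mean(r["ratings"][item] for r in responses) for item in items}

def stereotype_error(roleplayed, actual_means):
    """Mean absolute gap between role-played ratings and the target group's
    real means; a larger value means a more exaggerated stereotype."""
    return mean(abs(roleplayed["ratings"][item] - m)
                for item, m in actual_means.items())

# Toy numbers purely to show the shape of the comparison:
liberals_as_selves = [
    {"ratings": {"harm": 5, "loyalty": 1}},
    {"ratings": {"harm": 4, "loyalty": 2}},
]
roleplaying_liberal = {"ratings": {"harm": 5, "loyalty": 0}}

means = group_means(liberals_as_selves)              # {'harm': 4.5, 'loyalty': 1.5}
print(stereotype_error(roleplaying_liberal, means))  # 1.0
```

Averaging this error within each political self-identification is, in essence, what yields the headline comparison: the role-player overrates “harm” slightly and underrates “loyalty” badly, and the sum of such gaps is what “exaggerating the differences” means.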

Haidt’s focus is on moral grounding rather than political policy, and his explanation is that conservatives use six rather than three “moral tastebuds”. That’s possible, I suppose, although not my point here. I suspect the results are confounded by the relative predominance of liberal voices in the media. “But conservative policies get discussed all the time.” Exactly the opposite of my interest – if the test were over preferred policy, I bet everyone would have done a whole lot better. What fascinates is that liberal reasons are regarded as real “reasons” (harm, egalitarianism, etc.), while conservative reasons are not. Do you agree on a scale of 0-6 with that last statement, and how do you identify politically?

4. Eyes Again

The human sclera (the white of the eye) is unique among primates. That claim has come somewhat into question (gorillas with scleras that lack pigmentation have been found), but we still have by far the most visible and consistent scleras, and human irises are much smaller than those of most animals.

The most common theory behind that adaptation – which I won’t contradict – is called the cooperative eye hypothesis. The large sclera and small iris allow us to coordinate sight non-verbally. If a friend suddenly looks to the left, you automatically follow their gaze. Our eyes are meant not just for looking but for being looked at.

You can explain any number of things through this, probably half of them falsely, but all with a Third Law: Cute animals and drawings have ridiculously large eyes, the better to determine motives and (lack of) guile; alternately, demons and monsters are often marked as “demonic” because they lack a sclera. It is uncomfortable to stare into a stranger’s eyes for too long because you know that they can guess at your motives; conversely, we show affection, trust, and care by holding the gaze of a loved one. Poker tells are traditionally in the eyes, and the same goes for martial arts; conversely, good boxers are said to have “shark eyes” when you can’t determine where exactly they’ll strike.

In sum: any such tool has to allow deception as well as coordination, but we’re generally worse at the deception. Only a rare few are willing to try it – the rest of us drop our eyes when we lie, because otherwise we would see the disappointment in theirs (you rationalize afterwards). It’s that last part that’s critical. The tool does not simply alert others to your activities. It feeds back into your knowledge about them, which feeds back into your knowledge of yourself.

You look somewhere, and someone’s eyes follow. Or they don’t. You know that they don’t because – sclera – you can see their eyes too. From this you learn infinitely more valuable information – most later human evolution assumes that other people are the dominant threat, after all. You learn what it is that they think of you, which means you learn your place, which means you learn a course for action.

To learn any of that information, you must be seen, but you must see another.



On Being Martin Guerre

I

Martin Guerre was two people. One of them was Martin Guerre, the other was Arnaud du Tilh. I doubt that either envisioned this outcome when they were children, but the road of life has many curves. Stumbling across its pebbles and around its bends, both happened to pass through stages of “being Martin Guerre” and “not being Martin Guerre”. You would think it would be hard to become a Martin Guerre, but one should have some sympathy. There but for the grace of God, so go we all.

There was in fact a real Martin Guerre. He was born in the Basque country (French side) sometime around 1524, married a woman named Bertrande in ~1538, and vanished in 1548. This is approximately when he stopped being Martin Guerre.

Bertrande could not remarry, because Catholicism. So when Martin Guerre reappeared in 1557, she moved right back in with her husband and things went back to normal. Martin Guerre was, of course, questioned as to whether or not he was Martin Guerre. Only his mother-in-law and her husband (now also his uncle, Pierre Guerre) doubted.

Over time, Pierre became more convinced that this was not Martin Guerre. Little bits of evidence accumulated – a passing soldier, for instance, claimed to have met Martin Guerre in the Spanish Army, and to have seen that Guerre lose a leg during the siege of Saint-Quentin. This came to a head in 1559: the villagers tried Martin Guerre as an impostor, but he was acquitted. Although Pierre tried to convince Bertrande, he couldn’t. Besides, where was the proof? Martin Guerre’s mother-and-father-in-law knew that something was wrong, but who would believe those people? Isn’t “mother-in-law hates her son” the oldest joke in the book? Even Juvenal exploited that trope.

But still: piece by piece, the mask slipped. The villagers picked up on strange tics and habits, things they couldn’t exactly explain, and certainly couldn’t demonstrate in a court of law (that had already failed), but that they knew were wrong.

Pierre, undeterred, started to build a case. Finally, in 1560, they had another trial. Pierre had discovered that this Martin Guerre was actually Arnaud du Tilh, a lecher from the neighboring village (this ought to give some idea of just how parochial France was at the time). Arnaud was convicted, but immediately appealed the case. He then proceeded to win the case at the higher court, because the high courts didn’t have the kind of ground-level knowledge that the provincial courts did.

Presumably, Arnaud would have stayed Martin Guerre forever, except that the real Martin Guerre astonishingly returned at the critical moment. This Martin Guerre was missing his leg, the critical piece of evidence, and so Arnaud was convicted. The once-Guerre Arnaud was hanged in front of the real Martin Guerre’s house. One wonders how he interpreted that.

II

I can think of fifty obvious ways to connect this to Seeing Like a State. I’m going to talk about the less obvious ones.

We know this story because Michel de Montaigne happened to be at the final trial. Michel de Montaigne was exactly the person you wanted to be there, because Montaigne was the spirit of the age. Here was his youth: Montaigne’s father had reached peak Renaissance, and wanted his son Michel to climb even higher. Naturally, he locked the child Montaigne in a room with a Latin tutor and forbade anyone to speak French. Latin, being the language of reason, would prepare Michel for a life of Renaissancery. Montaigne was the last native Latin speaker because of this – the second to last native speaker of Latin was seven or eight centuries before him. Besides Latin, Montaigne was schooled in all the everything that one can know. In many ways, Montaigne is a kind of case study in what happens when you train your child to be nothing but intellect. And it worked in some way: Montaigne’s essays are probably the most incredible pieces of argumentation from that period. Even ignoring the content (which is brilliant), they’re stylistically astonishing.

Yet, Montaigne’s father wanted his son to be a True Partisan of Reason. Montaigne became the opposite. For one thing, he didn’t write in Latin. Everyone else did, and he could write it better than them if he so chose. But he didn’t – Montaigne chose to write in French, and not merely in French, but in a highly dialectical, parochial, “peasant” form of it. What he argued in that French was also against the spirit of the age, because Montaigne was highly skeptical of reason. He didn’t dislike it, whatever that would mean, but he argued that it had a great many limitations. That, in Scott’s terms, epistemic knowledge alone will not save you. One of the most famous examples he used for that limit was Martin Guerre.

III

Montaigne was very interested in automatic human responses. He took these to be emblematic of where reason peters out.

Many years later, B. F. Skinner decided to see how far a misreading can go. Specifically, what happens when you take a tiny piece of writing without either its context or its warnings about why you shouldn’t do that and build an entire system around it. The misreading was of Montaigne. The system he called “behaviorism”.

Strict behaviorism (the “show me consciousness no you can’t” variety) is somewhat of a joke now. It always accompanies (or, rather, is accompanied by) variations on [a once-behaviorist] saying, “We spent a while pretending that consciousness didn’t exist, but finally kind of had to admit to ourselves that it did.”

It doesn’t matter much who actually said it, nor does it matter who said it so long as they were a behaviorist. The content of the quote is immaterial: anyone who reads a behaviorist treaty will come up with a similar response: “Yes, but I do feel like I have an inner experience, so…” The import is that an authority admits that they too feel this way. Without an authority’s admission, you have no argument. The form of argumentation precludes such “proof”:

A: “But humans are conscious!”

B: “Where’s the data to support that postulate?”

A: “I am conscious!”

B: “Well, I am not. And I’m the one with studies. And authority.”

You might get the impression that behaviorism progressed thusly: Big Bold Claim about consciousness -> recognition that the claimant is in fact conscious -> weaker form that only claims to study data. This is wrong. A more accurate progression would take one outside of strict behaviorism: empiricism can only study data -> qualia cannot be turned into data -> we’ll set aside qualia for the moment, then -> but without measuring those qualia, can we really say anything meaningful about their existence? -> qualia do not exist.

But even this is too fast. It makes it sound like a conscious decision. I think something a lot more like definition creep passed: slowly but concretely “not measurable” came to mean “irrelevant” came to mean “non-existent”. Fear is a description of a psychological state -> we can’t study internal states, so we’ll look at the responses that correlate with “fear” – > we can’t say much about “fear” as a feeling, but we can say quite a lot about running and screaming and sweating -> there’s not much need to suppose that anyone “feels” fear, is there? -> fear is running and screaming and sweating -> there is no internal state that we call “fear”. “Ok, but what about love and what about-” Cf Scott’s rationalism, of course.

Can you spot the leap? I agree that fear is correlated with behaviors, but the word “correlated” isn’t just window dressing. Internal states might always be accompanied by external factors. They might always be caused by them – see chemical imbalances, etc. But causation does not swallow result, even if we have the causation right. Now would be the time to use the phrase “downward causation“, but I don’t want to argue for one side or the other’s primacy. I merely mean to say: I agree that it’s unwieldy to have an internal state which corresponds to and is influenced by (or influences?) the external and then in turn influences external, measurable things, but that appears to be how the world works. What’s your point, that the world should be neater? You’re a scientist, grow up.

Definition creep is normally a mere annoyance. “I liked using this word, and now it doesn’t mean the word’s old meaning!” In the hands of the authorities, it’s a cudgel.

IV

It was not my argument (correlation to unity is a logical leap) that undid behaviorism. It was just data and complexity reaching the proper authorities, which is to say: admission through the proper channels. This is analogous to Martin Guerre’s leg – nothing before that could have proven that Martin Guerre was not Martin Guerre, even if everyone was certain that it was an impostor. Martin Guerre had been defined in such a way as to only correspond to one type of data, and this Martin Guerre fit that definition. His mother-in-law defined him differently which is why she thought differently. She thought of him as “actually Martin Guerre rather than the trappings of Martin Guerre”, but that definition implies a great deal of uncertainty. You can’t access the internal state of Guerre-ness (Guerritude?), and the impostor had perfectly “become” that data. Only new data – leglessness, here – can change that, because only that fits such a definition.

The James C. Scott connection should be obvious here: these “rational” forms of evidence are never entirely rational, and that can be a tremendous problem. The only way to combat them is to get them to admit that they are wrong, as is the case with the behavioralist admitting that he is conscious. Montaigne’s use of Martin Guerre reflects the particular authority of the times, but we’ve thankfully gotten much better at court definitions, despite a few hiccups here and there. What interests me about the Martin Guerre situation, and I think what interested Montaigne, is not simply the role of uncertainty in it. It’s the particular way that we want authorities to proclaim our certainties as certainties, and there the similarity breaks down. Martin Guerre(‘s family) had a clear, material reason for appealing to the authorities. We do not. Not only does the behavioralist’s interlocutor not have a reason, it’s unclear why he chose to make Skinner an authority.

Montaigne’s skepticism of reason was not necessarily in favor of empiricism (although he did have that bent). It can be more easily describes as a skepticism of the pronouncement of authority simply for being an authority. We ought to recall that the Renaissance adopted a great many metaphors from the Greeks and Romans. One of those metaphors, although metaphor is not a strong enough term for it, is reason as a guiding principle. Reason is the charioteer, as Plato puts it. Montaigne’s interest in automatic responses is the simple recognition that the charioteer is not the horse, and his skepticism was an overall skepticism of the falsifying powers of authority. The behaviorists understood this, logically, to mean that horses are charioteers. They were wrong, but it’s really our fault for paying attention, because the point was not about horses and chariots, or reason and habit/nervous tics, etc. But what Montaigne would have recognized as pernicious in his time is not the same as in ours.

Martin Guerre would have been Martin Guerre no matter what others had said. The categories can be argued – perhaps the name is actually part of the state, and in that sense he wouldn’t be, etc. – but you understand what I mean. It’s not at all clear to me how he would prove that, but it’s even less clear why he would want to outside of a dispute over inheritance. He wanted to be regarded as Martin Guerre legally, but he didn’t need to be assured that he was Martin Guerre. But we… well, we kind of do.

I take it that science, broadly defined, holds a particular place of authority in our culture. Even creationists don’t totally reject science, they just create their own. You could argue, charitably, that this is simply because people want “explanations” and these are the best we have. It comes from a certain skepticism. But I don’t think that’s really what’s going on. I think this is part of a much larger shift from basing yourself in action to basing yourself in identity. That leads to very a different relationship to authorities: one wants not merely o be able to act a way, but to be assured that it is you who is acting this way, to be seen and get recognition for it. And in philosophical matters, that correlated with a shift towards expecting a “why” to lead to a “how”, or overemphasizing a “why”.

V

So that we don’t get lost in abstracts: we still do this with scientists, and it’s clear that it’s not mere “skeptical, good-natured questioning.” Take mirror neurons.

Mirror neurons – along with quantum mechanics, neural plasticity, and epigenetics – are one of those parts of science that result in the kind of speculation that even hardcore STEM people want to call “scientism”. We have direct evidence for them in macaw monkeys and in songbirds, and indirect evidence for them in humans, so that’s kind of something. Everything else is up for grabs, and so they’re alternately charged with: empathy, consciousness, syntax, recognizing intention, language acquisition, etc.

I’m not a neuroscientist, but I’m pretty sure I can throw out intention and empathy on two relatively obvious grounds. Intention cannot possibly be the case, and I offer as proof the existence of Romantic Comedies. Our persistent inability to derive intention from action, so far as I can tell, makes up a solid 21% of Hollywood’s Revenue. Humor, albeit in a slightly different form, also gets rid of the empathy role. As Mel Brooks said, “Tragedy is when I cut my finger. Comedy is when you fall into an open sewer and die.” But if my mirror neurons are somehow firing off falling-to-my-death-feelings, then it’s unlikely that I would laugh. It’s not impossible that they’re part of how we relate to humans, but you’d need to be using a definition of “empathy” that is vastly different from the common one.

Those arguments are somewhat unimportant. More interesting is just how little evidence we have for any of that. Here’s a good review as to why almost everything said above is baseless. So much for the idea of our skepticism.

Now, I suspect that mirror neurons do play some part in our cognition, but I have no idea what they are, and I don’t think anyone does. My interest is not “what mirror neurons do”, but in the desire to latch onto them as an explanation for something that we already know exists. I’ll make that stronger: the desire for an authority to admit the existence of something that’s already part of our experience, to affirm your experience. Knowing that there’s a mechanism, and that it has the stamp of approval, is enough for us.

One of the weird results of this is that everyone is scurrying around for some essential “why”, or complaining about not having a “why”. Don’t get me wrong: whys are important. But, as Montaigne knew, they will not save you. So one tells themselves that if they were to just find it, then everything would be fine, and given the fact that all the old identities are based either in metis and action – trade, craft, art – or discredited concepts – nation, religion, myth – people are reduced to these much deeper, impossible “whys”. So they frantically grasp at any acceptable authority to assert that a) they are who they think they are but also, b) there’s a why, there’s a reason, as though this will make them act someway. That changes the relationship to authority, duh. But it changes the relationship to yourself more.

Martin Guerre never needed someone to prove to himself that he was martin Guerre. He needed to prove it to them. Reversing that is the sign of humans that have lost the ability to be human. They won’t act but they must be, yet to be in some way that matters, an Other has to recognize that. It turns you into a Martin Guerre before yourself, constantly on the outs, skittering around looking for the proof. And, of course, no science is ever going to be able to prove that to you, mostly because this simple truth is also the most profound: You are not who you think you are. So who could possibly prove that you are to you?

Bigger question: why does everyone insist on turning themselves into Martin Guerre but more extreme, into Martin Guerre demanding that the authorities not only admit that he is, but prove to him that he is?

When I was talking about Seeing Like a State I said this. It’s related, I promise:

Namely: there’s a difference between asking a group why something works and asking them what to do. If you asked a villager “why” their crop-system works, the answer they give you will almost certainly be wrong (objectively, scientifically, whatever here). And yet if you ask them what to do, i.e. the government crops are failing, how do we make this work? they’d (in given examples) be correct. This is highly speculative, but I suspect that a lot of the problem with “rationalism” boils down to this.

So goes all. “Is metis possible anymore, and if it isn’t, when will we destroy everything?”

VI

Here’s an irony: The most prominent mirror neuron enthusiast is Vilayanur Subramanian Ramachandran. He’s also famous as the creator of the mirror-box therapeutic approach to phantom limb pain. He has a thing about mirrors, something something hammers and everything nails, something something possibly empathizing with the nail.

Everyone picked up on mirror box treatment, too, for reasons that aren’t explicit but should be understandable. Resolve that dilemma and you start to get the bigger why you wanted. It’s just, well, gross. And, of course, the mirror box was also another thing that no one is actually sure exists or works.

top image from Aleksei German’s Hard to Be a God

Part of the Uruk Series
