I.

Subject 1, the Inquirer, sits in a darkened laboratory. She wears a white swim cap over her wavy blond hair. Pressed to the back of her head is a beige device shaped like a large figure eight, or an infinity symbol turned on its side. It’s been positioned carefully, guided by lasers. At its center, a tiny light glows green.

Subject 2, the Respondent, is seated in a second lab, nearly a mile away. This room is brightly lit. He also wears a white skullcap. His is covered in yellow and green electrodes, like the eyes that dot seraphim’s bodies in the Book of Revelation.

The game begins. Both subjects stare at screens. The Inquirer is prompted to pick a category from a drop-down list. Skipping over things like food, boats, and countries, she chooses “animals”; from there, the program makes a random selection that’s kept secret from her. An instant later, the Respondent’s monitor displays the word “shark.”

The Inquirer’s goal is to guess which signifier the Respondent sees. She’s allowed to ask three questions, drawn from a set written by the scientists who’ve designed this game. (Is it a mammal? Can it fly?) She taps on her selection, and it’s relayed to the Respondent.

He must answer yes or no. But he can’t speak, write, or sign. Instead, he stares at his screen. On either side of it are flashing lights. If he looks at one of them, the electrodes translate his brain’s signals into “yes.” If he looks at the other, they discern “no.” His response is then sent via the Internet to the Inquirer.

When it’s “yes,” the powerful magnet that’s pressed to her skull sends a pulse through skin and bone, stimulating her occipital lobe, and she sees what’s called a phosphene: a visual disturbance that’s been compared, by Jerry Adler in Smithsonian Magazine, to “heat lightning on the horizon.” (This is why the lights are dim.) If his answer is “no,” she sees nothing.

In this way, the Respondent has replied using only his mind. He’s conveyed a linguistic message without language. Its transmission has been private, silent, consciously sent and received in real time. After three questions, the game is completed.

These were the basic conditions for a recent placebo-controlled experiment undertaken at the University of Washington. Ten healthy participants between the ages of nineteen and thirty-nine took part, yielding five pairs of Inquirers and Respondents. Inquirers managed to win the game—correctly guessing Respondents’ randomly assigned objects—an average of nearly three out of four times. Some subjects scored perfectly.
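The game’s arithmetic is worth pausing on: each answer carries a single bit, so three questions can distinguish among at most eight candidates. A toy sketch of the logic, with hypothetical animals and questions rather than the researchers’ actual set:

```python
# Toy version of the question-and-answer game: each yes/no reply is one bit,
# so three questions can separate at most 2**3 = 8 candidate objects.

# Hypothetical candidate set and question predicates (not the study's own).
animals = ["shark", "dog", "eagle", "frog"]

questions = {
    "Is it a mammal?": lambda a: a in {"dog"},
    "Can it fly?":     lambda a: a in {"eagle"},
    "Is it a fish?":   lambda a: a in {"shark"},
}

def play(secret):
    """Narrow the candidate list with each one-bit answer, as the Inquirer does."""
    candidates = list(animals)
    for question, pred in questions.items():
        answer = pred(secret)  # the Respondent's "yes"/"no", one bit per question
        candidates = [a for a in candidates if pred(a) == answer]
    return candidates

print(play("shark"))  # → ['shark']
```

With only four candidates, two well-chosen questions would already suffice in principle; the third bit is slack, which is part of why high scores are achievable.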

II.

As a child, I wanted to believe in telepathy. An unexceptional desire, fortified by somewhat exceptional credulity. Until I was at least ten, for instance, my father fooled me into thinking he could read my mind: whenever we went to the airport, he’d guess I wanted pizza for dinner. He was right a statistically significant percentage of the time. (To be fair, my father also believed that he was reading my mind, which added to my confusion. Just as polygraph tests read not lies but anxiety, kids tend to read not truth but self-certainty.)

The promise of telepathy filled me with thrilled unease. I had secrets. I didn’t want them to be legible to some other entity (God, Dad, et al.). When you’re a kid, your mind is one of your few private belongings, even more than your body. I didn’t want anyone extracting my thoughts, tampering with or disabling my dreams.

My brother and I had a babysitter who gave us free rein: let us drink Mountain Dew, stay up late, and watch scary movies. One of these tampered with and disabled me; I had nightmares for weeks. Aptly, it was called Dreamscape, starring a young Dennis Quaid. Its description on IMDb: “A government funded project looks into using psychics to enter people’s dreams, with some mechanical help. When a subject dies in his sleep, [Quaid] becomes suspicious that another of the psychics is killing people in the dreams somehow and that is causing them to die in real life. He must find a way to stop the abuse of the power to enter dreams.” (IMDb rating: 6.3.)

In the spring of 2013, Science published a paper describing the work of neuroscientists who’d managed to decode dreams. Using fMRI technology, they measured neural activity in early stages of sleep, and compared it to patterns they saw in waking subjects’ brains. (They found that a person’s mental maps appeared roughly the same whether he was looking at pictures of the sea or dreaming of it.) These scientists are a long way from “abusing the power to enter dreams.” But the membranes that have for so long sealed in the contents of even our unconscious thoughts now seem vulnerable to puncture.

I’ve never wanted to wield the needle myself; I don’t like poking holes in people. I haven’t read anyone else’s diary or text messages, I refrain from looking in friends’ medicine cabinets, and once, when a boyfriend invited me to use his computer, maybe having forgotten that he’d left his financial records up, I minimized the page before I saw a single value.

This much discretion, of course, can be a fault. While self-containment is distinct from passivity, both place the burden of confession, or even connection, on others. Intimacy can feel like a kind of rupturing. (And then, conversely, rupture can be mistaken for intimacy.) In practical terms, it also makes being a writer more difficult. Another way of putting this may be that I’m not curious enough.

Yet as concerned as I remain with my own and others’ privacy, I’ve always been enticed by complicit secret communication: inside jokes, foreign languages, the esoteric idioms of twins. And, of course, telepathy. Today my long-dormant belief in it is stronger than ever.

III.

In photos, Carles Grau Fonollosa has a dense white beard and dark-rimmed glasses. Grau is a neuroscientist and honorary psychology lecturer at the Universitat de Barcelona, where he directed the Neurodynamics Laboratory from 2002 to 2012. He was also lead author of a 2014 paper on the first conscious, noninvasive brain-to-brain communication between humans.

“Digital telepathy,” Grau explained in an interview following the paper’s publication, “refers to direct communication between distant brains with a technological support.” (“Psychics [entered] people’s dreams with some mechanical help,” reads the Dreamscape description.) The brains in question were in fact quite distant: three were in Strasbourg; the other was in Thiruvananthapuram, in the Indian state of Kerala. But any distance will do.

Grau and his colleagues’ methods were in some ways similar to those later deployed at the University of Washington with the question-and-answer game. Both teams relied on a combination of brain-computer interface (BCI) and computer-brain interface (CBI) technology. Electroencephalography captured the neural activity of a first participant, translated it into binary code, and then sent the bits digitally to a transcranial magnetic stimulation device that was connected to a second volunteer. He then either briefly saw a phosphene—the “heat lightning” in his brain—or didn’t.

Several things set this experiment apart, not least that it was the first of its kind, using two human participants, both of whom were fully aware. “Indeed,” the authors concluded, “we may use the term mind-to-mind transmission here as opposed to brain-to-brain, because both the origin and the destination of the communication involved the conscious activity of the subjects.” And the study had another distinguishing feature: it was the first in which the encoded data translated into words. Using only their minds, subjects sent “hola” and “ciao” 4,800 miles.

Granted, the transmissions were effortful—every letter was assigned its own string of binary code—and each salutation took more than an hour to relay. But as kindly explained to me by another of the paper’s authors, Dr. Alvaro Pascual-Leone, director of the Berenson-Allen Center for Noninvasive Brain Stimulation and a professor of neurology at Harvard Medical School, this was a “proof of principle”—a principle that may eventually transform the way we think of language.
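The letter-by-letter encoding is conceptually simple, whatever the hardware cost. A sketch of one possible scheme, five bits per letter indexed into the alphabet, offered as an illustration rather than the study’s actual cipher:

```python
# Hypothetical letter-to-bits scheme: each letter becomes a 5-bit string
# (enough for a 26-letter alphabet), and the whole word becomes a bitstream
# delivered one phosphene-or-no-phosphene flash at a time.

def encode(word):
    """Turn a word into the bit sequence a sender's rig would emit."""
    return "".join(format(ord(c) - ord("a"), "05b") for c in word.lower())

def decode(bits):
    """Reassemble letters from 5-bit chunks on the receiving end."""
    chunks = (bits[i:i + 5] for i in range(0, len(bits), 5))
    return "".join(chr(int(chunk, 2) + ord("a")) for chunk in chunks)

bits = encode("hola")
print(bits)          # 20 bits: 5 per letter
print(decode(bits))  # → hola
```

Twenty bits for a four-letter greeting, each arriving as a faint flash or its absence, begins to explain the hour-plus transmission time.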

IV.

Scientists have been experimenting with brain-to-brain communication for some time; recent results, which have been remarkable, represent the culmination of a decade or so of research. In the past few years, brain-machine interfaces have been used on monkeys, rodents, and people, and in at least one case, on a human-rat dyad. By training his eyes on a flashing light, a volunteer could get a rat’s tail to move.

Some of the most noteworthy innovations have come from a team led by Miguel Nicolelis at Duke. Members of the Nicolelis lab began by connecting pairs of rat brains. After the animals had been implanted with microelectrodes, the neural activity of a rat in a Brazilian lab could be transmitted via Internet to one in Durham, North Carolina. The second rat, upon receiving a brain signal from the first, would perform a task—pressing a lever that rewarded them both with water. These results, when presented three years ago, were seen by many as revolutionary.

But now the Nicolelis team has moved on, connecting several animals at once to establish larger “Brainets.” And their findings—published in a pair of Scientific Reports studies last summer—are even headier. They managed, for example, to get three monkeys to collaborate mentally to move a virtual arm through 3D space. Perhaps more impressive still, and more unsettling, the researchers created a network of four interconnected rat brains, which was able to solve “a number of useful computational problems, such as discrete classification, image processing, storage and retrieval of tactile information, and even weather forecasting.”

To predict the chance of rain, rats were given two different kinds of information: about changes in temperature and barometric pressure. The data were delivered, in different trials, as pulses of stimulation to their brains. Putting the information together into a single output, rats determined the likelihood of precipitation. And their projections were far more accurate when they were connected in Brainets than when they performed alone. A Duke press release claimed that “animal Brainets could serve as the core of organic computers that employ a hybrid digital-analog computational architecture.” As Nicolelis put it: “Essentially, we created a super-brain.”

The Nicolelis lab has also applied brain-computer interfacing to people: with the help of more than 150 collaborators, they constructed a brain-controlled exoskeleton that enabled Juliano Pinto, a man with paralyzed legs, to make the opening kick at the 2014 World Cup in Brazil—and to feel his foot come in contact with the ball.

It should be noted that exoskeletons are controversial. Like many BCIs, they implicitly privilege able-bodiedness. Writing in The Atlantic shortly after Pinto’s kick, Rose Eveleth argued that it’s “important to think about why many people seem more interested in hoisting someone out of their wheelchair than they are in making spaces accessible to that chair.” If these devices were to become more commonplace, it could have significant social, civic, and policy implications.

Exoskeletons aren’t the only BCIs that facilitate physical movement. Other studies have used devices to send motor commands from person to person. The same University of Washington team that collaborated to create the question-and-answer game derived that experiment from an earlier study. In 2013, the principal researchers, Rajesh Rao and Andrea Stocco, used electroencephalography and transcranial magnetic stimulation to create the very first human brain-to-brain interface and played a game jointly: Rao visually scanned a screen for targets; when he saw one, he imagined moving his hand to fire a cannon; on the other side of campus, Stocco’s finger twitched involuntarily to hit a space bar, and the target was incinerated.

In a certain sense, the trial was rudimentary. Stocco’s movements were reflexive—not so different from the lab rat whose tail swished without its will. But it laid the foundation for more human-human studies, including the researchers’ own and, later, the work of the team headed by Grau.

In theory, Grau’s group could have chosen to translate the data sent from Kerala to France in any number of ways: a string of binary code could have been made to represent a musical note, for instance, or a color or shape. But for the first experiment to use brain-to-brain communication in which both participants were conscious of transmission, the decision to convert the code into words doesn’t seem arbitrary. We associate consciousness so strongly with the capacity for language that we tend to consider it the trait that makes us most human.

But how are language and consciousness defined? That question has been with us for more than a century. With the arrival of these new technologies, we’re being challenged once again to revise the abiding view that consciousness is private property, circumscribed and self-contained—that it’s sealed in by a membrane in want of puncturing, and that language is the process of puncture.

V.

The notion that language is required for consciousness collapses pretty quickly under scrutiny. Just because we can’t always confirm the presence of thought doesn’t mean it isn’t happening.

Dr. Pascual-Leone, part of the Grau team, has a warm, reassuring voice, gently inflected by a Spanish accent. When I spoke with him by phone, he referred to Jean-Dominique Bauby’s “stunning, moving” book The Diving Bell and the Butterfly, which depicts Bauby’s life following a stroke that left him with locked-in syndrome, cleaved of language but not consciousness. Pascual-Leone also mentioned the “extremely laborious effort” of composing it. Bauby wrote the whole book by blinking one eye.

It’s reasonable to ask, Dr. Pascual-Leone noted, whether those in comas or persistent vegetative states are aware. “They’re not able to convey information,” he said. “But we don’t know that they don’t have it.” There are other conditions, like speech apraxia or Broca’s aphasia, which may prevent canonical expression, but certainly not thought. Stephen Hawking, who has ALS, uses a device that allows him to type by moving a muscle in his cheek.

A primary aim of researchers working on brain-to-brain communication is to help people without access to conventional language render their thoughts in other ways. But even subtler applications may one day be made possible. Grau and his team concluded that their results “suggest new research directions, including the non-invasive direct transmission of emotions…or the possibility of sense synthesis.” And as Dr. Pascual-Leone told me, “Some people have trouble putting emphasis where they want to put emphasis.”

Even for those who use traditional language, the process of externalization—at least as I experience it—is very often elusive, confusing, and fraught, our words cascading away from us. Our desire to be understood can be frustrated in so many ways, and we suffer for it, sometimes enormously.

One could imagine therapeutic contexts in which people with conditions as wide-ranging and pervasive as Asperger’s, depression, OCD, and PTSD are able to convey their feelings more fluidly, and receive greater relief. The UW group also described trans-lingual applications. The ultimate goal, Dr. Pascual-Leone explained, is “to enrich the level of human communication.”

Yet the Grau paper issues a strong warning: “We envision that hyperinteraction technologies will eventually have a profound impact on the social structure of our civilization.” In case readers have doubts about what this change will require, the authors use their final words to make it plain: we’ll need “new ethical and legislative responses.”

VI.

In a 1999 murder case, a novel type of EEG-based lie-detection test led the accused to a guilty plea. And at a controversial 2008 trial in India, data were used “to establish that the suspect’s brain contained knowledge that only the murderer could possess.” Both are mentioned by Rajesh Rao, one of the UW researchers, in his textbook on brain-computer interfacing.

It’s possible to imagine legal settings in which access to suspects’ memories and thoughts could serve the public good, leading to more just incarcerations and helping to exonerate the falsely accused. Rao refers to one such case, in which a convicted murderer was cleared with EEG evidence after spending twenty-four years in prison.

The problem with relying on so-called brain fingerprinting in situations with such high stakes, as Rao points out, is that the techniques “suffer from a number of weaknesses”: lack of proven success in the field, for instance, and potential manipulation through countermeasures. Such weaknesses, of course, can lead to false convictions.

Even if they could be fixed, these new methods raise intractable questions that extend beyond the justice system. It seems relevant to note, for example, that for Rao and Stocco’s pilot brain-to-brain study, they received a grant from the Army Research Office. It’s not hard to understand why the military might be interested in brain-to-brain interfacing: What could be stealthier than sending a message straight to a soldier’s mind?

But if courts and armed forces could benefit from these technologies, so, too, could criminals and terrorists. And institutions designed to protect the rights of citizens also aren’t invulnerable to corruption and overreach. Intelligence agencies might potentially abuse new powers of surveillance. Grau’s team, in alluding to some of these concerns—unusual in such an article—underscored the seriousness with which the authors considered them. At the end of my call with Dr. Pascual-Leone, he suggested that “scientists have a responsibility not just to have an intention in mind, but also to address, or at least be aware of, potential drawbacks and risks,” and to engage in the conversation ahead of time.

A whole chapter in Rao’s book is devoted to the ethics of BCI—safety, security, and privacy concerns. As already mentioned, some devices, like exoskeletons, have been polarizing because of their implied able-normativity. Rao notes as well that BCIs could exacerbate social inequities, further segregating those who can afford cognitive-, memory-, and motor-enhancing technologies from the rest of us. BCIs also present significant challenges with respect to getting informed consent, especially when the people who would use them can’t communicate with conventional language.

These are just a few of the many ethics questions raised by BCIs. As Rao goes on to write,

in the not-too-distant future, one may see the commercialization of sophisticated, wireless BCIs that can both record and stimulate the brain. The advent of such BCIs will bring with it the potential for some alarming scenarios, potentially turning science fiction to reality. In particular, wireless communication from or to a brain could be intercepted if encryption is not used or if the encryption method used is not sufficiently strong.

He then lists some possible hazards, which do, in fact, sound speculative: mind reading / “brain tapping,” coercion, memory manipulation (“the brain could also potentially be hijacked to selectively erase memories or write in false [ones]”), and cognitive damage or control as a result of contact with computer viruses.

Taken to a logical extreme, digital telepathy could theoretically lead to:

1. Content extracted, willingly or not, from a person’s mind

2. Information implanted, willingly or not, by an outside party

3. Confusion over culpability

If a person were to receive an external command—shoot the civilian in the red hat; grab the backpack that’s under the chair; or, simpler, open this window—who is responsible for the action’s outcome: the person performing it, or the one who sent the directive? What if there were multiple actors—or senders? Human Brainets seem far in the future. But what are their implications? What if the sender were a machine? As Rao asks, “where does the human end and the machine begin?” What if you couldn’t be sure if there’d even been a directive—that the idea wasn’t your own?

It’s this last point—the potential, with brain-to-brain interfacing, for diffuse communication and attendant divestments from conventional notions of identity—that I find most fascinating. Such an elastic notion of selfhood captivates me precisely because it’s so at odds with my own desire to preserve boundaries—to respect privacy, to resist puncturing things, to not be too curious, maybe, if it means causing alienation.

I agree with the postmodern consensus that there’s no stable, holistic self; that the notion of a “boundary” is, in some sense, a joke. And yet I also find the idea that consciousness could be made so literally collective—constructed in real time by more than one mind via Borg-like synergy—fundamentally creepy.

But does the promise of fused consciousness really diverge so radically from our current reality? We already have a rudimentary Brainet where we work together to solve problems; it’s called the Internet. Contributors are often anonymous, we may forget our sources soon after they’re assimilated, and plenty of ideas don’t have a sole “author.” This suggests some of what might happen if digital telepathy makes joined awareness more viable.

Our modes of interaction have changed profoundly in recent years. As they continue to transform, virtually every area of our lives will be affected. The way that we conceptualize communication itself will likely also shift. When people connect via digital telepathy, are they using language? Can language happen solely in the mind, without any external signifiers?

It’s easy to say no: the brain-to-brain experiments so far undertaken have relied on unilateral signals—something like Morse code—rather than on shared use of a socially constructed set of symbols. But scientists intend for that to change, for information to be relayed at a far higher level of sophistication, and bi-directionally. What will we think of these messages then?

VII.

In the sixties, a considerable effort was made to deconstruct top-down models of communication. In Paulo Freire’s seminal book on education, Pedagogy of the Oppressed (1968), he decried the “banking” concept, in which students are thought to be empty vaults into which teachers deposit knowledge. Information, in this context, is a commodity: something of specific value that can be moved from place to place, unchanged.

Other thinkers had similar ideas, applied specifically to language. Linguist Michael Reddy, for instance, claimed that the metaphors we use when we talk about language (metalanguage) reflect an unconscious belief that it’s a perfect “conduit” for information. He argued against the notion that we can extract the “content” of a person’s consciousness, via words, and move it directly—deposit it—into someone else’s mind. Referring to an often-used phrase, Reddy noted, “we do not literally ‘get thoughts across’ when we talk, do we? This sounds like mental telepathy or clairvoyance, and suggests that communication transfers thought processes somehow bodily.” (In fact, I’d argue that all communication is intrinsically bodily, but that may be a sidebar.)

“Actually,” Reddy elaborates, “no one receives anyone else’s thoughts directly in their minds when they are using language.” Instead, he suggests, “Language seems rather to help one person to construct out of his own stock of mental stuff something like a replica or copy of someone else’s thoughts.” In other words, we filter all new ideas through the strange apparatus of our own subjectivity, understanding them—to the degree that we do—with the help of our particular powers of perception, personal history, expertise, and taste. In the process, we change the ideas.

And yet, thirty-five years on, almost nowhere is the conduit metaphor more prevalent than in papers on brain-to-brain interface. The Grau et al. article refers to “bits…successfully transmitted [emphasis added] mind-to-mind.” In referring to Brainets, Pais-Vieira et al. describe “information transfer between individual brains.”

There’s logic in this. In some ways, digital telepathy seems like an enactment of the banking concept—proof that information can, indeed, be directly transferred, discrete and intact, from one mind to the next, whether the data moved are flashes of light (a 0 or 1, no or yes), or whether, at some future time, they’re thoughts, feelings, or even skills and memories.

But to Reddy’s point, the information “imported” will still always then be inflected by the recipient’s selfhood; a composer downloading an ornithologist’s bird-watching memory will contextualize it differently than a zoologist (if composers and ornithologists even exist in the future).

A memory isn’t language. But language can happen in any symbolic exchange, externalized or otherwise, made with awareness and intention using a verifiable symbolic system: numbers, letters, signs, or something else we have yet to imagine.

Telepathy denotes shared consciousness. But maybe, in some sense, that’s what every act of language is. Whether signed or spoken or written or transmitted with the aid of machines, every collaboration, every instance of intersubjectivity, that is language. Not the words or gestures themselves, and not whatever “content” they may or may not contain, but the collective process of creation and deciphering.