With Elon Musk aiming to build brain implants so people can communicate telepathically, fMRIs already (approximately) reading minds, under-the-radar companies working on computer chips to control brain activity that generates intentions, and technologies promising to boost brain performance like Bradley Cooper’s in “Limitless,” it might seem like neuroscience has become neurofiction. But the advances, and the threats they pose, are all too real, experts warned on Wednesday.

In an essay in Nature, 27 neuroscientists, physicians, ethicists, and artificial intelligence experts argue that these and other powerful “neurotechnologies,” originally conceived to help people who are paralyzed or have other neurological disorders, could “exacerbate social inequalities and offer corporations, hackers, governments or anyone else new ways to exploit and manipulate people.”

Science, they say, is halfway around the world before ethics has laced up its sneakers.


Calling themselves the Morningside Group, the experts conclude that ethics guidelines for experimenting on people and developing artificial intelligence don’t even acknowledge the dystopian possibilities of neurotech. Whether the issue is privacy (will devices be able to read your thoughts as you walk around, as cameras now capture your image?) or autonomy (will devices that read thoughts and “autofill” what you want to do next make people feel their free will has been hijacked?) or other issues, “the ethical thinking has been insufficient,” said Dr. Rafael Yuste of Columbia University, a neuroscientist who co-authored the essay. “Science is advancing to the point where suddenly you can do things you never would have thought possible.”

Some of the most ambitious, and possibly threatening, neurotech might never arrive, of course. If so, it won’t be for lack of trying. Musk isn’t the only zillionaire sinking money into the field.



Last year, Bryan Johnson took $100 million of the $800 million he got from selling his online payments company Braintree to eBay and started Kernel, which he describes as “a human intelligence company” aiming “to develop the world’s first neuroprosthesis for cognition.” Facebook is going full steam ahead on a “thought-to-typing” system to sense people’s brain waves, decipher the intended words, and type them. Current spending on neurotechnology by companies looking for a profit is $100 million per year “and growing fast,” the Morningside Group estimates.

Furthest along are technologies to sense and decipher brain waves (this pattern means the person is thinking of a car, this pattern means she’s thinking of a hamburger). This “reading” of the brain could soon be possible through helmets and other noninvasive, even remote, devices. Scientists in Germany, for instance, used sensors to decode brain activity associated with intentionality precisely enough to send accurate “move this way!” commands to a robot, making it possible to “interact with a robotic service assistant … using only thoughts,” they reported in unpublished research.

“If you can read out the activity of people’s brains and decode it, that’s a much bigger threat to privacy than having your texts or emails hacked,” Yuste said in an interview.

Because “citizens should have the ability — and right — to keep their neural data private,” the Morningside Group writes, “neurorights” should be incorporated into national laws as well as international pledges such as the Universal Declaration of Human Rights.

Eavesdropping on thoughts is only the beginning of the alarming possibilities. Researchers are going beyond reading the brain to “writing” it, or activating neurons with an external device in a way that alters circuits, controls thought, and even implants memories.

The U.S. Defense Advanced Research Projects Agency, for instance, launched a project this year to develop a wireless device that monitors brain activity using 1 million electrodes. That could decipher brain signals that move different parts of the body, then play them back so that paralyzed people could move again. “But the ultimate goal is to build an integrated circuit chip that you could implant into the brain and ‘write’ activity into it,” Yuste said.

In his own research, Yuste has used one of neuroscience’s most stunning recent advances, optogenetics, to “reprogram” mice’s brains to “make them believe they saw something they never saw.” Optogenetics genetically alters neurons so that when light shines on them the neurons fire. Yuste and his colleagues used light to repeatedly stimulate neurons in mice’s visual cortex, which processes signals from the eyes. Neurons that fire together wire together, forming functional circuits to, for instance, encode information. In this case, the information was a memory of seeing a particular object, they reported last year. And in an unpublished study, Yuste found that after optical stimulation formed the new circuit in mice’s brains, stimulating just one of the neurons that were part of it caused the entire circuit to fire. The animals “remembered” an experience they never had.

People would have to freely choose to wear devices that stimulate neurons; such devices are expected to be marketed as ways to enhance cognition and boost memory. “But if people believe the devices give them an edge, everyone would jump into the technology,” Yuste said. That opens the door to implanting memories, “and before you know it, there goes your identity and your agency.”

Neurotech also threatens a loss of people’s core sense of identity and autonomy. Implanted devices that promise cognitive enhancement “can become something like a ‘third party’ in the head of a person,” said bioethicist and Morningside member Sara Goering of the University of Washington. “A semi-autonomous device combined with machine learning [could] figure out how to not only read but anticipate what the user intends to do and help make the action a reality. That could be very helpful for smooth, integrated functioning when it’s working well,” but might lead to disturbing thoughts such as, “Did I do that? Or did the device make me do that?”

Although Facebook’s “thoughts-to-type” system would be external and not implanted, it could include an auto-complete or auto-correct function, company scientists have said. If a user starts thinking about what words to use in a post, but the system jumps in to finish the thought or alter it for “accuracy,” people might no longer recognize an action or thought as their own.

How likely is the advent of technologies to read and write the brain? “Very likely,” Yuste said. “The question is, how soon?”