Written by David Vandervort

Recently the New York Post put up a remarkable video of a man whose Parkinsonian tremors in his arms and legs were controlled by electrical current sent to electrodes in his brain from a unit in his chest (http://nypost.com/video/watch-as-parkinsons-treatment-calms-mans-tremors-in-seconds). There is currently a huge amount of research being done not just in medicine but in the area of medically beneficial devices that can be worn or inserted into the body. The well-known Fitbit wristband, which helps people track things like steps walked and energy used, and the wonderful Parkinson’s treatment in the video are merely the tip of the iceberg. These devices are the first steps toward the Singularity.

I should mention that I personally think that if the Singularity happens, it’s going to be a long time from now. I also think it’s very likely that none of our thinking about it so far comes anywhere near to the way things will work out. We can see the shape of how things will start, though, already around us.

The most common story about how people will grow into the Singularity goes something like this: People will have chips implanted in their brains that will expand their mental capabilities. These chips will allow people to access the Internet and download whole books directly into their brains, store appointments on the chip instead of trying to remember them, and do calculations so quickly and easily that they will never need a calculator again. This story is badly misguided: it presumes that a chip doing things our phones already do constitutes a breakthrough “enhancement” that will change the world. I don’t think so.

Even if people were clamoring to throw away their phones and download cat videos directly into their brains (Or whatever. Books, I said? Right, books) memory is a problem. Memory is a multi-step process involving many different parts of the brain; which parts are involved depends partly on what is to be remembered. When we are first exposed to something we might remember, it triggers associations in our minds. To a great extent we aren’t even aware of the associations. Most experiences don’t survive long enough to make it into long-term memory — in one side of the brain, out the other. In order to be remembered, things usually need some sort of hook or strong association that makes them worth remembering. Then there is the mysterious process known as consolidation, which seems to (mostly) take place when we sleep. Consolidation appears to integrate memories, linking them to others and fixing them in the brain. I mention all this to show that “downloading” and “learning” have nothing in common. Don’t confuse the one for the other.

So chips have a lot of work to do, most of which we don’t yet understand, before they can expand human capabilities in ways that a few good apps on a smart phone can’t. On the other hand, there may be things a chip can do — implanted strategically, maybe not always in the brain — that can be developed more quickly. Like I said above, medical applications are a good place to start looking. There are problems with current wearable medical devices. A wristband or key fob or necklace (or most any other external form factor) is often inaccurate for reasons completely beyond the control of the manufacturer. They bounce around, for one thing. In more technical terms, that means they have only sporadic contact with the subject, decreasing the quality of data collection. The batteries run out. People take them off to take a shower or when they go to bed and then, sometimes, forget to put them back on.
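To make the “sporadic contact” problem concrete, here is a minimal sketch of how software on the receiving end might cope with intermittent wearable data. The readings, the gap threshold, and the function name are all invented for illustration: the idea is simply that a long silence between samples means the band lost contact, so the data has to be split into trustworthy windows and the rest discarded.

```python
from statistics import mean

def clean_windows(samples, max_gap_s=5.0, min_window=3):
    """Split timestamped (t, value) heart-rate samples into contact windows.

    A gap longer than max_gap_s is treated as lost skin contact (the band
    bounced around or was taken off); windows with too few samples are
    discarded as unreliable.
    """
    windows, current = [], []
    for t, v in samples:
        if current and t - current[-1][0] > max_gap_s:
            if len(current) >= min_window:
                windows.append(current)
            current = []
        current.append((t, v))
    if len(current) >= min_window:
        windows.append(current)
    return windows

# Hypothetical readings with a dropout between t=4 and t=40 seconds.
samples = [(0, 72), (2, 74), (4, 73), (40, 90), (42, 88), (44, 91), (46, 89)]
for w in clean_windows(samples):
    print(round(mean(v for _, v in w), 1))  # -> 73.0, then 89.5
```

An implanted sensor would make most of this cleanup unnecessary, which is exactly the argument the paragraph above is making.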

Implanting a device in the body escapes most of those limitations. So far, very few fitness enthusiasts are so enthusiastic that they prefer direct implantation over a wristband. As time goes on and more applications are found that help people, such as Parkinson’s patients who want control back over their own limbs, both the number and variety of chips will increase. This is, shall we say, step one on the many-step journey toward a much more comprehensive and beneficial use of machine enhancement of humans. Call it the pre-singularity.

This first step is already underway. A recent survey by the Pew Research Center (http://pewinternet.org/2016/07/26/u-s-public-wary-of-biomedical-technologies-to-enhance-human-abilities) shows that, at least here in the U.S., people are not very welcoming of the idea of technologically enhancing the human body. This is understandable. Who wants to have a chip in their head only to have it hacked? The old story about a hypnotist making someone act like a chicken sounds positively quaint compared to a hacker who can make you forget your name, or think you remember a different one. The survey was limited, though, and the possibilities offered were geared more toward people’s vanity (“improved cognitive abilities”) than to their everyday concerns. In other words, when push comes to shove, someone who thinks brain chips are an abomination will still usually accept one that will keep them alive.

In a sense, that acceptance is step two of the pre-singularity. Step three is the crucial and, to me, most interesting one. When someone is in the hospital, their temperature, pulse, blood pressure and blood oxygen saturation are often closely monitored. This kind of physical data is also watched for astronauts and test pilots, through sensors in their pressure suits, so that ground crews can monitor their health. How long will it be before hooking someone up to telemetry means just plugging into a port on the back of their neck, or even giving a password to activate the on-board wifi?
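What would that telemetry actually look like on the wire? Here is one possible sketch of a vitals packet for the signs listed above. The field names and the JSON encoding are my own assumptions, not any medical standard; a real implant would almost certainly use a compact binary format to save power and bandwidth.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class VitalsPacket:
    """One telemetry sample. Field names are illustrative, not a standard."""
    timestamp: float
    pulse_bpm: int
    temp_c: float
    systolic_mmhg: int
    diastolic_mmhg: int
    spo2_pct: float

def encode(packet: VitalsPacket) -> bytes:
    # JSON keeps the sketch readable and easy to inspect on the receiving end.
    return json.dumps(asdict(packet)).encode("utf-8")

pkt = VitalsPacket(time.time(), 72, 36.8, 118, 76, 97.5)
wire = encode(pkt)
print(json.loads(wire)["pulse_bpm"])  # round-trips back to 72
```

The interesting engineering is not the packet itself but everything around it: authentication, encryption, and who is allowed to listen — the hacking worries from the Pew survey apply here with full force.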

Astronauts and test pilots are not the only people whose jobs may be so hazardous to their health that having someone keep a constant eye on their vital signs is a good idea. Firefighters, police officers, hospital ED workers and taxi drivers could be added to the list without even thinking very hard. This is a common pattern for technology. At first it is used by those with deep pockets. Then, after utility is proved and manufacturing costs come down, it spreads to segments of the general population whose needs dovetail with the provided services. Think of the way, a few years ago, it was considered unusual for a drug dealer not to have a pager. They adopted it because of what it could do for them. Likewise, biomonitoring chips will become common when their benefits outweigh their costs. This is why I am concentrating on health-related uses. Staying alive outweighs a lot of costs.

There is one more important step after people begin to adopt implants in order to protect their health. That is when manufacturers compete on features. Probably in less than 20 years (or between 25 and 50 years, if the FDA gets involved), monitoring simple signs like heart rate and O2 sat will be expected. People will demand more. If O2 saturation is down, what is the long-term trend? And is there something the chip can do, such as releasing medication from a pump located elsewhere in the body, to help? Automatic medicine pumps already exist. Keying them to immediate physical data is almost child’s play.

Long before chips are integrated with memory, they will be hooked up to other systems in the body. It will be interesting to watch the physical stats we’ve talked about become available in a heads-up-style display that exists only in the user’s brain. Google glasses? Who needs ‘em?

While nothing I’ve described here is as exciting as having a chip that doubles your effective IQ, these are still meaningful upgrades. Much of the technology (except for the virtual heads-up display) is already under development. These are the things we will see in a few years, and they will reduce the wariness the Pew survey found about more advanced tech wired into our bodies and our brains. Welcome to the pre-singularity.

www.SingularDTV.com

@singularDTV