In a recent interview, Elon Musk talked about AI safety. Here are his (abridged) words:

Something I think is going to be quite important — I don’t know of a company that’s working on it seriously — is a neural lace. If you assume any rate of advancement in AI, we will be left behind by a lot… …I think one of the solutions… the solutions that seems maybe the best one, is to have an AI layer. If you think of the limbic system, your cortex, and then a digital layer, so a third layer above the cortex that could work well, and symbiotically, with you. I mean just as your cortex works symbiotically with your limbic system, your sort-of a third digital layer could work symbiotically with the rest of you.

Elon Musk is talking, of course, about brain enhancement. He also talks about some of the engineering difficulties involved:

The fundamental limitation is input/output. We’re already a cyborg… but we’re I/O bound. Particularly output bound… our input bandwidth is much better because we have a high-bandwidth visual interface.

(Watch the video to see the full quote; it’s worth it, in my opinion.)

I’ve actually been thinking along these lines for a long time. He’s 100% right that the main limitation is I/O bandwidth, particularly output bandwidth. It’s hard to put exact numbers on it, but it’s fair to say that the input from the eyes to the brain is on the order of a few megabits per second. Now compare this to output bandwidth. Our most common way of producing output is speech, which has a bandwidth of around a few hundred bits per second. Typing on a keyboard is another way of producing output; for most people it’s somewhat slower than speech, around a few tens to a hundred bits per second. That’s a factor of roughly 10,000 slower than vision. Is there a way to get output bandwidth closer to megabits per second?
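To make the disparity concrete, here is a back-of-the-envelope comparison. The figures are just the order-of-magnitude estimates from the discussion above, not measurements:

```python
# Rough order-of-magnitude bandwidth estimates (bits/second),
# taken from the discussion above -- not precise measurements.
VISION_IN = 3_000_000   # visual input: a few megabits per second
SPEECH_OUT = 300        # speech output: a few hundred bits per second
TYPING_OUT = 100        # typing: a few tens to ~100 bits per second

print(f"vision vs. speech: {VISION_IN / SPEECH_OUT:,.0f}x")  # ~10,000x
print(f"vision vs. typing: {VISION_IN / TYPING_OUT:,.0f}x")  # ~30,000x
```

Whatever exact numbers you plug in, the input/output asymmetry stays around four orders of magnitude.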

You might ask: why not try some form of output that uses not just the mouth and the fingers but the whole body? Doing that wouldn’t yield much more output bandwidth, because our fingers and speech-production mechanisms already account for a large share of the total external bandwidth available to the brain. So the only way to really get past the output bandwidth limitation is to capture signals directly from the brain itself.

Non-invasive brain-signal capture techniques probably won’t suffice. The most common is EEG, which I’ve worked with for several years and which is extremely limited: current EEG systems fall far short of keyboard typing speed, and far, far short of speech. And EEG is not completely hassle-free either. MEG is a newer technology that’s less invasive than EEG but comparable (or worse) in terms of bandwidth. Technologies like fMRI do exist that offer good bandwidth (up to perhaps 10,000 bits/second seems feasible), but they are incredibly bulky and impractical for anything but a specialized laboratory environment. And they, too, fall far short of the megabits-per-second target.
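The gap can be summarized in a small lookup. The bandwidth figures here are loose illustrative ceilings consistent with the discussion above (EEG/MEG below typing speed, fMRI around 10,000 bits/second), not benchmarks:

```python
# Approximate bandwidth ceilings (bits/second) for the capture methods
# discussed. Values are illustrative assumptions, not measurements.
capture_methods = {
    "EEG":  ("non-invasive (mostly)",      50),      # below typing speed
    "MEG":  ("non-invasive, comparable",   50),      # like EEG, or worse
    "fMRI": ("non-invasive, lab-only",     10_000),  # bulky scanner required
}
TARGET = 3_000_000  # a few megabits/second, matching visual input

for name, (notes, bps) in capture_methods.items():
    shortfall = TARGET / bps
    print(f"{name}: ~{bps} b/s ({notes}), {shortfall:,.0f}x short of target")
```

Even the best non-invasive option in this sketch is hundreds of times below the visual-input target.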

So the only way to achieve the target is invasive (surgically implanted) probes. Imagine a meshwork layered very carefully on top of the cerebral cortex. Presumably, future technology would allow placing this meshwork in a minimally invasive manner.

Anyway, once you have access to the brain, you then face the issue of producing probes that can capture information from the brain reliably over a long period of time. The good news is that the cerebral cortex is exposed at the surface, so you could get very high bandwidth just by laying out your probes on the surface of the brain – no deep insertion required. The bad news is that brain tissue is very soft and vulnerable, and you could easily wind up cutting or bruising it.

And even if you do circumvent those issues, you need to find a way to get your body to ‘accept’ the implants. Normally, when external objects are implanted into the body, the body coats them with layers of insulating material to protect its tissues. This is fine for things like pacemakers and hip implants, but it is disastrous for brain implants, because the coating would prevent the implants from reading signals produced by neurons. This is an area of active research, and people are working on ways to create more biocompatible materials. One interesting approach uses a very fine micro-scale metallic mesh into which cells taken from your own body are allowed to grow.

So producing reliable, effective, minimally invasive neural implants is hard. Still, it sounds like a solvable problem, with people already coming up with better and better methods.

Once we have such devices, the question becomes: How do we use them? This is an interesting and very much open question. And it all boils down to what information you can ‘send’ via the probe.

Any electrical probe can be used both for output from the brain and for input into the brain. Input into the brain opens up many issues, both technical and ethical. To keep this discussion short, let’s consider only a read-only probe system that passively ‘reads’ your cortex. What sort of information could you output with such a system?

The cerebral cortex deals with sensory, motor, language, planning, and higher-level thought functions. An advanced probe system could presumably read your high-level thoughts: if you imagine a dog, or a game of basketball, or the concept of taking an integral, it would recognize these. Designing the data-analysis system needed for this, however, is far beyond current science, so let’s stick with things we know how to do. We have a fairly good ability to probe the sensorimotor regions of the brain and correlate activity in these regions with specific input stimuli or muscle actions. A probe system could read your sensorimotor areas, in particular your visual cortex, with the result that you would have a two-way visual connection with the outside world. You could see objects and process them into higher-level neural patterns via your normal visual-processing route, and you could also imagine higher-level concepts, form visual pictures of them, and have the computer read those pictures. This opens up a new set of possibilities for communicating with a computer, where the primary mode of information transfer would not be textual (as it is now) but visual.
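As a toy illustration of what ‘reading’ imagined pictures might look like computationally, here is a sketch of template matching: compare a recorded activity pattern against stored patterns for known concepts and pick the closest. Every name, vector, and number here is invented for illustration; real neural decoding is vastly harder than this.

```python
import math

# Toy sketch of decoding by template matching. Each "template" stands in
# for a stored activity pattern recorded while the subject imagined a
# known concept. All values are invented for illustration.
templates = {
    "dog":        [0.9, 0.1, 0.4, 0.7],
    "basketball": [0.2, 0.8, 0.9, 0.1],
    "integral":   [0.5, 0.5, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two activity vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def decode(recording):
    """Return the concept whose template best matches the recorded pattern."""
    return max(templates, key=lambda k: cosine_similarity(templates[k], recording))

# A noisy "recording" taken while the subject imagines a dog:
print(decode([0.85, 0.15, 0.35, 0.75]))  # dog
```

The hard part, of course, is not the matching but obtaining stable, informative activity patterns in the first place, and doing so for an open-ended space of concepts rather than a fixed dictionary.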