In conclusion, I would anticipate that we first map the primary sensory and motor areas, then move on to multimodal regions (integrating the senses), and finally to the prefrontal cortex and the limbic system. Such a strategy would enable us to slowly but surely shift between the gears of telepathy. Our structural and functional understanding of primary sensory/motor areas is profoundly more detailed and accurate than our knowledge of any other section of the cortex, which also supports this incremental approach. Moreover, it is a strategy on which industry and the scientific community can converge: direct sensory stimulation thrills the industry, while research will likely focus on the step-by-step functional mapping of brain regions, approximating them with deep learning models. As we have no explicitly defined symbols for fuzzy concepts, we will probably reverse engineer prefrontal activity into the lower-level sensory features that have been associated with the given concepts in the past, and use such derived patterns to predict prefrontal activity in another (target) mind; aka telepathy in the third gear.
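To make the third gear a bit more concrete, here is a toy sketch of that last idea, assuming we somehow had paired recordings: one regression maps a source subject’s prefrontal activity into lower-level sensory features, another maps those features onto a target subject’s prefrontal activity, and their composition is the transfer. All dimensions and data below are made up.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Made-up dimensions: 200 trials, 500 prefrontal channels per subject,
# 64 lower-level sensory features assumed to be shared across subjects.
n_trials, n_pfc, n_sensory = 200, 500, 64

pfc_source = rng.standard_normal((n_trials, n_pfc))    # source subject's prefrontal activity
sensory = rng.standard_normal((n_trials, n_sensory))   # sensory features co-recorded with it
pfc_target = rng.standard_normal((n_trials, n_pfc))    # target subject's prefrontal activity

# Stage 1: "reverse engineer" prefrontal activity into lower-level sensory features.
to_sensory = Ridge(alpha=1.0).fit(pfc_source, sensory)

# Stage 2: predict the target subject's prefrontal activity from those features.
to_target = Ridge(alpha=1.0).fit(sensory, pfc_target)

def transfer(pfc_activity_source):
    """Telepathy in the third gear, as the composition of the two mappings."""
    features = to_sensory.predict(pfc_activity_source)
    return to_target.predict(features)

print(transfer(pfc_source[:1]).shape)  # (1, 500): predicted target prefrontal pattern
```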

Sight restoration

In the realm of ethically unambiguous applications, I’m most excited about aiding the blind. My 5-day-long blindfolded experience gave me a marginal insight into the life of the visually impaired; even such a short exposure made me reassess the value of vision and our fundamental dependence on it.

As the types of blinding diseases and corresponding physiological alterations are numerous, finding an all-encompassing cure seems unattainable; unless you have a high-bandwidth link to the primary visual cortex (V1), in which case you happen to skip the major source of variability in said diseases: the eye. However, the visual cortex can suffer apathy and cortical reorganization from the lack of visual experience, to the extent that the damage is partially or completely irreversible. This is the case for the congenital and early blind, who were born visually impaired or lost their eyesight before the age of 6. For them, the functional connectivity of the visual cortex resembles the sighted’s only coarsely: the rough polar coordinate system of eccentricity and angle is somewhat preserved, yet the fine, small-scale connections are not. Without any visual input to work on, V1 becomes a bitch to other modalities, doing dirty post-processing work for them, while they get lazy and lose neural tissue in the process. We may be able to restore neural tissue by stimulation and kickstart overarching plasticity after the early years have passed, but it is doubtful we could fully establish vision in a 50-year-old congenitally blind person: a total lack of visual experience forms a particular mind that cannot, for instance, think in allocentric terms.

Nevertheless, the late blind have a fair chance of regaining actual visual experience. Even hacky approaches like visual-to-auditory sensory substitution — representing visual imagery as sound — led to enhanced visual experience for two late blind users. Second Sight has taken up vision restoration, developing two product lines: the Argus II retinal implant and the Orion visual cortical prosthesis, the latter of which stimulates V1 subdurally and has been shown to evoke phosphenes in the late blind in a spatially consistent manner.
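As a toy illustration of the cortical route, here is a minimal sketch of how a camera frame could be reduced to per-electrode stimulation amplitudes for a V1 array. The grid size, the retinotopic ordering, and the amplitude range are my assumptions for the sake of the example, not Orion’s actual parameters.

```python
import numpy as np

def frame_to_stim(frame, grid=(10, 6), max_amp_ua=100.0):
    """Downsample a grayscale frame (H x W, values 0..255) onto a hypothetical
    10x6 electrode grid and scale brightness to stimulation amplitude (uA)."""
    h, w = frame.shape
    gh, gw = grid
    # Average brightness inside each grid cell (crop so the frame divides evenly).
    cells = frame[: h - h % gh, : w - w % gw].reshape(gh, (h - h % gh) // gh,
                                                      gw, (w - w % gw) // gw).mean(axis=(1, 3))
    return cells / 255.0 * max_amp_ua  # brighter region -> stronger phosphene (an assumption)

frame = np.random.randint(0, 256, (120, 160)).astype(float)
print(frame_to_stim(frame).shape)  # (10, 6) amplitudes, one per electrode
```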

I believe it is worthwhile to exploit sensory substitution studies in the quest for sight restoration. Training the early blind to acquire a brand new sense is analogous to learning audio representations of visual information — although the interface is different (V1 vs audio), both require thorough neural plasticity to take place. Building on the sensory substitution literature, we should assemble a comprehensive training procedure for early blind sight restoration using as much immediate, multimodal feedback as possible. Just some ideas:

Provide immediate feedback through other sensory inputs to consolidate plasticity: woven textures to touch, haptic screens, solid objects in hand for 3D learning.

Start with simple contrasts (lines, triangles, rectangles), textures (square and hexagonal lattices), objects (platonic solids), and simple directional motion; put off complex natural scenes until the final stages of the training.

Use depth detection to delineate parts of the visual field that are too far away to be touched, and remove them from the image (see the sketch after this list). The early blind need haptic or auditory feedback to consolidate the information stimulated onto their visual cortex: if an object in the scene cannot be reached, the association between its shape (haptic) and its visual 2D representation cannot be made. Increase the allowed visual field depth as the training proceeds.

Synthesize audio from the visual stimulus (as in visual-to-auditory sensory substitution) and play it during stimulation as an artificial auditory side dish to the stimulated image.
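Here is a minimal sketch of the depth-gating idea from the list above, assuming we get a camera frame and an aligned depth map from something like a stereo or time-of-flight sensor; the reach threshold and its training schedule are invented for illustration.

```python
import numpy as np

def mask_beyond_reach(frame, depth_m, reach_m):
    """Zero out pixels farther than the current reach threshold, so only
    touchable content is forwarded to stimulation."""
    masked = frame.copy()
    masked[depth_m > reach_m] = 0
    return masked

def reach_schedule(session, start_m=0.8, step_m=0.2, max_m=5.0):
    """Grow the allowed visual-field depth as training proceeds (assumed schedule)."""
    return min(start_m + session * step_m, max_m)

frame = np.random.randint(0, 256, (120, 160)).astype(float)   # stand-in camera frame
depth = np.random.uniform(0.3, 6.0, (120, 160))                # stand-in depth map in meters
print(mask_beyond_reach(frame, depth, reach_schedule(session=3)).mean())
```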

All in all, the path to stimulating sight into the late blind seems relatively straightforward, while the training of the early blind promises to be less trivial. In both cases, there are many moving parts that I ignored here: e.g. I assumed that stimulation is applied to V1, but one could access higher visual areas instead and get away with fewer electrodes, potentially conveying the same amount of visual information. Moreover, we don’t know how much the stimulation in itself matters compared to the internal predictions, and correspondingly, to the visual experience of the mind. The majority of the input signal arriving at the visual cortex is generated internally; external stimuli are overwhelmed by the internal predictions of the mind, which fill up the visual space with imagined content. This sounds counterintuitive given our everyday experience, yet it’s all too apparent under the influence of psychedelics, for instance.

Ambitious applications

Several potential applications have been discussed by the founder of Kernel, Bryan Johnson. They mainly revolve around fixing cognitive biases [?], measuring cognitive power and effort [✓], enhancing creativity [✓] or reading comprehension [✓], substituting caffeine to maintain vigilance and arousal [✓], and feeding other minds’ sensory inputs or states into our own to walk in their shoes [?].

Eliminating the cognitive biases of high-impact decision makers is indeed a desirable goal. However, the removal of biases will inherently introduce variance into the possible set of actions we can take, or into the variety of choices we will have to cycle through before arriving at a decision. One may argue that most of our cognitive biases are outdated, evolution having a hard time catching up; but in our everyday life, some are useful, or at least prevent us from spending cognitive power on finding optimal solutions when a sub-optimal yet practically comparable one is already within reach, encoded in us.

Let’s take an example: how would one go about correcting loss aversion? What amount of additional value would you assign to the positive outcome compared to the potential loss? Even with access to brain patterns and surrounding stimuli, the brain interface falls far short of the context needed to “unbias” the human in an objective manner. Blindly encouraging risk-taking behavior is an obvious no-no, while gathering more context about the sparse, unique situations people wind up in just seems intractable. In other words, we can only introduce further cognitive bias, which, if done right, could oppose our innate biases. However, the magnitude of biasing can hardly be determined objectively without comprehending broad contextual information external to the mind being recorded. An alternative plan of attack would be to measure the amount of influence subcortical regions (like the amygdala, responsible for, e.g., aggressive responses, among other emotion-fueled behaviors) have on our motor outputs, including both action and speech. Once measured, the user could be notified when the actions currently taken are prone to be limbically biased. I’m more in favor of such an approach, which makes us more aware, instead of one that just inhibits limbic control.
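As a rough sketch of that “notify, don’t inhibit” idea: assuming we could record an amygdala activity envelope and a motor-output envelope at the same rate, a sliding-window correlation could serve as a crude limbic-bias indicator. The signals, window length, and threshold below are all invented for illustration.

```python
import numpy as np

def limbic_bias_index(amygdala, motor, window=200):
    """Sliding-window Pearson correlation between an amygdala activity envelope
    and a motor-output envelope; values near 1 mean motor output tracks limbic drive."""
    idx = []
    for start in range(0, len(amygdala) - window + 1, window):
        a = amygdala[start : start + window]
        m = motor[start : start + window]
        idx.append(np.corrcoef(a, m)[0, 1])
    return np.array(idx)

rng = np.random.default_rng(1)
amygdala = rng.standard_normal(2000)
motor = 0.6 * amygdala + 0.4 * rng.standard_normal(2000)  # partly limbic-driven motor output
bias = limbic_bias_index(amygdala, motor)
if (bias > 0.5).any():  # arbitrary threshold
    print("heads up: your current actions may be limbically biased")
```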

Rapid motor learning [✓] Appropriating the Matrix trilogy, brain interface evangelists tend to rave about how the motor control skills of a martial arts expert will be uploaded to our minds in no time. These empty claims make me think of my chiropractor/puzzle games salesman buddy, who once told me that some excessively fat people may even forget how to consciously drive individual muscles in the abdomen or the arms. Not that those muscles don’t exist or function; it’s just that their coding in the motor cortex is likely entangled with the movements of other muscles, as they are mostly used in conjunction, if at all. Yet, sportsmen do go through cortical motor plasticity while practicing: they build muscle and alter the neural connectivity that drives that muscle. Such plasticity takes time, which is my point; it can’t just be uploaded. Alternatively, a viable path would be to accelerate motor learning by exciting motor neurons that have previously led to a dopamine rush (an internally validated successful move), or to 3D scan the user’s movements, fit them to the desired movement, and perform subsequent brain stimulation according to the adjustments that need to be made. One could also practice dangerous, complicated movements in a virtual world by just imagining moving the muscles: do a backflip in VR before executing it in real life.
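A minimal sketch of the scan-compare-correct route, assuming we have the user’s joint angles over time and an expert’s target trajectory sampled at the same rate: the per-joint error is what subsequent stimulation (or a VR overlay) would try to shrink. Everything below is synthetic.

```python
import numpy as np

def correction_signal(user_traj, target_traj):
    """Per-timestep, per-joint angular error (degrees) between the user's recorded
    movement and the desired movement; large values mark where feedback is needed."""
    return target_traj - user_traj

rng = np.random.default_rng(2)
t, n_joints = 100, 12                                   # 100 samples, 12 tracked joints (made up)
target = np.cumsum(rng.standard_normal((t, n_joints)), axis=0)   # smooth-ish expert trajectory
user = target + rng.normal(0, 2.0, (t, n_joints))       # noisy attempt at the same move

err = correction_signal(user, target)
worst_joint = np.abs(err).mean(axis=0).argmax()
print(f"joint {worst_joint} deviates most; focus feedback there")
```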

Learning languages [✓] Equivalently, I doubt we could just upload the ability to speak foreign languages without active, conscious involvement from the user over a longer period of time. Though we can definitely fake it first by decoding speech center activity into words and running machine translation on said words (e.g. speak English internally, and the brain interface translates it to Chinese). Translated expressions could then be fed back to the speaker either as audio input or as motor cortex stimulation driving the vocal tract to pronounce the desired phrases. Such immediate feedback would hasten language learning, be it lexical or spoken, as associations between symbols of different languages can be built on the fly without much of a temporal delay. Just imagine thinking of a word and mouthing the translated version of it immediately. It’s way easier than flipping through dictionaries. Neural connections can thus be built between an expression and the retrieval and pronunciation of the corresponding foreign word. These connections will function even when the brain interface is detached.
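The closed loop described above, as a sketch with the two hard parts stubbed out: `decode_inner_speech` and `translate` are hypothetical stand-ins for a speech decoder and a machine-translation model that would have to exist; only the wiring is real here.

```python
def decode_inner_speech(neural_window):
    """Hypothetical stand-in for a decoder mapping speech-center activity to text."""
    return "where is the train station"

def translate(text, target_lang="zh"):
    """Hypothetical stand-in for a machine-translation model (toy lookup table)."""
    lookup = {"where is the train station": "火车站在哪里"}
    return lookup.get(text, text)

def feedback(translated):
    """Feed the translation back immediately, as audio or motor-cortex stimulation."""
    print(f"[earpiece / vocal tract stimulation] {translated}")

def language_loop(neural_window):
    # Think in English, hear (or mouth) Chinese with minimal delay.
    feedback(translate(decode_inner_speech(neural_window)))

language_loop(neural_window=None)
```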

Some applications from the Wait But Why article on Neuralink: surgeon scalpel as an 11th finger [✓]; decouple sensory experience from the physical world, eating the cake and having it too (or more like not having to have the cake, yet still eat it) [✓]; extinguish pain [✓]; see in the dark by delivering infra cam frames onto the visual cortex [✓].

App-to-brain adaptation [✓] It’s simple: after enough interactions with an application (mobile or IoT), the intent of using its different functionalities, and the corresponding neural patterns, can be recorded and recognized later on. Swiping on toxic dating apps won’t burn your thumbs anymore, Facebook and the fridge will open automatically, and you will take selfies at the right moment as seamlessly as you’ll spew emoji compositions matching your exact thoughts and feelings. I’m pretty sure we could find some meaningful use-cases too. The implementation might not be that simple: at the very least, we would need to spatially bias the learning models that associate brain activity with app functions; e.g. the intention to swipe should be extracted from the motor cortex.
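A minimal sketch of that spatial-bias point: instead of feeding all channels to the model, restrict the features to channels assumed to sit over the motor cortex before training an intent classifier. The channel layout, features, and labels below are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_trials, n_channels = 300, 64
motor_channels = np.arange(20, 36)                 # assumed indices of motor-cortex channels

X = rng.standard_normal((n_trials, n_channels))    # e.g. band-power features per channel
y = rng.integers(0, 2, n_trials)                   # 1 = "swipe" intent, 0 = rest (synthetic)

# Spatial bias: the swipe detector only ever sees motor-cortex channels.
clf = LogisticRegression(max_iter=1000).fit(X[:, motor_channels], y)

def app_action(features):
    return "swipe_right" if clf.predict(features[motor_channels].reshape(1, -1))[0] else "idle"

print(app_action(rng.standard_normal(n_channels)))
```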

Some more illustrated

How to teach mice to play Doom: 1/A put the mouse on a ball and place a screen in front of it, or 1/B overwrite its tiny visual cortex with the gameplay; let it move around (or imagine moving around), and decode the activation of its place and grid cells into the corresponding position on the game map. 2 Map a specific motor movement to the firing of the gun, and 3 induce a dopamine surge conditioned on a successful kill.
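A toy sketch of the decoding part of step 1/B: if each place cell has a preferred location on the game map, a firing-rate-weighted average of those locations gives a crude position estimate. The cell count and tuning below are made up.

```python
import numpy as np

rng = np.random.default_rng(4)
n_cells = 50
preferred_xy = rng.uniform(0, 100, (n_cells, 2))   # each cell's place field center on the map

def decode_position(firing_rates):
    """Population-vector estimate: firing-rate-weighted average of place field centers."""
    w = firing_rates / firing_rates.sum()
    return w @ preferred_xy

# Mouse sits near (30, 70): cells with nearby fields fire more (Gaussian tuning, sigma 15).
true_xy = np.array([30.0, 70.0])
dist = np.linalg.norm(preferred_xy - true_xy, axis=1)
rates = np.exp(-(dist ** 2) / (2 * 15.0 ** 2)) + 0.01
print(decode_position(rates))   # crude estimate, roughly in the neighborhood of (30, 70)
```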

Slow down perceptual time = speed up perceptual processing. Given that information transfer on silicon is orders of magnitude faster than on electrochemical neural pathways, silicon shortcuts between brain regions could accelerate cortical communication, shortening reaction times, for instance.
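A back-of-the-envelope version of the claim: axons conduct on the order of 1 to 100 m/s, while electrical signals in a wire travel at a sizeable fraction of the speed of light, so over a 10 cm intracortical path the wire is effectively instantaneous. The numbers below are order-of-magnitude illustrations, not measurements.

```python
path_m = 0.10        # ~10 cm path between two cortical regions
axon_speed = 10.0    # m/s, a mid-range axonal conduction velocity
wire_speed = 2.0e8   # m/s, roughly 2/3 of the speed of light in a conductor

print(f"axon: {path_m / axon_speed * 1e3:.1f} ms")   # ~10 ms
print(f"wire: {path_m / wire_speed * 1e9:.2f} ns")   # ~0.5 ns
```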

Mind memes.