What exactly are you trying to achieve with Kernel?

The present and the future of the human race is our brain. It's the master tool that writes books, that codes biology, that loves, that creates, that cooperates, that forms governments. Kernel is trying to understand and work with this thing that makes our decisions.

I’ve asked this question many times, over dinners in New York, San Francisco, and L.A.: “How are we going to thrive as a species 50 years from now? What things matter most?” People have the answers mapped out in their minds: “Well, climate science is important, education is important, our governance systems need to be good.” They go down their list of things they’re very familiar with. But what is always missing from that list? The brain. People don’t identify the brain as a critically important element of how we thrive as a species. But I don’t know why it wouldn’t be our obsession.

What makes you think we should make it an obsession now?

Now is the time because advancements in microelectronics and the science of getting things into the brain have made sufficient progress for a big jump forward. For example, there’s a company called NeuroPace that’s been working for the past 15 years on an implant to help people with epilepsy. Getting things into the brain that are biocompatible is really hard, and they figured it out. They put a foreign object into the brain that performs a function to help humans. That’s a huge step forward. Now the next stage is, how do you make microelectronics better? How do you make them smaller, how do you capture more data, how do you make them more capable? My bet on Kernel is that we’ve reached a point in neuroscience where in the next decade we will demonstrate we can make big inroads on disease and explore what we’re potentially capable of in improving functions.

That’s the hardware side. What about on the neuroscience side? Are there particular breakthroughs that give you confidence?

I guess you could put it one of two ways: you could say we’ve made enormous progress on understanding the brain, or you could say we have such a long way to go. But if you compare the current velocity rates of AI and neuroscience, and our ability to work with our own neural code, I think there’s an argument to be made that we should be interested in increasing the velocity of our own improvement. Otherwise we might get to 2027 and our machines are incredibly intelligent and we’re increasingly less intelligent relative to that.

That sounds pretty foreboding. What are the potential consequences if human intelligence doesn’t keep up with the intelligence of our tools?

Well, we don’t know. And I’m careful with this because storytelling creates a blueprint for creation. The current narrative that’s taken form around AI is “they’re coming to get us, we should be scared.” If I build AI, that is the dominant story in my mind. I think if we fall prey to that in neuroscience it could potentially form a poor construction. I’m motivated by the question “if we actually understood neural code, and we could write neural code, what could we become?”

I think there’s an opportunity to create a model of exploration in what we do. When JFK said, “Let’s go to the moon,” he didn’t say, “if we don’t go to the moon, the moon may fly away from Earth.” There was no fear there. It was just, “Awesome! We’re going to do this!” And everyone was like, “Yeah, we’re going! Why? We don’t know, but we’re doing it!”


How far are you from having a product?

We’re not just at the drawing board in neuroscience. There are quite a few demos out there. BrainGate recently came out with a demo where a woman who is quadriplegic could move a cursor on the screen and type eight words per minute with an implant. These things are already going on; they’re already in humans. And we’re building upon those contributions.

What are your thoughts on Elon Musk’s “neural lace”?

Based upon what I’ve read about what Elon is doing, I think he shares a similar sentiment that understanding and working with the human brain is critical to the future of the human race. I couldn’t be more excited that someone of his caliber has chosen to join in the effort. I really hope he succeeds as we both try to achieve the velocity of progress we desire. As far as I can tell, Kernel and Neuralink are the only two companies who maintain such ambitions and level of funding.

Have you encountered opposition to the idea of using a brain implant to enhance human intelligence?

I recently hosted a dinner with about a dozen top screenwriters from Hollywood. I was working with them on narrative creation. I asked them what they thought about this technology, and there were a couple of reactions in the room. One person said, “You give me a technology such as this, the first place I’m going is Russian hackers and loss of identity.”

If politics are any indication of which story sells, it’s fear. I worry that if you introduce enhanced [human intelligence] into our environment, people are going to create the most scandalous narrative possible, which could have a negative consequence on the development of a technology that could be helpful for many people.

Have you thought about how to make sure this technology is available to everyone? You could imagine a situation where a brain implant is expensive, and income inequality becomes cognitive inequality.

That’s without question the case. But human augmentation is not a new concept. We do a form of cognitive enhancement today in the form of private education: if a child is sent to a private boarding school, that is a form of cognitive enhancement they have over a person who goes to school in the slums. We just don’t call it that. The difference between that and the new things we’re doing is that the tools of inequality are becoming more powerful. If someone can enhance an embryo to become more intelligent, that is potentially a more powerful form of enhancement than sending someone to a private boarding school. And then if you have neurological enhancement, a prosthetic, that could be even more powerful.

In every company I invest in, the objective is that the company enable billions of people to benefit. The printing press helped billions. Computers have helped billions. But there are always constraints in making a product broadly available. If you think back to recent examples like the cellphone or the Internet, and ask who got access and how long it took, there’s always an adoption curve that happens.


But can you share any details on what you’re doing to make sure the technology will be available to billions?

No, not yet. I can just say it’s foremost on our minds.

You have several big-name scientists, including Craig Venter, involved with you at Kernel. How do you convince people like that to work with you?

There’s a pretty limited number of people in the world who, one, are future-literate and, two, have the ability to make something happen. One thing that’s kind of concerning is that if you look at the discussion in our government right now, the things that dominate our attention are tweets and immigration bans, or insurance coverage. Those are important. But they’re not on the scale of, “Hey guys, we can see some technologies forming that could be extremely consequential to the human race.” When you find somebody like Craig who gets this, I think there’s a shared interest to say “You get it, I get it, we both get it,” and then act upon it.

What do you mean by future-literate? Are you and Venter seeing something in the future that people in government are missing?

There’s a tsunami of change coming. It’s coming through biology, and genetics, and neuroscience, and AI, and quantum computing. This change is coming at a pace and will have a level of significance that’s very hard for us to fully grasp.

Tsunami is a scary word.

Well, tsunami is an appropriate word, even though I don’t want to come from a fear-based approach. But it’s the case that if we don’t realize this and talk about it, and prepare for it, it could actually be kind of devastating.

But I don’t know. I really dislike fear-based narratives. Al Gore tried to ring the alarm bell on climate science, and he chose a particular narrative, and it just shows that how you select a narrative to persuade people is important. With climate science you see some good trends, but they’re around the edges. As a global society we’ve been ineffective at cooperating on this massive problem. I think the clear lesson is that we have a predictable failure of the commons. Because as humans we’re currently wired to care more about the present than we do about the future. And that is a fundamental flaw in our programming.

What can you do about that?

Well, what if you could rewrite neural code? We’re already trying to change ourselves through self-improvement programs. We try to do it through persuasion of speech. We’re already trying to reprogram each other. What if you could just go to the source code?

It sounds like you’re saying not just that being able to do that would be cool, but that we’re in big trouble if we don’t.

My core argument is that we acknowledge working on the brain is desirable, but we have yet as a species to acknowledge it as important. Currently we are the most intelligent species on planet Earth, and we reign ruthlessly with our intelligence. We decide who lives, who dies, who goes extinct, who is safe, who we eat and who we have as pets. And also we are giving birth to a new form of intelligence. What I’m trying to say is not only is [working on the brain] very desirable, it’s extremely important.

Because intelligence is the most powerful and precious resource. In our current relationships with other forms of intelligence, like when we interact with animals, it’s not a negotiation about what happens to that animal. We don’t negotiate with cows or dogs or chickens or pigs, we kind of just do whatever we want with them.

I’m not suggesting that AI is coming to get us. I’m saying that it’s a really important question to answer. We would be smart to be working on our form of intelligence with just as much excitement and enthusiasm as we are working on AI.