Psychokinesis? No, But Neurable’s Brain-Computer Interface Technology Might Still Surprise You

Ramses Alcaide is a busy man—a very busy man. Since last fall, the 31-year-old founder of Neurable has raised $2 million in seed money, relocated from Ann Arbor to Boston, and doubled the size of his now 10-person team. For fun, he’s also completed his Ph.D. in neuroscience at the University of Michigan!

Alcaide hasn’t slacked off from his day job either: developing advanced software to control computers and other devices with nothing more than brainwave activity.

While it might sound magical, it’s actually quite real.

Captured by a headband, glasses, or goggles with built-in electroencephalography (EEG) sensors, and interpreted by Neurable’s machine learning software, the brain’s electrical activity could soon control everything from virtual reality (VR) experiences for gamers to augmented reality (AR) solutions for the military.

The possibilities are intriguing: an intuitive, easy-to-use system that could augment today’s awkward and sometimes frustrating controls for software and hardware experiences.

Neurotechnology could drive the improved user experience that many think the VR and AR industries will need as they strive to become mainstream.

If the technology works, some say it could transform the augmented and virtual reality industries—freeing users at last from the limitations of mechanical controllers, hand and body gestures, eye trackers, and voice recognition.

“Instead of just pressing a button to create magic, imagine actually willing magic to happen. That’s the future we want to create with Neurable,” Alcaide boldly declares.

“Brain-computer interfaces are indeed intriguing,” concurs expert Neil Gupta. Gupta founded the AR Boston Meetup group and recently spoke at the MIT Media Lab’s “AR In Action” conference. “I can’t speak to Neurable’s specific efforts, but there are many use cases for augmented reality where people will gravitate to control solutions that combine accuracy, intuitiveness, and social acceptability. These brain interface technologies could hit a sweet spot if they deliver on their promises.”

So, how does Neurable’s brain technology actually work?

We’ve long known that the brain (and the body’s larger nervous system) functions by way of complex electrical activity. Clinicians can detect this activity with EEG sensors and often infer important information about the brain and associated neurological problems.

(Alcaide’s doctoral research, in fact, focused on helping children with cerebral palsy take cognitive tests when they couldn’t otherwise communicate. His early findings were published in the November 2014 Journal of Neural Engineering.)

Detecting the brain’s electrical activity is easy, Alcaide explains. The real problem, he continues, is isolating the signals that indicate a user’s meaningful intent from the surrounding electrical “noise.” He likens it to trying to follow a conversation at a crowded cocktail party with many nearby speakers.

Neurable’s machine learning software improves on previous approaches with better filters as well as an important new predictive capability. Even when the desired signal is too weak to be captured and interpreted, Neurable’s software can fill in the gaps accurately enough to remain useful.

To continue the analogy: at that crowded cocktail party, the human brain similarly fills in missing words to improve comprehension, letting us “hear” scrambled conversations from across the room.
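Neurable hasn’t disclosed how its software works, but the general principle—a weak, stimulus-locked brain signal surviving while random noise cancels out—can be sketched with the classic technique of trial averaging. The signal shape, noise level, and trial counts below are all hypothetical, for illustration only:

```python
import numpy as np

# Illustrative sketch only -- not Neurable's proprietary pipeline.
# A weak brain response that is time-locked to a stimulus survives
# averaging across repeated trials, while random noise shrinks by
# roughly 1/sqrt(n_trials).

rng = np.random.default_rng(0)

n_trials, n_samples = 50, 200
t = np.linspace(0, 1, n_samples)

# Hypothetical "intent" signal: a small bump ~300 ms after the stimulus.
true_signal = np.exp(-((t - 0.3) ** 2) / 0.002)

# Each single trial buries that bump in much larger noise.
trials = true_signal + rng.normal(scale=5.0, size=(n_trials, n_samples))

single_trial_snr = true_signal.max() / trials[0].std()
averaged = trials.mean(axis=0)  # noise cancels; the bump remains
averaged_snr = true_signal.max() / (averaged - true_signal).std()

print(f"single-trial SNR ~ {single_trial_snr:.2f}")
print(f"averaged SNR    ~ {averaged_snr:.2f}")
```

In this toy setup the averaged signal-to-noise ratio comes out several times higher than any single trial’s—the same basic reason EEG-based interfaces repeat stimuli rather than trusting one noisy reading.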

To be clear, Neurable’s software is not a mind-reading tool nor is it magic. The signal that emerges from Neurable’s software only captures a user’s intent from a constrained set of options. It can’t tell if you’re bored, have a headache, or want pizza for lunch, but it can detect which of, say, 5, 10 or 20 discrete choices you prefer when they’re presented in your field of view. Do you want to turn left or right? Which of these 5 movies is most interesting?
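That constrained-menu idea resembles long-studied “P300 speller” designs, in which each option is flashed and the system scores which flash evoked the strongest response. The sketch below is a hypothetical simulation of that scheme—the option count, noise level, and evoked-response size are invented, and nothing here is Neurable’s actual method:

```python
import numpy as np

# Hypothetical sketch of constrained-choice selection: the system never
# "reads thoughts" -- it only scores a fixed menu of options and picks
# the one whose flashes evoked the strongest stimulus-locked response.

rng = np.random.default_rng(1)

n_options, n_repeats, n_samples = 5, 20, 100
attended = 3  # the option the user is actually focusing on

# Simulated per-flash EEG epochs: only the attended option's flashes
# carry a small added response on top of noise.
epochs = rng.normal(scale=2.0, size=(n_options, n_repeats, n_samples))
epochs[attended] += 0.8  # weak evoked response

# Score each option by the mean amplitude of its averaged epochs.
scores = epochs.mean(axis=(1, 2))
chosen = int(np.argmax(scores))

print("scores:", np.round(scores, 2))
print("system selects option", chosen)
```

Even with the evoked response far weaker than the noise on any single flash, averaging twenty repetitions per option makes the attended choice stand out—illustrating why a small, discrete menu is a much easier target than open-ended mind reading.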

In many augmented and virtual reality experiences, it’s believed that Neurable’s solution could be sufficient to make the brain a primary or supplementary control interface.

Gaming would seem to be an obvious application, as would simple applications like watching virtual reality movies. Even alphanumeric typing is said to be possible, a breakthrough idea for keyboardless word processing. Texting and emailing could be similarly transformed.

Seeing is believing, especially for novel solutions with such disruptive potential. After some disappointments at early awards competitions, Alcaide learned to expect some skepticism and to always offer a hands-on demonstration. Done this way, he’s seen much better success with judges, partners, and investors.

“Neurable was the most disruptive technology at the competition. Judges, most of whom are investors, were literally handing Ramses their business cards,” said Anne Perigo, associate director of the Zell Lurie Institute and organizer of the 2016 Michigan Business Competition.

Neurable subsequently placed second at Rice University’s 2016 Business Plan Competition and took away the Owl Investment Prize. Their total cash award? A heady $330,000.

The company’s late 2016 seed round signaled validation of their thinking, says Alcaide. Robert Winter of the Rice Owls and Brian Shin of Accomplice’s Boston Syndicate led the round with participation from Point Judith Capital, Loup Ventures, the Kraft Group, NXT Ventures, and unnamed angel investors. The company plans to use the funding to expand the team, improve the software’s speed and accuracy, and begin the commercialization process.

Shin’s bullish on the company, commenting at the time, “The team at Neurable believes they can enable people to easily control devices and objects with their minds. The implications would be enormous. They have a chance to completely alter the way humans interact with technology, which is something that I had to be a part of.”

Alcaide promises that we should look for news of a beta program “soon.” The company has previously announced plans to release a software development kit (SDK) that will allow partners to license the technology and embed it in phones, gaming consoles, and other devices.

The kit will include support for game engines such as Unity and Unreal as well as Qt, a platform for UI, application, and embedded device development. Like Intel and Dolby Labs, the company aspires to be a standard that transcends any individual manufacturer.

Pressed on the company’s path to commercial success, Alcaide makes clear he has a plan that he’s not yet sharing. Could the technology one day run natively on devices from giants like Microsoft, Oculus, Samsung, HTC, or Sony, we ask? And how would this plan unfold?

Alcaide smiles and declines to comment. Multiple partnerships, our sources say, would make sense if the company hopes to be an industry standard. The coming 12-18 months could thus be pivotal as the company seeks initial traction.

And what might come even further down the road?

Alcaide’s roots in medical research hint at his passion for helping those with cognitive and mobility disorders. He notes that the same brain-powered software controlling an augmented or virtual reality experience could one day control an electric wheelchair or home automation system for the disabled.

Even better, Alcaide says, imagine if we could use this technology to better understand and help those with profound neurological challenges. A return to the University of Michigan’s Direct Brain Interface Laboratory and his mentor, Dr. Jane Huggins? It won’t happen in the short term, but it’s clearly something he’s dreamt about.

An exciting vision, indeed—and obviously the reason Alcaide’s so busy.