Computer scientists are developing a mind-reading computer that deciphers symbols that people have looked at.

The device accurately replicates the shapes a subject has seen: the computer scans brain activity, then redraws those numerals and symbols, say the scientists working on the project.

It’s a “step towards a direct ‘telepathic’ connection between brains and computers,” said the Chinese Academy of Sciences (CAS) in a May news article. And indeed, should it work reliably, it would be a significant improvement on simple Functional Magnetic Resonance Imaging (fMRI) scans, which just read activity in parts of the brain and are used primarily for research.

The telepathic algorithm “reads your mind to see what you see,” the academics say of their machine-learning model, which they call the Deep Generative Multiview Model (DGMM).

What they mean is that they use the visual cortex—the part of the brain that processes sight—to capture brain activity, then run an algorithm on the data. In other words, the human subject views an image with their eyes; the three-dimensional activity patterns this produces in the visual cortex are captured with fMRI; and a computer algorithm then interprets those signals and maps them back into a picture, recreating the image.
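The broad shape of such a pipeline can be sketched in code. This is not the authors' DGMM: here a plain ridge-regression decoder stands in for the deep generative model, and the "scans" are synthetic voxel vectors generated under an assumed linear encoding—purely an illustration of mapping brain activity back to pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: each "scan" is a vector of voxel activations
# from the visual cortex; each target is a flattened 8x8 binary glyph.
n_samples, n_voxels, img_side = 200, 500, 8
images = rng.integers(0, 2, size=(n_samples, img_side * img_side)).astype(float)

# Hypothetical assumption: the brain encodes the image linearly plus noise.
# (A gross simplification; real neural encoding is nonlinear.)
encoding = rng.normal(size=(img_side * img_side, n_voxels))
voxels = images @ encoding + 0.1 * rng.normal(size=(n_samples, n_voxels))

# Decoder: ridge regression mapping voxel patterns back to pixel intensities.
lam = 1.0
W = np.linalg.solve(voxels.T @ voxels + lam * np.eye(n_voxels),
                    voxels.T @ images)

# "Redraw" what each scan saw: one reconstructed glyph per scan.
reconstruction = voxels @ W
print(reconstruction.shape)  # (200, 64)
```

A deep generative model replaces the single linear map `W` with a learned, nonlinear latent representation shared between the image and the scan, but the input/output contract—voxels in, pixels out—is the same.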

fMRI measures changes in blood flow, which serve as a proxy for brain activity.

“Now, eerily sophisticated software is starting to decode that brain activity and assign meaning to it; fMRI is also becoming a window on the mind,” the academy continues.

What makes the Deep Generative Multiview Model different?

This isn't the first time computers have been used to try to reconstruct what humans are seeing, but the Chinese scientists claim their method is the most accurate. They credit their attention to the visual cortex—the part of the brain that lights up in three dimensions when a person sees something. It functions a bit like how a computer reads ones and zeros, the article explains.

Decoding the three-dimensional visual-cortex activity and translating it into machine-readable two dimensions is a key part of their research. That's where the deep-learning algorithm comes in.

The group took advantage of previous studies to build the algorithm. Large quantities of data have been collected over time—the result of many others attempting similar mind-reading experiments. That meant the scientists could train their deep-learning model on hundreds of existing samples: fMRI scans captured while test subjects viewed letters and numerals.

Some fMRI samples were held back from the training tranche and used to test the algorithm—the researchers asked the artificial intelligence to draw what it thought the person was seeing during each scan. The images were replicated almost exactly; they are “uncannily clear depictions of the original images,” the Financial Times (paywall) writes of the experiments.
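The held-out evaluation described above can be sketched as follows. Again, this is a hypothetical illustration with synthetic data and a simple ridge decoder in place of the real DGMM and real scans: the point is only that the model is fit on one tranche of scans and then asked to draw images for scans it has never seen.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic corpus: voxel patterns paired with the glyphs that evoked them,
# generated under an assumed linear encoding with noise.
n_samples, n_voxels, n_pixels = 300, 400, 64
images = rng.integers(0, 2, size=(n_samples, n_pixels)).astype(float)
encoding = rng.normal(size=(n_pixels, n_voxels))
voxels = images @ encoding + 0.1 * rng.normal(size=(n_samples, n_voxels))

# Hold back a tranche of scans that the decoder never trains on.
n_test = 50
train_v, test_v = voxels[:-n_test], voxels[-n_test:]
train_i, test_i = images[:-n_test], images[-n_test:]

# Fit a ridge decoder on the training scans only.
lam = 1.0
W = np.linalg.solve(train_v.T @ train_v + lam * np.eye(n_voxels),
                    train_v.T @ train_i)

# Ask the model to "draw" the held-out images and score the match
# against what the subject actually saw.
pred = test_v @ W
corr = np.corrcoef(pred.ravel(), test_i.ravel())[0, 1]
print(pred.shape, round(corr, 2))
```

Holding scans out of training is what makes the claim meaningful: a model scored only on data it was trained on can simply memorize, whereas matching images it has never seen demonstrates genuine decoding.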

Examples of 18 distinct reconstructed handwritten characters. The top row shows the original images; the bottom row shows reconstructions from the Deep Generative Multiview Model. (Changde Du, Changying Du, Huiguang He)

Valuable brain-machine interfaces could conceivably result.

The newspaper, though, speculates about sinister applications for the tech, such as “digital stalking” by advertisers.

More innocently, perhaps, the Chinese scientists think the DGMM system might allow the recording of dreams for re-watching. And “what about seeing into the ‘mind’s eye?’”