Imagine a reality where computers can visualize what you are thinking.

Sound far out? It's now a step closer to reality thanks to four scientists at Kyoto University in Kyoto, Japan. In late December, Guohua Shen, Tomoyasu Horikawa, Kei Majima and Yukiyasu Kamitani released the results of their recent research on using artificial intelligence to decode thoughts on the scientific preprint platform bioRxiv.

Machine learning has previously been used to study brain scans (MRIs, or magnetic resonance imaging) and generate visualizations of what a person is thinking when looking at simple, binary images like black-and-white letters or basic geometric shapes (as shown in Figure 2 of that earlier research).

But the scientists from Kyoto developed a new technique for "decoding" thoughts using deep neural networks (a form of artificial intelligence). The new technique allows the scientists to decode more sophisticated "hierarchical" images, which have multiple layers of color and structure, such as a picture of a bird or of a man wearing a cowboy hat.

[GIF: Deep image reconstruction, natural images (seen images)]

"We have been studying methods to reconstruct or recreate an image a person is seeing just by looking at the person's brain activity," Kamitani, one of the scientists, tells CNBC Make It. "Our previous method was to assume that an image consists of pixels or simple shapes. But it's known that our brain processes visual information hierarchically, extracting different levels of features or components of different complexities."

And the new AI research allows computers to detect objects, not just binary pixels. "These neural networks or AI model can be used as a proxy for the hierarchical structure of the human brain," Kamitani says.
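Kamitani's description suggests the core idea: decode a person's brain activity into the feature values a deep neural network would compute at several layers, then search for an image whose features match those decoded values at every layer. Below is a minimal NumPy sketch of that feature-matching search. The two-layer random network, the directly computed "target" features, and all variable names are stand-ins chosen for illustration; the actual study decoded features of a real pretrained DNN from fMRI signals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer "hierarchical" feature extractor standing in for a
# pretrained deep neural network (an assumption for illustration).
W1 = rng.normal(size=(64, 256)) / 16   # lower layer: simpler features
W2 = rng.normal(size=(16, 64)) / 8     # higher layer: more abstract features

def features(img):
    h1 = np.tanh(W1 @ img)
    h2 = np.tanh(W2 @ h1)
    return h1, h2

# Stand-in for "decoded" feature vectors: here we compute them directly
# from a hidden "seen" image, whereas the study estimated them from fMRI.
seen = rng.uniform(size=256)
t1, t2 = features(seen)

def loss_and_grad(img):
    h1, h2 = features(img)
    # Squared feature-matching error, summed over both layers,
    # backpropagated through the tanh nonlinearities to the pixels.
    g2 = 2 * (h2 - t2) * (1 - h2 ** 2)
    g1 = (2 * (h1 - t1) + W2.T @ g2) * (1 - h1 ** 2)
    loss = np.sum((h1 - t1) ** 2) + np.sum((h2 - t2) ** 2)
    return loss, W1.T @ g1

# Reconstruction: start from noise and nudge the pixels until the image's
# features match the decoded ones at every layer of the hierarchy.
img = rng.uniform(size=256)
loss_initial, _ = loss_and_grad(img)
for _ in range(1000):
    loss_final, g = loss_and_grad(img)
    img -= 0.02 * g

print(f"feature-matching loss: {loss_initial:.3f} -> {loss_final:.3f}")
```

The point of matching features at multiple layers, rather than pixels, is that higher layers encode object-level structure, which is why the method can recover complex natural images rather than only simple shapes.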

[GIF: Deep image reconstruction, visual imagery]

For the research, over the course of 10 months, three subjects were shown natural images (like photographs of a bird or a person), artificial geometric shapes and alphabetical letters for varying lengths of time.

In some instances, brain activity was measured while a subject was looking at one of 25 images. In other cases, it was logged afterward, when subjects were asked to think of the image they were previously shown.

Once the brain activity was scanned, a computer reverse-engineered (or "decoded") the information to generate visualizations of a subject's thoughts.

The flowchart, embedded below, was made by the research team at the Kamitani Lab at Kyoto University and breaks down the science of how a visualization is "decoded."