This week a team of Dutch researchers announced that a combination of high-resolution MRI, computational modeling, and a little bit of deck-stacking prescience has allowed them to draw a subject’s experiences right out of the brain. The study builds on years of research into how visual and cognitive information is represented in the brain, and its biggest step forward is actually more conceptual than technological.

Functional MRI images are used mostly for large-scale analysis — which brain regions are eating oxygen to do work during a particular task — but this study was more interested in small-scale activity. The researchers' fMRI data was reported in 2×2×2-millimeter units known as voxels. They showed subjects a particular letter, and their recordings of the resulting brain activity came in the form of these 3D pixels. Normally, this raw data would simply be rendered as a picture of the brain — the muddy, multi-colored pictures most of us associate with MRI scans.

This team from Radboud University Nijmegen, however, took a different approach. They compiled a database of responses to different letters, essentially creating a library of how the relevant brain areas represent each different shape. Once this was done, recordings of a brain's reaction to a letter could be checked against the database. And they didn't stop at basic letter identification — as seen in the rendering above, they actually reconstructed images of the letters themselves.
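The library-and-lookup idea can be pictured with a toy sketch. Everything below is simulated and hypothetical — the letter set, voxel counts, and noise levels are assumptions for illustration, not the study's actual data or code:

```python
import numpy as np

# Hypothetical sketch: build a "library" of average voxel responses per
# letter, then identify a new recording by finding the closest stored pattern.

rng = np.random.default_rng(0)
letters = ["B", "R", "A", "I", "N", "S"]
n_voxels = 1200  # the study worked with data sets of about 1,200 voxels

# Assumed ground-truth response pattern per letter (stand-in for real fMRI data).
true_patterns = {ltr: rng.normal(size=n_voxels) for ltr in letters}

def record_response(letter, noise=0.5):
    """Simulate one noisy fMRI recording of a subject viewing a letter."""
    return true_patterns[letter] + rng.normal(scale=noise, size=n_voxels)

# Library: average several recordings per letter to suppress the noise.
library = {ltr: np.mean([record_response(ltr) for _ in range(10)], axis=0)
           for ltr in letters}

def identify(recording):
    """Return the library letter whose pattern correlates best with the recording."""
    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]
    return max(library, key=lambda ltr: corr(library[ltr], recording))
```

In the real study the patterns come from actual recordings rather than simulated noise, and the comparison is far more sophisticated than a single correlation — but the basic check-against-the-database step works like this.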

Their algorithm uses a process the researchers liken to the way our own mind constructs images from sensory information and prior experience. The algorithm essentially translates brain voxels into image pixels, and it learns to do this more accurately as it accumulates experience. Lead researcher Marcel van Gerven said they designed their model to compare the letters “to determine which one corresponds most exactly with the [MRI] speckle image, and then push the results of the image towards that letter.”
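The voxel-to-pixel translation plus the "push toward that letter" step can be sketched as follows. This is a hedged illustration with simulated data — a plain least-squares linear decoder stands in for the authors' model, and all sizes, names, and noise levels are assumptions:

```python
import numpy as np

# Hedged sketch: learn a linear map from voxels to pixels, then "push" the
# reconstruction toward the letter template it most resembles (the prior).

rng = np.random.default_rng(1)
n_voxels, n_pixels = 100, 64          # toy sizes; the study used ~1,200 voxels

# Stand-in letter "templates": random binary 8x8 images flattened to 64 pixels.
templates = {ltr: (rng.random(n_pixels) > 0.5).astype(float) for ltr in "BRAINS"}

# Simulated forward model: voxel activity is a fixed linear mixing of pixels.
mixing = rng.normal(size=(n_voxels, n_pixels))

def brain_response(image, noise=0.1):
    """One noisy fMRI-like recording evoked by an image."""
    return mixing @ image + rng.normal(scale=noise, size=n_voxels)

# Training set: recordings paired with the images that evoked them.
train_imgs = np.array([templates[l] for l in "BRAINS" for _ in range(30)])
train_vox = np.array([brain_response(img) for img in train_imgs])

# Least-squares estimate of the voxel-to-pixel decoding matrix.
W, *_ = np.linalg.lstsq(train_vox, train_imgs, rcond=None)

def reconstruct(voxels, push=0.5):
    """Decode pixels linearly, then blend toward the closest known template."""
    raw = voxels @ W
    best = min(templates.values(), key=lambda t: np.sum((raw - t) ** 2))
    return (1 - push) * raw + push * best
```

Raising `push` trades fidelity to the raw decode for fidelity to the prior — the same bargain the researchers describe when their model nudges a speckled reconstruction toward the best-matching letter.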

This method might seem a bit like cheating — but as mentioned, it is not all that different from how we ourselves perceive things. When reading, we don't slowly identify every letter individually, mentally tracing each outline to figure out which letter is being viewed. Rather, we see the basic shape and quickly assign it a symbolic meaning — the letter e has an “e-ness” to it that can be recognized in everything from Courier New to Comic Sans to (if you're geeky enough) Wingdings. Our brains do much the same sort of pushing of visual data toward conceptual reality, even when the actual shape being perceived varies widely.

In essence, this scan is distinct from true, useful mind-reading in two ways: its resolution is too low to identify much beyond broadly distinguishable block letters, and it requires prior knowledge of the full array of possible images a subject might be viewing. The former of these problems is simple enough to fix — if science is good at one thing, it’s improving on the specifics of preexisting achievements. This experiment worked with data sets of just 1,200 voxels, but the team is already planning to use more advanced machines to take images with up to 15,000. With such an increase in resolution, they hope to upgrade from identifying letters to human faces.

Even with sharper images, they will still need to pass their data through this parsimony algorithm, “pushing” the results toward pre-collected standards. To create totally novel images from brain scans — to literally see what the subject sees, like an intracranial camera — would require many large steps forward in our understanding of how the brain processes and conceptualizes visual data.

Even with the limitations, these scans could be a powerful tool. Imagine if police could literally probe a victim’s mind to check the memory of an attacker’s face against a series of mugshots. Though scary fantasies of involuntary mind-reading are hard to avoid, the potential here is exciting. Look forward to the images that could emerge from future scans at ten times the resolution of these. If they truly allow identification of a particular human face, it will be a major step forward for brain science.


Research paper: doi:10.1016/j.neuroimage.2013.07.043 – “Linear reconstruction of perceived images from human brain activity”