Tell me what you see.

On second thought, don't: A computer will soon be able to do it, simply by analyzing the activity of your brain.

That's the promise of a decoding system unveiled this week in Nature by neuroscientists from the University of California at Berkeley.

The scientists used a functional magnetic resonance imaging machine – a real-time brain scanner – to record the mental activity of a person looking at thousands of random pictures: people, animals, landscapes, objects, the stuff of everyday visual life. With those recordings the researchers built a computational model for predicting the mental patterns elicited by looking at any other photograph. When tested with neurological readouts generated by a different set of pictures, the decoder passed with flying colors, identifying the images seen with unprecedented accuracy.
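The identification step described above can be sketched in a few lines: given a model's predicted brain-activity pattern for each candidate picture, pick the candidate whose prediction best matches the measured pattern. This is a minimal toy illustration with synthetic data, not the study's actual model; the array sizes, noise level, and correlation-based matching rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_voxels, n_images = 50, 120  # hypothetical sizes, not from the study

# Stand-in for a fitted encoding model's output: the predicted voxel
# response pattern for each candidate image (rows = images).
predicted = rng.standard_normal((n_images, n_voxels))

# Simulate a measured brain response: the pattern for one image plus noise.
true_image = 42
measured = predicted[true_image] + 0.3 * rng.standard_normal(n_voxels)

def identify(measured, predicted):
    """Return the index of the candidate image whose predicted
    activity pattern correlates best with the measured pattern."""
    z = predicted - predicted.mean(axis=1, keepdims=True)
    z /= predicted.std(axis=1, keepdims=True)
    m = (measured - measured.mean()) / measured.std()
    scores = z @ m / len(m)  # Pearson correlation per candidate
    return int(np.argmax(scores))

print(identify(measured, predicted))  # recovers the index of the viewed image
```

With realistic numbers of candidate images and noisier scans, the matching problem gets much harder, which is what makes the reported accuracy notable.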

"No one that I know would ever have guessed our decoder would do this well," study co-author Jack Gallant said.

As the decoder is refined, it could be used to explore the phenomenon of visual attention – concentration on one part of a complicated scene – and then to illuminate the dimly understood intricacies of the mind's eye.

"One day it may even be possible to reconstruct the visual content of dreams," Gallant said.

After that, the decoding model could be harnessed for more visionary purposes: early warning systems for neurological diseases or interfaces that allow paralyzed people to engage with the world.

Other uses may not be so noble, such as marketing campaigns crafted for maximum mental penetration or invasions of mental privacy mounted in the name of fighting terrorism and crime.

Those technologies remain decades away, but researchers say it's not too soon to think about them, especially if research progresses at the pace set by this study.

Earlier decoders could only tell whether someone looked at a general type of image – at a dog, for example – but couldn't identify more specific photos, such as a small dog eating a bone. They were also incapable of predicting what thought patterns an image would provoke.

The Berkeley model overcame both those limitations.

"It's quite tedious to measure every possible thought you might encounter, then measure the brain activity for that," said John-Dylan Haynes, a Max Planck Institute neuroscientist who was not involved in the study. "This is a big step forward."

Future steps involve expanding the decoder beyond its current focus on the brain's primary visual cortex, which represents general forms but doesn't handle the more complicated optical information processed in other parts of the brain.

More detail is also required, as the brain scanners used for the study measure blood flow caused by neural activity at a relatively coarse resolution of two cubic millimeters.

A higher-resolution, fully reconstructive decoder could help researchers chart the incredibly complex processes underlying visual perception. Gallant also hopes it could be used to detect early symptoms of neurological diseases like Alzheimer's and Parkinson's.

Eventually, Haynes said, the Berkeley model could be harnessed for something akin to mind reading.

"We want not only to decode people's perceptions, but also high-level mental states: people's intentions, their plans," Haynes said.

But Gallant warned of technological malfeasance. In the courtroom, mental readouts could have the same problems as eyewitness testimony, which is often unreliable and biased even though witnesses believe they're telling the truth.

The allure of reading minds to prove innocence or guilt, Haynes said, could override concerns about mental privacy – an ethically ambiguous conflict. More obviously dubious is the possible use of mind-reading machines by marketers.

"There's some things we can do, and some we can't," Haynes said. "Some things are very easy, and others are not. But it's vital to think about the ethics now."