Satsuki Ayaya remembers finding it hard to play with other children when she was young, as if a screen separated her from them. Sometimes she felt numb, sometimes too sensitive; sometimes sounds were muted, sometimes too sharp. As a teenager, desperate to understand herself, she began keeping a journal. “I started to write my ideas in my notebooks, like: What’s happened to me? Or: What’s wrong with me? Or: Who am I?” she says. “I wrote, wrote, wrote. I filled maybe 40 notebooks.”

Today, at 43, Ayaya has a better sense of who she is: She was diagnosed with autism when she was in her early 30s. As a Ph.D. student in the history and philosophy of science at the University of Tokyo, she is using the narratives from her teen years and after to generate hypotheses and suggest experiments about autism — a form of self-analysis called Tojisha-Kenkyu, introduced nearly 20 years ago by the disability-rights movement in Japan.

In Ayaya’s telling, her autism involves a host of perceptual disconnects. For example, she feels in exquisite detail all the sensations that typical people readily identify as hunger, but she can’t piece them together. “It’s very hard for me to conclude I’m hungry,” she says. “I feel irritated, or I feel sad, or I feel something [is] wrong. This information is separated, not connected.” It takes her so long to realize she is hungry that she often feels faint and gets something to eat only after someone suggests it to her.

She has also come to attribute some of her speech difficulties to a mismatch between how her voice sounds to her and how she expects it to sound. “Just after she speaks, her own voice feeds back to her ears, and she tends to notice the difference,” says her collaborator Shin-ichiro Kumagaya, a pediatric neurologist at the University of Tokyo who studies autism using Tojisha-Kenkyu. The effect is like the awkward echo on a phone line that makes it difficult to carry on a conversation — except that for Ayaya, it’s like that almost all the time.

Ayaya’s detailed accounts of her experiences have helped build the case for an emerging idea about autism that relates it to one of the deepest challenges of perception: How does the brain decide what it should pay attention to? Novelty captures attention, but to decide what is novel, the brain needs to have in place a prior expectation that is violated. It must also assign some level of confidence to that expectation, because in a noisy world, not all violations are equal: Sometimes things happen for a reason, and sometimes they just happen.

The best guess scientists have for how the brain does this is that it goes through a process of meta-learning — of figuring out what to learn and what not to. According to this theory, biases in the meta-learning process explain the core features of autism. The theory essentially reframes autism as a perceptual condition, not a primarily social one; it casts autism’s hallmark traits, from social problems to a fondness for routine, as the result of differences in how the mind processes sensory input.

Consider what happens when we are new to a situation or a subject. Every detail — every bump on a graph, every change in a person’s tone of voice — seems meaningful. As we gain experience, though, we learn to tell the rule from the exception. The minutiae become less salient; the brain shifts its focus to the big picture. In this way, the brain masters one challenge and moves to the next, keeping itself at the cusp between boredom and frustration. Autism might represent a different learning curve — one that favors detail at the price of missing broader patterns.
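The meta-learning idea can be caricatured in a few lines of code. In this toy sketch (all names and numbers are illustrative assumptions, not the researchers’ actual model), a learner adjusts not only its estimate of a signal but also its own learning rate: while prediction errors stay large, every detail gets full weight; as errors shrink, the learner dials down its sensitivity and settles on the big picture.

```python
# A minimal, hypothetical sketch of meta-learning: a learner that tunes
# its own learning rate based on how surprising the world has been lately.

def meta_learner(signal, decay=0.8):
    rate = 1.0        # start by treating every detail as news
    estimate = 0.0    # running "big picture" belief about the signal
    rates = []
    for obs in signal:
        error = obs - estimate
        estimate += rate * error
        # Meta-step: shrink the learning rate as surprises shrink,
        # so familiar details gradually lose their salience.
        rate = decay * rate + (1 - decay) * min(abs(error), 1.0)
        rates.append(rate)
    return estimate, rates

estimate, rates = meta_learner([3.0] * 30)
# Early on the learner tracks every detail; by the end it has settled.
print(rates[0] > 0.8, rates[-1] < 0.2, round(estimate, 2))
# → True True 3.0
```

On this caricature, a learning curve biased toward detail would correspond to a rate that never decays: the learner keeps treating every fluctuation as news and never hands attention over to the broader pattern.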

Unlike other ‘unified theories’ of autism — those that purport to explain all aspects of the condition — this one builds on a broad account of brain function known as predictive coding. The premise is that all perception is an exercise in model-building and testing — of making predictions and seeing whether they come true. In predictive-coding terms, the brain of someone with autism puts more weight on discrepancies between expectations and sensory data. Whereas the typical brain might chalk up a stray car horn to chance variation in a city soundscape and tune it out, every beep draws conscious attention from the autism brain. “It provides a very parsimonious explanation for the cardinal features of autism,” says Karl Friston, a neuroscientist at University College London who helped develop the mathematical foundations of predictive-coding theory as it applies to the brain.

For now, the model is vague on some crucial details. “There’s many loose pieces,” says Katarzyna Chawarska, an autism researcher at Yale University. And some question whether a single model could ever account for a condition as heterogeneous as autism. Yet proponents say this very diversity argues for a unified theory. Understanding a fundamental cause might yield treatments that are equally broad in their reach. “If prediction truly is an underlying core impairment [in autism], then an intervention that targets that skill is likely to have beneficial impacts on many different other skills,” says computational neuroscientist Pawan Sinha of the Massachusetts Institute of Technology.