The only thing you know for sure is that you are conscious. All else is inference, however reasonable. There is something in your head that generates experiences: the words you are reading on this page, the snore of a bulldog on a red carpet, the perfume of roses on a desk. Your experience of such a scene is exclusive to you, and your impressions are integrated into one unified field of perception. It is like something to be you reading, hearing a dog, smelling flowers.

But what is going on in the heads of other people, and do dogs or even computers have experiences too? Is it also like something to be them? If entities besides yourself are sentient, whence does consciousness come? Philosopher David Chalmers calls the question of how physical systems give rise to subjective experience the “hard problem” of consciousness. Many philosophers think the hard problem insoluble, because consciousness cannot be reduced to pulses in neurons in the same way bodily functions can be explained by gene expression. Although our own consciousness is the only thing we know directly, it is the most mysterious thing in the world.

Jason Pontin (@jason_pontin) is an Ideas contributor for WIRED. He is a senior partner at Flagship Pioneering, a firm in Boston that creates, builds, and funds companies that solve problems in health, food, and sustainability. From 2004 to 2017, he was editor in chief and publisher of MIT Technology Review. Before that he was the editor of Red Herring, a business magazine that was popular during the dot-com boom. Pontin does not write about Flagship’s portfolio companies or their competitors.

Understanding consciousness better would solve some urgent, practical problems. It would be useful, for instance, to know whether patients locked in by stroke are capable of thought. Similarly, one or two patients in a thousand later recall being in pain under general anesthesia, though they seemed to be asleep. Could we reliably measure whether such people are conscious? Some of the heat of the abortion debate might dissipate if we knew when and to what degree fetuses are conscious. We are building artificial intelligences whose capabilities rival or exceed our own. Soon, we will have to decide: Are our machines conscious, to even a small degree, and do they have rights, which we are bound to respect? These are questions of more than academic philosophical interest.

What we want is a theory of consciousness that can measure sentience. Recently, Marcello Massimini and colleagues at the University of Milan devised a test that zaps the brains of patients with magnetic stimulation, captures the resulting brain activity with electroencephalography, and analyzes the results with a data-compression algorithm. In a groundbreaking study, 102 healthy subjects and 48 responsive but brain-injured patients were “zapped and zipped” when conscious and unconscious, yielding a value called the “perturbational complexity index” (PCI). Remarkably, across all 150 subjects, whenever the PCI was above a certain threshold (0.31, as it happens), the person was conscious; whenever it was below, he or she was always unconscious.
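The intuition behind “zap and zip” (perturb the brain, record the echo, see how well it compresses) can be illustrated with a toy Lempel-Ziv-style parse of a binarized response. This is a hypothetical sketch, not the actual PCI pipeline, which works on source-localized EEG and normalizes compressibility against signal entropy; the function name and the bit strings here are invented for illustration.

```python
def lz78_phrases(bits: str) -> int:
    """Count the phrases in a simple LZ78-style parse of a bit string.
    More phrases means less compressible, i.e. a more complex signal."""
    phrases = set()
    w = ""
    for b in bits:
        w += b
        if w not in phrases:  # new phrase: record it and start over
            phrases.add(w)
            w = ""
    # a leftover partial phrase at the end still counts
    return len(phrases) + (1 if w else 0)

# A stereotyped, repetitive response parses into few phrases,
# while a differentiated response parses into many.
flat_response = "0000000000"          # e.g. a simple, global echo
varied_response = "0110100110010110"  # e.g. a spatially rich echo
print(lz78_phrases(flat_response), lz78_phrases(varied_response))
```

The design choice mirrors the study’s logic: under anesthesia the brain’s response to a magnetic pulse is stereotyped and compresses well, while a conscious brain produces a differentiated, hard-to-compress pattern.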

Massimini then tested his consciousness-meter on patients who were either minimally conscious or else unresponsive but wakeful. Here, the results were more ambiguous. Almost all of the minimally conscious subjects were correctly identified as conscious to some degree. Of 43 unresponsive but wakeful patients, with whom communication was impossible, 34 had PCI values below the threshold of consciousness, as expected. But nine people, terrifyingly, showed a complex pattern of brain activity above the threshold of consciousness. They might have been experiencing the world, but they were unable to tell anyone that they were still there, as if in a diving bell at the bottom of the sea.

Massimini's test is important because it provides the first substantial experimental support for integrated information theory (IIT), a theory of consciousness developed by neuroscientist and psychiatrist Giulio Tononi at the University of Wisconsin. In the 20 years since Tononi began working on IIT, the theory has prompted an enormous literature and generated passionate, often acrimonious debate. Christof Koch, chief scientist at the Allen Institute for Brain Science, says IIT is the “only really promising fundamental theory of consciousness.” But Scott Aaronson, a theoretical computer scientist at the University of Texas at Austin, believes the theory is “demonstrably wrong, for reasons that go to its core.”

IIT doesn’t try to answer the hard problem. Instead, it does something more subtle: It posits that consciousness is a feature of the universe, like gravity, and then tries to solve the pretty hard problem of determining which systems are conscious with a mathematical measurement of consciousness represented by the Greek letter phi (Φ). Until Massimini’s test, which was developed in partnership with Tononi, there was little experimental evidence for IIT, because calculating the phi value of a human brain, with its tens of billions of neurons, was impractical. PCI is “a poor man’s phi,” according to Tononi. “The poor man’s version may be poor, but it works better than anything else. PCI works in dreaming and dreamless sleep. With general anesthesia, PCI is down, and with ketamine it’s up more. Now we can tell, just by looking at the value, whether someone is conscious or not. We can assess consciousness in nonresponsive patients.”
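Tononi’s point about impracticality can be made concrete. Versions of IIT compute phi by searching over partitions of a system for the cut that least disrupts its cause-effect structure; even just counting the candidate bipartitions (a lower bound on the search space, and the exact partition scheme varies across versions of the theory) grows exponentially with the number of elements.

```python
def bipartitions(n: int) -> int:
    """Number of ways to split n elements into two nonempty groups.
    Each element goes in group A or B (2**n assignments); halve for
    symmetry and drop the one split that leaves a group empty."""
    return 2 ** (n - 1) - 1

# Manageable for a handful of elements, hopeless for billions of neurons:
for n in (4, 10, 20, 40):
    print(n, bipartitions(n))
```

At n = 40 the count already exceeds half a trillion, which is why an exact phi for a brain-scale network is out of reach and a proxy like PCI is needed.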