Meenakshi Narain, a professor of physics at Brown University, remembers working on the DZero experiment at Fermi National Accelerator Laboratory near Chicago in the winter of 1994. She would bring blankets up to her fifth-floor office to keep warm as she sat at her computer going through data in search of the then-undiscovered top quark.

For weeks, her group had been working on deciphering some extra background that originally had not been accounted for. Their conclusions contradicted the collaboration’s original assumptions.

Narain, who was a postdoctoral researcher at the time, talked to her advisor about sharing the group’s result. Her advisor told her that if she had followed the scientific method and was confident in her result, she should talk about it.

“I had a whole sequence of logic and explanation prepared,” Narain says. “When I presented it, I remember everybody was very supportive. I had expected some pushback or some criticism, and nothing like that happened.”

This, she says, is the scientific process: a multitude of steps designed to help us explore the world we live in.

“In the end the process wins. It’s not about you or me, because we’re all going after the same thing. We want to discover that particle or phenomenon or whatever else is out there collaboratively. That’s the goal.”

Narain’s group’s analysis was essential to the collaboration’s understanding of a signal that turned out to be the elusive top quark.

The modern hypothesis

“The scientific method was not invented overnight,” says Joseph Incandela, vice chancellor for research at the University of California, Santa Barbara. “People used to think completely differently. They thought if it was beautiful it had to be true. It took many centuries for people to realize that this is how you must approach the acquisition of true knowledge that you can verify.”

For particle physicists, says Robert Cahn, a senior scientist at Lawrence Berkeley National Laboratory, the scientific method isn’t so much going from hypothesis to conclusion, but rather “an exploration in which we measure with as much precision as possible a variety of quantities that we hope will reveal something new.

“We build a big accelerator and we might have some ideas of what we might discover, but it’s not as if we say, ‘Here’s the hypothesis and we’re going to prove or disprove it.’ If there’s a scientific method, it’s something much broader than that.”

Scientific inquiry is more of a continuing conversation between theorists and experimentalists, says Chris Quigg, a distinguished scientist emeritus at Fermilab.

“Theorists in particular spend a lot of time telling stories, making up ideas or elaborating ideas about how something might happen,” he says. “There’s an evolution of our ideas as we engage in dialogue with experiments.”

An important part of the process, he adds, is that the scientists are trained never to believe their own stories until they have experimental support.

“We are often reluctant to take our ideas too seriously because we’re schooled to think about ideas as tentative,” Quigg says. “It’s a very good thing to be tentative and to have doubt. Otherwise you think you know all the answers, and you should be doing something else.”

It’s also good to be tentative because “sometimes we see something that looks tantalizingly like a great discovery, and then it turns out not to be,” Cahn says.

At the end of 2015, hints appeared in the data of the two general-purpose experiments at the Large Hadron Collider that scientists had stumbled upon a particle 750 times as massive as a proton. The hints prompted more than 500 scientific papers, each trying to tell the story behind the bump in the data.

“It’s true that if you simply want to minimize wasting your time, you will ignore all such hints until they [reach the traditional uncertainty threshold of] 5 sigma,” Quigg said. “But it’s also true that as long as they’re not totally flaky, as long as it looks possibly true, then it can be a mind-expanding exercise.”

In the case of the 750-GeV bump, Quigg says, you could tell a story in which such a thing might exist and wouldn’t contradict other things that we knew.

“It helps to take it from just an unconnected observation to something that’s linked to everything else,” Quigg says. “That’s really one of the beauties of scientific theories, and specifically the current state of particle physics. Every new observation is linked to everything else we know, including all the old observations. It’s important that we have enough of a network of observation and interpretation that any new thing has to make sense in the context of other things.”

After collecting more data, physicists eventually ruled out the hints, and the theorists moved on to other ideas.

The importance of uncertainty

But sometimes an idea makes it further than that. Much of the work scientists put into publishing a scientific result involves figuring out how well they know it: What’s the uncertainty and how do we quantify it?

“If there’s any hallmark to the scientific method in particle physics and in closely related fields like cosmology, it’s that our results always come with an error bar,” Cahn says. “A result that doesn’t have an uncertainty attached to it has no value.”

In a particle physics experiment, some uncertainty comes from background, like the data Narain’s group found that mimicked the kind of signal they were looking for from the top quark.

This is called systematic uncertainty, which is typically introduced by aspects of the experiment that cannot be completely known.

“When you build a detector, you must make sure that for whatever signal you’re going to see, there is not much possibility to confuse it with the background,” says Helio Takai, a physicist at Brookhaven National Laboratory. “All the elements and sensors and electronics are designed having that in mind. You have to use your previous knowledge from all the experiments that came before.”

Careful study of your systematic uncertainties is the best way to eliminate bias and get reliable results.

“If you underestimate your systematic uncertainty, then you can overestimate the significance of the signal,” Narain says. “But if you overestimate the systematic uncertainty, then you can kill your signal. So, you really are walking this fine line in understanding where the issues may be. There are various ways the data can fool you. Trying to be aware of those ways is an art in itself and it really defines the thinking process.”
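Narain’s point can be sketched with a common back-of-the-envelope figure of merit used in particle physics, s / √(b + σ_b²), where σ_b is the absolute systematic uncertainty on the background. This is a simplified illustration, not any collaboration’s actual statistical treatment, and the event counts below are invented:

```python
from math import sqrt

def significance(signal, background, syst_frac):
    """Simplified significance estimate: s / sqrt(b + (syst_frac * b)**2).

    syst_frac is the fractional systematic uncertainty assigned to
    the background estimate.
    """
    syst = syst_frac * background
    return signal / sqrt(background + syst ** 2)

# Hypothetical counts: 50 signal events over 100 background events.
s, b = 50.0, 100.0
for frac in (0.0, 0.10, 0.30):
    print(f"syst = {frac:.0%} -> {significance(s, b, frac):.2f} sigma")
# prints:
# syst = 0% -> 5.00 sigma
# syst = 10% -> 3.54 sigma
# syst = 30% -> 1.58 sigma
```

The same excess reads as a discovery-level signal if the systematic uncertainty is set to zero, and as an unremarkable fluctuation if it is set generously, which is exactly the fine line Narain describes.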

Physicists also must think about statistical uncertainty which, unlike systematic uncertainty, is simply the consequence of having a limited amount of data.

“For every measurement we do, there’s a possibility that the measurement is a wrong measurement just because of all the events that happen at random while we are doing the experiment,” Takai says. “In particle physics, you’re producing many particles, so a lot of these particles may conspire and make it appear like the event you’re looking for.”

You can think of it as putting your hand inside a bag of M&Ms, Takai says. If the first few M&Ms you picked were brown and you didn’t know there were other colors, you would think the entire bag was brown. It wouldn’t be until you finally pulled out a blue M&M that you realized that the bag had more than one color.
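Takai’s analogy can be turned into a toy simulation. The 95-brown/5-blue mix below is invented for illustration; the point is only that a rare color is easy to miss in a few draws and hard to miss in many:

```python
import random

random.seed(0)

def saw_blue(n_draws, n_blue=5, n_total=100):
    """Draw n_draws M&Ms without replacement; report whether any was blue."""
    bag = ["blue"] * n_blue + ["brown"] * (n_total - n_blue)
    return "blue" in random.sample(bag, n_draws)

# The chance of noticing the rare color grows with the sample size.
trials = 10_000
for n in (3, 10, 50):
    hits = sum(saw_blue(n) for _ in range(trials))
    print(f"{n:2d} draws: saw blue in {hits / trials:.0%} of trials")
```

With only a handful of draws the bag looks all brown most of the time; with fifty draws, missing the blue M&Ms entirely becomes very unlikely.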

Particle physicists generally want their results to have a statistical significance corresponding to at least 5 sigma, a threshold meaning there is only about a 0.00003 percent chance of a statistical fluctuation producing an excess as big as or bigger than the one observed.
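For a one-sided Gaussian tail, that probability can be computed directly as p = ½ erfc(n/√2), which is a quick way to check the number quoted above:

```python
from math import erfc, sqrt

def p_value(n_sigma):
    """One-sided tail probability of a standard normal beyond n_sigma."""
    return 0.5 * erfc(n_sigma / sqrt(2))

for n in (3, 5):
    print(f"{n} sigma -> p = {p_value(n):.2e}")
# prints:
# 3 sigma -> p = 1.35e-03
# 5 sigma -> p = 2.87e-07
```

The 5-sigma value, 2.87 × 10⁻⁷, is the roughly 0.00003 percent chance of a fluctuation mentioned above; 3 sigma, the traditional threshold for mere “evidence,” is about a thousand times more likely to occur by chance.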