Testing general relativity is a fraught business. The theory has proven to be so robust that anyone who thinks it's wrong gets slapped around by reality in a pretty serious way. The tests that we apply are also limited by our environment, in that we can only look at gravity with precision where it's rather weak: in the lab, or by tracking the motion of planets. That's a whole range of scales and forces, but it doesn't cover where it might truly matter, which is right next to a black hole.

Observing orbits around a black hole would take a career's worth of measurements and, frankly, who has the time? It is also a rare benefactor who will fund a couple of decades' worth of telescope time. Luckily, telescopes have been collecting data for a while, and some of that data happens to include the vicinity of some black holes. Recently, some scientists decided to dig up that data and test general relativity in the vicinity of a supermassive black hole.

Beware of the black hole

At the center of our galaxy, there lies a black hole, which, like the Rabbit of Caerbannog, fiercely devours unwary wanderers. Nevertheless, there are a few foolhardy stars that orbit close to the rabbit black hole. These stars have orbital periods of just a couple of decades, and they experience rather large gravitational forces. So, astronomers expect that accurate observations of these stars might reveal deviations from general relativity.

Luckily, the Keck telescopes have been gathering data from the heavens for about 25 years, and over that time, they have turned their unblinking eyes toward the galactic center on numerous occasions. Each time, the observations were performed a bit differently. For instance, the telescopes were upgraded with adaptive optics in 2005, and some of the observations focused on obtaining spectral data rather than imaging. These latter observations capture orbital velocities, because the motion of a star causes a Doppler shift in the observed colors of its light.
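To make that last point concrete, here is a minimal sketch of how a line-of-sight velocity falls out of a measured Doppler shift. The spectral line and wavelengths below are illustrative choices of mine, not values from the paper:

```python
# Speed of light in km/s.
C = 299_792.458

def radial_velocity(lambda_obs, lambda_rest):
    """Non-relativistic Doppler formula: v = c * (lambda_obs - lambda_rest) / lambda_rest.

    Positive v means the star is receding; negative means approaching.
    """
    return C * (lambda_obs - lambda_rest) / lambda_rest

# Illustrative example: a hydrogen Br-gamma line (rest wavelength ~2.1661 microns)
# observed shifted to 2.1690 microns.
v = radial_velocity(2.1690, 2.1661)  # ~400 km/s, receding
```

A shift of only about a tenth of a percent in wavelength already corresponds to hundreds of kilometers per second, which is why spectra are such a sensitive probe of orbital motion.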

All of this data was combined in a consistent way to map out the orbital positions and velocities of two stars. This is quite an achievement, because each observation has the telescope pointing in a slightly different direction, using different exposure times, and subject to other slight differences. Although other telescopes also have data available, the public records were not detailed enough to allow the scientists to process the data in a consistent way. This is a pity, because the data set consists of about 100 observations from just these two telescopes. Imagine what might have been obtained if more telescopes had accessible data.

After all of this, what have we learned? General relativity is still right, and it predicted the stellar motion accurately. These measurements tested general relativity in a way that was distinct from all previous ones: in high gravitational fields over long periods of time. In particular, the new measurements helped to put bounds on extensions to general relativity that follow a kind of modified Newtonian dynamics model. In these models, there is a distance at which a new force becomes apparent, and that force has some unknown characteristic strength. So, astronomers look for a consistent distance at which there is a noticeable deviation from predictions. However, the measurements tell us that for any distance relevant to the orbits of these stars, a new force would have zero strength.

Or, more precisely, a new force would have a strength that is so small that we cannot yet measure it. Conclusion: general relativity wins again.
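One common way to write down such a "new force with a characteristic distance and strength" is a Yukawa-type correction to the Newtonian potential, governed by a strength α and a length scale λ. The sketch below is my illustration of that general idea, not necessarily the paper's exact formulation, and the black hole mass is a rough round number:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_BH = 8e36     # ~4 million solar masses in kg (rough value for Sagittarius A*)

def acceleration(r, alpha=0.0, lam=1e14):
    """Inward acceleration at radius r (meters) from the black hole.

    The potential is V(r) = -(G * M / r) * (1 + alpha * exp(-r / lam)),
    so a(r) = -dV/dr picks up a Yukawa term on top of Newtonian gravity.
    alpha = 0 recovers plain Newtonian gravity (the general relativity
    limit for this kind of test).
    """
    newton = G * M_BH / r**2
    yukawa = G * M_BH * alpha * math.exp(-r / lam) * (1.0 / r**2 + 1.0 / (r * lam))
    return newton + yukawa
```

Fitting the observed orbits then amounts to asking, for each candidate length scale λ, how large α could be without spoiling the fit. The result described above is that α is consistent with zero at every distance these orbits probe.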

Normally, I read papers like this with interest, but I rarely report on them. The problem isn't that the researchers didn't find anything; it's that they often didn't find anything in a way that is only a bit more sensitive than their previous paper. But this paper represents more than a test: it's truly the beginning of something new.

A big, open data future

One of the themes that's been emerging in scientific research is open data. Science has always generated more data than it can cope with. In the past, most data was collected in physical objects: think astronomical plates, X-ray images, electron microscope images, and many other types of collections.

These were all stored in filing cabinets around the world. The data contained in those images were used once or twice and, unless the images were spectacular, forgotten. It took luck, and a damn good memory, to recognize that there were connections among data from disparate projects, hidden across multiple old datasets. Even if you could recognize a connection, you had to get all the data in one place in order to act on it. Put simply, good science was limited by poor data accessibility. It was easier, in fact, just to do new experiments.

What's more, with a few exceptions, most previous experiments had to be limited in scope because each scientist had only one lifetime to spend collecting data.

Now, we are instead reaping the benefits of a revolution. There are experiments, like the last couple of generations of particle accelerators, that create data so fast that no team of scientists, no matter how dedicated or large, can hope to analyze all of it. And there are facilities that generate data for one purpose, but that data can be used in innovative ways to yield new insights in other fields. Mining all this information is really important for very slow processes, where the data accumulates over periods that extend through generations, like climate change, and now orbital mechanics.

The particle physics community led the way in figuring out how to handle this: data from the LHC, and from its predecessors, LEP and the Tevatron, have been available to researchers around the world for quite some time. It is common for theorists to look through the data to see if they can find any hope for their latest fantasies. Climate scientists have created databases that allow anyone to test models and analyze trends. The result, for particle physics, is a Standard Model that resists fantasy; for climate science, models that are repeatedly tested against reality.

This silent revolution is spreading to every branch of science, but we are only really scratching the surface of what might be hidden in the vast reams of digitized data. Scientists can now imagine conducting experiments that, a decade ago, might have taken an entire career of observations for one data point. Today, the data may already exist and, most importantly, be accessible. In this respect, the open data movement is probably one of the more important recent developments in science.

In astronomy, the number of eyes pointed at the heavens is increasing. The sensitivity of those eyes is getting better. Once the observations are consistently documented, we will have a treasure trove of data for future generations. We will be able to test our theories of the Universe with exquisite precision. I'm looking forward to that.

Physical Review Letters, 2017, DOI: 10.1103/PhysRevLett.118.211101