Astronomers from around the world pointed their telescopes toward the supermassive black hole at the center of the Milky Way, nearly 26,000 light-years from Earth, and believe they have captured the first-ever picture of a black hole.

It will take months to process the data into an image, but if the scientists succeed, the results may help reveal mysteries about what the universe is made of and how it came into being.

“Instead of building a telescope so big that it would probably collapse under its own weight, we combined eight observatories like the pieces of a giant mirror,” said Michael Bremer, an astronomer at the International Research Institute for Radio Astronomy (IRAM) and a project manager for the Event Horizon Telescope.

“For the first time in our history, we have the technological capacity to observe black holes in detail,” said Bremer.

In reference to how NVIDIA GPUs were used for the work, Andre Young of the Harvard-Smithsonian Center for Astrophysics said: “The data recorded at the SMA (Submillimeter Array on Maunakea, Hawaii) for the EHT (Event Horizon Telescope) observation is a sum of the signals received in the different reflector antennas that comprise the SMA. The way in which this sum of signals is computed means the raw recorded data from the SMA is in a format different from that recorded at other stations in the EHT. Specifically, at a typical EHT station a number of analog signals are each time-domain sampled at a rate of 4,096 megasamples per second and the resulting digitized signal is written directly to disk; at the SMA the corresponding signals are each sampled at 4,576 megasamples per second and converted to the frequency domain prior to recording. Before the SMA data can be combined with data from the rest of the EHT, it first needs to be converted to a compatible format, which is where the GPU processing (by four GTX 980 GPUs) comes in.

“During the processing we read the raw data from a set of hard drives using the same hardware platform used to record data during the observation (the Mark6 VLBI recorder), send the raw data over the network to a GPU server where it is processed, and then send the processed data over the network to a second Mark6 unit which records it to a second set of hard drives. The processing on GPU converts the data from the frequency to the time domain, does some filtering, and resamples the data at the required rate.”
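The conversion Young describes can be sketched in miniature. The pure-Python example below is illustrative only: the real pipeline runs FFT-based processing on GPUs at vastly larger scale, the function names are invented for this sketch, and the filtering step is omitted for brevity. It shows the two essential operations, converting a frequency-domain block back to the time domain and resampling from the SMA rate (4,576 Msps) to the standard EHT rate (4,096 Msps), a ratio of 143:128.

```python
import cmath

SMA_RATE = 4576   # megasamples/s, SMA's frequency-domain recording rate
EHT_RATE = 4096   # megasamples/s, standard EHT time-domain rate

def inverse_dft(spectrum):
    """Naive inverse DFT: one frequency-domain block -> time-domain samples.
    (The real pipeline would use a GPU FFT; this is a readable stand-in.)"""
    n = len(spectrum)
    return [
        sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
            for k in range(n)) / n
        for t in range(n)
    ]

def resample_linear(samples, src_rate, dst_rate):
    """Resample from src_rate to dst_rate by linear interpolation
    (a crude placeholder for the pipeline's proper filtered resampler)."""
    out_len = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(out_len):
        pos = i * src_rate / dst_rate      # position in the source stream
        j = int(pos)
        frac = pos - j
        nxt = samples[min(j + 1, len(samples) - 1)]
        out.append(samples[j] * (1 - frac) + nxt * frac)
    return out

# Toy example: one 143-sample frequency-domain block with a single tone.
# 4576/4096 reduces to 143/128, so 143 input samples -> 128 output samples.
n = 143
spectrum = [0.0] * n
spectrum[3] = 1.0                          # one non-zero frequency bin
time_series = [x.real for x in inverse_dft(spectrum)]
converted = resample_linear(time_series, SMA_RATE, EHT_RATE)
print(len(time_series), len(converted))    # 143 128
```

The 143:128 block ratio is why the conversion is more than a trivial copy: every block of SMA samples must be interpolated onto a slightly coarser time grid before it can line up with the other stations' recordings.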

All of the collected data will now be processed on supercomputers at the MIT Haystack Observatory in Massachusetts.
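The heart of that supercomputer processing is cross-correlation: signals recorded independently at each pair of stations are multiplied against each other at a range of trial time offsets, and the offset that maximizes the product reveals how the wavefront arrived at the two sites. The toy example below is a hedged sketch of that idea in pure Python; variable names and the tiny test signal are invented, and a production VLBI correlator works on petabytes of data with far more sophisticated delay models.

```python
def cross_correlate(a, b, max_lag):
    """Return (lag, correlation) pairs for integer lags in [-max_lag, max_lag]."""
    n = min(len(a), len(b))
    results = []
    for lag in range(-max_lag, max_lag + 1):
        total = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:                 # only overlap contributes
                total += a[i] * b[j]
        results.append((lag, total))
    return results

# Pretend two stations record the same signal, with station B's copy
# arriving 3 samples later than station A's.
signal = [0, 1, 0, -1, 2, 0, 1, -2, 0, 1, 0, 0, 0, 0]
delayed = [0, 0, 0] + signal[:-3]

peaks = cross_correlate(signal, delayed, 5)
best_lag = max(peaks, key=lambda p: p[1])[0]
print(best_lag)   # 3 -- the correlator recovers the 3-sample delay
```

Recovering these delays across every pair of the eight observatories is what lets the array behave as the "pieces of a giant mirror" Bremer describes.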
