According to Google, the problem with computing such a large number of digits is that the time and resources the task needs grow faster than the digit count itself. Iwao said the biggest challenge was that the project required "a lot of storage and memory to calculate." Completing it took a whopping 170 terabytes of data, roughly equivalent to all the indexed, searchable parts of the internet back in 2002. In addition, the longer the supercomputer works on a computation, the higher the risk of a hardware failure disrupting the process.
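To make that scaling concrete: arbitrary-precision pi programs built on the Chudnovsky series with binary splitting run in roughly O(n (log n)^3) time for n digits, so the work grows faster than the digit count. Here is a minimal Python sketch of that toy cost model; the digit counts are illustrative, not the project's actual figures:

```python
import math

def relative_cost(digits: float) -> float:
    """Toy cost model: roughly n * (log2 n)^3 units of work for n digits."""
    return digits * math.log2(digits) ** 3

# Illustrative digit counts (not the record attempt's real targets):
baseline = relative_cost(1e12)
for n in (1e12, 10e12, 100e12):
    # 10x the digits costs ~13x the work; 100x costs ~159x.
    print(f"{n:.0e} digits -> {relative_cost(n) / baseline:.0f}x the work of 1e12")
```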

Running the computation in the cloud avoided those issues and made 170 terabytes of storage readily available, since Google only had to keep its cloud infrastructure running. It also lets the tech giant sell researchers access to the digits for $40 per day without having to store the massive dataset on physical drives.
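As a rough illustration of that serving model, a researcher might pull just the slice of digits they need over HTTP instead of receiving drives. The endpoint, parameters, and response shape below are hypothetical stand-ins, not Google's actual interface:

```python
import json
import urllib.request

# Hypothetical endpoint and query parameters, stand-ins for
# whatever interface Google actually sells access through.
BASE_URL = "https://example.com/v1/pi"

def fetch_digits(start: int, count: int) -> str:
    """Fetch `count` decimal digits of pi starting at offset `start`."""
    url = f"{BASE_URL}?start={start}&numberOfDigits={count}"
    with urllib.request.urlopen(url) as resp:
        # Assumed response shape: {"content": "31415926535..."}
        return json.load(resp)["content"]

# Only the requested slice crosses the network; the multi-terabyte
# dataset stays in Google's cloud storage.
print(fetch_digits(0, 100))
```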