Hewlett Packard Enterprise (HPE) has quickly become one of the major suppliers of supercomputer-class computing, thanks in part to its purchase of one of the big names on the Top500 list.

In November 2016, HPE acquired high-performance computing (HPC) company SGI, pushing it up the Top500 supercomputer league table.

In fact, 26 of the 145 HPE supercomputers ranked in the June 2017 Top500 list came from the SGI acquisition, with several of these systems placed in the top 50.

Analyst firm IDC estimated that the global HPC market would be worth $15.3bn by 2019 – and HPE is set to have a major share.

In January, IDC published a whitepaper that suggested the SGI acquisition would inject supercomputer skills into HPE’s business. IDC noted that the SGI employees who transferred across to HPE were experienced at designing hardware-software systems that compete at the bleeding-edge performance levels that typify leadership-class supercomputers.

HPE has now set itself a goal of achieving exascale computing, a level of processing power equivalent to all of today’s Top500 supercomputers combined.

In June, HPE was awarded a research grant from the US Department of Energy (DOE) to develop a reference design for an exascale supercomputer. The overall goal of the programme is to achieve exascale performance in 2022-23. To reach this goal, high-performance computers will need to be 10 times faster and more energy efficient than today’s fastest supercomputers.

Lower supercomputing energy consumption

Mike Vildibill, vice-president of the advanced technology group at HPE, leads the company’s exascale activity. Speaking to Computer Weekly about the challenges of reaching exascale computing, he said: “If you look at the Top500 – added together, all 500 systems would be equivalent to one exaflop.”

In total, these systems consume 600MW (megawatts) of power, which Vildibill said approaches the output of a nuclear reactor. “The DOE wants to develop a system that only consumes 20MW of power.”

Vildibill does not believe future innovations in chip architecture alone will be enough to cut the immense amount of energy needed for exascale computing. “Just waiting for technological advancement will not come close to achieving the energy requirements,” he said, predicting that the power needed to move data will exceed the power needed to process it. “By 2022, we will consume more power moving data than processing data.”
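The figures quoted above can be sanity-checked with some back-of-envelope arithmetic. This sketch assumes only the round numbers from the article – one exaflop delivered by today’s Top500 at a combined 600MW, versus the DOE’s 20MW target – and works out the implied energy per floating-point operation:

```python
# Back-of-envelope check of the exascale power budget, using only the
# round numbers cited in the article (assumptions, not measured data).

EXAFLOP = 1e18            # floating-point operations per second

top500_power_w = 600e6    # combined power of today's Top500 (600 MW)
doe_target_w = 20e6       # DOE exascale power target (20 MW)

# Energy per floating-point operation in joules (W divided by op/s)
today_j_per_flop = top500_power_w / EXAFLOP    # 6e-10 J, i.e. ~600 picojoules
target_j_per_flop = doe_target_w / EXAFLOP     # 2e-11 J, i.e. ~20 picojoules

# How much more energy-efficient an exascale machine must be
improvement = today_j_per_flop / target_j_per_flop
print(f"Required efficiency gain: {improvement:.0f}x")  # prints "Required efficiency gain: 30x"
```

In other words, hitting the DOE target means doing the same exaflop of work on roughly one-thirtieth of the energy – which is why Vildibill argues that waiting for routine chip-level advances will not be enough.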

The Machine is HPE’s proof-of-concept next-generation hardware architecture that aims to overcome the limits of today’s IT by using large memory arrays. Rather than moving data between main memory and the processor, HPE’s architecture relies on the concept of memory-driven computing, where data is stored in main memory so that it no longer needs to be copied from one location to another for the computer’s processor to work on it.

Memory-driven computing is at the heart of HPE’s exascale reference design; the work on exascale computing and memory-driven computing is derived from HPE Labs’ The Machine research programme.

HPE said the fundamental technologies that will be instrumental in the exascale project include a new memory fabric and low-energy photonics interconnects. The company said it would be looking at non-volatile memory options that could attach to the memory fabric, significantly increasing the reliability and efficiency of exascale systems.

“HPE is taking some of the fundamental parts of The Machine and dramatically reducing the power of these systems,” said Vildibill.