Servers immersed in a liquid cooling solution from GRC (Green Revolution Cooling). (Photo: GRC)

It sounds like science fiction: Take a supercomputer and immerse it in tanks of liquid coolant, which are kept cool without the use of water. This sci-fi scenario has created a real-world scientific computing powerhouse.

The Vienna Scientific Cluster uses immersion cooling, dunking Supermicro servers into a dielectric fluid similar to mineral oil. Servers are inserted vertically into slots in a tank filled with 250 gallons of ElectroSafe fluid, which transfers heat almost as well as water but doesn’t conduct an electric charge.

The system has emerged as one of the world’s most efficient supercomputers, as measured by Power Usage Effectiveness (PUE), the leading metric for the efficiency of data center facilities. The Vienna Scientific Cluster 3 (VSC-3) system touts a mechanical PUE of just 1.02, meaning the cooling system overhead is just 2 percent of the energy delivered to the system. A mechanical PUE doesn’t account for energy loss through the power distribution system, which means the actual PUE would be slightly higher.
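As a back-of-the-envelope check, PUE is the ratio of total facility energy to the energy delivered to the IT equipment, so a mechanical PUE of 1.02 corresponds to cooling overhead equal to 2 percent of IT load. A minimal sketch, using illustrative wattage figures that are not stated in the article:

```python
def mechanical_pue(it_kw: float, cooling_kw: float) -> float:
    """Mechanical PUE: (IT load + cooling load) / IT load.

    Unlike full PUE, this ignores power-distribution losses,
    which is why the article notes actual PUE would be higher.
    """
    return (it_kw + cooling_kw) / it_kw

# Hypothetical example: 500 kW of IT load with 10 kW of cooling
# overhead yields the 2 percent figure quoted for VSC-3.
print(mechanical_pue(500, 10))  # 1.02
```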

The end result: 600 teraflops of computing power using just 540 kilowatts of power and 1,000 square feet of data hall space.
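Those two figures imply a compute efficiency of roughly 1.1 gigaflops per watt, a derived number rather than one stated in the article:

```python
# Figures reported for VSC-3.
teraflops = 600
kilowatts = 540

# Convert both to base units: 1 TFLOPS = 1e12 FLOPS, 1 kW = 1e3 W,
# then express the ratio in GFLOPS (1e9 FLOPS) per watt.
gflops_per_watt = (teraflops * 1e12) / (kilowatts * 1e3) / 1e9
print(round(gflops_per_watt, 2))  # ~1.11
```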

“We are very impressed by the efficiency achieved with this installation,” said Christiaan Best, CEO and founder of Green Revolution Cooling, which designed the immersion cooling system. “It is particularly impressive given that it uses zero water. We believe this is a first in the industry.”

Why Liquid Cooling Matters

Liquid cooling can offer clear benefits in managing compute density and may also extend the life of components. The vast majority of data centers continue to cool IT equipment using air, while liquid cooling has been used primarily in high-performance computing (HPC). With the emergence of cloud computing and “big data,” more companies are facing data-crunching challenges that resemble those seen by the HPC sector, which could make liquid cooling relevant for a larger pool of data center operators.

Last fall at the SC14 conference, a panel of HPC experts outlined their expectation of a rapid expansion of liquid cooling that may extend beyond its traditional niches. At Data Center Frontier we’ll be tracking this transition, and keeping readers posted on relevant innovations in liquid cooling, such as the water-less implementation in Vienna.

The Vienna Scientific Cluster combines several efficiency techniques to create a system that is stingy in its use of power, cooling and water.

Water management is a growing priority for the IT industry, as cloud computing concentrates enormous computing power in server farms supported by cooling towers. In these towers, heated water from the data center is cooled, with the heat removed through evaporation. Most of the water is returned to the data center cooling system, while some is drained out of the system to remove sediment.

The fluid temperature in the immersion tank is maintained by a pump with a heat exchanger, which is usually connected to a standard cooling tower. The Vienna Scientific Cluster uses a closed loop dry cooler as the final method of heat rejection, requiring no water at all. Energy use may rise slightly in the summer, but should still remain near the 1.1 to 1.2 level seen among leading hyperscale data centers.


The novelty of the Vienna design is that it combines a water-less approach with immersion cooling, which has proven effective for cooling high-density server configurations, including high-performance computing clusters for academic computing, seismic imaging for energy companies, and even bitcoin mining.

Breaking the CRAC Habit

While not seen often in today’s enterprise and cloud data centers, liquid cooling isn’t new. If you’ve been around the industry for a few years, you’ll recall the days when water-cooled mainframes were standard in corporate data centers. But that soon shifted to racks of servers cooled by air using the familiar “hot aisle/cold aisle” design seen in most data centers today, with water chilling loops confined to the air handlers and “CRACs” (computer room air conditioners) housed around the perimeters of the data hall.

The alternative is to bring liquids into the server chassis to cool chips and components. This can be done through enclosed systems featuring pipes and plates, or by immersing servers in fluids. Some vendors integrate water cooling into the rear-door of a rack or cabinet.

Immersion takes a different approach, sinking the equipment in liquid to cool the components.

Green Revolution has been at the forefront of the recent resurgence of interest in immersion. In addition to supporting extreme power density, immersion cooling offers potential economic benefits by allowing data centers to operate servers without a raised floor, computer room air conditioning (CRAC) units or chillers. It also eliminates the need for server fans, which are power hogs in their own right.

VSC-3 was installed in 2014, with Green Revolution Cooling working alongside Intel, ClusterVision, and Supermicro. It supersedes the VSC-2 cluster, which used a rear-door cooling solution and achieved a mechanical PUE of 1.18. VSC-3 features 2,020 compute nodes, each with 16 processor cores, housed in the CarnotJet tanks.

The Cost Component of Cooling

Liquid cooling often requires higher up-front costs, which can be offset by savings over the life of a project. Economics were a key driver for the Vienna design.

“The value proposition (of the GRC system) was extremely impressive,” said Christopher Huggins, Commercial Director at ClusterVision, a leading European HPC specialist. “The whole data center and cluster was far less expensive than it would have been with any other cooling solution on the market. We are certain we will be using the GRC solution on more projects in the future.”