Data Centers Struggle To Mitigate Cooling Issues

Liquid cooling is already in many enterprise data centers

Data centers occupy large amounts of space, consume considerable energy and tend to overheat the rooms they occupy. Yet these issues pale in comparison to underlying design flaws that threaten not only the reliability of their capacity but could also cost companies millions down the road. We've outlined the concerns data centers face heading into the next decade, bearing in mind the virtualization that many data centers will soon be adopting.

Environment

The outer environment matters to ecologists, but the inner environment a data center maintains is just as important to keeping it running smoothly. IT managers need an extremely pragmatic approach: keep data centers cool, keep servers running efficiently without wasting millions of dollars annually on energy, and maintain a safe working environment. To curtail costs, underutilized systems are often kept in service instead of replacing outdated or inefficient equipment; instead, workloads need to be migrated off older machines to make way for more energy-conscious rack servers. A company that cannot address these environmental concerns can eventually be forced to shut down, especially in a sluggish economy.

Security Threats

Facing constant hacks and security breaches, IT companies battle malicious server intrusions daily, frequently changing their parameters to learn how the uninvited guests got inside. Security issues also concern businesses on current cloud computing platforms and frustrate government entities to no end. Low-latency data connectivity is in high demand, yet many older servers cannot meet these needs as reliably as required.

Cooling

Perhaps the largest underlying server issue, and the one that makes the previous two threats more likely, is cooling the data center itself. Larger servers, responding to the need for higher utilization in daily business operations, are increasingly hard to cool when put under heavy strain. The harder servers have to work, the more electricity is used and inevitably wasted. With servers operating in tight spaces, cooling large rooms full of running equipment has become increasingly difficult for IT managers. Factor in both internal and external connections to the servers and you can see how several million dollars' worth of hardware can be difficult to keep in a cool environment.

Does the blame for cooling problems lie with the equipment itself, or do the rooms warehousing the servers play a role? We delve deeper into the cooling constraints servers face and why.

The Struggle To Cool Data Centers

Stuck in windowless rooms with inadequate ventilation and often little supervision, racks upon racks of servers grow hotter by the minute, causing errors on both the end-user and server sides. The struggle to cool data centers has been a persistent issue for roughly 30 years, and now, with many programs and data being virtualized on cloud platforms, cooling these rooms and the equipment within them has become too large a problem for conventional methods. With roughly 1 million kWh of electricity wasted yearly by data centers, the first prescription is to change the equipment. What precisely causes data centers to struggle with cooling? Let's examine further:

Lack Of Central Cooling Source

Saving money and man-hours may look frugal on the surface, yet refusing to channel adequate air into server rooms is counterproductive. Innovations that would let servers operate with less heat emission are still several years away, yet immediate remedies are not being implemented either, so servers still go down for periods of time while they cool off. Centralized cooling, or placing larger rack servers within a contained 'hot aisle', still has not been fully implemented at larger companies to date.

High Voltage Wiring

Today's wiring standard for data centers calls for more scalable, modular wiring configurations to replace the high-voltage power eaters of the past. Instead of investing in rewiring their data centers, many smaller IT firms avoid the cost entirely by placing fans in larger rooms to cool the wiring down, or by cutting off the building's central air unit to reduce voltage strain. Uninterruptible power supplies are also lacking in many data centers, which drives up costs and leaves them susceptible to electromagnetic surges during massive storms and to overheating wiring, especially when relying on older CAT cables for connectivity.

Uneven Cooling

Data centers that do have cooling devices have often planned their placement poorly, benefiting only the racks closest to the cooling source. When cooling is distributed unevenly, server farms can struggle to carry out commands or even stay operational, especially when one server relies on the actions of another. Inadequately placed ducting, with no cold-air return in place, also wastes electricity, because the cooling unit must strain to produce cool air when no return path exists.

While cooling is perhaps the catalyst behind many other data center issues, there are innovative measures managers can deploy: cooling both the wiring configurations and the servers, evening out air distribution, and keeping costs down while meeting environmental standards as we head into the next decade of data center needs built on cloud computing.

Data Center Cooling Solutions

Many of the issues IT firms deal with today pertain to airflow: poor airflow causes unnecessary server strain, uses more electricity and wears down equipment far faster than if adequate air were available. Electricity distribution can also become more predictable, since newer configurations can route specific power flows through the wiring and on to IT mainframes and servers. Cooling liquid can be drawn through piping to cool servers, improving power circulation while recycling itself to save further costs. These methods could lead to substantial savings on wasted electricity. Given today's limits on the technology, however, realizing those savings could take some time, depending on what specific vendors offer.

Pumping coolant into IT equipment is still an immature yet attainable process that has yet to be perfected. With proper direction, air conditioning plans for data centers in place, and attention to the finer details of these innovations, cooling efficiency could improve by an incremental 5–10% almost immediately, with electrical waste slowly phased out as smarter employee computer systems work in concert with greener servers.
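As a rough illustration of what a 5–10% efficiency gain means in practice (the facility figure below is a hypothetical assumption, not a measurement):

```python
def annual_savings_kwh(annual_kwh, efficiency_gain):
    """Energy saved per year from an efficiency improvement.

    efficiency_gain is a fraction, e.g. 0.05 for a 5 % gain.
    """
    return annual_kwh * efficiency_gain

# Hypothetical facility spending 1,000,000 kWh/year on cooling:
print(annual_savings_kwh(1_000_000, 0.05))  # 50000.0 kWh saved
print(annual_savings_kwh(1_000_000, 0.10))  # 100000.0 kWh saved
```

Even at the low end of the range, the saved kilowatt-hours compound year after year across every facility that adopts the improvement.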

Although we've thoroughly stressed greener equipment and better cooling, proper architectural attention and eco-efficient strategies still need implementation for the future of data centers to thrive on cloud-based solutions and mass business storage. Reducing oversizing, coupled with higher-efficiency equipment and effective architectural design, could cut electricity consumption by over 40% while still delivering an effective product to consumers and businesses alike. But does cooling really require that much effort when simpler methods already exist that are not being properly implemented?

Equipment racks with perforated doors would be an initial fix once the wiring issues are resolved. With perforated rack faces, air can pass through freely and compensate where internal server fans fall short. Solid-front server racks trap heat, and a common rule of thumb holds that equipment longevity drops by 50% for every 10 °C (18 °F) above 20 °C, which adds up to substantial losses over time. Since recent manufacturing specs indicate that newer servers will emit 10 kW to 20 kW of heat per rack, small businesses and even larger IT corporations will struggle to cool newly purchased servers unless properly ventilated rooms and cooling processes such as those listed above are in place.
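The longevity rule of thumb above can be sketched as a simple doubling relationship (a back-of-envelope model, not a vendor specification):

```python
def relative_failure_rate(temp_c, baseline_c=20.0):
    """Rule-of-thumb model: equipment failure rate roughly doubles
    (i.e. expected lifespan halves) for every 10 °C (18 °F) rise
    above the baseline operating temperature.
    """
    return 2 ** ((temp_c - baseline_c) / 10.0)

print(relative_failure_rate(20))  # 1.0  (baseline)
print(relative_failure_rate(30))  # 2.0  (twice the failure rate)
print(relative_failure_rate(40))  # 4.0  (four times the failure rate)
```

The exponential shape is the point: a rack running even modestly hot pays a disproportionate reliability penalty, which is why passive fixes like perforated doors matter.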

According to Moore's Law, the processing power of semiconductors doubles roughly every 18 to 24 months. Shrinking footprints mean that space is being reclaimed without any cooling solution in place to compensate for the denser heat load. Compacting technology has obvious appeal, yet from the standpoint of electrical consumption and overheating servers, shrinking the space is not always practical. Overall, server footprints are shrinking at an alarming rate of roughly 30% per year, while chip power has nearly doubled over the last two years to an output of roughly 118 watts per chip.
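To illustrate how quickly that doubling compounds (the 18- and 24-month periods are the rule-of-thumb assumptions, not guarantees):

```python
def relative_density(years, doubling_period_months=24):
    """Processing density relative to today, assuming one doubling
    every `doubling_period_months` per the Moore's Law rule of thumb.
    """
    doublings = years * 12 / doubling_period_months
    return 2 ** doublings

print(relative_density(6))      # 8.0  (three doublings at 24 months)
print(relative_density(6, 18))  # 16.0 (four doublings at 18 months)
```

Eight to sixteen times the density in the same floor space within six years means eight to sixteen times the heat concentrated in the same racks, unless cooling capacity scales with it.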

Solve cooling issues now. Save money later.

Roughly 60 million megawatt-hours of data center electricity is wasted every year without productive results, raising alarms across the information technology world. With current server configurations, only about half the electricity purchased actually reaches the compute loads; in effect, IT centers are buying 50% of their electricity simply to throw it away while the other 50% does the actual work. With many vendors still pushing power-hungry products, IT companies are looking both for an effective solution for data handling and for a way to get on board with renewable energy sources. With overhead costs to consider and a sluggish economy not yet back to pre-recession levels, finding data center vendors with renewable-energy-ready servers has become difficult at best.
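That "half the electricity reaches the compute loads" figure corresponds to a Power Usage Effectiveness (PUE) of 2.0, the standard industry metric for facility overhead (the numbers below are illustrative only):

```python
def pue(total_facility_kwh, it_load_kwh):
    """Power Usage Effectiveness: total facility energy divided by
    the energy that actually reaches the IT equipment.
    1.0 is the theoretical ideal (zero overhead).
    """
    return total_facility_kwh / it_load_kwh

# If only half of the purchased electricity reaches the compute loads:
print(pue(100.0, 50.0))  # 2.0
```

Every improvement in cooling and power distribution shows up directly as a lower PUE, which is why the metric has become the shorthand for data center efficiency.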

Saving energy, however, is not the only point of contention for IT companies: since many data centers across the globe are reaching the end of their lifespans, reports of dying servers incapable of handling today's data storage and retrieval demands are surfacing more often in meetings. Overall, the need has arisen to improve capacity utilization while lowering power consumption, stopping the unnecessary electrical spending each month. These steps would cut down on both man-hours and flagrant waste of power. The problem has become more prevalent across IT centers nationwide, yet woven into the fabric of information storage and distribution lie antiquated power sources that grow more outdated by the day.

One encouraging trend is that much of the cooling and power infrastructure is becoming more energy efficient while continuing to gain in performance and capacity, even if it still takes a distant second place to today's energy hogs. Software solutions that improve energy productivity, including server virtualization and centralized power management, are more widely available. Cloud computing infrastructures, both public and private, offer significant energy-efficiency gains over traditional IT infrastructures.

Guaranteeing the consistency of critical system processes has been a major challenge since the early days of information technology. As a result, computer systems have routinely been overbuilt to reduce the likelihood of unplanned disruptions due to hardware or software failures, or slowdowns caused by unexpected user demand. Will our current infrastructure be able to make a plausible run at changing the sources of this energy? We believe it is inevitable.

Nipping the cooling issue in the bud should begin with innovative engineering: reworking floor-plan designs, redesigning air conditioning to promote better airflow into rooms containing server farms, and ditching the notion that smaller is always better when allocating server floor space. From the floor up, the cooling issue can be tackled in a cost-conscious manner simply by rethinking the server itself and properly allocating air, or by channeling cooling fluid across high-impedance wiring.

As an IT community we already possess the tools to implement these cooling solutions; it's just a matter of proper planning.