Data center cooling is one of the most widely discussed and important topics in the industry. As we noted in our recent article, “Data Center Real Estate, A Tale of Two Markets,” there is a growing discrepancy between older data centers and new hyperscale facilities. Regardless of a facility’s age or scale, data center power utilization and efficiency are critical.
It’s no secret that data centers are one of the largest consumers of electricity worldwide. It’s estimated that the data center industry is responsible for 1-1.5% of global electricity consumption. This statistic is only expected to increase as cloud services, edge computing, IoT, artificial intelligence (AI), and other digital transformation technologies take hold. Improvements in technology efficiency will only be offset by the ever-increasing amounts of compute and storage required to satisfy consumer and business demands.
Furthermore, data center power density requirements continue to increase year after year. The average rack power density is currently around 7 kW, and it is not uncommon to see densities as high as 15-16 kW per rack. With high-performance computing (HPC), power densities can reach 100 kW per rack. The question becomes: what do increasing power densities and a shrinking footprint mean for data center cooling? How do they impact Power Usage Effectiveness (PUE)? What are data center owners and operators doing to keep up with changing client demands in their facilities?
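To make these figures concrete, here is a minimal back-of-the-envelope sketch. All rack and facility figures are hypothetical examples, not measurements from any real facility; the only fixed constant is the standard conversion of 1 kW ≈ 3,412 BTU/hr, and the PUE formula (total facility power divided by IT equipment power) is the industry-standard definition.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    A PUE of 1.0 would mean every watt goes to IT gear; real facilities are higher."""
    return total_facility_kw / it_equipment_kw

def rack_heat_btu_per_hr(rack_kw: float) -> float:
    """Essentially all power drawn by IT equipment is rejected as heat.
    Standard conversion: 1 kW ≈ 3,412 BTU/hr."""
    return rack_kw * 3412

# An average 7 kW rack vs. a 16 kW high-density rack (hypothetical examples):
print(rack_heat_btu_per_hr(7))   # → 23884
print(rack_heat_btu_per_hr(16))  # → 54592

# A hypothetical facility drawing 1.5 MW total to support 1.0 MW of IT load:
print(pue(1500, 1000))  # → 1.5
```

The takeaway: a single high-density rack can reject more than double the heat of an average rack, and every point of PUE above 1.0 is overhead, much of it cooling.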
In this article, I will examine the current systems and methods for cooling data center facilities, as well as future cooling technologies that could disrupt the industry. We will also look at the different components of data center cooling, along with the costs and potential cost savings involved.
Why is Data Center Cooling Important?
The high costs associated with cooling infrastructure are one of the reasons why businesses abandon on-premises data centers and migrate to colocation. Most private data centers and telco closets are quite inefficient when it comes to cooling IT infrastructure. They also lack the monitoring capabilities of colocation data centers, which makes it increasingly challenging to fully optimize infrastructure to reduce cooling demands.
It should be obvious that poorly managed data center cooling results in excessive heat, which places significant stress on servers, storage devices, and networking hardware. This can lead to downtime, damage to critical components, and a shorter equipment lifespan, which in turn drives up capital expenditures. Moreover, inefficient cooling systems can significantly increase power costs from an operational perspective.
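A rough sketch of how cooling inefficiency shows up on the power bill, using the PUE concept: everything the facility draws beyond the IT load itself is overhead (cooling, distribution losses, lighting). The load, PUE values, and $0.10/kWh electricity price below are illustrative assumptions, not industry benchmarks.

```python
def annual_overhead_cost(it_load_kw: float, pue: float, price_per_kwh: float) -> float:
    """Yearly cost of all non-IT energy (cooling, losses), given a facility's PUE.
    Overhead power = IT load * (PUE - 1); 8,760 hours in a year."""
    overhead_kw = it_load_kw * (pue - 1.0)
    return overhead_kw * 8760 * price_per_kwh

# Hypothetical 500 kW IT load at an assumed $0.10/kWh:
inefficient = annual_overhead_cost(500, 1.8, 0.10)  # older on-prem facility
efficient = annual_overhead_cost(500, 1.2, 0.10)    # well-run colocation facility
print(round(inefficient), round(efficient))
```

Under these assumptions, moving the same IT load from a PUE of 1.8 to 1.2 cuts annual overhead spend from roughly $350,000 to under $90,000, which is one reason cooling efficiency features so prominently in colocation decisions.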
Current Cooling Systems & Methods
Calibrated Vectored Cooling (CVC)
CVC is a form of data center cooling technology designed specifically for high-density servers. It optimizes the airflow path through equipment so the cooling system can handle heat more effectively, making it possible to increase the number of circuit boards per server chassis while using fewer fans.
Chilled Water System
Chilled water is a data center cooling approach, commonly used in mid-to-large-sized data centers, that uses chilled water to cool the air being drawn in by computer room air handlers (CRAHs). The water is supplied by a chiller plant located somewhere in the facility.
Cold Aisle/Hot Aisle Containment
Cold and hot aisle containment is a common form of data center server rack deployment that uses alternating rows of “cold aisles” and “hot aisles.” A cold aisle has cold air intakes on the front of the racks, while a hot aisle has the air exhausts on the rear of the racks. Hot aisles expel hot air into the air conditioning intakes to be chilled and then vented into the cold aisles. Empty rack slots are filled with blanking panels to prevent overheating and wasted cold air.
Computer Room Air Conditioner (CRAC) Units
One of the most common features of any data center, CRAC units are very similar to conventional air conditioners: a compressor draws air across a refrigerant-filled cooling unit. They are quite inefficient in terms of energy usage; however, the equipment itself is comparatively inexpensive.
Computer Room Air Handler (CRAH) Units
A CRAH unit functions within a wider system involving a chilled water plant (or chiller) somewhere in the facility. Chilled water flows through a cooling coil inside the unit, which uses modulating fans to draw air from outside the facility. Because they operate by chilling outside air, CRAH units are much more efficient in locations with colder year-round temperatures.
Critical Cooling Load
This measurement represents the total usable cooling capacity (usually expressed in watts of power) available on the data center floor for cooling servers.
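A simple capacity check against the critical cooling load can be sketched as follows. The rack counts and the 10% headroom reserve are hypothetical illustrations; real facilities size headroom to their redundancy design (e.g., N+1).

```python
def cooling_capacity_ok(rack_loads_kw, usable_cooling_kw, headroom=0.9):
    """Compare total IT heat load against the floor's usable cooling capacity,
    reserving headroom (here 10%, an assumed figure) for redundancy."""
    total = sum(rack_loads_kw)
    return total <= usable_cooling_kw * headroom, total

# Hypothetical floor: 40 average 7 kW racks plus 5 high-density 16 kW racks,
# against 400 kW of usable cooling capacity.
racks = [7] * 40 + [16] * 5
ok, total = cooling_capacity_ok(racks, usable_cooling_kw=400)
print(ok, total)  # → True 360
```

Note how quickly a handful of high-density racks consumes capacity: the 5 HPC-style racks above contribute 80 kW, nearly a third as much as the 40 average racks combined.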
Evaporative Cooling
Evaporative cooling manages temperature by exposing warm air to water, causing the water to evaporate and draw heat out of the air. The water can be dispersed either through a misting system or via a moist material such as a filter or mat. While this approach is extremely energy efficient, since it doesn’t use CRAC or CRAH units, it does demand a great deal of water. Data center cooling towers are often used to facilitate evaporation and transfer excess heat to the outside atmosphere.
Free Cooling
Free cooling refers to any data center cooling system that uses the outside atmosphere to supply cooler air to the servers instead of continually chilling the same air. Although it can only be implemented in certain climates, it is a very energy-efficient form of server cooling.
Raised Floor
A raised floor is a frame that lifts the data center floor above the building’s concrete slab. The space between the two is used for water-cooling pipes or improved airflow. While power and network cables are sometimes run through this space as well, newer data center cooling designs and best practices place these cables overhead.
Visit our partners at vXchnge for more information on data center cooling technologies.
Future Cooling Systems & Technologies
Though air cooling technology has improved significantly over the years, it is still limited by fundamental problems. Beyond significant energy costs, air conditioning systems take up a great deal of data center space. They also introduce moisture into sealed environments and are notorious for mechanical failures.
Until recently, data centers had no other choices for meeting their cooling demands. With many new liquid cooling technologies and methods available, colocation data centers are starting to experiment with new methods for solving their cooling challenges.
Liquid Cooling Technologies
While early iterations of liquid cooling systems were complicated, messy, and very pricey, the latest generation provides increased efficiency and effectiveness in cooling. Unlike air cooling, which requires a lot of power and introduces pollutants and condensation into the data center, a liquid cooling system is cleaner, more scalable, and highly targeted. Two common liquid cooling methods are full immersion cooling and direct-to-chip cooling.
Full Immersion Cooling
Immersion systems involve submerging the hardware itself in a bath of non-conductive, non-flammable dielectric fluid. Both the fluid and the hardware are contained within a leak-proof case. The dielectric fluid absorbs heat far more efficiently than air, and as the heated fluid turns to vapor, it condenses and falls back into the bath to aid in cooling.
Direct to Chip Cooling
Direct-to-chip cooling uses pipes that deliver liquid coolant directly to a cold plate sitting atop a motherboard’s chips to draw off heat. The extracted heat is then fed into a chilled-water loop, transported back to the facility’s cooling plant, and expelled into the outside atmosphere. Both methods provide far more efficient cooling for power-hungry data center deployments.
Future Demands from AI, HPC, and GPUs?
Power and cooling efficiency will continue to be a top concern for data centers in the future. New generations of processors for machine learning, artificial intelligence, and analytics applications will place massive energy demands on facilities and generate substantial amounts of heat.
How will data center owners and operators respond? I believe that future cooling technologies like liquid and immersion cooling will play a critical role in the data center of the future, both at the hardware manufacturer level and at the data center level. In addition, I can see a future where rack and containment products undergo extensive changes, including self-contained racks and even private cage space. Just imagine data center facilities going back to racks on concrete floors, or rather, self-contained ecosystems on concrete.
What are your thoughts? I’m curious to know what you think about data center cooling technologies and where the industry is heading.