Power-hungry data centers and HPC systems adopt innovative, environmentally-friendly solutions
Now that we’ve entered the era of exascale high-performance computing (HPC) systems, component makers are set on a new target: zettascale. Getting there means components like processors, GPUs, fabric, and storage must continue their brisk development pace and overcome bottlenecks that impede the necessary speed and scale. An even more significant challenge is meeting the electrical demands of coming HPC systems and data centers sustainably. Some exascale systems already have energy requirements akin to those of an entire town, and that demand will grow dramatically over time. During May 2022’s International Supercomputing Conference, Intel’s Jeff McVeigh noted that future computing infrastructure could consume between 3% and 7% of global electricity production by 2030.
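To put that 3–7% range in rough perspective, a back-of-the-envelope estimate takes only a few lines. The global production figure below is an illustrative round number chosen for this sketch, not a sourced forecast:

```python
# Back-of-the-envelope estimate of the electricity the 3-7% range implies.
# ASSUMPTION (illustrative, not a sourced projection): global electricity
# production of roughly 32,000 TWh/year by 2030.
GLOBAL_TWH_2030 = 32_000

def computing_share_twh(share: float, global_twh: float = GLOBAL_TWH_2030) -> float:
    """TWh/year consumed by computing at a given share of global production."""
    return share * global_twh

low = computing_share_twh(0.03)   # 3% of assumed production
high = computing_share_twh(0.07)  # 7% of assumed production
print(f"Implied computing demand: {low:,.0f}-{high:,.0f} TWh/year")
```

At the assumed production level, the range works out to roughly 960 to 2,240 TWh per year, which is why even modest efficiency gains in cooling matter at this scale.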
Improving cooling systems
One of the biggest power consumers in data center infrastructure is cooling. In past years, air cooling through heat sinks and fans could keep processors within ideal operating temperatures, but these legacy approaches will be unable to keep up with future hardware. We already see some very interesting approaches that dissipate heat while consuming less electricity in the process. For example, Microsoft announced success with its prototype underwater data center. Because it was built into a sealed container, the system could be submerged many fathoms deep to take advantage of the ocean’s temperatures for cheap, natural cooling. As a bonus, because the system remained vibration-free and in a fully controlled environment, the server gear inside demonstrated greater longevity than hardware in a typical land-based data center.
In another scenario, High-Performance Computing as a Service (HPCaaS) provides cloud instances for advanced workloads instead of on-premises HPC infrastructure. HPCaaS instances hosted at atNorth take a multi-tiered approach to sustainability. By building its data centers in Iceland, atNorth lets the island’s low ambient temperatures assist in the cooling process. Iceland also offers several sources of green power from geothermal and hydroelectric generation. The combination helps atNorth data centers consume less electricity, and what they do consume is plentiful, inexpensive, and renewable.
We also see cases in which heat produced by data centers is redirected for other uses. For example, several data centers in Stockholm use surplus heat to warm buildings, and another transfers excess heat to nearby greenhouses to help grow food for the local population.
Immersion cooling approaches
Not every data center can embrace solutions like those above, so for the rest, making the in-house systems that cool accelerator cards and processor cores more efficient remains a high priority. Liquid cooling approaches are stepping up to address that need.
Today, three types of liquid cooling are in various stages of use and development: liquid-assisted air cooling, immersion, and direct-to-chip recirculation. The most straightforward is the liquid-plus-air approach, in which water-filled heat sinks connect to a server rack and dissipate heat through a specialized radiator. In direct-to-chip cooling, a server rack has one or more pumps that drive cooled water into a jacket atop a processor. When warm water exits the jacket, tubes guide it back to the rack’s cooling area. Since it’s a closed loop, the same water is used over and over, making the process more environmentally friendly.
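The sizing of such a closed loop follows from basic calorimetry (heat removed = mass flow × specific heat × temperature rise). The rack power and allowable temperature rise below are illustrative assumptions, not figures from any particular product:

```python
# Estimate the water flow a direct-to-chip loop needs to carry a given heat load.
# Physics: Q = m_dot * c * dT (heat load = mass flow x specific heat x temp rise).
# ASSUMPTIONS (illustrative): a 30 kW rack and a 10 K allowable rise in water
# temperature across the cold plates.
C_WATER = 4186.0  # specific heat of water, J/(kg*K)

def required_flow_kg_per_s(heat_load_w: float, delta_t_k: float) -> float:
    """Mass flow of water needed to absorb heat_load_w with a delta_t_k rise."""
    return heat_load_w / (C_WATER * delta_t_k)

flow = required_flow_kg_per_s(30_000, 10)  # ~0.72 kg/s, roughly 0.72 L/s of water
print(f"Required flow: {flow:.2f} kg/s")
```

A flow on the order of a single liter per second can carry away tens of kilowatts, which is why a compact recirculating loop can replace fans that must move far larger volumes of air to shed the same heat.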
In contrast to this approach, some OEMs design servers for immersion cooling. The process involves submerging a sealed server system into a “bathtub” of coolant. With “single phase” immersion systems, the coolant remains in the tub as a liquid. …….
Source: https://www.datasciencecentral.com/making-data-centers-more-sustainable/