Organizations are still choosing to over-provision their data center and IT infrastructure, which can get in the way of efforts to run these facilities more sustainably.
It is not uncommon, for instance, for enterprises to turn off operating features that allow a system to run in its most efficient mode, said John Frey, chief technologist of sustainable transformation at Hewlett Packard Enterprise (HPE).
Also: Apple is building a high-security OS to run its AI data centers – here’s what we know so far
He explained that HPE ships its devices set to operate at their most efficient level, with power performance optimized. However, customers will often turn off the default setting as soon as they receive the new product, worried that it will deprive the system of the computing power it needs to run without lag.
“So we can design our products to operate in the most efficient way and set it to do so as a default. The question is, how do we get customers to leave it [running] that way,” said Frey, who spoke to ZDNET on the sidelines of HPE Discover 2024.
He noted that, more often than not, businesses operate their data center and IT infrastructures at 30% utilization, choosing to over-provision to ease their anxiety about whether these systems will continue to operate smoothly.
These businesses end up with a hardware stack that is hyper-efficient, but used inefficiently, he said. He added that a huge part of HPE’s efforts goes toward helping customers use its products more efficiently, which in turn reduces the energy needed to power these systems.
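The cost of running at low utilization can be sketched with a simple back-of-the-envelope model. The snippet below is not HPE’s methodology; it assumes a generic server whose power draw is a fixed idle floor plus a load-proportional component, with the wattage and idle fraction chosen purely for illustration.

```python
# Rough sketch: the energy cost of over-provisioning, assuming a linear
# server power model (a fixed idle floor plus a load-proportional part).
# The wattage and idle fraction are illustrative assumptions, not HPE figures.

P_MAX_W = 500.0      # assumed peak draw of one server, in watts
IDLE_FRACTION = 0.5  # assumed share of peak power drawn while idle

def power_draw(utilization: float) -> float:
    """Estimated power draw in watts at a utilization between 0.0 and 1.0."""
    idle = P_MAX_W * IDLE_FRACTION
    return idle + (P_MAX_W - idle) * utilization

def watts_per_unit_of_work(utilization: float) -> float:
    """Power spent per unit of useful work delivered; lower is better."""
    return power_draw(utilization) / utilization

for u in (0.3, 0.6, 0.9):
    print(f"{u:.0%} utilization: {power_draw(u):.0f} W total, "
          f"{watts_per_unit_of_work(u):.0f} W per unit of work")
```

Under these assumptions, a server held at 30% utilization burns roughly twice the energy per unit of useful work as one running at 90%, which is the inefficiency Frey describes.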
Customer education and a change in mindset play a big part in driving overall sustainability efforts, he said. For its part, HPE provides whitepapers and case studies, including adoption frameworks to guide customers through the change, according to Frey.
He noted that metrics and analytics also have a role in quantifying the returns for businesses, be it in terms of dollars, risk reduction, enhanced resilience, reduced carbon emissions, or cybersecurity benefits.
Also: Singapore keeping its eye on data centers and data models as AI adoption grows
And while regulations that mandate some of these operating standards can help drive adoption, they should be rolled out in collaboration with the industry and user community, he said. This will help ensure there are no unintended consequences, such as poorer performance in other areas.
Policies that lead to such unintended results may compel companies to move workloads out of the regulating country, which is not what the government wants in setting out these mandates, he said.
Asked about key barriers to building more sustainable data centers, Frey noted that the one-size-fits-all strategy no longer works, particularly amid the anticipated spike in artificial intelligence (AI) workloads.
Facilities that power AI applications will likely need liquid cooling to maintain or improve energy efficiency for these compute-intensive environments. On the other hand, tapping ambient air may be sufficient to cool the insides of data centers running more general-purpose applications, he explained.
Also: Business sustainability ambitions are hindered by these four big obstacles
Efforts to address higher temperatures, such as Singapore’s data center operating standards for tropical climates, are also better suited to traditional workloads, he said.
As companies move toward higher rack power density with their adoption of AI, they will likely need to move to liquid cooling environments, he noted.
Eventually, Frey believes, most data center operators will move in the same direction, as newer, more powerful processors capable of handling more tasks also generate more heat.
The average IT rack used to run at between 3 and 5 kilowatts (kW); over the past decade, that figure has grown to more than 20kW for mainstream computing workloads, he noted.
Power requirements climb further for racks that run AI workloads or train models, exceeding 50kW per rack. Ambient air alone will then not be sufficient to cool such environments, which will drive the need for liquid cooling, he said.
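The physics behind that threshold can be estimated with the standard sensible-heat relation for air, where the airflow needed to carry away a rack’s heat scales linearly with its power draw. The sketch below is an illustrative calculation, not an HPE sizing tool; the air properties and the allowed inlet-to-outlet temperature rise are assumed values.

```python
# Illustrative estimate of the airflow needed to remove a rack's heat,
# using the sensible-heat relation Q = m_dot * c_p * delta_T.
# Air properties and the temperature rise are assumptions for this sketch.

AIR_DENSITY = 1.2         # kg/m^3, roughly room conditions
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)
DELTA_T = 15.0            # K, assumed inlet-to-outlet temperature rise

def airflow_m3_per_s(rack_power_kw: float) -> float:
    """Volumetric airflow (m^3/s) needed to carry away the rack's heat."""
    mass_flow_kg_s = rack_power_kw * 1000 / (AIR_SPECIFIC_HEAT * DELTA_T)
    return mass_flow_kg_s / AIR_DENSITY

for kw in (5, 20, 50):
    flow = airflow_m3_per_s(kw)
    print(f"{kw} kW rack: ~{flow:.2f} m^3/s (~{flow * 2119:.0f} CFM) of air")
```

Under these assumed values, a 50kW rack needs roughly ten times the airflow of a legacy 5kW rack, which is why operators reach for liquid cooling as densities climb: water and other coolants carry heat far more effectively per unit volume than air.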
Also: Global tech spending expected to keep climbing on AI demand
In fact, demand for liquid cooling has driven the data center thermal management market to $7.67 billion, according to tech research and advisory firm Omdia. It is expected to climb at a compound annual growth rate of 18.4% until 2028, fueled by the adoption and development of AI.
In particular, liquid cooling saw significant growth in China and North America, Omdia said. “The data center thermal management [market] is advancing due to AI’s growing influence and sustainability requirements,” the research noted. “Despite strong growth prospects, the industry faces challenges with supply chain constraints in liquid cooling and embracing sustainable practices.”
The firm added that the integration of AI-optimized cooling systems, strategic vendor partnerships, and the ongoing push for energy-efficient and environmentally friendly solutions will shape the industry’s development.
“Data center cooling is projected to be a $16.8 billion market by 2028, fueled by digitalization, high power capacity demand, and a shift toward eco-friendly infrastructure, with liquid cooling emerging as the biggest technology in the sector,” said Shen Wang, Omdia’s principal analyst.
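Those projections line up roughly with the growth rate Omdia cites. The quick check below compounds the $7.67 billion thermal management figure at the stated 18.4% CAGR; the 2023 base year is an assumption, as the article does not state it, and the $16.8 billion cooling projection may cover a slightly different market definition.

```python
# Quick consistency check: compound the reported thermal management
# market size at the stated 18.4% CAGR. The 2023 base year is an
# assumption; it is not stated in the article.

base_year, size_bn = 2023, 7.67
cagr = 0.184

for year in range(base_year + 1, 2029):
    size_bn *= 1 + cagr
    print(f"{year}: ${size_bn:.2f}B")  # lands at roughly $17.8B by 2028
```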