New problems emerge as big data centers spread

Posted on April 8, 2026 by admin

By Kennedy Maize

Literally hot new environmental issues are emerging as giant data centers proliferate: water stress, heat islands, and the choice of power supply.

A new report from international consultancy Wood Mackenzie finds that water stress — the availability of water for cooling industrial processes — is “becoming a strategic constraint for energy infrastructure as AI pushes compute density beyond air cooling capacity.”

Water stress is not a new issue in the energy sector. In France, high river water temperatures and low flows in summer have often forced nuclear plants to reduce output or shut down temporarily as they face insufficient cooling water. Hydroelectric plants are critically impacted by water stress as H2O is essentially their fuel. The two giant hydro projects on the Colorado River — Hoover Dam and Glen Canyon Dam — are facing existential issues with the long-term decline in the river.

According to Wood Mac, “By 2050, 31% of global GDP will be exposed to high water stress, up from 24% in 2010, threatening the thermal power plants and data centres that underpin modern economies.” Data centers have typically cooled their multiple computers by circulating cool air over them. But the recent trend toward ever bigger data centers, driven by the computing needs of large language models, is leading them toward water cooling.

As data centers reach and surpass a gigawatt in electric demand to run their enormous banks of computers, they generate ever more heat. The report notes that “the explosion of AI computing is pushing data centres toward liquid cooling systems that can handle 250 kilowatts per rack; 10 times what air cooling can manage and creating a parallel demand spike just as water availability becomes volatile.”

According to the report, “Air cooling is limited to 15-20 kilowatts per rack. Modern AI training nodes exceed 120-200 kilowatts per rack. That gap cannot be bridged with more fans and pushes data centres toward liquid cooling systems.” Water cooling is also more energy efficient than air cooling: in an air-cooled data center, roughly 60% of the power input goes to the computing equipment and 40% to cooling, while liquid cooling shifts that split to about 90-10.
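Those splits make the stakes concrete. A minimal sketch, using the 60/40 and 90/10 figures cited above (the 100 MW facility size is an illustrative assumption, not from the report):

```python
# How much of a data center's power budget reaches the computers
# under the two cooling splits cited in the article.
# 60/40 (air) and 90/10 (liquid) come from the Wood Mackenzie report;
# the 100 MW facility size is an illustrative assumption.

def compute_power_mw(facility_mw: float, compute_fraction: float) -> float:
    """Power left for computing after the cooling overhead is paid."""
    return facility_mw * compute_fraction

facility_mw = 100.0
air_cooled = compute_power_mw(facility_mw, 0.60)     # 60 MW to compute
liquid_cooled = compute_power_mw(facility_mw, 0.90)  # 90 MW to compute

print(f"Air-cooled:    {air_cooled:.0f} MW of compute per {facility_mw:.0f} MW")
print(f"Liquid-cooled: {liquid_cooled:.0f} MW of compute per {facility_mw:.0f} MW")
print(f"Gain: {liquid_cooled / air_cooled:.2f}x")    # 1.50x
```

In other words, for the same grid connection, the liquid-cooled facility runs half again as much compute, which is why operators accept the added water exposure.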

There’s no free lunch in moving to water cooling. The report cautions, “But improved efficiency often comes at the cost of higher water use. Average water usage effectiveness is projected to rise 20% in the same period as operators use evaporative cooling to minimize power demand.”

“AI clusters generate heat loads that air simply cannot handle at scale,” said Jom Madan, Wood Mackenzie principal analyst. “Liquid cooling isn’t optional anymore; it’s the foundation for next-generation compute.”

The water stress issue goes beyond data centers, according to the analysis. Madan adds, “The water question hasn’t gone away; it’s moved from the data hall to the power plant and that’s where the real exposure sits. Thermal power generation remains 10 to 20 times more water-intensive than data centre on-site cooling. As water stress intensifies, the case for wind, solar, and dry cooling becomes operational, not just environmental. The technology exists. The pressure is mounting. What’s missing is the policy framework to accelerate deployment at the speed the market demands.”

The prodigious heat output from power plants and data centers also impacts the surrounding area, complicating the water issue. A new study — not yet peer reviewed — from the University of Cambridge estimates that “the land surface temperature increases by 2°C (3.6 degrees F) on average after the start of operations of an AI data centre, inducing local microclimate zones, which we call the data heat island effect.”

The study says that the creation of these heat islands could have “a remarkable influence on communities and regional welfare in the future, hence becoming part of the conversation around environmentally sustainable AI worldwide.”

Reporting on the new study, CNN noted that the Cambridge analysis found that in “extreme cases, nearby temperatures increase by up to 16.4 degrees Fahrenheit.”

It is possible to reduce the heat data centers produce while increasing their energy efficiency. It’s a new take on an old electric industry dispute: alternating current versus direct current.

According to Lawrence Berkeley National Laboratory’s Center of Expertise for Data Center Efficiency, “Power flows into most data centers at a high voltage, is converted from AC to DC, and then back to AC in Uninterruptible Power Supplies (UPSs). Voltage is then dropped in Power Distribution Units (PDUs) and converted from AC back to DC in individual server power supplies. This process is extremely inefficient as each transition results in energy losses and heat, which in turn must be removed by the cooling system.”

According to the national lab, a trend is developing toward use of direct current throughout: “Most data center server racks are not currently powered this way, but with the advent of servers on the market that can operate with either AC or DC, it is possible to use the DC powering approach, thus eliminating extra power conversion steps and losses. Other benefits include reduced cooling needs, higher equipment densities, and reduced heat-related failures.”
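The reason fewer conversion steps matter is that stage efficiencies multiply. A rough sketch of the arithmetic, with per-stage efficiencies that are illustrative assumptions rather than figures from the Berkeley lab:

```python
# Why eliminating conversion stages saves energy: losses compound
# multiplicatively across the AC->DC->AC->DC chain the Berkeley lab
# describes. The per-stage efficiencies below are illustrative
# assumptions, not measured values.
from math import prod

# rectifier, UPS inverter, PDU transformer, server power supply
ac_chain = [0.96, 0.95, 0.98, 0.92]
# single rectification, one DC-DC conversion at the rack
dc_chain = [0.96, 0.97]

ac_eff = prod(ac_chain)  # ~0.82 end to end
dc_eff = prod(dc_chain)  # ~0.93 end to end

print(f"AC distribution end-to-end efficiency: {ac_eff:.1%}")
print(f"DC distribution end-to-end efficiency: {dc_eff:.1%}")
# Every watt lost in conversion is dissipated as heat that the
# cooling system must then remove, so fewer stages cut losses twice.
```

Under these assumed numbers, DC distribution delivers roughly ten more watts out of every hundred to the servers, and the cooling plant has that much less waste heat to reject.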

IEEE Spectrum reported late last month, “Last week’s Nvidia GTC conference highlighted new chip architectures to power AI. But as the chips become faster and more powerful, the remainder of data center infrastructure is playing catch-up. The power-delivery community is responding: Announcements from Delta, Eaton, Schneider Electric, and Vertiv showcased new designs for the AI era. Complex and inefficient AC-to-DC power conversions are gradually being replaced by DC configurations, at least in hyperscale data centers.

“‘While AC distribution remains deeply entrenched, advances in power electronics and the rising demands of AI infrastructure are accelerating interest in DC architectures,’ says Chris Thompson, vice president of advanced technology and global microgrids at Vertiv.”

The Quad Report
