Most companies rushing to adopt AI are unprepared for the energy demands it will place on their infrastructure, and few understand the power consumption of AI systems or the implications for their data centers.
A study commissioned by AI chip company SambaNova found that 72 percent of business executives are aware that AI models have huge power requirements, and many are concerned about it, but only 13 percent monitor the electrical power consumption of the AI systems they have deployed.
That power consumption is largely down to the reliance on power-hungry GPUs crammed into high-performance server systems to handle model training. SambaNova CEO Rodrigo Liang said:
“Without a proactive approach towards more efficient AI hardware and power consumption, particularly in the face of growing demand for AI workflows, we risk undermining the progress that AI promises to achieve.”
He expects a change in attitude and predicts that by 2027, most business leaders will closely monitor energy consumption as a key performance indicator (KPI).
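For teams that do want to treat power draw as a KPI today, per-GPU consumption can be sampled directly from the hardware. The snippet below is a minimal sketch, assuming Nvidia GPUs and the pynvml bindings to Nvidia's NVML library (installable as nvidia-ml-py); the sampling interval and print-based output are illustrative placeholders rather than a monitoring product.

```python
# Minimal sketch: sample per-GPU power draw via Nvidia's NVML library.
# Assumes Nvidia GPUs and the pynvml module (pip install nvidia-ml-py);
# the one-second interval and stdout logging are placeholders.
import time
import pynvml

pynvml.nvmlInit()
try:
    handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
               for i in range(pynvml.nvmlDeviceGetCount())]
    for _ in range(10):  # ten samples, one per second
        watts = [pynvml.nvmlDeviceGetPowerUsage(h) / 1000.0  # NVML reports milliwatts
                 for h in handles]
        print(f"total draw: {sum(watts):.0f} W across {len(watts)} GPU(s)")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```

In practice these readings would feed a time-series database rather than stdout, but even this level of visibility is more than the 13 percent figure above suggests most deployments have.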
The rise of so-called agentic AI models adds to this trend. These are being developed to be capable of taking autonomous action and solving multi-step problems, and their greater complexity will add to energy problems, according to SambaNova.
Naturally, SambaNova is pitching its AI silicon, integrated into servers with the company’s software stack, as a less power-hungry alternative to GPUs. However, not everyone will want to take this route.
For organizations sticking with GPUs, managing the heat generated by power-hungry hardware becomes another major issue, with Nvidia’s Blackwell products rated at 1,200W, for example. In many cases this will involve more efficient cooling systems, with liquid cooling becoming increasingly popular.
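To put that 1,200W figure in context, a back-of-the-envelope sum shows why rack-level liquid cooling follows almost automatically. The sketch below assumes the 72-GPU count of Nvidia's NVL72 rack design (discussed later in this piece) and counts GPU power only, ignoring CPUs, networking, and fans.

```python
# Back-of-the-envelope rack heat load: GPU power only, all of which ends up as heat.
# Assumes 72 GPUs per rack (as in Nvidia's NVL72 design) at the 1,200 W Blackwell rating.
GPU_POWER_W = 1_200
GPUS_PER_RACK = 72

rack_kw = GPU_POWER_W * GPUS_PER_RACK / 1_000
print(f"GPU heat load per rack: {rack_kw:.1f} kW")  # 86.4 kW
# Against the 2-5 kW halls described below, that is roughly a 17-43x jump in density.
```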
Analyst firm Omdia estimated last year that data center liquid cooling revenue would exceed $2 billion by the end of 2024 and $5 billion by 2028.
However, not all installations will be suitable for liquid cooling, according to managed services provider Redcentric.
“Greater investment in the development and implementation of AI will likely lead to increased demand for data centers,” said Paul Mardling, the firm’s chief technology officer. Building new facilities, he added, is “a significant investment” that takes time, requiring planning permission, an electricity supply, and physical construction.
“In the short term, this will lead to increased demand for existing data centers, many of which were not designed for the density or power consumption required by AI systems.”
While traditional facilities were built around halls of racks with a power density of 2 to 5 kW each, new builds must now accommodate much higher power densities, he said.
“Liquid cooling is essential in racks with a power density greater than 10 kW and is desirable in the 5 to 10 kW range. Efficient use of excess heat is also likely to emerge, whether for regenerating thermal energy or for district heating projects.”
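Mardling’s thresholds are simple enough to express directly. The function below is a toy sketch encoding the figures quoted above (under 5 kW, 5 to 10 kW, over 10 kW); the recommendation labels are illustrative, not Redcentric’s own categories.

```python
# Toy sketch of Mardling's rack-density thresholds; labels are illustrative.
def cooling_recommendation(rack_kw: float) -> str:
    """Map rack power density (kW) to a cooling approach per the figures quoted above."""
    if rack_kw > 10:
        return "liquid cooling essential"
    if rack_kw >= 5:
        return "liquid cooling desirable"
    return "conventional air cooling"

for density in (3, 7, 30, 86.4):
    print(f"{density:>5} kW rack -> {cooling_recommendation(density)}")
```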
Omdia agrees that AI is driving energy demand and the need for more efficient cooling.
“Yes, greater adoption of AI computing will lead to an increase in the energy density of data centers,” Omdia research director Vlad Galabov told The Register.
“We are already seeing this and there are several implications: we have seen demands on utilities for more electricity and the adoption of on-site self-generation, either by gas engines or turbines,” he added.
Power upgrades also involve prefabricated modules housing additional switchgear, UPSs, and batteries deployed across the campus to enable higher power capacity, while some sites have been fitted with high-capacity busbars in place of cables to distribute more power to the racks.
However, Galabov believes this type of retrofit is less likely in older data centers due to the cost.
“There is probably a ceiling on how far density can be increased. At an Equinix site, I saw a modernization project increase rack density from 10 kW to 30 kW per rack.”
That particular case involved new piping to support a coolant distribution unit (CDU) feeding rear-door heat exchangers on the racks, while power distribution moved from cables to new armored conduit, and new power distribution units (PDUs) were installed in each rack, Galabov told us.
When it comes to liquid cooling, he says some sites have looked at adopting air-to-liquid CDUs as a way to avoid having to completely overhaul the pipework within their data centers; Microsoft is a strong supporter of this approach.
Yet, according to Omdia’s research director, their adoption could be limited because the density they enable falls short of what incoming AI infrastructure requires, such as Nvidia’s Blackwell rack-level reference design (NVL72).
“In areas where operators did not want to upgrade racks and install manifolds for direct-to-chip liquid cooling, we also saw the deployment of rear-door heat exchangers, which are a good way to cope,” Galabov said. Vendors claim they can achieve 100 kW with this arrangement, but he believes that is unlikely, as it would require a significant drop in ambient air temperature, which would be very costly.
For UK businesses, colocation company Telehouse recently unveiled a liquid cooling lab at its London Docklands campus, showcasing several of the technologies available. These include a waterless two-phase system for next-generation server chips and air-assisted liquid cooling technology supporting up to 90 kW per cabinet. ®