Data Center Evolution: AI Changing Datacenter Design Strategies

August 7, 2024 / Ben Bajarin

• AI workloads are driving a dramatic increase in data center power requirements, with AI server racks consuming 4-5x more power than traditional racks

• “Mega” data centers are emerging, with some facilities approaching 1 million square feet and 1 GW power draw

• Cooling remains a critical challenge, accounting for 30% of power consumption, leading to increased adoption of liquid cooling solutions

• Data center designs are evolving rapidly, with different approaches for enterprise, colocation, hyperscale, and AI-focused facilities, while still adhering to traditional tier classifications

The landscape of data center construction and design is undergoing a rapid transformation, driven primarily by the surge in artificial intelligence (AI) workloads and their unprecedented power demands. This shift is reshaping how data centers are designed, built, and operated.

Central to this evolution is the stark contrast in power requirements between traditional and AI-focused server racks. Our research reveals that while traditional server racks typically consume 12-13 kW per rack, AI server racks can demand a staggering 50-60 kW per rack – a four- to five-fold increase. This substantial difference is not just a matter of scale; it represents a fundamental change in the approach to data center architecture. AI data centers now require approximately 40% more overall power capacity than their traditional cloud counterparts, pushing the boundaries of what’s possible in terms of power delivery and management.
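To make the density shift concrete, here is a back-of-the-envelope sketch using the midpoints of the ranges above. The 10 MW IT power envelope is a hypothetical facility size chosen purely for illustration:

```python
# Illustrative rack-density arithmetic. The 12.5 kW and 55 kW figures are
# midpoints of the 12-13 kW and 50-60 kW ranges cited above; the 10 MW
# IT power envelope is a hypothetical facility size.

TRADITIONAL_KW_PER_RACK = 12.5   # midpoint of the 12-13 kW range
AI_KW_PER_RACK = 55.0            # midpoint of the 50-60 kW range
IT_POWER_BUDGET_KW = 10_000      # hypothetical 10 MW IT envelope

def racks_supported(budget_kw: float, kw_per_rack: float) -> int:
    """Number of whole racks a fixed IT power budget can feed."""
    return int(budget_kw // kw_per_rack)

traditional_racks = racks_supported(IT_POWER_BUDGET_KW, TRADITIONAL_KW_PER_RACK)
ai_racks = racks_supported(IT_POWER_BUDGET_KW, AI_KW_PER_RACK)

print(traditional_racks)  # 800 traditional racks
print(ai_racks)           # 181 AI racks
print(AI_KW_PER_RACK / TRADITIONAL_KW_PER_RACK)  # 4.4x per-rack power jump
```

The same power envelope feeds far fewer AI racks, which is why power delivery per rack, rather than floor space, increasingly drives facility design.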

This increased power demand is driving a trend towards larger, more power-intensive facilities. Some new hyperscale data centers are being designed to draw over 1 GW of power, with facility sizes expanding to match these enormous power requirements. It’s not uncommon to see new projects approaching or exceeding 1 million square feet, representing a new frontier in the industry that we call “mega” data centers.

Understanding the economics of these massive facilities provides insight into the industry’s priorities. Typically, about one-third of a data center’s budget is allocated to power, cooling, building security, and related infrastructure, with the remaining two-thirds dedicated to IT equipment. This split underscores the significant investment required in supporting infrastructure to enable the deployment of cutting-edge IT equipment.

Cooling remains a major concern, both in terms of initial investment and ongoing operational costs. It accounts for about 30% of a data center’s power consumption, with 18-25% of building costs (excluding IT) allocated to cooling products. These figures highlight the critical nature of efficient cooling solutions in modern data center design.
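Combining the budget-split and cooling-share figures above gives a rough sense of the dollars involved. The $1B total project cost below is a hypothetical; the one-third infrastructure share and the 18-25% cooling share of non-IT costs come from the figures cited in this article:

```python
# Rough budget-split arithmetic from the figures above. The $1B total
# build cost is a hypothetical; the 1/3 infrastructure share and the
# 18-25% cooling share of non-IT building costs come from the article.

TOTAL_BUDGET = 1_000_000_000          # hypothetical $1B project
infra_budget = TOTAL_BUDGET / 3       # power, cooling, security, etc.
it_budget = TOTAL_BUDGET - infra_budget

cooling_low = infra_budget * 0.18     # low end of the 18-25% range
cooling_high = infra_budget * 0.25    # high end of the 18-25% range

print(f"Infrastructure: ${infra_budget / 1e6:.0f}M")   # $333M
print(f"IT equipment:   ${it_budget / 1e6:.0f}M")      # $667M
print(f"Cooling:        ${cooling_low / 1e6:.0f}M - ${cooling_high / 1e6:.0f}M")
```

Even at the low end, cooling products alone represent tens of millions of dollars on a build of this scale.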

As power densities increase, traditional air cooling methods are reaching their limits, leading to a growing trend towards liquid cooling solutions. Liquid cooling offers several advantages:

  • Potential to reduce cooling power consumption by up to 60% compared to traditional air cooling
  • Ability to handle higher power densities, making it ideal for AI workloads
  • Improved overall energy efficiency, contributing to better Power Usage Effectiveness (PUE)
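A quick sketch shows how a 60% cut in cooling power flows through to PUE (total facility power divided by IT power). The 30% cooling share comes from the article; the 60/10 split between IT load and other overhead is an assumption made purely for illustration:

```python
# Back-of-the-envelope PUE estimate. The 30% cooling share is from the
# article; the 60% IT / 10% other split is an illustrative assumption.

IT_SHARE = 0.60       # assumed IT fraction of total facility power
COOLING_SHARE = 0.30  # from the article
OTHER_SHARE = 0.10    # assumed remainder (lighting, power losses, ...)

def pue(it: float, cooling: float, other: float) -> float:
    """PUE = total facility power / IT power."""
    return (it + cooling + other) / it

baseline = pue(IT_SHARE, COOLING_SHARE, OTHER_SHARE)
# Liquid cooling cutting cooling power by up to 60%:
liquid = pue(IT_SHARE, COOLING_SHARE * 0.4, OTHER_SHARE)

print(round(baseline, 2))  # 1.67
print(round(liquid, 2))    # 1.37
```

Under these assumed shares, the cooling savings alone would move PUE from roughly 1.67 to roughly 1.37, a meaningful efficiency gain at hyperscale.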

Hyperscale providers are leading the adoption of liquid cooling, with colocation facilities increasingly offering liquid-cooled options to attract high-density clients. Enterprise data centers are also gradually incorporating liquid cooling, often in specific high-performance pods.

The industry continues to use a tiered classification system to define levels of redundancy and reliability, ranging from Tier 1 (basic design with a single power source) to Tier 4 (highest level of redundancy and fault tolerance). However, different types of data centers approach design with varying priorities. Enterprise data centers often opt for Tier 2 or Tier 3 designs, balancing performance with cost-effectiveness. Colocation facilities typically aim for Tier 3, with some offering Tier 4 options, emphasizing flexibility and scalability. Hyperscale data centers usually implement Tier 3 designs with some Tier 4 elements for critical systems, leading the way in advanced cooling technologies and energy efficiency. AI-focused data centers are designed for high power density from the outset, typically incorporating liquid cooling as a standard feature at a bare minimum.

As the industry evolves, we can expect to see further innovations in power management, cooling technologies, and modular design. Key trends to watch include:

  1. The continued rise of liquid cooling as a standard feature in high-density environments
  2. Adoption of new technologies like direct-to-chip cooling
  3. Increased adoption of modular, scalable designs to allow for rapid deployment and easy upgrades
  4. Growing focus on energy efficiency and sustainability, driven both by cost considerations and environmental concerns
  5. The potential emergence of new tier classifications or design standards to address the unique needs of AI-focused data centers

The data centers of the future will need to be more flexible, efficient, and scalable than ever before to meet the growing demands of our increasingly data-driven world. As AI and other high-performance computing workloads continue to push the boundaries of what’s possible, the data center industry stands at the forefront of innovation, continuously adapting to power the technologies that are shaping our digital future.