Data Centers in Transition: How Data, AI and Sustainability Shape the Future

Data centers are at a turning point: surging data volumes, growing demand for AI applications, and the increasing complexity of hybrid IT landscapes are reshaping the industry. While hyperscalers like Amazon are investing billions in expanding their IT infrastructure, traditional data center operators must adapt to keep pace with the demands of modern technologies. This article offers a glimpse into the future of data centers and highlights the most exciting trends and challenges.

The Evolution of Data Centers: Rising Demands, Rising Challenges

In an interview with Handelsblatt in July 2019, Andy Jassy – then CEO of AWS and now CEO of Amazon – stated: “In ten or 20 years, most companies will no longer have their own data centers. Only tasks requiring proximity – such as in a factory – will still be handled on-site.”

However, the bold vision outlined by Jassy has yet to materialize, and it is unlikely to do so in the foreseeable future. Over the past five years, the “cloud-first” (or even “cloud-only”) strategy has been largely replaced by hybrid approaches.

That said, the growth of hyperscale data centers remains strong, driven in part by rising demand for AI applications. Amazon alone plans to invest over $75 billion in 2025, while Microsoft has announced around $80 billion – and the trend shows no signs of slowing. According to a Bloomberg analysis, the industry as a whole spent more than $200 billion on servers, chips, and data centers in 2024.

The overall growth of the data center sector is remarkable: between 2015 and 2022, the number of servers rose from 54 million to 86 million – an increase of roughly 60% in just seven years.

While Andy Jassy’s predictions about the hyperscale market may have been overly ambitious, his analysis of the driving forces and change factors was spot on. These include the numerous challenges data center operators face due to the increasing complexity of IT landscapes and the growing demands for advanced data analytics. The latter requires the non-trivial task of integrating diverse data streams into actionable data lakes and generating business-critical insights through data science. 

Jassy suggested that public cloud providers or hyperscalers are best equipped to handle this rising complexity. When comparing the challenges faced by operators of private data centers with those of hyperscalers, one finds a significant overlap. Let’s take a closer look at these shared challenges. 

On-Premises and Cloud: Heterogeneous IT Ecosystems in Data Centers

Modern data centers are characterized by increasingly diverse ecosystems. Alongside mainframes – which still handle 68% of all IT workloads – server and virtualization technologies have become standard, incorporating operating systems like Linux and Windows alongside z/OS. On-premises applications frequently interact with cloud-based systems, while virtual machines and container technologies are becoming more prevalent. In some cases, critical systems are operated redundantly, both in the cloud and on-premises, to ensure reliability and resilience. 

[Figure: Trends in the data center]

Meeting Growing Challenges in Data Centers

In addition to increasing heterogeneity, data center operators face stricter SLAs, the adoption of DevOps practices, and growing non-functional requirements such as security and compliance. Key examples of compliance regulations include DORA and NIS-2.

There are several approaches to address this mix of challenges and requirements: 

  1. Next-Generation Applications: These are designed specifically for hybrid system landscapes. For instance, workload automation tools like ANOW! Automate seamlessly integrate and manage hybrid cloud instances, on-premises applications, and mainframe ecosystems. 

  2. Standards and Interoperability: To ensure smooth communication between cloud and on-premises systems, businesses increasingly rely on proven standards and patterns, such as RESTful APIs and message brokering (see the sketch after this list).

  3. Standardizing Processes and Technologies: Operating on-premises and cloud environments in parallel often necessitates standardization. While this may require significant effort initially, it frequently leads to long-term efficiency gains. 
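To make point 2 more concrete, here is a minimal Python sketch of how an on-premises system might push a job-status event to a cloud service over a RESTful API. The endpoint URL, token, and payload fields are hypothetical placeholders for illustration, not the interface of any particular product.

```python
import json
import urllib.request

# Hypothetical example: an on-premises scheduler notifies a cloud service
# that a batch job has finished, using a plain RESTful call.
# Endpoint, token, and payload fields are illustrative only.

CLOUD_ENDPOINT = "https://cloud.example.com/api/v1/job-events"
API_TOKEN = "replace-with-real-token"

def publish_job_event(job_name: str, status: str) -> int:
    """Send a job-status event from an on-premises system to a cloud endpoint."""
    payload = json.dumps({"job": job_name, "status": status}).encode("utf-8")
    request = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status  # e.g. 201 if the event was accepted

if __name__ == "__main__":
    print(publish_job_event("NIGHTLY_BILLING", "completed"))
```

The same event could just as well be published to a message broker instead of a REST endpoint; the key point is that both sides agree on a simple, documented contract rather than on proprietary interfaces.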

By adopting these strategies, operators can better navigate the complexities of modern data centers while meeting evolving demands. 

Large Data in Data Centers

IDC predicts that global data volumes will grow to an astonishing 221 zettabytes by 2026 – that is 221,000,000,000,000,000,000,000 bytes.

Data has always been essential for informed decision-making and is often referred to as the “new gold” in the context of Artificial Intelligence. As a rule, the more data is available, the better AI models perform.

However, the value of this data depends on several key factors, which also define the challenges involved. Consider the example of a global manufacturer with a hybrid cloud strategy. Here, significant hurdles arise in harmonizing and standardizing diverse data sources into a usable data lake. These sources might include legacy systems, external partner APIs, and cloud-based SaaS solutions. The wide variety of formats and structures requires meticulous transformation to ensure that data can be correctly interpreted and processed. Without precise adjustments, analysis errors may occur, potentially leading to severe consequences, including legal implications. 
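The following Python sketch illustrates, on a deliberately small scale, what such harmonization means in practice: two hypothetical sources deliver the same business object with different field names, date formats, and number formats, and both are mapped onto one common schema before landing in the data lake. All field names, formats, and values are invented for illustration.

```python
from datetime import datetime

# Hypothetical example: harmonizing records from two sources into one
# common schema before loading them into a data lake.

def from_legacy(record: dict) -> dict:
    """Legacy export: German-style dates, decimal commas, local field names."""
    return {
        "order_id": record["AUFTRAGSNR"],
        "order_date": datetime.strptime(record["DATUM"], "%d.%m.%Y").date().isoformat(),
        "amount_eur": float(record["BETRAG"].replace(",", ".")),
    }

def from_saas(record: dict) -> dict:
    """Cloud SaaS API: ISO timestamps and English field names."""
    return {
        "order_id": record["id"],
        "order_date": record["createdAt"][:10],  # keep the date part of the timestamp
        "amount_eur": record["amount"],
    }

legacy_rows = [{"AUFTRAGSNR": "4711", "DATUM": "03.02.2025", "BETRAG": "199,90"}]
saas_rows = [{"id": "4712", "createdAt": "2025-02-04T08:15:00Z", "amount": 89.0}]

harmonized = [from_legacy(r) for r in legacy_rows] + [from_saas(r) for r in saas_rows]
print(harmonized)
```

In a real data lake pipeline, these mappings would be versioned, tested, and applied at scale; the point here is simply that every source needs an explicit, verifiable transformation into the shared schema.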

According to various sources (e.g. Statista and Forbes), around 60% of corporate data is currently stored in the cloud. At the same time, a large proportion of the world’s corporate data is stored on mainframes or originates there. IBM, the manufacturer of the most widely used mainframe systems, estimates that 80% of the world’s corporate data resides on mainframes. This figure is hard to reconcile with the cloud statistic above, and neither claim can be verified without considerable effort.

The bottom line remains: the amount of data on the mainframe is significant, so it is hardly surprising that harnessing data stored or generated there is an important topic. Data centers must also prioritize backup and recovery plans to ensure data integrity and to enable rapid recovery in the event of system failures or other disruptions.

The Era of AI

The AI boom, which began with the release of ChatGPT in November 2022, continues unabated despite intermittent skepticism. 

The high demand for GPU chips is well-documented, as are the significant investments associated with it. These trends present particular challenges for hyperscalers. As reported by Handelsblatt, a prominent German business publication: 

“The substantial investments represent a massive bet on a future where Amazon [and other hyperscalers] and their cloud customers generate dream-level margins with new AI services. (...) However, the boom could lead to excesses, according to Hamilton. An industry ‘overbuild’ – where more server or energy infrastructure is built than necessary to meet demand – is a real possibility. (...) Parallel structures that inevitably arise could quickly lose value due to a technological leap; an unexpected innovation could shift the market. Some analysts are watching the tech giants’ race with concern. Amazon’s stock price, for example, dropped by more than 20% in August after the company reported sharply increasing investments.” 

This ongoing race for AI dominance underscores the delicate balance between seizing opportunities in a transformative technology and managing the risks of overinvestment and market volatility. For hyperscalers, the stakes are high, with both enormous potential and significant challenges on the horizon. 

What Role Do Location, Energy Consumption and Cybersecurity Play?

The energy demand of data centers continues to rise, driven in large part by the increasing use of AI applications. While semiconductor manufacturers like Nvidia and AMD are steadily improving the energy efficiency of their hardware, the overall growth in data production and computational requirements still outpaces these advancements. 

Shifts in location strategies for AI training sites reveal a preference for regions with robust fiber-optic infrastructure and reliable energy availability, such as along the 41st parallel in the U.S. These areas are becoming virtual “prime locations” for data centers. Hyperscalers are prioritizing “power-first” sites, such as Council Bluffs in Iowa and Lancaster in Texas. Meanwhile, AI workloads with lower latency sensitivity are fueling demand in regions with lower electricity costs and tax incentives.

In Germany, the majority of modern data centers are concentrated in the Frankfurt area, largely due to DE-CIX Frankfurt, the world’s largest internet exchange point by data throughput.

Strategic Energy Sourcing

Ensuring a stable energy supply has become a strategic priority. Hyperscalers are increasingly focused not just on procuring energy but also on generating it. Companies like Microsoft, Google, and Amazon are exploring and investing in nuclear energy as a potential solution for long-term sustainability. Additionally, to enhance societal acceptance of their business models, hyperscalers and AI providers are increasingly committing to renewable energy sources.

The Rising Importance of Cybersecurity

Cybersecurity has grown into an even more critical issue in recent years. The rise in cyber threats and the potential for data breaches have prompted data center operators to strengthen their defenses. Measures include implementing robust cybersecurity frameworks, deploying advanced encryption technologies, adopting real-time threat monitoring systems, and conducting employee training programs. 
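As one small illustration of the “advanced encryption technologies” mentioned above, the sketch below encrypts a sensitive configuration blob at rest using the widely used Python cryptography package (its Fernet recipe for symmetric encryption). It is a minimal sketch, assuming the package is installed; key handling is deliberately simplified and would belong in a secrets manager or HSM in production.

```python
from cryptography.fernet import Fernet

# Hypothetical example: encrypting a sensitive configuration blob before it
# is written to shared storage. Values and key handling are illustrative only.

key = Fernet.generate_key()          # in production: load from a secrets manager
cipher = Fernet(key)

secret_config = b'{"db_user": "batch", "db_password": "example-only"}'
encrypted = cipher.encrypt(secret_config)
decrypted = cipher.decrypt(encrypted)

assert decrypted == secret_config
print(encrypted[:16], "...")  # the ciphertext is safe to store; the key is not
```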

From energy management to security, these evolving challenges underscore the need for innovative strategies to meet the demands of modern data center operations. 

Tags

IT Security, Mainframe, Workload Automation
