Workload Automation in Kubernetes: Bridging Technology for Modern Container Orchestration

How classic workload automation can integrate with modern Kubernetes environments. This article explores IBM Workload Scheduler, the Beta Systems Cloud Connector, and the roles of Ingress, Sidecars, and Git-based job definitions in enterprise IT automation for containerized infrastructures.

Introduction to Workload Automation and Its Importance for Enterprises

Workload Automation (WLA), or SOAP (Service Orchestration and Automation Platform), as leading analysts call this segment, is a core, enterprise-wide function. In large organizations, WLA executes several hundred million individual activities (so-called jobs) annually across the various platforms of the corporate IT landscape. Systems like IBM Z Workload Scheduler (IZWS) centrally manage the definitions of these workflows. The faultless availability of this processing is mission-critical: no business process functions without the support of workload automation.

Containerization Changes the Requirements for IT Automation

Changes in enterprise IT are also shifting and expanding the demands placed on WLA. The containerization of enterprise IT has fundamentally transformed operations in data centers over the past decade. Whether systems are run in the cloud or on-premises has become less relevant. According to a 2024 study by techconsult, around 80 percent of companies are already using container technologies or plan to integrate them into their IT infrastructures in the near future.

Kubernetes as the Dominant Platform for Container Orchestration

Among these platforms, Kubernetes clearly leads in its various forms. WLA systems therefore face the challenge of ensuring that process chains, which traditionally span systems and platforms, can also run in these new container environments. Enabling Kubernetes job scheduling via established WLA systems requires overcoming several hurdles. Most workload automation platforms, such as the market-leading IBM Workload Scheduler, control their jobs centrally, often from a z/OS mainframe. But how does an execution command reach a Kubernetes worker node from the central scheduler?

A standardized access path into the container environment is needed. Kubernetes provides the Ingress resource, which routes HTTP and HTTPS traffic from outside the cluster to services inside it. Combined with the Kubernetes APIs, WLA agents can transmit job execution commands to the worker nodes and ultimately into the pods. This is a central aspect of WLA Kubernetes integration.
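To make this flow more tangible, the following sketch shows in Python, using the official Kubernetes client library, how a scheduler-side component could hand a job over to the cluster through the Kubernetes API. In the architecture described above, such a call would travel over HTTPS through the Ingress to a connector service inside the cluster; the namespace, image, and naming scheme used here are illustrative assumptions, not the actual interface of IBM Workload Scheduler or the Cloud Connector.

```python
# Minimal sketch: an external WLA component submits a batch job to Kubernetes
# through the API server. All names (namespace, image, labels) are assumptions
# for illustration; the real Cloud Connector interface is not shown here.
from kubernetes import client, config


def submit_wla_job(job_name: str, command: list) -> None:
    # Load credentials the same way kubectl does (kubeconfig); inside the
    # cluster, config.load_incluster_config() would be used instead.
    config.load_kube_config()

    container = client.V1Container(
        name="wla-job",
        image="busybox:1.36",            # placeholder image for the job step
        command=command,
    )
    pod_spec = client.V1PodSpec(
        containers=[container],
        restart_policy="Never",          # batch semantics: do not restart the pod
    )
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(
            name=job_name,
            labels={"managed-by": "wla-scheduler"},   # hypothetical label
        ),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(spec=pod_spec),
            backoff_limit=0,             # let the WLA system decide about reruns
        ),
    )

    # The API call that actually hands the job over to Kubernetes.
    client.BatchV1Api().create_namespaced_job(namespace="wla-jobs", body=job)


if __name__ == "__main__":
    submit_wla_job("nightly-billing-0421", ["sh", "-c", "echo run billing step"])
```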

Stateless Containers and the Challenge of Job Definitions

The next challenge lies in the stateless nature of containers. Their strength lies in how quickly instances can be spun up and shut down. However, the job itself requires specific instructions, usually in the form of scripts, often written in JCL (Job Control Language). How can these instructions be made available within the Kubernetes environment without embedding them in the container images?

One solution is to transmit job definitions through the communication stream. Modern extensions like the Beta Systems Cloud Connector go one step further: they use the ability to mount content into the container via persistent volumes. This allows job definitions to be managed centrally, for example in a Git repository, and made available to the container at runtime. The access credentials required for this are stored securely in Kubernetes Secrets to avoid vulnerabilities.
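As a hedged illustration of this pattern, the sketch below defines a Kubernetes Job whose pod mounts centrally managed job definitions from a persistent volume and reads a Git access token from a Kubernetes Secret. The claim name, Secret name, and script path are hypothetical and only stand in for whatever the Cloud Connector actually provisions.

```python
# Sketch: a Job pod that mounts centrally managed job definitions from a
# PersistentVolumeClaim and obtains Git credentials from a Kubernetes Secret.
# Names such as "job-definitions-pvc" and "git-credentials" are illustrative.
from kubernetes import client

definitions_volume = client.V1Volume(
    name="job-definitions",
    persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
        claim_name="job-definitions-pvc"   # hypothetical PVC synced from Git
    ),
)

job_container = client.V1Container(
    name="wla-job",
    image="busybox:1.36",
    # Run a script that was checked into Git and mounted read-only into the pod,
    # instead of baking it into the container image.
    command=["sh", "/definitions/jobs/nightly-billing.sh"],
    volume_mounts=[
        client.V1VolumeMount(
            name="job-definitions",
            mount_path="/definitions",
            read_only=True,
        )
    ],
    env=[
        # Git access token taken from a Secret, never from the image or the
        # job definition itself.
        client.V1EnvVar(
            name="GIT_TOKEN",
            value_from=client.V1EnvVarSource(
                secret_key_ref=client.V1SecretKeySelector(
                    name="git-credentials", key="token"
                )
            ),
        )
    ],
)

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="nightly-billing-0421"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                containers=[job_container],
                volumes=[definitions_volume],
                restart_policy="Never",
            )
        )
    ),
)
```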

Runtime Dynamics: Variables and Resources in the Job Workflow

With access to the worker node and the ability to retrieve job definitions as scripts, packages, or via Git, the WLA system can now execute the desired container automation. The Git integration, in particular, is highly beneficial for developers in Kubernetes environments. They can continue working in their familiar environments, track changes, and manage stages like development, testing, and production independently.

Despite this flexibility, the WLA system maintains full traceability of what was executed, when, and with what result. It must also respond dynamically to runtime conditions. Systems like the Cloud Connector make it possible to inject variables and environmental resources into container job definitions at runtime. This is a crucial feature for advanced IT automation in container environments.
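In Kubernetes terms, one plausible way to do this, sketched below, is to resolve the scheduler's variables at submission time and inject them into the job container as environment variables, while environment-specific resources come from a ConfigMap. The variable names and the ConfigMap name are assumptions made for this example.

```python
# Sketch: injecting runtime variables into a container job at submission time.
# The scheduler resolves values such as the run date or stage and passes them
# as environment variables; environment-wide settings come from a ConfigMap.
# All names here (RUN_DATE, STAGE, "wla-runtime-settings") are illustrative.
from kubernetes import client


def build_job_container(resolved_vars: dict) -> client.V1Container:
    return client.V1Container(
        name="wla-job",
        image="busybox:1.36",
        command=["sh", "-c", 'echo "run for $RUN_DATE in stage $STAGE"'],
        # Per-run variables resolved by the WLA system just before submission.
        env=[client.V1EnvVar(name=k, value=v) for k, v in resolved_vars.items()],
        # Environment-specific resources shared by all jobs in this cluster.
        env_from=[
            client.V1EnvFromSource(
                config_map_ref=client.V1ConfigMapEnvSource(name="wla-runtime-settings")
            )
        ],
    )


container = build_job_container({"RUN_DATE": "2025-04-21", "STAGE": "production"})
```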

After a job is executed, another requirement arises for the WLA system. It needs detailed execution logs. But how is this data returned?

The Cloud Connector uses so-called sidecar containers. These run in the same pod as the job, access environmental data, and collect output. Using the established communication paths via API and Ingress, this data is returned to the WLA system, which is then fully informed of the execution details and can align the next steps in the job chain accordingly.
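The sidecar pattern can be sketched as follows: the job container writes its output to a shared emptyDir volume, a second container in the same pod collects it, and the scheduler side can finally read the collected log back through the API. Container names, paths, and the forwarding step are illustrative assumptions rather than the Cloud Connector's actual implementation.

```python
# Sketch of the sidecar pattern: job container and sidecar share an emptyDir
# volume; the sidecar collects the job output so it can be returned to the WLA
# system. Names, paths, and the forwarding mechanism are illustrative only.
from kubernetes import client

shared_output = client.V1Volume(name="job-output", empty_dir=client.V1EmptyDirVolumeSource())
output_mount = client.V1VolumeMount(name="job-output", mount_path="/output")

job_container = client.V1Container(
    name="wla-job",
    image="busybox:1.36",
    command=["sh", "-c", "echo 'step finished rc=0' > /output/job.log"],
    volume_mounts=[output_mount],
)

sidecar = client.V1Container(
    name="log-collector",
    image="busybox:1.36",
    # In a real setup the sidecar would stream the log back to the WLA system
    # (for example via HTTPS through the Ingress); here it simply prints it.
    command=["sh", "-c", "sleep 5; cat /output/job.log"],
    volume_mounts=[output_mount],
)

pod_spec = client.V1PodSpec(
    containers=[job_container, sidecar],
    volumes=[shared_output],
    restart_policy="Never",
)

# The scheduler side can later fetch the sidecar's output through the API:
#   client.CoreV1Api().read_namespaced_pod_log(
#       name=pod_name, namespace="wla-jobs", container="log-collector")
```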


Successful Customer Projects and Market Response to the Cloud Connector

This bridging technology between an established mainframe-based WLA system like IZWS and Kubernetes environments has quickly drawn positive feedback from Beta Systems customers. Shortly after its launch, the first customer adopted the Cloud Connector, and additional prospects are currently preparing pilot projects. For many enterprises, the approach appears to strike a good balance between the effort required and the automation potential it unlocks.

Beta Systems as a Pioneer of Platform-Independent Automation

Beta Systems, a leading German provider in the workload automation sector, has been addressing these requirements for quite some time. Modern Service Orchestration and Automation Platforms like ANOW! Automate were designed from the ground up for process automation in container environments. On the path to fully platform-independent automation, the Cloud Connector bridges both worlds: it modernizes classic mainframe automation and simultaneously enables integration with Kubernetes. This represents a robust, forward-looking transitional step on the journey toward the next-generation WLA system.

Author

Niels von der Hude
Director Product Strategy, Beta Systems

Tags

Job Scheduling, Workload Automation
