Optimizing Event-Driven Workloads with Containers and AWS Serverless
Deutsche Bahn AG has bundled its entire bus business in the DB Regio business unit, Bus division (DB Regio Bus). DB Regio Bus focuses on local public transport in rural areas. The formerly independent bus companies have been merged on a regional basis, making DB Regio Bus a market leader in the German public transport market.
Automating batch workloads through containers and AWS Serverless.
As a business unit of the Deutsche Bahn Group (DB), DB Regio Bus is subject to the compliance requirements of the DB Group, which set clear guidelines for managing IT infrastructure and data. A complete and up-to-date overview of the IT infrastructure is particularly important for a company with multiple complex business units. A key component of DB Group’s compliance requirements in this regard is an infrastructure register (asset register) that clearly shows which resources DB Regio Bus manages in the AWS Cloud.
Implementing this requirement meant developing an entirely new process and application, which posed the following challenges:
- Separation of staging and production operations
- Running the batch workload as a fully managed service to minimize the burden on operations teams (automated, day-to-day execution of the workload)
- Enabling automated further development of the application with little effort for developers
- Continuous development and deployment of the application using CI/CD
- No permanent runtime: compute is provisioned on demand and exists only as long as a task is being processed
Division into Test and Productive Operation:
To allow further development of the application while keeping production operation stable, the application was split into two identical but separate environments. The stable version runs in the production environment, while work on optimizing the application continues in the test environment.
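The split into two identical stacks can be sketched as a per-environment configuration that the deployment tooling selects from. All names (cluster names, repository, tags) are hypothetical placeholders, not the actual resource names:

```python
# Sketch: per-environment settings for the two identical, separate stacks.
# All resource names and tags below are illustrative assumptions.

ENVIRONMENTS = {
    "staging": {
        "ecs_cluster": "asset-register-staging",
        "ecr_repository": "asset-register",
        "image_tag": "latest-staging",
    },
    "production": {
        "ecs_cluster": "asset-register-prod",
        "ecr_repository": "asset-register",
        "image_tag": "stable",
    },
}

def config_for(env: str) -> dict:
    """Return the deployment settings for one environment."""
    if env not in ENVIRONMENTS:
        raise ValueError(f"unknown environment: {env}")
    return ENVIRONMENTS[env]
```

Keeping both environments structurally identical means a change validated in staging behaves the same way once promoted to production.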
Automation of Docker Image Builds to ECR:
The CI/CD pipeline, built on AWS CodePipeline, CodeCommit, CodeBuild, and CodeDeploy, enables fully automated deployments to both environments. Unit and integration tests of the application artifact also run automatically within the pipeline. After the tests pass, a Docker image is automatically built and stored in Amazon ECR.
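The build-and-push step that runs after the tests pass can be sketched as composing the ECR image URI and the corresponding Docker commands. The account ID, region, and repository name here are illustrative assumptions:

```python
# Sketch: the image build-and-push step CodeBuild runs after tests pass.
# Account ID, region, and repository name are illustrative assumptions.

def ecr_image_uri(account_id: str, region: str, repo: str, tag: str) -> str:
    """Compose the fully qualified ECR image URI."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repo}:{tag}"

def build_and_push_commands(uri: str) -> list:
    """Shell commands a buildspec would execute, in order."""
    registry = uri.split("/")[0]
    return [
        f"docker build -t {uri} .",
        f"aws ecr get-login-password | docker login --username AWS --password-stdin {registry}",
        f"docker push {uri}",
    ]
```

Because the image tag encodes the build, every pipeline run produces a traceable artifact in the registry.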
Automatic Updates of the ECS Service:
Once a new image is created, the new version is rolled out to Amazon Elastic Container Service (Amazon ECS) fully automatically using CodeDeploy. Amazon ECS then ensures that the next trigger (see below) automatically runs the latest version of the application.
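Conceptually, the rollout registers a new task-definition revision pointing at the freshly pushed image, so the next scheduled run launches the new version. A minimal sketch, with family name and resource sizes as assumptions:

```python
# Sketch: a new task-definition revision pointing at the freshly pushed image,
# so the next scheduled trigger launches the latest version.
# Family name, CPU, and memory values are illustrative assumptions.

def task_definition(image_uri: str, cpu: int = 256, memory: int = 512) -> dict:
    """Minimal Fargate-style task definition for the batch container."""
    return {
        "family": "asset-register-batch",
        "requiresCompatibilities": ["FARGATE"],
        "cpu": str(cpu),
        "memory": str(memory),
        "containerDefinitions": [
            {
                "name": "asset-register",
                "image": image_uri,
                "essential": True,
            }
        ],
    }
```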
Move to CloudWatch-Triggered Batch Workload:
The workload can be implemented as a so-called batch workload: data is read from the source database, transformed within the application according to the requirements, and transferred to the target system. Once all data sets have been processed and transferred, there is no reason to keep the application “running”.
For this reason, Amazon ECS was selected, as the service is well suited to batch workloads. Triggered by an Amazon CloudWatch event rule, the application is started every day at a predefined time. After all records have been processed, the application stops automatically and is only restarted by the next trigger.
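The daily trigger can be sketched as a CloudWatch Events schedule expression plus a rule target that launches one ECS task. The 03:00 UTC start time, region, account ID, and cluster name are illustrative assumptions:

```python
# Sketch: a CloudWatch Events schedule that starts the ECS batch task once
# per day. Start time, region, account ID, and names are assumptions.

def daily_schedule(hour: int, minute: int = 0) -> str:
    """Build the six-field cron() schedule expression CloudWatch expects (UTC)."""
    if not (0 <= hour < 24 and 0 <= minute < 60):
        raise ValueError("hour/minute out of range")
    return f"cron({minute} {hour} * * ? *)"

def run_task_target(cluster: str, task_definition_arn: str) -> dict:
    """Rule target that launches a single task on the given cluster."""
    return {
        "Arn": f"arn:aws:ecs:eu-central-1:123456789012:cluster/{cluster}",
        "EcsParameters": {
            "TaskDefinitionArn": task_definition_arn,
            "TaskCount": 1,
            "LaunchType": "FARGATE",
        },
    }

# e.g. start the batch run every day at 03:00 UTC:
schedule = daily_schedule(3)
```

Because the task exits once all records are processed, nothing runs (and nothing is billed) between two triggers.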
This reduces the cost of the implemented solution to roughly one tenth of that of a comparable architecture based on continuously running Amazon EC2 instances.
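A back-of-the-envelope calculation illustrates how a factor of roughly ten can arise. The hourly rate and daily run time below are illustrative assumptions, not actual AWS prices or the project's real figures:

```python
# Sketch: why a scheduled batch task can cost ~10x less than an always-on
# instance. Rates and run times are illustrative assumptions only.

HOURS_PER_MONTH = 730  # average hours in a month

def ec2_monthly_cost(hourly_rate: float) -> float:
    """An always-on EC2 instance is billed around the clock."""
    return hourly_rate * HOURS_PER_MONTH

def batch_monthly_cost(hourly_rate: float, run_hours_per_day: float) -> float:
    """The scheduled ECS task is billed only while it runs."""
    return hourly_rate * run_hours_per_day * 30

ec2 = ec2_monthly_cost(0.05)           # hypothetical always-on instance
batch = batch_monthly_cost(0.05, 2.4)  # ~2.4 h of processing per day
# With these assumptions, ec2 / batch is roughly 10.
```

The key driver is not the rate but the billed hours: paying only for the actual processing window eliminates the idle time an always-on instance accrues.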