How to Use Testcontainers on Jenkins CI

Releasing software often and with confidence relies on a strong continuous integration and continuous delivery (CI/CD) process that includes the ability to automate tests. Jenkins is an open source automation server that helps teams build exactly this kind of release process.

In this article, we will explore how you can run tests based on the open source Testcontainers framework in a Jenkins pipeline using Docker and Testcontainers Cloud. 

Jenkins, which streamlines the development process by automating the building, testing, and deployment of code changes, is widely adopted in the DevOps ecosystem. It supports a vast array of plugins, enabling integration with various tools and technologies, making it highly customizable to meet specific project requirements.

Testcontainers is an open source framework for provisioning throwaway, on-demand containers for development and testing use cases. Testcontainers makes it easy to work with databases, message brokers, web browsers, or just about anything that can run in a Docker container.

Testcontainers also provides support for many popular programming languages, including Java, Go, .NET, Node.js, Python, and more. This article will show how to test a Java Spring Boot application (testcontainers-showcase) using Testcontainers in a Jenkins pipeline. Please fork the repository into your GitHub account. To run Testcontainers-based tests, a Testcontainers-supported container runtime, like Docker, needs to be available to agents.
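
To give a sense of what such a test looks like, here is a minimal, generic JUnit 5 sketch (not taken from the showcase project) that uses Testcontainers to start a throwaway PostgreSQL container and connect to it over JDBC. It assumes the org.testcontainers:junit-jupiter, org.testcontainers:postgresql, and PostgreSQL JDBC driver test dependencies are on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

import static org.junit.jupiter.api.Assertions.assertTrue;

// Generic example: Testcontainers starts a disposable PostgreSQL container for the test
// and tears it down afterward; no locally installed database is required.
@Testcontainers
class PostgresConnectionTest {

    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16-alpine");

    @Test
    void canConnectToDatabase() throws Exception {
        try (Connection connection = DriverManager.getConnection(
                postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword())) {
            assertTrue(connection.isValid(2)); // the container is up and reachable via JDBC
        }
    }
}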

Note: As Jenkins CI servers are mostly run on Linux machines, the following configurations are tested on a Linux machine only.

Docker containers as Jenkins agents

Let’s see how to use dynamic Docker container-based agents. To be able to use Docker containers as agents, install the Docker Pipeline plugin. 
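
You can install the plugin from Manage Jenkins > Plugins, or bake it into a custom controller image. The following is only a sketch, assuming you build on the official jenkins/jenkins image, where the Docker Pipeline plugin’s ID is docker-workflow:

# Dockerfile fragment for a custom Jenkins controller image
FROM jenkins/jenkins:lts-jdk17
RUN jenkins-plugin-cli --plugins docker-workflow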

Now, let’s create a file named Jenkinsfile in the root of the project with the following content:

pipeline {
    agent {
        docker {
            image 'eclipse-temurin:17.0.9_9-jdk-jammy'
            args '--network host -u root -v /var/run/docker.sock:/var/run/docker.sock'
        }
    }

    triggers { pollSCM 'H/2 * * * *' } // poll every 2 mins

    stages {
        stage('Build and Test') {
            steps {
                sh './mvnw verify'
            }
        }
    }
}

We are using the eclipse-temurin:17.0.9_9-jdk-jammy Docker image as the agent to run the builds for this pipeline. Note that we are mounting the host’s Docker Unix socket as a volume and running the container as the root user to make the Docker daemon accessible to the agent; be aware that this can be a security risk.
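
If running the agent container as root is a concern, one alternative is to drop -u root and instead grant the build user access to the mounted socket through the host’s docker group. This is only a sketch, and the group ID 999 is an assumption; check the actual GID with getent group docker on the agent host:

agent {
    docker {
        image 'eclipse-temurin:17.0.9_9-jdk-jammy'
        // Assumed GID 999 for the host's docker group; verify with: getent group docker
        args '--network host --group-add 999 -v /var/run/docker.sock:/var/run/docker.sock'
    }
}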

Add the Jenkinsfile and push the changes to the Git repository.

Now, go to the Jenkins Dashboard and select New Item to create the pipeline. Follow these steps:

Enter testcontainers-showcase as the pipeline name.

Select Pipeline as the job type.

Select OK.

Under the Pipeline section:

Select Definition: Pipeline script from SCM.

SCM: Git.

Repository URL: https://github.com/YOUR_GITHUB_USERNAME/testcontainers-showcase.git. Replace YOUR_GITHUB_USERNAME with your actual GitHub username.

Branches to build: Branch Specifier (blank for ‘any’): */main.

Script Path: Jenkinsfile.

Select Save.

Choose Build Now to trigger the pipeline for the first time.

The pipeline should run the Testcontainers-based tests successfully in a container-based agent, using the Docker-out-of-Docker configuration in which the agent talks to the host’s Docker daemon through the mounted socket.

Kubernetes pods as Jenkins agents

To run Testcontainers-based tests on Kubernetes pods, you can run a Docker-in-Docker (DinD) container as a sidecar alongside the build container. To use Kubernetes pods as Jenkins agents, install the Kubernetes plugin.

Now you can create the Jenkins pipeline using Kubernetes pods as agents as follows:

def pod =
"""
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: worker
spec:
  serviceAccountName: jenkins
  containers:
    - name: java17
      image: eclipse-temurin:17.0.9_9-jdk-jammy
      resources:
        requests:
          cpu: "1000m"
          memory: "2048Mi"
      imagePullPolicy: Always
      tty: true
      command: ["cat"]
    - name: dind
      image: docker:dind
      imagePullPolicy: Always
      tty: true
      env:
        - name: DOCKER_TLS_CERTDIR
          value: ""
      securityContext:
        privileged: true
"""

pipeline {
    agent {
        kubernetes {
            yaml pod
        }
    }
    environment {
        DOCKER_HOST = 'tcp://localhost:2375'
        DOCKER_TLS_VERIFY = 0
    }

    stages {
        stage('Build and Test') {
            steps {
                container('java17') {
                    script {
                        sh "./mvnw verify"
                    }
                }
            }
        }
    }
}

Although we can use a Docker-in-Docker based configuration to make the Docker environment available to the agent, this setup also brings configuration complexities and security risks.

By volume-mounting the host’s Docker Unix socket into the agents (the Docker-out-of-Docker, or DooD, approach), the agents get direct access to the host’s Docker engine.

With the DooD approach, file sharing via bind mounts doesn’t work, because the containerized application and the Docker engine operate in different contexts: paths inside the agent container don’t exist on the host where the engine resolves them.
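
If your tests currently rely on bind mounts, the Testcontainers file-copy API is an alternative that also works when the Docker engine runs elsewhere. Here is a generic Java sketch (the nginx image and the nginx.conf classpath resource are placeholders, not part of the showcase project):

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;
import org.testcontainers.utility.MountableFile;

// Inside a test class: instead of bind-mounting a host path, copy the file into the
// container at startup, which works even when the Docker engine is remote or a sidecar.
GenericContainer<?> nginx = new GenericContainer<>(DockerImageName.parse("nginx:1.25-alpine"))
        .withCopyFileToContainer(
                MountableFile.forClasspathResource("nginx.conf"),
                "/etc/nginx/nginx.conf")
        .withExposedPorts(80);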

The Docker-in-Docker (DinD) approach requires the use of insecure privileged containers.

You can watch the Docker-in-Docker: Containerized CI Workflows presentation to learn more about the challenges of a Docker-in-Docker based CI setup.

This is where Testcontainers Cloud comes into the picture, making it possible to run Testcontainers-based tests more simply and reliably.

By using Testcontainers Cloud, you don’t even need a Docker daemon running on the agent. Containers will be run in on-demand cloud environments so that you don’t need to use powerful CI agents with high CPU/memory for your builds.

Let’s see how to use Testcontainers Cloud with minimal setup and run Testcontainers-based tests.

Testcontainers Cloud-based setup

Testcontainers Cloud helps you run Testcontainers-based tests at scale by spinning up the dependent services as Docker containers on the cloud and having your tests connect to those services.

If you don’t have a Testcontainers Cloud account already, you can create an account and get a Service Account Token as follows:

Sign up for a Testcontainers Cloud account.

Once logged in, create an organization.

Navigate to the Testcontainers Cloud dashboard and generate a Service account (Figure 1).

Figure 1: Create a new Testcontainers Cloud service account.

To use Testcontainers Cloud, we need to start a lightweight testcontainers-cloud agent by passing TC_CLOUD_TOKEN as an environment variable.
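
If you want to sanity-check the token locally first (optional and outside the pipeline), you can start the agent from a shell with the same install script used in the Jenkinsfile below:

# Replace the placeholder with your actual service account token
export TC_CLOUD_TOKEN=<your-service-account-token>
curl -fsSL https://get.testcontainers.cloud/bash | sh
./mvnw verify   # tests now run against Testcontainers Cloud instead of a local daemon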

You can store the TC_CLOUD_TOKEN value as a secret in Jenkins as follows:

From the Dashboard, select Manage Jenkins.

Under Security, choose Credentials.

You can create a new domain or use System domain.

Under Global credentials, select Add credentials.

Select Kind as Secret text.

Enter TC_CLOUD_TOKEN value in Secret.

Enter tc-cloud-token-secret-id as ID.

Select Create.

Next, you can update the Jenkinsfile as follows:

pipeline {
    agent {
        docker {
            image 'eclipse-temurin:17.0.9_9-jdk-jammy'
        }
    }

    triggers { pollSCM 'H/2 * * * *' }

    stages {

        stage('TCC SetUp') {
            environment {
                TC_CLOUD_TOKEN = credentials('tc-cloud-token-secret-id')
            }
            steps {
                sh "curl -fsSL https://get.testcontainers.cloud/bash | sh"
            }
        }

        stage('Build and Test') {
            steps {
                sh './mvnw verify'
            }
        }
    }
}

We set the TC_CLOUD_TOKEN environment variable using the value from the tc-cloud-token-secret-id credential we created, and we start a Testcontainers Cloud agent before running our tests.

Now, if you commit and push the updated Jenkinsfile, the pipeline will run the tests using Testcontainers Cloud. You should see log statements similar to the following, indicating that the Testcontainers-based tests are using Testcontainers Cloud instead of the default Docker daemon.

14:45:25.748 [testcontainers-lifecycle-0] INFO org.testcontainers.DockerClientFactory -- Connected to docker:
  Server Version: 78+testcontainerscloud (via Testcontainers Desktop 1.5.5)
  API Version: 1.43
  Operating System: Ubuntu 20.04 LTS
  Total Memory: 7407 MB

You can also leverage Testcontainers Cloud’s Turbo mode in conjunction with build tools that feature parallel run capabilities to run tests even faster.

In the case of Maven, you can use the -DforkCount=N system property to specify the degree of parallelization. For Gradle, you can specify the degree of parallelization using the maxParallelForks property.

We can enable parallel execution of our tests using four forks in the Jenkinsfile as follows:

stage('Build and Test') {
    steps {
        sh './mvnw verify -DforkCount=4'
    }
}
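
For Gradle builds, the equivalent setting is maxParallelForks on the test task; a sketch for a Groovy build.gradle:

// build.gradle -- run tests in up to four parallel JVM forks
test {
    maxParallelForks = 4
}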

For more information, check out the article on parallelizing your tests with Turbo mode.

Conclusion

In this article, we have explored how to run Testcontainers-based tests on Jenkins CI using dynamic containers and Kubernetes pods as agents, with Docker-out-of-Docker and Docker-in-Docker based configurations.

Then we learned how to create a Testcontainers Cloud account and configure the pipeline to run tests using Testcontainers Cloud. We also explored leveraging Testcontainers Cloud Turbo mode combined with your build tool’s parallel execution capabilities. 

Although we have demonstrated this setup using a Java project as an example, Testcontainers libraries exist for other popular languages, too, and you can follow the same pattern of configuration to run your Testcontainers-based tests on Jenkins CI in Golang, .NET, Python, Node.js, etc.

Get started with Testcontainers Cloud by creating a free account at the website.

Learn more

Sign up for a Testcontainers Cloud account.

Watch the Docker-in-Docker: Containerized CI Workflows session from DockerCon 2023.

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.
