Azure Container Registry and Docker Hub: Connecting the Dots with Seamless Authentication and Artifact Cache

By leveraging the wide array of public images available on Docker Hub, developers can accelerate development workflows, enhance productivity, and ultimately ship scalable applications that run like clockwork. When building with public content, however, it is crucial to acknowledge the operational risks of consuming that content without proper authentication.

In this post, we will describe best practices for mitigating these risks and ensuring the security and reliability of your containers.

Import public content locally

There are several advantages to importing public content locally. Doing so improves the availability and reliability of your public content pipeline and protects you from failed CI builds. By importing your public content, you can easily validate, verify, and deploy images to help run your business more reliably.
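As a sketch of what this looks like in practice with Azure Container Registry, the `az acr import` command copies a public Docker Hub image into your own registry. The registry name `myregistry`, the target tag, and the credential variable names below are placeholders, not values from this post:

```shell
# Sketch: import a public Docker Hub image into your own Azure Container Registry.
# "myregistry" and the target repository/tag are placeholder values.
az acr import \
  --name myregistry \
  --source docker.io/library/nginx:latest \
  --image nginx:imported

# Authenticated imports avoid anonymous rate limits (DOCKER_HUB_USER and
# DOCKER_HUB_TOKEN are placeholder environment variable names):
az acr import \
  --name myregistry \
  --source docker.io/library/nginx:latest \
  --image nginx:imported \
  --username "$DOCKER_HUB_USER" \
  --password "$DOCKER_HUB_TOKEN"
```

Once imported, your CI pipeline pulls from `myregistry.azurecr.io` and is insulated from Docker Hub outages and rate limits.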

For more information on this best practice, check out the Open Container Initiative’s guide on Consuming Public Content.

Configure Artifact Cache to consume public content

Another best practice is to configure Artifact Cache to consume public content. Azure Container Registry’s (ACR) Artifact Cache feature allows you to cache your container artifacts in your own Azure Container Registry, even for private networks. This approach limits the impact of rate limits and dramatically increases pull reliability when combined with geo-replicated ACR, allowing you to pull artifacts from the region closest to your Azure resource. 

Additionally, ACR offers various security features, such as private networks, firewall configuration, service principals, and more, which can help you secure your container workloads. For complete information on using public content with ACR Artifact Cache, refer to the Artifact Cache technical documentation.
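As a minimal sketch of the setup described above, the Azure CLI can create a credential set (backed by Docker Hub credentials stored in Azure Key Vault) and a cache rule mapping a Docker Hub repository into your registry. All names and Key Vault secret URIs below are placeholder assumptions:

```shell
# Sketch: cache Docker Hub content in ACR with authenticated pulls.
# Registry, vault, and rule names below are placeholders.
az acr credential-set create \
  --registry myregistry \
  --name dockerhub-creds \
  --login-server docker.io \
  --username-id https://myvault.vault.azure.net/secrets/dockerhub-user \
  --password-id https://myvault.vault.azure.net/secrets/dockerhub-token

az acr cache create \
  --registry myregistry \
  --name hello-world-rule \
  --source-repo docker.io/library/hello-world \
  --target-repo hello-world \
  --cred-set dockerhub-creds

# Pulls of the target repo are now served through the cache:
docker pull myregistry.azurecr.io/hello-world:latest
```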

Authenticate pulls with public registries

We recommend authenticating your image pulls from Docker Hub using your subscription credentials. Docker Hub allows developers to authenticate when building with public library content, and authenticated users can also pull content directly from private repositories. For more information, visit the Docker subscriptions page. ACR's Artifact Cache also supports authenticating with other public registries, providing an additional layer of security for your container workloads.
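A minimal sketch of an authenticated pull, using a Docker Hub personal access token supplied via environment variables (the variable names are placeholders):

```shell
# Sketch: authenticate to Docker Hub before pulling.
# DOCKER_HUB_USER / DOCKER_HUB_TOKEN are placeholder variable names for your
# Docker ID and a personal access token.
echo "$DOCKER_HUB_TOKEN" | docker login --username "$DOCKER_HUB_USER" --password-stdin
docker pull library/nginx:latest
```

Using `--password-stdin` keeps the token out of your shell history and process list.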

Following these best practices when using public content from Docker Hub can help mitigate security and reliability risks in your development and operational cycles. By importing public content locally, configuring Artifact Cache, and setting up preferred authentication methods, you can ensure your container workloads are secure and reliable.

Learn more about securing containers

Try Docker Scout to assess your images for security risks.

Looking to get up and running? Use our Quickstart guide.

Have questions? The Docker community is here to help.

Subscribe to the Docker Newsletter to stay updated with Docker news and announcements.

Additional resources for improving container security for Microsoft and Docker customers

Visit Microsoft Learn.

Read the introduction to Microsoft’s framework for securing containers.

Learn how to manage public content with Azure Container Registry.

Source: https://blog.docker.com/feed/

Revolutionize Your CI/CD Pipeline: Integrating Testcontainers and Bazel

One of the challenges in modern software development is being able to release software often and with confidence. This can only be achieved when you have a good CI/CD setup in place that can test your software and release it with minimal or even no human intervention. But modern software applications also use a wide range of third-party dependencies and often need to run on multiple operating systems and architectures. 

In this post, I will explain how the combination of Bazel and Testcontainers helps developers build and release software by providing a hermetic build system.

Using Bazel and Testcontainers together

Bazel is an open source build tool developed by Google to build and test multi-language, multi-platform projects. Several big IT companies have adopted monorepos for various reasons, such as:

Code sharing and reusability 

Cross-project refactoring 

Consistent builds and dependency management 

Versioning and release management

With its multi-language support and focus on reproducible builds, Bazel shines in building such monorepos.

A key concept of Bazel is hermeticity: when all inputs are declared, the build system can know when an output needs to be rebuilt. This brings determinism; given the same input source code and build configuration, the build always returns the same output, because it is isolated from changes on the host system.

Testcontainers is an open source framework for provisioning throwaway, on-demand containers for development and testing use cases. Testcontainers makes it easy to work with databases, message brokers, web browsers, or just about anything that can run in a Docker container.

Using Bazel and Testcontainers together offers the following features:

Bazel can build projects using different programming languages like C, C++, Java, Go, Python, Node.js, etc.

Bazel can dynamically provision the isolated build/test environment with desired language versions.

Testcontainers can provision the required dependencies as Docker containers so that your test suite is self-contained. You don’t have to manually pre-provision the necessary services, such as databases, message brokers, and so on. 

All the test dependencies can be expressed through code using Testcontainers APIs, and you avoid the risk of breaking hermeticity by sharing such resources between tests.

Let’s see how we can use Bazel and Testcontainers to build and test a monorepo with modules written in different languages. We are going to explore a monorepo with a customers module, which uses Java, and a products module, which uses Go. Both modules interact with a relational database (PostgreSQL) and use Testcontainers for testing.

Getting started with Bazel

To begin, let’s get familiar with Bazel’s basic concepts. The best way to install Bazel is through Bazelisk; follow the official installation instructions to install it. Once it’s installed, you can verify the setup by running the bazel version command:

$ brew install bazelisk
$ bazel version

Bazelisk version: v1.12.0
Build label: 7.0.0

Before you can build a project using Bazel, you need to set up its workspace. 

A workspace is a directory that holds your project’s source files and contains the following files:

The WORKSPACE.bazel file, which identifies the directory and its contents as a Bazel workspace and lives at the root of the project’s directory structure.

A MODULE.bazel file, which declares dependencies on Bazel plugins (called “rulesets”).

One or more BUILD (or BUILD.bazel) files, which describe the sources and dependencies for different parts of the project. A directory within the workspace that contains a BUILD file is a package.

In the simplest case, a MODULE.bazel file can be an empty file, and a BUILD file can contain one or more generic targets as follows:

genrule(
    name = "foo",
    outs = ["foo.txt"],
    cmd_bash = "sleep 2 && echo 'Hello World' >$@",
)

genrule(
    name = "bar",
    outs = ["bar.txt"],
    cmd_bash = "sleep 2 && echo 'Bye bye' >$@",
)

Here, we have two targets: foo and bar. Now we can build those targets using Bazel as follows:

$ bazel build //:foo <- builds only the foo target; // indicates the root workspace
$ bazel build //:bar <- builds only the bar target
$ bazel build //... <- builds all targets

Configuring the Bazel build in a monorepo

We are going to explore using Bazel in the testcontainers-bazel-demo repository. This repository is a monorepo with a customers module using Java and a products module using Go. Its structure looks like the following:

testcontainers-bazel-demo
|____customers
| |____BUILD.bazel
| |____src
|____products
| |____go.mod
| |____go.sum
| |____repo.go
| |____repo_test.go
| |____BUILD.bazel
|____MODULE.bazel

Bazel uses different rules for building different types of projects: rules_java for building Java packages, rules_go for building Go packages, rules_python for building Python packages, and so on.

We may also need to load additional rules providing additional features. For building Java packages, we may want to use external Maven dependencies and use JUnit 5 for running tests. In that case, we should load rules_jvm_external to be able to use Maven dependencies. 

We are going to use Bzlmod, the new external dependency subsystem, to load the external dependencies. In the MODULE.bazel file, we can load the additional rules_jvm_external and contrib_rules_jvm as follows:

bazel_dep(name = "contrib_rules_jvm", version = "0.21.4")
bazel_dep(name = "rules_jvm_external", version = "5.3")

maven = use_extension("@rules_jvm_external//:extensions.bzl", "maven")
maven.install(
name = "maven",
artifacts = [
"org.postgresql:postgresql:42.6.0",
"ch.qos.logback:logback-classic:1.4.6",
"org.testcontainers:postgresql:1.19.3",
"org.junit.platform:junit-platform-launcher:1.10.1",
"org.junit.platform:junit-platform-reporting:1.10.1",
"org.junit.jupiter:junit-jupiter-api:5.10.1",
"org.junit.jupiter:junit-jupiter-params:5.10.1",
"org.junit.jupiter:junit-jupiter-engine:5.10.1",
],
)
use_repo(maven, "maven")

Let’s understand the above configuration in the MODULE.bazel file:

We have loaded the rules_jvm_external rules from Bazel Central Registry and loaded extensions to use third-party Maven dependencies.

We have configured all our Java application dependencies using Maven coordinates in the maven.install artifacts configuration.

We are loading the contrib_rules_jvm rules, which support running JUnit 5 tests as a suite.

Now, we can run the @maven//:pin program to create a JSON lockfile of the transitive dependencies, in a format that rules_jvm_external can use later:

bazel run @maven//:pin

Rename the generated file rules_jvm_external~5.3~maven~maven_install.json to maven_install.json. Now update the MODULE.bazel file to reflect that we pinned the dependencies.

Add a lock_file attribute to the maven.install() and update the use_repo call to also expose the unpinned_maven repository used to update the dependencies:

maven.install(
    ...
    lock_file = "//:maven_install.json",
)

use_repo(maven, "maven", "unpinned_maven")

Now, when you update any dependencies, you can run the following command to update the lock file:

bazel run @unpinned_maven//:pin

Let’s configure our build targets in the customers/BUILD.bazel file, as follows:

load(
    "@bazel_tools//tools/jdk:default_java_toolchain.bzl",
    "default_java_toolchain",
    "DEFAULT_TOOLCHAIN_CONFIGURATION",
    "BASE_JDK9_JVM_OPTS",
    "DEFAULT_JAVACOPTS",
)

default_java_toolchain(
    name = "repository_default_toolchain",
    configuration = DEFAULT_TOOLCHAIN_CONFIGURATION,
    java_runtime = "@bazel_tools//tools/jdk:remotejdk_17",
    jvm_opts = BASE_JDK9_JVM_OPTS + ["--enable-preview"],
    javacopts = DEFAULT_JAVACOPTS + ["--enable-preview"],
    source_version = "17",
    target_version = "17",
)

load("@rules_jvm_external//:defs.bzl", "artifact")
load("@contrib_rules_jvm//java:defs.bzl", "JUNIT5_DEPS", "java_test_suite")

java_library(
    name = "customers-lib",
    srcs = glob(["src/main/java/**/*.java"]),
    deps = [
        artifact("org.postgresql:postgresql"),
        artifact("ch.qos.logback:logback-classic"),
    ],
)

java_library(
    name = "customers-test-resources",
    resources = glob(["src/test/resources/**/*"]),
)

java_test_suite(
    name = "customers-lib-tests",
    srcs = glob(["src/test/java/**/*.java"]),
    runner = "junit5",
    test_suffixes = [
        "Test.java",
        "Tests.java",
    ],
    runtime_deps = JUNIT5_DEPS,
    deps = [
        ":customers-lib",
        ":customers-test-resources",
        artifact("org.junit.jupiter:junit-jupiter-api"),
        artifact("org.junit.jupiter:junit-jupiter-params"),
        artifact("org.testcontainers:postgresql"),
    ],
)

Let’s understand this BUILD configuration:

We have loaded default_java_toolchain and then configured the Java version to 17.

We have configured a java_library target with the name customers-lib that will build the production jar file.

We have defined a java_test_suite target with the name customers-lib-tests, which executes all the tests in our test suite. We also configured its dependencies: the customers-lib target and the external test libraries.

We also defined another target with the name customers-test-resources to add non-Java sources (e.g., logging config files) to our test suite target as a dependency.

In the customers package, we have a CustomerService class that stores and retrieves customer details in a PostgreSQL database. And we have CustomerServiceTest that tests CustomerService methods using Testcontainers. Take a look at the GitHub repository for the complete code.

Note: You can use Gazelle, which is a Bazel build file generator, to generate the BUILD.bazel files instead of manually writing them.
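For instance, with Gazelle wired into the workspace, regenerating build files is a single command. The //:gazelle target name below is the conventional one, assuming you have declared a gazelle target in the root BUILD file:

```shell
# Sketch: regenerate BUILD.bazel files with Gazelle.
# Assumes a gazelle(name = "gazelle") target is declared in the root BUILD file.
bazel run //:gazelle
```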

Running Testcontainers tests

For running Testcontainers tests, we need a Testcontainers-supported container runtime. Let’s assume you have Docker installed locally via Docker Desktop.

Now, with our Bazel build configuration, we are ready to build and test the customers package:

# to run all build targets of customers package
$ bazel build //customers/...

# to run a specific build target of customers package
$ bazel build //customers:customers-lib

# to run all test targets of customers package
$ bazel test //customers/...

# to run a specific test target of customers package
$ bazel test //customers:customers-lib-tests

When you run the build for the first time, it will take time to download the required dependencies and then execute the targets. But, if you try to build or test again without any code or configuration changes, Bazel will not re-run the build/test again and will show the cached result. Bazel has a powerful caching mechanism that will detect code changes and run only the targets that are necessary to run.

While using Testcontainers, you define the required dependencies in code, as Docker image names with tags, such as postgres:16. So, unless you change the code (e.g., the Docker image name or tag), Bazel will cache the test results.
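You can see this caching in action by running the same test target twice; the second run, with no changes, is served from Bazel's cache (the output below is abridged and the timing illustrative):

```shell
# First run executes the tests; the second run (no code or config changes)
# is answered from Bazel's cache almost instantly.
bazel test //customers:customers-lib-tests
bazel test //customers:customers-lib-tests
# The second run reports something like:
# //customers:customers-lib-tests    (cached) PASSED in 14.2s
```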

Similarly, we can use rules_go and Gazelle for configuring Bazel build for Go packages. Take a look at the MODULE.bazel and products/BUILD.bazel files to learn more about configuring Bazel in a Go package.

As mentioned earlier, we need a Testcontainers-supported container runtime for running Testcontainers tests. Installing Docker on complex CI platforms might be challenging, and you might need to use a complex Docker-in-Docker setup. Additionally, some Docker images might not be compatible with the operating system architecture (e.g., Apple M1). 

Testcontainers Cloud solves these problems by eliminating the need to have Docker on the local host or CI runners, transparently running the containers on cloud VMs instead.

Here is an example of running the Testcontainers tests using Bazel on Testcontainers Cloud using GitHub Actions:

name: CI

on:
  push:
    branches:
      - '**'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure Testcontainers Cloud
        uses: atomicjar/testcontainers-cloud-setup-action@main
        with:
          wait: true
          token: ${{ secrets.TC_CLOUD_TOKEN }}

      - name: Cache Bazel
        uses: actions/cache@v3
        with:
          path: |
            ~/.cache/bazel
          key: ${{ runner.os }}-bazel-${{ hashFiles('.bazelversion', '.bazelrc', 'WORKSPACE', 'WORKSPACE.bazel', 'MODULE.bazel') }}
          restore-keys: |
            ${{ runner.os }}-bazel-

      - name: Build and Test
        run: bazel test --test_output=all //...

GitHub Actions runners already come with Bazelisk installed, so we can use Bazel out of the box. We have configured the TC_CLOUD_TOKEN environment variable through Secrets and started the Testcontainers Cloud agent. If you check the build logs, you can see that the tests are executed using Testcontainers Cloud.

Summary

We have shown how to use the Bazel build system to build and test monorepos with multiple modules using different programming languages. Combined with Testcontainers, you can make the builds self-contained and hermetic.

Although Bazel and Testcontainers help us have a self-contained build, we need to take extra measures to make it a hermetic build: 

Bazel can be configured to use a specific version of SDK, such as JDK 17, Go 1.20, etc., so that builds always use the same version instead of what is installed on the host machine. 

For Testcontainers tests, using the Docker tag latest for container dependencies may result in non-deterministic behavior, and some Docker image publishers overwrite existing images using the same tag. To make builds and tests deterministic, reference Docker images by digest, so that they always use exactly the same image versions; this gives reproducible and hermetic builds.
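To find the digest for an image you already use, you can resolve its repo digest and then reference the image by digest instead of by tag (the digest in the output below is a placeholder, not a real value):

```shell
# Sketch: resolve a tag to an immutable digest, then pin that digest in code.
docker pull postgres:16
docker inspect --format '{{index .RepoDigests 0}}' postgres:16
# Prints something like: postgres@sha256:<digest>
# In your Testcontainers configuration, reference "postgres@sha256:<digest>"
# instead of "postgres:16".
```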

Using Testcontainers Cloud for running Testcontainers tests reduces the complexity of Docker setup and gives a deterministic container runtime environment.

Visit the Testcontainers website to learn more, and get started with Testcontainers Cloud by creating a free account.

Learn more

Visit the Testcontainers website.

Get started with Testcontainers Cloud by creating a free account.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

AWS Amplify Hosting Announces Support for Custom SSL/TLS Certificates

AWS Amplify Hosting now supports custom SSL certificates for custom domains. This new feature lets developers easily upload and use SSL/TLS certificates for web applications hosted with Amplify, increasing flexibility and security. Developers can now deploy certificates purchased from a third-party certificate authority (CA), or use certificates issued by AWS Certificate Manager (ACM), to gain more control over their domain and IT compliance requirements.
Source: aws.amazon.com

AWS Announces a New Edge Location in Türkiye

Amazon Web Services (AWS) announces an expansion in Türkiye with the opening of a new Amazon CloudFront edge location in Istanbul. Customers in Türkiye can expect an average latency and performance improvement of up to 30% for data delivered through the new edge location. The new AWS edge location offers all the benefits of Amazon CloudFront, a secure, highly distributed, and scalable content delivery network (CDN) that delivers static and dynamic content, APIs, and live and on-demand video with low latency and high performance.
Source: aws.amazon.com
