Is Your Container Image Really Distroless?

Containerization has drastically improved application security by giving engineers greater control over their applications' runtime environments. However, maintaining that security posture requires a significant time investment, given the daily discovery of new vulnerabilities and the regular releases of new language and framework versions.

The concept of “distroless” images promises to greatly reduce the time needed to keep applications secure by eliminating most of the software contained in typical container images. With less software to patch, teams spend less time remediating vulnerabilities and can focus only on the software they actually use.

In this article, we explain what makes an image distroless, describe tools that make the creation of distroless images practical, and discuss whether distroless images live up to their potential.

What’s a distro?

A Linux distribution is a complete operating system built around the Linux kernel, comprising a package management system, GNU tools and libraries, additional software, and often a graphical user interface.

Common Linux distributions include Debian, Ubuntu, Arch Linux, Fedora, Red Hat Enterprise Linux, CentOS, and Alpine Linux (which is especially common in the world of containers). These distributions, like most Linux distros, take security seriously, with teams working diligently to release frequent patches and updates for known vulnerabilities. A key challenge every Linux distribution must face is the usability/security dilemma.

On its own, the Linux kernel is not very usable, so distributions include many utility commands to cover a wide array of use cases. Having the right utilities in the distribution, without needing to install additional packages, greatly improves a distro’s usability. The downside of this increased usability, however, is a larger attack surface that must be kept up to date.

A Linux distro must strike a balance between these two elements, and different distros approach this differently. A key point to keep in mind is that a distro that emphasizes usability is not “less secure” than one that does not. Rather, the distro with more utility packages requires more effort from its users to keep it secure.

Multi-stage builds

Multi-stage builds allow developers to separate build-time dependencies from runtime ones. Developers can start from a full-featured build image with all the necessary components installed, perform the necessary build steps, and then copy only the results of those steps to a more minimal or even empty image, called “scratch”. With this approach, there’s no need to clean up dependencies and, as an added bonus, the build stages are cacheable, which can considerably reduce build time.

The following example shows a Go program taking advantage of multi-stage builds. Because the Golang runtime is compiled into the binary, only the binary and root certificates need to be copied to the blank slate image.

FROM golang:1.21.5-alpine as build
WORKDIR /
COPY go.* .
RUN go mod download
COPY . .
RUN go build -o my-app

FROM scratch
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=build /my-app /usr/local/bin/my-app
ENTRYPOINT ["/usr/local/bin/my-app"]

BuildKit

BuildKit, the current engine used by docker build, helps developers create minimal images thanks to its extensible, pluggable architecture. It provides the ability to specify alternative frontends (with the default being the familiar Dockerfile) to abstract and hide the complexity of creating distroless images. These frontends can accept more streamlined and declarative inputs for builds and can produce images that contain only the software needed for the application to run. 

The following example shows the input for mopy, a BuildKit frontend by Julian Goede for building Python application images.

#syntax=cmdjulian/mopy
apiVersion: v1
python: 3.9.2
build-deps:
  - libopenblas-dev
  - gfortran
  - build-essential
envs:
  MYENV: envVar1
pip:
  - numpy==1.22
  - slycot
  - ./my_local_pip/
  - ./requirements.txt
labels:
  foo: bar
  fizz: ${mopy.sbom}
project: my-python-app/

So, is your image really distroless?

Thanks to tools like multi-stage builds and BuildKit, it is now much more practical to create images that contain only the required software and its runtime dependencies.

However, many images claiming to be distroless still include a shell (usually Bash) and/or BusyBox, which provides many of the commands a Linux distribution does, including wget, and can therefore leave containers vulnerable to living-off-the-land (LOTL) attacks. This raises the question, “Why would an image trying to be distroless still include key parts of a Linux distribution?” The answer typically involves container initialization.

Developers often have to make their applications configurable to meet the needs of their users. Most of the time, those configurations are not known at build time, so they must be applied at run time. Often, this is done with shell initialization scripts, which in turn depend on common Linux utilities such as sed, grep, and cp. When this is the case, the shell and utilities are only needed for the first few seconds of the container’s lifetime. Luckily, there is a way to create true distroless images while still allowing initialization, using a tool available in most container orchestrators: init containers.

Init containers

In Kubernetes, an init container is a container that starts and must complete successfully before the primary container can start. By using a non-distroless container as an init container that shares a volume with the primary container, the runtime environment and application can be configured before the application starts. 

The lifetime of that init container is short (often just a couple of seconds), and it typically doesn’t need to be exposed to the internet. Much as multi-stage builds let developers separate build-time dependencies from runtime dependencies, init containers let developers separate initialization dependencies from execution dependencies.

The concept of init containers may be familiar if you use relational databases, where an init container is often used to perform schema migration before a new version of an application is started.

Kubernetes example

Here are two examples of using init containers. First, using Kubernetes:

apiVersion: v1
kind: Pod
metadata:
  name: kubecon-postgress-pod
  labels:
    app.kubernetes.io/name: KubeConPostgress
spec:
  containers:
    - name: postgress
      image: laurentgoderre689/postgres-distroless
      securityContext:
        runAsUser: 70
        runAsGroup: 70
      volumeMounts:
        - name: db
          mountPath: /var/lib/postgresql/data/
  initContainers:
    - name: init-postgress
      image: postgres:alpine3.18
      env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: kubecon-postgress-admin-pwd
              key: password
      command: ['docker-ensure-initdb.sh']
      volumeMounts:
        - name: db
          mountPath: /var/lib/postgresql/data/
  volumes:
    - name: db
      emptyDir: {}

---

> kubectl apply -f pod.yml && kubectl get pods
pod/kubecon-postgress-pod created
NAME                    READY   STATUS     RESTARTS   AGE
kubecon-postgress-pod   0/1     Init:0/1   0          0s
> kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
kubecon-postgress-pod   1/1     Running   0          10s

Docker Compose example

The init container concept can also be emulated in Docker Compose for local development using service dependencies and conditions.

services:
  db:
    image: laurentgoderre689/postgres-distroless
    user: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data/
    depends_on:
      db-init:
        condition: service_completed_successfully

  db-init:
    image: postgres:alpine3.18
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data/
    user: postgres
    command: docker-ensure-initdb.sh

volumes:
  pgdata:

---
> docker-compose up
[+] Running 4/0
✔ Network compose_default Created
✔ Volume "compose_pgdata" Created
✔ Container compose-db-init-1 Created
✔ Container compose-db-1 Created
Attaching to db-1, db-init-1
db-init-1 | The files belonging to this database system will be owned by user "postgres".
db-init-1 | This user must also own the server process.
db-init-1 |
db-init-1 | The database cluster will be initialized with locale "en_US.utf8".
db-init-1 | The default database encoding has accordingly been set to "UTF8".
db-init-1 | The default text search configuration will be set to "english".
db-init-1 | […]
db-init-1 exited with code 0
db-1 | 2024-02-23 14:59:33.191 UTC [1] LOG: starting PostgreSQL 16.1 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
db-1 | 2024-02-23 14:59:33.191 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db-1 | 2024-02-23 14:59:33.191 UTC [1] LOG: listening on IPv6 address "::", port 5432
db-1 | 2024-02-23 14:59:33.194 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db-1 | 2024-02-23 14:59:33.196 UTC [9] LOG: database system was shut down at 2024-02-23 14:59:32 UTC
db-1 | 2024-02-23 14:59:33.198 UTC [1] LOG: database system is ready to accept connections

As the previous examples demonstrate, an init container can be used alongside the primary container to remove the need for general-purpose software in the runtime image, allowing the creation of true distroless images.

Conclusion

This article explained how Docker build tools such as multi-stage builds and BuildKit allow the separation of build-time dependencies from run-time dependencies to create “distroless” images, and how init containers let developers separate the logic needed to configure a runtime environment from the environment itself, providing a more secure container. This approach also helps teams focus their efforts on the software they actually use and find a better balance between security and usability.


Source: https://blog.docker.com/feed/

containerd vs. Docker: Understanding Their Relationship and How They Work Together

During the past decade, containers have revolutionized software development by introducing higher levels of consistency and scalability. Developers can now work without many of the old challenges around dependency management, environment consistency, and collaborative workflows.

When developers explore containerization, they might learn about container internals, architecture, and how everything fits together. And, eventually, they may find themselves wondering about the differences between containerd and Docker and how they relate to one another.

In this blog post, we’ll explain what containerd is, how Docker and containerd work together, and how their combined strengths can improve developer experience.

What’s a container?

Before diving into what containerd is, I should briefly review what containers are. Simply put, containers are processes with added isolation and resource management. Containers get a virtualized view of the operating system while sharing the host’s kernel and accessing host system resources.

Containers also use operating system kernel features. They use namespaces to provide isolation and cgroups to limit and monitor resources like CPU, memory, and network bandwidth. As you can imagine, container internals are complex, and not everyone has the time or energy to become an expert in the low-level bits. This is where container runtimes, like containerd, can help.
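To make this concrete, here is a minimal Go sketch (Linux-only, and not from the original article) that launches a shell inside new UTS, PID, and mount namespaces, illustrating that a container is, at its core, just a process with added isolation. Running it typically requires root privileges, and cgroup limits would be configured separately (for example, through the /sys/fs/cgroup filesystem).

// namespaces_demo.go: launch a shell in new UTS, PID, and mount namespaces.
// A minimal illustration that a "container" is a process with added isolation.
package main

import (
    "os"
    "os/exec"
    "syscall"
)

func main() {
    cmd := exec.Command("/bin/sh")
    cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
    cmd.SysProcAttr = &syscall.SysProcAttr{
        // Each flag asks the kernel for a fresh namespace for the child:
        // hostname (UTS), process IDs (PID), and mount points (NS).
        Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
    }
    if err := cmd.Run(); err != nil {
        panic(err)
    }
}

Inside the spawned shell, a hostname change no longer leaks to the host, which is exactly the kind of isolation that container runtimes set up and manage at scale.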

What’s containerd?

In short, containerd is a runtime built to run containers. This open source tool builds on top of operating system kernel features and improves container management with an abstraction layer, which manages namespaces, cgroups, union file systems, networking capabilities, and more. This way, developers don’t have to handle the complexities directly. 

In March 2017, Docker pulled its core container runtime into a standalone project called containerd and donated it to the Cloud Native Computing Foundation (CNCF). By February 2019, containerd had reached the Graduated maturity level within the CNCF, reflecting its significant development, adoption, and community support. Today, developers recognize containerd as an industry-standard container runtime known for its scalability, performance, and stability.

Containerd is a high-level container runtime with many use cases. It’s perfect for handling container workloads across small-scale deployments, but it’s also well-suited for large, enterprise-level environments (including Kubernetes). 

A key component of containerd’s robustness is its default use of Open Container Initiative (OCI)-compliant runtimes. By using runtimes such as runc (a lower-level container runtime), containerd ensures standardization and interoperability in containerized environments. It also efficiently deals with core operations in the container life cycle, including creating, starting, and stopping containers.
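The containerd Go client makes these lifecycle operations tangible. The following sketch is adapted from the patterns in containerd’s public client documentation; the image reference, namespace, and IDs are illustrative. It pulls an image, creates a container with an OCI runtime spec, starts it as a task (backed by runc through a shim), and then stops it:

// containerd_lifecycle.go: create, start, and stop a container via containerd.
package main

import (
    "context"
    "log"
    "syscall"
    "time"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/cio"
    "github.com/containerd/containerd/namespaces"
    "github.com/containerd/containerd/oci"
)

func main() {
    // Connect to containerd's gRPC socket (the default path on most hosts).
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // containerd scopes all resources (images, containers) to a namespace.
    ctx := namespaces.WithNamespace(context.Background(), "example")

    // Pull and unpack the image.
    image, err := client.Pull(ctx, "docker.io/library/redis:alpine", containerd.WithPullUnpack)
    if err != nil {
        log.Fatal(err)
    }

    // Create the container: metadata, a snapshot for its filesystem,
    // and an OCI runtime spec derived from the image config.
    container, err := client.NewContainer(ctx, "redis-demo",
        containerd.WithImage(image),
        containerd.WithNewSnapshot("redis-demo-snapshot", image),
        containerd.WithNewSpec(oci.WithImageConfig(image)),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer container.Delete(ctx, containerd.WithSnapshotCleanup)

    // A task is the running instance of the container (runc via a shim).
    task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
    if err != nil {
        log.Fatal(err)
    }
    defer task.Delete(ctx)

    exitCh, err := task.Wait(ctx) // subscribe to the exit event before starting
    if err != nil {
        log.Fatal(err)
    }

    if err := task.Start(ctx); err != nil {
        log.Fatal(err)
    }

    // Let it run briefly, then stop it and wait for the exit status.
    time.Sleep(3 * time.Second)
    if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
        log.Fatal(err)
    }
    <-exitCh
}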

How is containerd related to Docker?

But how is containerd related to Docker? To answer this, let’s take a high-level look at Docker’s architecture (Figure 1). 

Containerd facilitates operations on containers by directly interfacing with your operating system. The Docker Engine sits on top of containerd and provides additional functionality and developer experience enhancements.

How Docker interacts with containerd

To better understand this interaction, let’s talk about what happens when you run the docker run command:

After you press Enter, the Docker CLI will send the run command and any command-line arguments to the Docker daemon (dockerd) via a REST API call.

dockerd will parse and validate the request, and then it will check that things like container images are available locally. If they’re not, it will pull the image from the specified registry.

Once the image is ready, dockerd will shift control to containerd to create the container from the image.

Next, containerd will set up the container environment. This process includes tasks such as setting up the container file system, networking interfaces, and other isolation features.

containerd will then delegate running the container to runc using a shim process. This will create and start the container.

Finally, once the container is running, containerd will monitor the container status and manage the lifecycle accordingly.
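Because the Docker Engine Go SDK speaks the same REST API the CLI uses, the first few steps of this flow can be reproduced in a handful of calls. The following is a hedged sketch, not the CLI’s actual source: the option-struct names shown (types.ImagePullOptions, types.ContainerStartOptions) come from long-standing SDK versions and have been relocated in newer releases.

// docker_run.go: the API calls behind a basic `docker run alpine echo ...`.
package main

import (
    "context"
    "io"
    "log"
    "os"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/api/types/container"
    "github.com/docker/docker/client"
)

func main() {
    ctx := context.Background()

    // Step 1: the client sends requests to dockerd over its REST API.
    cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    if err != nil {
        log.Fatal(err)
    }
    defer cli.Close()

    // Step 2: ensure the image is available locally (pull if needed).
    reader, err := cli.ImagePull(ctx, "docker.io/library/alpine:latest", types.ImagePullOptions{})
    if err != nil {
        log.Fatal(err)
    }
    io.Copy(os.Stdout, reader) // stream pull progress to stdout
    reader.Close()

    // Steps 3-4: dockerd hands off to containerd to create the container.
    resp, err := cli.ContainerCreate(ctx, &container.Config{
        Image: "alpine:latest",
        Cmd:   []string{"echo", "hello from the engine"},
    }, nil, nil, nil, "")
    if err != nil {
        log.Fatal(err)
    }

    // Steps 5-6: containerd delegates to runc (via a shim) to start the
    // container, then monitors its lifecycle from there.
    if err := cli.ContainerStart(ctx, resp.ID, types.ContainerStartOptions{}); err != nil {
        log.Fatal(err)
    }
    log.Println("started container", resp.ID)
}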

Docker and containerd: Better together 

Docker has played a key role in the creation and adoption of containerd, from its inception to its donation to the CNCF and beyond. This involvement helped standardize container runtimes and bolster the open source community’s involvement in containerd’s development. Docker continues to support the evolution of the open source container ecosystem by continuously maintaining and evolving containerd.

Containerd specializes in the core functionality of running containers. It’s a great choice for developers needing access to lower-level container internals and other advanced features. Docker builds on containerd to create a cohesive developer experience and comprehensive toolchain for building, running, testing, verifying, and sharing containers.

Build + Run

In development environments, tools like Docker Desktop, the Docker CLI, and Docker Compose let developers easily define, build, and run single- or multi-container environments, and they integrate seamlessly with favorite editors, IDEs, and CI/CD pipelines.

Test

One of the largest developer experience pain points involves testing and environment consistency. With Testcontainers, developers don’t have to worry about reproducibility across environments (for example, dev, staging, testing, and production). Testcontainers also allows developers to use containers for isolated dependency management, parallel testing, and simplified CI/CD integration.
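As an illustration, here is a minimal sketch using the testcontainers-go library (github.com/testcontainers/testcontainers-go); the image and readiness log line are illustrative choices. In a real test this would live in a TestXxx function, with the endpoint handed to the code under test:

// redis_container.go: spin up a throwaway Redis dependency for a test.
package main

import (
    "context"
    "fmt"
    "log"

    "github.com/testcontainers/testcontainers-go"
    "github.com/testcontainers/testcontainers-go/wait"
)

func main() {
    ctx := context.Background()

    // Describe the dependency and when it counts as ready.
    req := testcontainers.ContainerRequest{
        Image:        "redis:7-alpine",
        ExposedPorts: []string{"6379/tcp"},
        WaitingFor:   wait.ForLog("Ready to accept connections"),
    }

    redisC, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
        ContainerRequest: req,
        Started:          true, // create and start in one call
    })
    if err != nil {
        log.Fatal(err)
    }
    defer redisC.Terminate(ctx) // clean up the container when done

    // Resolve the dynamically mapped host:port for the test to use.
    endpoint, err := redisC.Endpoint(ctx, "")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("redis available at", endpoint)
}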

Verify

By analyzing your container images and creating a software bill of materials (SBOM), Docker Scout works with Docker Desktop, Docker Hub, or Docker CLI to help organizations shift left. It also empowers developers to find and fix software vulnerabilities in container images, ensuring a secure software supply chain.

Share

Docker Registry serves as a store for developers to push container images to a shared repository securely. This functionality streamlines image sharing, making maintaining consistency and efficiency in development and deployment workflows easier. 

With Docker building on top of containerd, the entire software development lifecycle benefits, from the inner loop and testing through to secure deployment in production.

Wrapping up

In this article, we discussed the relationship between Docker and containerd. We showed how containers, as isolated processes, leverage operating system features to provide efficient and scalable development and deployment solutions. We also described what containerd is and explained how Docker leverages containerd in its stack. 

Docker builds upon containerd to enhance the developer experience, offering a comprehensive suite of tools for the entire development lifecycle across building, running, verifying, sharing, and testing containers. 

Start your next projects with containerd and other container components by checking out Docker’s open source projects and most popular open source tools. 


Source: https://blog.docker.com/feed/

Announcing support for dynamic parameters in AWS AppConfig extensions

AWS AppConfig now supports dynamic parameters, which extend the functionality of AppConfig extensions by letting you add parameter values to your extensions during configuration deployment. AWS AppConfig extensions are customizable actions that AWS AppConfig can invoke during the lifecycle of configuration data. With dynamic parameters, you can provide inputs at the time the extension is invoked rather than when it is first associated with your AppConfig resources.
Source: aws.amazon.com

Amazon EC2 M6gd instances are now available in the AWS GovCloud (US) Regions

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M6gd instances are available in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. These instances are powered by AWS Graviton2 processors and built on the AWS Nitro System, a collection of AWS-designed hardware and software innovations that enable the delivery of efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage. These instances offer up to 25 Gbps of network bandwidth, up to 19 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS), and up to 3.8 TB of local NVMe SSD instance storage.
Source: aws.amazon.com

Amazon Managed Grafana launches an Enterprise plugins upgrade

Amazon Managed Grafana is launching an Enterprise plugins upgrade that provides access to data source plugins in Grafana Enterprise, such as ServiceNow, Splunk, and New Relic, along with support and training from Grafana Labs. Enterprise plugins are prebuilt plugins that let you analyze, query, and alert on data from third-party enterprise systems through your Amazon Managed Grafana workspace, without having to move the data out of its original data store.
Source: aws.amazon.com

Announcing Synthetics NodeJS runtime version 7.0 and Synthetics Python runtime version 3.0 for Amazon CloudWatch Synthetics

Amazon CloudWatch Synthetics announces the release of new runtime versions: syn-nodejs-puppeteer-7.0 for the NodeJS runtime and syn-python-selenium-3.0 for the Python runtime. The NodeJS runtime update upgrades the dependencies to Puppeteer (v21.9.0) and Chromium (v121.0.6167.85). The Python runtime update upgrades the dependencies to Chromium and ChromeDriver (v121.0.6167.85). For more information, see the NodeJS release notes and the Python release notes.
Source: aws.amazon.com

AWS AppFabric now supports Box and IBM Security® Verify

Today, AWS AppFabric announces support for two new software-as-a-service (SaaS) applications: Box and IBM Security® Verify. Starting now, IT administrators and security analysts can use AppFabric to quickly integrate with 25 SaaS applications, aggregate enriched and normalized SaaS audit logs, and audit end-user access across all of their SaaS applications. This launch expands the set of applications supported by AWS AppFabric that are used within an organization.
Source: aws.amazon.com