How Docker IT Streamlined Docker Desktop Deployment Across the Global Team

At Docker, innovation and efficiency are integral to how we operate. When our own IT team needed to deploy Docker Desktop to various teams, including non-engineering roles like customer support and technical sales, the existing process was functional but manual and time-consuming. Recognizing the need for a more streamlined and secure approach, we leveraged new Early Access (EA) Docker Business features to refine our deployment strategy.

A seamless deployment process

Faced with the challenge of managing diverse requirements across the organization, we knew it was time to enhance our deployment methods.

The Docker IT team transitioned from using registry.json files to a more efficient method involving registry keys and new MSI installers for Windows, along with configuration profiles and PKG installers for macOS. This transition simplified deployment, provided better control for admins, and allowed for faster rollouts across the organization.
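For Windows fleets, this kind of rollout is typically scripted through the MDM tool. As a rough sketch only (the installer file name, log path, and organization are placeholders, and the ALLOWEDORG property name should be verified against the Docker Desktop MSI documentation), a silent install that also enforces sign-in might look like this:

REM Hypothetical silent install pushed via MDM; file name, log path, and org name are placeholders
msiexec /i "DockerDesktop.msi" /L*V ".\msi.log" /quiet /norestart ALLOWEDORG="mycompany"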

“From setup to deployment, it took 24 hours. We started on a Monday morning, and by the next day, it was done,” explains Jeffrey Strauss, Head of Docker IT. 

Enhancing security and visibility

Security is always a priority. By integrating login enforcement with single sign-on (SSO) and System for Cross-domain Identity Management (SCIM), Docker IT ensured centralized control and compliance with security policies. The Docker Desktop Insights Dashboard (EA) offered crucial visibility into how Docker Desktop was being used across the organization. Admins could now see which versions were installed and monitor container usage, enabling informed decisions about updates, resource allocation, and compliance. (Docker Business customers can learn more about access and timelines by contacting their account reps. The Insights Dashboard is only available to Docker Business customers with enforced authentication for organization users.)

Steven Novick, Docker’s Principal Product Manager, emphasized, “With the new solution, deployment was simpler and tamper-proof, giving a clear picture of Docker usage within the organization.”

Benefits beyond deployment

The improvements made by Docker IT extended beyond just deployment efficiency:

Improved visibility: The Insights Dashboard provided detailed data on Docker usage, helping ensure all users are connected to the organization.

Efficient deployment: Docker Desktop was deployed to hundreds of computers within 24 hours, significantly reducing administrative overhead.

Enhanced security: Centralized control to enforce authentication via MDM tools like Intune for Windows and Jamf for macOS strengthened security and compliance.

Seamless user experience: Early and transparent communication ensured a smooth transition, minimizing disruptions.

Looking ahead

The successful deployment of Docker Desktop within 24 hours demonstrates Docker’s commitment to continuous improvement and innovation. We are excited about the future developments in Docker Desktop management and look forward to supporting our customers as they achieve their goals with Docker. 

Existing Docker Business customers can learn more about access and timelines by contacting their account reps. The Insights Dashboard is only available in Early Access to select Docker Business customers with enforced authentication for organization users.

Curious about how Docker’s new features can benefit your team? Get in touch to discover more or explore our customer stories to see how others are succeeding with Docker.

Learn more

Read the Docker IT case study to learn more.

Subscribe to the Docker Newsletter. 

Get the latest release of Docker Desktop.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Docker Best Practices: Using ARG and ENV in Your Dockerfiles

If you’ve worked with Docker for any length of time, you’re likely accustomed to writing or at least modifying a Dockerfile. This file can be thought of as a recipe for a Docker image; it contains both the ingredients (base images, packages, files) and the instructions (various RUN, COPY, and other commands that help build the image).

In most cases, Dockerfiles are written once, modified seldom, and used as-is unless something about the project changes. Because these files are created or modified on such an infrequent basis, developers tend to rely on only a handful of frequently used instructions — RUN, COPY, and EXPOSE being the most common. Other instructions can enhance your image, making it more configurable, manageable, and easier to maintain. 

In this post, we will discuss the ARG and ENV instructions and explore why, how, and when to use them.

ARG: Defining build-time variables

The ARG instruction allows you to define variables that will be accessible during the build stage but not available after the image is built. For example, we will use this Dockerfile to build an image where we make the variable specified by the ARG instruction available during the build process.

FROM ubuntu:latest
ARG THEARG="foo"
RUN echo $THEARG
CMD ["env"]

If we run the build, we will see the echo foo line in the output:

$ docker build --no-cache -t argtest .
[+] Building 0.4s (6/6) FINISHED docker:desktop-linux
<-- SNIP -->
=> CACHED [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e 0.0s
=> => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6 0.0s
=> [2/2] RUN echo foo 0.1s
=> exporting to image 0.0s
<-- SNIP -->

However, if we run the image and inspect the output of the env command, we do not see THEARG:

$ docker run --rm argtest
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=d19f59677dcd
HOME=/root

ENV: Defining build and runtime variables

Unlike ARG, the ENV instruction allows you to define a variable that can be accessed both at build time and at runtime:

FROM ubuntu:latest
ENV THEENV="bar"
RUN echo $THEENV
CMD ["env"]

If we run the build, we will see the echo bar line in the output:

$ docker build -t envtest .
[+] Building 0.8s (7/7) FINISHED docker:desktop-linux
<-- SNIP -->
=> CACHED [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e 0.0s
=> => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6 0.0s
=> [2/2] RUN echo bar 0.1s
=> exporting to image 0.0s
<-- SNIP -->

If we run the image and inspect the output of the env command, we do see THEENV set, as expected:

$ docker run --rm envtest
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=f53f1d9712a9
THEENV=bar
HOME=/root

Overriding ARG

A more advanced use of the ARG instruction is to serve as a placeholder that is then updated at build time:

FROM ubuntu:latest
ARG THEARG
RUN echo $THEARG
CMD ["env"]

If we build the image, we see that we are missing a value for $THEARG:

$ docker build -t argtest .
<-- SNIP -->
=> CACHED [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e 0.0s
=> => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6 0.0s
=> [2/2] RUN echo $THEARG 0.1s
=> exporting to image 0.0s
=> => exporting layers 0.0s
<-- SNIP -->

However, we can pass a value for THEARG on the build command line using the --build-arg flag. Notice that THEARG has now been replaced with foo in the output:
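A build invocation along the following lines supplies the value (reusing the tag and --no-cache flag from the earlier example):

# Passes THEARG explicitly at build time
$ docker build --no-cache --build-arg THEARG=foo -t argtest .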

=> CACHED [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e 0.0s
=> => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6 0.0s
=> [2/2] RUN echo foo 0.1s
=> exporting to image 0.0s
=> => exporting layers 0.0s
<-- SNIP -->

The same can be done in a Docker Compose file by using the args key under the build key. Note that these can be set as a mapping (THEARG: foo) or a list (- THEARG=foo):

services:
  argtest:
    build:
      context: .
      args:
        THEARG: foo

If we run docker compose up --build, we can see that THEARG has been replaced with foo in the output:

$ docker compose up --build
<-- SNIP -->
=> [argtest 1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04 0.0s
=> => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6 0.0s
=> CACHED [argtest 2/2] RUN echo foo 0.0s
=> [argtest] exporting to image 0.0s
=> => exporting layers 0.0s
<-- SNIP -->
Attaching to argtest-1
argtest-1 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
argtest-1 | HOSTNAME=d9a3789ac47a
argtest-1 | HOME=/root
argtest-1 exited with code 0

Overriding ENV

You can also override ENV at build time; this is slightly different from how ARG is overridden. For example, you cannot supply a key without a value with the ENV instruction, as shown in the following example Dockerfile:

FROM ubuntu:latest
ENV THEENV
RUN echo $THEENV
CMD ["env"]

When we try to build the image, we receive an error:

$ docker build -t envtest .
[+] Building 0.0s (1/1) FINISHED docker:desktop-linux
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 98B 0.0s
Dockerfile:3
--------------------
1 | FROM ubuntu:latest
2 |
3 | >>> ENV THEENV
4 | RUN echo $THEENV
5 |
--------------------
ERROR: failed to solve: ENV must have two arguments

However, we can remove the ENV instruction from the Dockerfile:

FROM ubuntu:latest
RUN echo $THEENV
CMD ["env"]

This allows us to build the image:

$ docker build -t envtest .
<-- SNIP -->
=> [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6 0.0s
=> => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6 0.0s
=> CACHED [2/2] RUN echo $THEENV 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
<-- SNIP -->

Then we can pass an environment variable via the docker run command using the -e flag:

$ docker run --rm -e THEENV=bar envtest
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=638cf682d61f
THEENV=bar
HOME=/root

Although the .env file is usually associated with Docker Compose, it can also be used with docker run.

$ cat .env
THEENV=bar

$ docker run --rm --env-file ./.env envtest
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=59efe1003811
THEENV=bar
HOME=/root

This can also be done using Docker Compose by using the environment key. Note that we use the variable format for the value:

services:
  envtest:
    build:
      context: .
    environment:
      THEENV: ${THEENV}

If we do not supply a value for THEENV, a warning is thrown:

$ docker compose up --build
WARN[0000] The "THEENV" variable is not set. Defaulting to a blank string.
<-- SNIP -->
=> [envtest 1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04 0.0s
=> => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6 0.0s
=> CACHED [envtest 2/2] RUN echo ${THEENV} 0.0s
=> [envtest] exporting to image 0.0s
<-- SNIP -->
✔ Container dd-envtest-1 Recreated 0.1s
Attaching to envtest-1
envtest-1 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
envtest-1 | HOSTNAME=816d164dc067
envtest-1 | THEENV=
envtest-1 | HOME=/root
envtest-1 exited with code 0

The value for our variable can be supplied in several different ways, as follows:

On the compose command line:

$ THEENV=bar docker compose up

[+] Running 2/0
✔ Synchronized File Shares 0.0s
✔ Container dd-envtest-1 Recreated 0.1s
Attaching to envtest-1
envtest-1 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
envtest-1 | HOSTNAME=20f67bb40c6a
envtest-1 | THEENV=bar
envtest-1 | HOME=/root
envtest-1 exited with code 0

In the shell environment on the host system:

$ export THEENV=bar
$ docker compose up

[+] Running 2/0
✔ Synchronized File Shares 0.0s
✔ Container dd-envtest-1 Created 0.0s
Attaching to envtest-1
envtest-1 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
envtest-1 | HOSTNAME=20f67bb40c6a
envtest-1 | THEENV=bar
envtest-1 | HOME=/root
envtest-1 exited with code 0

In the special .env file:

$ cat .env
THEENV=bar

$ docker compose up

[+] Running 2/0
✔ Synchronized File Shares 0.0s
✔ Container dd-envtest-1 Created 0.0s
Attaching to envtest-1
envtest-1 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
envtest-1 | HOSTNAME=20f67bb40c6a
envtest-1 | THEENV=bar
envtest-1 | HOME=/root
envtest-1 exited with code 0

Finally, when running services directly using docker compose run, you can use the -e flag to override the .env file.

$ docker compose run -e THEENV=bar envtest

[+] Creating 1/0
✔ Synchronized File Shares 0.0s
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=219e96494ddd
TERM=xterm
THEENV=bar
HOME=/root

The tl;dr

If you need to access a variable during the build process but not at runtime, use ARG. If you need to access the variable both during the build and at runtime, or only at runtime, use ENV.

To decide between them, consider the decision flow shown in Figure 1.

Both ARG and ENV can be overridden from the command line in docker run and docker compose, giving you a powerful way to dynamically update variables and build flexible workflows.

Learn more

Discover more Docker Best Practices.

Subscribe to the Docker Newsletter. 

Get the latest release of Docker Desktop.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Introducing Organization Access Tokens

In the past, securely managing access to organization resources has been difficult. The only way to gain access has been through an assigned user’s personal access tokens. Whether these users are your engineer’s accounts, bot accounts, or service accounts, they often become points of risk for your organization.

Now, we’re pleased to introduce a long-awaited feature: organization access tokens.

Organization access tokens are like personal access tokens, but at an organizational level with many improvements and features. In this post, we walk through a few reasons why this feature release is so exciting.

Frictionless management

Every day, we are reducing the friction for organizations and engineers using our products. We want you working on your projects, not managing your development tools. 

Organization access tokens do not require you to manage groups and repository assignments the way user accounts do. This gives you a straightforward way to manage the access each token has, instead of managing users and their placement within the organization.

If your organization has SSO enabled and enforced, you have likely run into the issue where machine or service accounts cannot log in easily because they don’t have the ability to log into your identity provider. With organization access tokens, this is no longer a problem.

Did someone leave your organization? No problem! With organization access tokens, you are still in control of the token instead of having to track down which tokens were on that user’s account and deal with the resulting challenges.

Fine-grained access

Organization access tokens introduce a new way to allow for tokens to access resources within your organization. These tokens can be assigned to specific repositories with specific actions for full access management with “least privilege” applied. Of course, you can also allow access to all resources in your organization.

Expirations

Another critical feature is the ability to set expirations for your organization access tokens. This is great for customers who have compliance requirements for token rotation or for those who just like the extra security.

Visibility

Management and registry actions all show up in your organization’s activity logs for each access token. Each token’s usage also shows up on your organization’s usage reports.

Business use cases and fair use

We believe that organization access tokens are useful in the context of teams and companies, which is why we are making them available to Docker Team and Docker Business subscribers. To keep usage secure and avoid misuse through an uncontrolled proliferation of tokens, we are introducing a limit on the number of organization access tokens based on subscription type: 10 for Team plans and 100 for Business plans.

Try organization access tokens

If you are on a team or business subscription, check out our documentation to learn more about using organization access tokens.
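As a minimal sketch of how a CI job or service account might authenticate with one of these tokens (the organization name and environment variable are placeholders, and the convention of using the organization name as the username should be verified against the documentation):

# Placeholder org name and token variable; confirm the username convention in the docs
$ echo "$DOCKER_OAT" | docker login --username myorg --password-stdin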

Learn more

Read the organization access tokens documentation.

Learn about Docker subscription plans.

Subscribe to the Docker Newsletter.

Authenticate and update to receive your subscription level’s newest Docker features.

New to Docker? Create an account.

Source: https://blog.docker.com/feed/

How to Improve Your DevOps Automation

DevOps brings together developers and operations teams to create better software by introducing organizational principles that encourage communication, collaboration, innovation, speed, security, and agility throughout the software development lifecycle. And, the popularity and adoption rates of DevOps continue to grow, with 83% of 10,000 global developers surveyed saying that they use the principles, according to an April 2024 report commissioned by the Continuous Delivery Foundation (CDF), a Linux Foundation project.

DevOps includes everything from continuous integration/improvement and continuous deployment/delivery (CI/CD) as code is created and modified, to critical automation capabilities covering a wide range of development processes. Also built into DevOps principles is a focus on creating better applications from code conception all the way through to end-user experiences. Before this unified framework existed, code typically was created in separate silos that did not easily allow collaboration or foster efficient management, speed, or quality. These conditions eventually inspired the DevOps framework and principles.  

DevOps principles and practices also help organizations by constantly integrating user feedback regarding application features, shortcomings, and code glitches, thereby reducing security and operational risks in code as it reaches production.

This blog post aims to help enterprises focus on one of these critical DevOps capabilities in particular — the use of automation to speed and streamline processes across the development lifecycle of applications — to further expand and drive the benefits of using DevOps processes within an organization.

As DevOps use continues to grow, more developers are finding that the Docker containerization platform integrates well as a crucial component of DevOps practices, especially due to its built-in automation features and capabilities.

What is DevOps automation?

DevOps automation is a major time-saver for developers and operations teams because it automates labor-intensive and repetitive processes, freeing developers to work on new code innovations and ideas that create business value.

Automating repetitive manual tasks using DevOps automation tools drives notable efficiencies and productivity boosts for developers and organizations, using automatic actions that eliminate frequent developer or operations team intervention. 

What DevOps processes can you automate?

DevOps automation is especially valuable because it can be used on a broad spectrum of tasks in the application development environment, including CI/CD pipelines and workflows, code writing, monitoring and logging, and Infrastructure as Code (IaC) tools. It can also help improve and streamline configuration management, infrastructure provisioning, unit tests, code testing, security steps and scans, troubleshooting, code review, deploying and delivering code, project management, and more.

By bringing beneficial and time-saving automation to the DevOps lifecycle, developers can create cleaner and more secure code with much less manual intervention and human error compared to traditional software development methods. 

Benefits of DevOps automation tools

For development and operations teams, using DevOps automation to streamline and improve their operations goes far beyond just reducing human error rates and increasing the efficiency and speed of code creation and the deployment process.

Other benefits of DevOps automation include improved consistency and reliability, delivery of predictable and repeatable results, and enhanced scalability and manageability of multiple applications and processes. These benefits become possible with automation because it reduces many human mistakes and miscalculations.

DevOps automation benefits can also include smoother collaboration among multiple developers working on applications at the same time by automatically handling merge conflicts, and performing automatic code testing for multiple developers at once. Automation that troubleshoots applications can also speed up project development times by immediately notifying systems personnel of problems as they arise.

How to automate DevOps with Docker

As a flexible tool for DevOps automation, Docker is available in four subscription levels, from the free Docker Personal version to the top-of-the-line Docker Business tier. 

Docker Business delivers a wide range of helpful tools that empower DevOps teams to identify development bottlenecks where automation can free up resources and resolve repetitive tasks and operations. The following tools are included with Docker Business. (Read our September 2024 announcement about upgraded Docker subscription plans that will deliver even more value, flexibility, and power to your development workflows.) 

Docker Image Access Management

With Docker Business, developers and operations teams can quickly start automating tasks using features such as Docker Image Access Management, which gives administrators control over the types of container images that developers can pull and use from Docker Hub. This includes Docker Official Images, Docker Verified Publisher Images, and community images. Using Image Access Management, developers and teams can more easily search private registries and community repositories for needed container images to use to build their applications. 

Image Access Management allows organizations to give developers freedom of choice while providing some guardrails to prevent developers from accidentally using untrusted, malicious community images as components of their applications. This is an important benefit, compared with only allowing developers to use a handful of internally built images, for example.

Docker Image Access Management is available only to Docker Business customers.  

Docker automated testing 

Other Docker DevOps automation features include automated testing through Docker Hub, which can automatically test changes to source code repositories using containers. Any Docker Hub repository can enable an autotest function that runs tests on pull requests to the source code repository, creating a continuous integration testing service.

Automated test files to perform the tests can be set up by creating a docker-compose.test.yml file, which defines a service that lists the tests to be run. The docker-compose.test.yml file should be placed in the same directory that contains the Dockerfile used to build the image.
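As a minimal sketch, such a file can be as small as the following, where the service is named sut (the name the autotest feature conventionally runs) and run_tests.sh is a placeholder for your own test script; check the automated testing documentation for specifics:

$ cat > docker-compose.test.yml <<'EOF'
services:
  # "sut" is the service autotest runs; run_tests.sh is a placeholder test script
  sut:
    build: .
    command: ./run_tests.sh
EOF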

Hardened Docker Desktop

To automate security within Docker, administrators can use a wide range of features within Hardened Docker Desktop, which is available to Docker Business subscribers. Hardened Docker Desktop security features aim to bolster the security of developer environments while causing minimal speed or performance impacts on developer experiences or productivity. 

These features allow administrators to enforce strict security settings, which prevent developers and containers from bypassing the controls intentionally or unintentionally. The features also enable enhanced container isolation capabilities to prevent potential security threats, such as malicious payloads, from breaching the Docker Desktop Linux VM and the underlying host.

Using Hardened Docker Desktop, security administrators can take more control and ownership over Docker Desktop configurations, removing and preventing potential changes by users, which is vital for security-conscious organizations.

Automated builds

Another automation and productivity tool is the Docker Automated builds feature, which automatically builds images from source code in an external repository and then pushes the built image to designated Docker repositories. Available in the Docker Business, Pro, or Teams tiers, Automated builds — also called autobuilds — create a list of branches and tags that can be built into Docker images using a series of commands. Automated builds can handle images of up to 10 GB in size.

Enhanced collaboration tools 

Throughout Docker’s unified suite, tools built to deliver enhanced collaboration are available to developers and operations teams to work together to get the most out of their projects and applications.

Everything from Docker Desktop to Docker Engine, Docker CLI, Docker Compose, Docker Build/BuildKit, Docker Desktop Extensions, and more are designed to enable developers and operations teams to accelerate productivity, reduce code errors, increase security, drive innovation, and save valuable time throughout the software development process. 

Easier scaling and orchestration with Kubernetes integration

Docker’s containerization platform also integrates well with the Kubernetes container orchestration platform, optimizing the developer experience for container development, deployment, and management. Docker and Kubernetes can work together using Docker Engine as a user-friendly and secure foundation for basic Kubernetes (K8s) functionality, or by using Docker Desktop for a more comprehensive approach that avoids potential challenges associated with do-it-yourself container configurations. Docker Desktop includes K8s setup at the push of a button, which is one of its numerous and useful automation features. 

Support and troubleshooting 

As Docker continues to mature, its knowledge base is constantly being expanded and deepened, with core documentation and resources freely available to Docker developers within the Docker ecosystem. And, because Docker uses a collaborative approach between developers and operations teams, developers can often find common answers to their inquiries and learn from each other to tackle most issues.

More information and help about using Docker can be found in the Docker Training page, which offers live and on-demand training and other resources to help developers and teams negotiate their Docker landscapes and learn fresh skills to resolve technical problems. 

Other resources: Docker Scout and Docker Build Cloud

Docker offers even more tools to help with automation, collaboration, and creating better and more nimble code for developer teams and operations managers.

Docker Scout, for example, is built to help organizations better protect their software supply chain security when using container images, which may contain software elements that are susceptible to security vulnerabilities. 

Docker Scout helps with this issue by proactively analyzing container images and compiling a Software Bill of Materials (SBOM), which is a detailed inventory of code included in an application or container. That SBOM is then matched against a continuously updated vulnerability database to pinpoint and correct security weaknesses to help make the code more secure.
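For example, developers can get a quick risk summary and a CVE listing for an image directly from the command line; the image name below is a placeholder:

# "myorg/myapp:1.0.0" is a placeholder image name
$ docker scout quickview myorg/myapp:1.0.0
$ docker scout cves myorg/myapp:1.0.0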

Docker Build Cloud is a Docker service to help developers build container images more quickly, both locally and in the cloud. Those builds run on cloud infrastructure that requires no configuration and where the environment is optimally dimensioned for all workloads using a remote build cache. This approach ensures fast builds anywhere for all team members. 

To use Docker Build Cloud, developers take the same steps they would take for a regular build using the command docker buildx build. With a regular build command, the build runs on a local instance of BuildKit, bundled with the Docker daemon. But when using Docker Build Cloud, the build request is sent to a BuildKit instance running remotely, in the cloud, with all data encrypted in transit. Docker Build Cloud provides several benefits over local builds, including faster build speed, shared build cache, and native multi-platform builds.

Future trends in DevOps automation

As DevOps automation continues to mature, it will gain more capabilities from artificial intelligence (AI), machine learning (ML), serverless architectures, cloud-native platforms, and other technologies across the IT landscape. 

Such advancements can be found in Docker’s AI collaborations with NVIDIA. For example, Docker Desktop dovetails with the NVIDIA AI Workbench, which is an easy-to-use toolkit that lets developers create, test, and customize AI and machine learning models on a PC or workstation and then scale them to a data center or public cloud. NVIDIA AI Workbench makes interactive development workflows easier, while automating technical tasks that can halt beginners and derail experts. 

DevOps automation is ripe for further improvements and enhancements from AI and ML in areas of agility, process improvements, and more for developers and operations teams. AI and ML will drive further labor savings for software development teams by delivering fresh new automated, self-service tools that free them up from a broader range of routine tasks, giving them more time to conduct valuable and critical work that will drive their companies forward.

Docker will be an important part of this changing landscape as the unified suites and tools continue to expand and deliver further new benefits and capabilities to DevOps, the Docker ecosystem, and developers and operations teams around the world.

Wrapping up

Improving DevOps automation by using the Docker containerization platform inside your business organization is a smart strategy that helps developers and operations teams deliver their best work with efficiency, creativity, and broad collaboration.

Docker Business plays a leadership role in enhancing DevOps automation in companies around the world as they look to automate their DevOps operations effectively.

Ready to automate your team’s DevOps processes? Find out how Docker Business can transform your development, or if you still have questions, reach out to one of our experts to get started!

Learn more

Subscribe to the Docker Newsletter. 

Learn about the 2024 announcement of upgraded Docker plans that are simpler and offer even more value. 

Learn how other DevOps teams use Docker. 

Find the perfect pricing for your team. 

Try the latest version of Docker Desktop. 

Visit Docker Resources to explore more materials.

Source: https://blog.docker.com/feed/

Leveraging Testcontainers for Complex Integration Testing in Mattermost Plugins

This post was contributed by Jesús Espino, Principal Engineer at Mattermost.

In the ever-evolving software development landscape, ensuring robust and reliable plugin integration is no small feat. For Mattermost, relying solely on mocks for plugin testing became a limitation, leading to brittle tests and overlooked integration issues. Enter Testcontainers, an open source tool that provides isolated Docker environments, making complex integration testing not only feasible but efficient. 

In this blog post, we dive into how Mattermost has embraced Testcontainers to overhaul its testing strategy, achieving greater automation, improved accuracy, and seamless plugin integration with minimal overhead.

The previous approach

In the past, Mattermost relied heavily on mocks to test plugins. While this approach had its merits, it also had significant drawbacks. The tests were brittle, meaning they would often break when changes were made to the codebase. This made the tests challenging to develop and maintain, as developers had to constantly update the mocks to reflect the changes in the code.

Furthermore, the use of mocks meant that the integration aspect of testing was largely overlooked. The tests did not account for how the different components of the system interacted with each other, which could lead to unforeseen issues in the production environment. 

The previous approach additionally did not allow for proper integration testing in an automated way. The lack of automation made the testing process time-consuming and prone to human error. These challenges necessitated a shift in Mattermost’s testing strategy, leading to the adoption of Testcontainers for complex integration testing.

Mattermost’s approach to integration testing

Testcontainers for Go

Mattermost uses Testcontainers for Go to create an isolated testing environment for our plugins. This testing environment includes the Mattermost server, the PostgreSQL server, and, in certain cases, an API mock server. The plugin is then installed on the Mattermost server, and through regular API calls or end-to-end testing frameworks like Playwright, we perform the required testing.

We have created a specialized Testcontainers module for the Mattermost server. This module uses PostgreSQL as a dependency, ensuring that the testing environment closely mirrors the production environment. Our module lets developers easily install and configure any plugin in the Mattermost server.

To improve the system’s isolation, the Mattermost module includes a container for the server and a container for the PostgreSQL database, which are connected through an internal Docker network.

Additionally, the Mattermost module exposes utility functionality that allows direct access to the database, to the Mattermost API through the Go client, and some utility functions that enable admins to create users, channels, teams, and change the configuration, among other things. This functionality is invaluable for performing complex operations during testing, including API calls, users/teams/channel creation, configuration changes, or even SQL query execution. 

This approach provides a powerful set of tools with which to set up our tests and prepare everything for verifying the behavior that we expect. Combined with the disposable nature of the test container instances, this makes the system easy to understand while remaining isolated.

This comprehensive approach to testing ensures that all aspects of the Mattermost server and its plugins are thoroughly tested, thereby increasing their reliability and functionality. But, let’s see a code example of the usage.

We can start setting up our Mattermost environment with a plugin like this:

pluginConfig := map[string]any{}
options := []mmcontainer.MattermostCustomizeRequestOption{
    mmcontainer.WithPlugin("sample.tar.gz", "sample", pluginConfig),
}
mattermost, err := mmcontainer.RunContainer(context.Background(), options...)
defer mattermost.Terminate(context.Background())

Once your Mattermost instance is initialized, you can create a test like this:

func TestSample(t *testing.T) {
    client, err := mattermost.GetClient()
    require.NoError(t, err)
    reqURL := client.URL + "/plugins/sample/sample-endpoint"
    resp, err := client.DoAPIRequest(context.Background(), http.MethodGet, reqURL, "", "")
    require.NoError(t, err, "cannot fetch url %s", reqURL)
    defer resp.Body.Close()
    bodyBytes, err := io.ReadAll(resp.Body)
    require.NoError(t, err)
    require.Equal(t, 200, resp.StatusCode)
    assert.Contains(t, string(bodyBytes), "sample-response")
}

Here, you can decide when you tear down your Mattermost instance and recreate it. Once per test? Once per a set of tests? It is up to you and depends strictly on your needs and the nature of your tests.

Testcontainers for Node.js

In addition to using Testcontainers for Go, Mattermost leverages Testcontainers for Node.js to set up our testing environment. In case you’re unfamiliar, Testcontainers for Node.js is a Node.js library that provides similar functionality to Testcontainers for Go. Using Testcontainers for Node.js, we can set up our environment in the same way we did with Testcontainers for Go. This allows us to write Playwright tests using JavaScript and run them in the isolated Mattermost environment created by Testcontainers, enabling us to perform integration testing that interacts directly with the plugin user interface. The code is available on GitHub.  

This approach provides the same advantages as Testcontainers for Go, and it allows us to use a more interface-based testing tool — like Playwright in this case. Let me show a bit of code with the Node.js and Playwright implementation:

We start and stop the containers for each test:

test.beforeAll(async () => { mattermost = await RunContainer() })
test.afterAll(async () => { await mattermost.stop(); })

Then we can use our Mattermost instance like any other server running to run our Playwright tests:

test.describe('sample slash command', () => {
  test('try to run a sample slash command', async ({ page }) => {
    const url = mattermost.url()
    await login(page, url, "regularuser", "regularuser")
    await expect(page.getByLabel('town square public channel')).toBeVisible();
    await page.getByTestId('post_textbox').fill("/sample run")
    await page.getByTestId('SendMessageButton').click();
    await expect(page.getByText('Sample command result', { exact: true })).toBeVisible();
    await logout(page)
  });
});

With these two approaches, we can create integration tests covering the API and the interface without having to mock or use any other synthetic environment. Also, we can test things in absolute isolation because we consciously decide whether we want to reuse the Testcontainers instances. We can also reach a high degree of isolation and thereby avoid the flakiness induced by contaminated environments when doing end-to-end testing.

Examples of usage

Currently, we are using this approach for two plugins.

1. Mattermost AI Copilot

This integration helps users in their daily tasks using AI large language models (LLMs), providing things like thread and meeting summarization and context-based interrogation.

This plugin has a rich interface, so we used the Testcontainers for Node and Playwright approach to ensure we could properly test the system through the interface. Also, this plugin needs to call the AI LLM through an API. To avoid that resource-heavy task, we use an API mock, another container that simulates any API.

This approach gives us confidence not only in the server-side code but in the interface as well, because we can ensure that we aren’t breaking anything during development.

2. Mattermost MS Teams plugin

This integration is designed to connect MS Teams and Mattermost in a seamless way, synchronizing messages between both platforms.

For this plugin, we mainly need to do API calls, so we used Testcontainers for Go and directly hit the API using a client written in Go. In this case, again, our plugin depends on a third-party service: the Microsoft Graph API from Microsoft. For that, we also use an API mock, enabling us to test the whole plugin without depending on the third-party service.

We still have some integration tests with the real Teams API using the same Testcontainers infrastructure to ensure that we are properly handling the Microsoft Graph calls.

Benefits of using Testcontainers libraries

Using Testcontainers for integration testing offers benefits, such as:

Isolation: Each test runs in its own Docker container, which means that tests are completely isolated from each other. This approach prevents tests from interfering with one another and ensures that each test starts with a clean slate.

Repeatability: Because the testing environment is set up automatically, the tests are highly repeatable. This means that developers can run the tests multiple times and get the same results, which increases the reliability of the tests.

Ease of use: Testcontainers is easy to use, as it handles all the complexities of setting up and tearing down Docker containers. This allows developers to focus on writing tests rather than managing the testing environment.

Testing made easy with Testcontainers

Mattermost’s use of Testcontainers libraries for complex integration testing in their plugins is a testament to the power and versatility of Testcontainers.

By creating a well-isolated and repeatable testing environment, Mattermost ensures that our plugins are thoroughly tested and highly reliable.

Learn more

Subscribe to the Docker Newsletter. 

Visit the Testcontainers website.

Get started with Testcontainers Cloud by creating a free account.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

A New Era at Docker: How We’re Investing in Innovation and Customer Relationships

I recently joined Docker in January as Chief Revenue Officer. My role is responsible for the entire customer journey, from your first interaction with Docker’s sales org to post-sales support and onboarding. As I speak with customers and hear stories about their journey with Docker over the past decade, I’m often reminded of the immense trust you’ve placed in us. Whether you’ve been with us from the days of Docker Swarm or have more recently started using Docker Desktop, your partnership has been invaluable in shaping who we are today. 

I want to take a moment to personally thank you for being part of our story, especially as we continue to evolve in a rapidly changing ecosystem.

We know that change can bring challenges. Over the years, as containers became the backbone of modern software development, Docker has evolved alongside them. This evolution has not always been easy and I understand that shifts in our product offerings, changes in pricing, and recent adjustments to our subscription plans have impacted many of you. Our priority now, as it always has been, is to deliver unrivaled value to you. 

We recognize that to continue innovating and addressing the complex needs of modern developers, we must continue to invest in Docker products and our relationships with customers like you. This investment isn’t just about tools and features; it’s about creating a holistic ecosystem — a unified suite — that makes your development process more productive, secure, and manageable at an enterprise scale, while building a go-to-market organization that is equipped to support our growing customer base. 

To that end, we’ve redefined our strategy to focus on a deeper, more meaningful engagement with you. We’re committed to building stronger relationships, listening carefully to your feedback, and ensuring that the solutions we bring to market truly address your pain points. By focusing on your needs, we’re working to make every interaction with Docker more valuable, whether it’s through enhanced support, new features, or better licensing management. If you’d like to discuss this with me further, I’m happy to schedule time. (Reach out by email or connect with your Account Executive to set this up.)

Additionally, we’ve made key investments in our enterprise suite of products that surrounds Docker Desktop. We understand that the demands of modern development extend beyond the individual developer’s experience. Docker is the only container-first platform built specifically for development teams, improving developer experience and productivity while meeting the security and control needs of modern enterprises. Docker offers a comprehensive suite of enterprise-ready tools, cloud services, trusted content, and a collaborative community that helps streamline workflows and maximize development efficiency.

As we continue to invest in both vectors above, we’re excited about what lies ahead in our product roadmap. Our aim is simple: to help your teams develop with confidence, knowing that Docker is a trusted partner invested in your success. I am personally dedicated to ensuring that our roadmap reflects your needs and that our solutions empower your teams to reach their full potential.

Thank you again for your continued trust and partnership. We wouldn’t be here without you, and I look forward to what we will achieve together.

Learn more

Read Announcing Upgraded Docker Plans: Simpler, More Value, Better Development and Productivity.

Subscribe to the Docker Newsletter. 

Source: https://blog.docker.com/feed/

Using an AI Assistant to Script Tools

This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

LLMs are now quite good at transforming data. For example, we were recently working with some data generated by the Pylint tool. This tool generates big arrays of code violations. 

Here’s an example showing the kind of data that gets returned.

[
  {
    "type": "convention",
    "module": "app",
    "line": 1,
    "column": 0,
    "endLine": 1,
    "endColumn": 13,
    "path": "src/app.py",
    "symbol": "missing-module-docstring",
    "message": "Missing module docstring",
    "message-id": "C0114"
  },
  {

  }
]

During this session with our AI assistant, we decided that it would be helpful to create a database and insert the data to make it easier for the AI to analyze (LLMs are very good at writing SQL). As is now our habit, we wrote a quick prompt to see if the assistant could generate the SQL:

1. Read the json data from /thread/violations.json
2. For each element in the array, transform each element into two SQL INSERT statements.

* The first should insert columns PATH, START_LINE, END_LINE, START_COLUMN, END_COLUMN into a table named RANGES.
* The second should insert columns MESSAGE, TYPE, RANGE, and VIOLATION_ID into a table named VIOLATIONS.

LLMs are obviously good at tasks of this kind, and this was no exception. Our prompt engine had been augmented with a function to read local files but, besides that, this was a pretty straightforward prompt (we used GPT-3.5). The LLM responded with the correct INSERT statements.

INSERT INTO RANGES (PATH, START_LINE, END_LINE, START_COLUMN, END_COLUMN)
VALUES ('src/app.py', 1, 1, 0, 13);
INSERT INTO VIOLATIONS (MESSAGE, TYPE, RANGE, VIOLATION_ID)
VALUES ('Missing module docstring', 'convention', (SELECT ID FROM RANGES WHERE PATH='src/app.py' AND START_LINE=1 AND START_COLUMN=0), 'C0114');

We’re starting to get accustomed to this kind of capability, so the result wasn’t too surprising. However, what about the context window here? It really doesn’t make a lot of sense to pass all of this data to the LLM, especially if this is a task that we’ll need to do continuously. Plus, this is really not how a programmer would have solved the problem.

Programmers would write programs.

So, instead of asking the LLM to do a thing, we should try asking it to write a program to do that same thing. Starting with the same prompt, let’s prefix it with “Please write a JavaScript program to …” In other words, let’s ask the LLM to describe how it would accomplish the task in JavaScript, and hopefully automate itself out of the loop. Here’s the adjusted prompt:

Write a javascript function that reads json from a file named `/thread/violations.json`.

For each element in the array, transform each element into two SQL INSERT statements.

* The first should insert columns PATH, START_LINE, END_LINE, START_COLUMN, END_COLUMN into a table named RANGES.
* The second should insert columns MESSAGE, TYPE, RANGE, and VIOLATION_ID into a table named VIOLATIONS.

and write them to the file /thread/insert.sql

Now execute the javascript code in a container.

Tool code

Current LLMs can do tasks of this kind. Here’s a sample of the output generated by GPT-3.5. A quick scan will convince many of you that this is probably going to do the trick. However, if our prompt engine is already running in Docker, we might as well just run it:

const fs = require('fs');
const crypto = require('crypto');

function getRandomId() {
  return crypto.randomBytes(16).toString('hex');
}

function escapeString(str) {
  return str.replace(/'/g, "''");
}

let data = JSON.parse(fs.readFileSync('/thread/violations.json', 'utf8'));
let sqlStatements = '';

data.forEach((violation) => {
  let id = getRandomId();
  sqlStatements += `INSERT INTO RANGES(ID, PATH, START_LINE, END_LINE, START_COLUMN, END_COLUMN) VALUES('${id}', '${escapeString(violation.path)}', ${violation.line}, ${violation.endLine}, ${violation.column}, ${violation.endColumn});\n`;
  sqlStatements += `INSERT INTO VIOLATIONS(MESSAGE, TYPE, RANGE) VALUES('${escapeString(violation.message)}', '${escapeString(violation.type)}', '${id}');\n`;
});

fs.writeFileSync('/thread/insert.sql', sqlStatements);

Running the tool

You may not have noticed but at the end of our prompt, we added a final instruction: “Now execute the JavaScript code in a container”. This is a nice addition to our session, because it means we get to see the results.

This is also where tool calling comes back into the picture. To give our AI the capacity to try running the program that it has just written, we have defined a new function to create an isolated runtime sandbox for trying out our new tool.

Here’s the agent’s new tool definition:

tools:
  - name: run-javascript-sandbox
    description: execute javascript code in a container
    parameters:
      type: object
      properties:
        javascript:
          type: string
          description: the javascript code to run
    container:
      image: vonwig/javascript-runner
      command:
        - "{{javascript|safe}}"

We’ve asked the AI assistant to generate a tool from a description of that tool. As long as the description of the tools doesn’t change, the workflow won’t have to go back to the AI to ask it to build a new tool version.

The role of Docker in this pattern is to create the sandbox for this code to run. This function really doesn’t need much of a runtime, so we give it a pretty small sandbox; a rough equivalent docker run invocation is sketched after the list below.

No access to a network.

No access to the host file system (does have access to isolated volumes for sharing data between tools).

No access to GPU.

Almost no access to software besides the Node.js runtime (no shell for example).
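Roughly, those constraints translate into a docker run invocation like the following sketch. This is only an approximation of what the prompt engine does with the tool definition above; the volume name and the inline JavaScript are placeholders:

# Approximation only: "tool-data" and the inline script are placeholders
$ docker run --rm --network none -v tool-data:/thread vonwig/javascript-runner "console.log('hello')"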

The ability for one tool to create another tool is not just a trick. It has very practical implications for the kinds of workflows that we can build up, because it gives us a way to control the volume of data sent to LLMs, and it gives the assistant a way to “automate” itself out of the loop.

Next steps

This example was a bit abstract but in our next post, we will describe the practical scenarios that have driven us to look at this idea of prompts generating new tools. Most of the workflows we’re exploring are still just off-the-shelf tools like Pylint, SQLite, and tree_sitter (which we embed using Docker, of course!). For example:

Use pylint to extract violations from my codebase.

Transform the violations into SQL and then send that to a new SQLite.

Find the most common violations of type error and show me the top level code blocks containing them.

However, you’ll also see that part of being able to author workflows of this kind is being able to recognize when you just need to add a custom tool to the mix.

Read the Docker Labs GenAI series to see more of what we’ve been working on.

Learn more

Subscribe to the Docker Newsletter. 

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Docker Best Practices: Using Tags and Labels to Manage Docker Image Sprawl

With many organizations moving to container-based workflows, keeping track of the different versions of your images can become a problem. Even smaller organizations can have hundreds of container images spanning from one-off development tests, through emergency variants to fix problems, all the way to core production images. This leads us to the question: How can we tame our image sprawl while still rapidly iterating our images?

A common misconception is that by using the “latest” tag, you are guaranteeing that you are pulling the “latest” version of the image. Unfortunately, this assumption is wrong — all latest means is “the last image pushed to this registry.”

Read on to learn more about how to avoid this pitfall when using Docker and how to get a handle on your Docker images.

Using tags

One way to address this issue is to use tags when creating an image. Adding one or more tags to an image helps you remember what it is intended for and helps others as well. One approach is always to tag images with their semantic versioning (semver), which lets you know what version you are deploying. This sounds like a great approach, and, to some extent, it is, but there is a wrinkle.

Unless you’ve configured your registry for immutable tags, tags can be changed. For example, you could tag my-great-app as v1.0.0 and push the image to the registry. However, nothing stops your colleague from pushing their updated version of the app with tag v1.0.0 as well. Now that tag points to their image, not yours. If you add in the convenience tag latest, things get a bit more murky.

Let’s look at an example:

FROM busybox:stable-glibc

# Create a script that outputs the version
RUN echo -e "#!/bin/sh\n" > /test.sh && \
    echo "echo \"This is version 1.0.0\"" >> /test.sh && \
    chmod +x /test.sh

# Set the entrypoint to run the script
ENTRYPOINT ["/bin/sh", "/test.sh"]

We build the above with docker build -t tagexample:1.0.0 . and run it.

$ docker run --rm tagexample:1.0.0
This is version 1.0.0

What if we run it without a tag specified?

$ docker run --rm tagexample
Unable to find image 'tagexample:latest' locally
docker: Error response from daemon: pull access denied for tagexample, repository does not exist or may require 'docker login'.
See 'docker run --help'.

Now we build with docker build -t tagexample . without specifying a version tag and run it.

$ docker run --rm tagexample
This is version 1.0.0

The latest tag is always applied to the most recent image built or pushed without an explicit tag. So, in our first test, we had one image in the repository with a tag of 1.0.0, but because we had not built or pushed anything without a tag, the latest tag did not point to an image. However, once we build or push an image without a tag, the latest tag is automatically applied to it.

Although it is tempting to always pull the latest tag, it’s rarely a good idea. The logical assumption — that this points to the most recent version of the image — is flawed. For example, another developer can update the application to version 1.0.1, build it with the tag 1.0.1, and push it. This results in the following:

$ docker run --rm tagexample:1.0.1
This is version 1.0.1

$ docker run --rm tagexample:latest
This is version 1.0.0

If you made the assumption that latest pointed to the highest version, you’d now be running an out-of-date version of the image.

The other issue is that there is no mechanism in place to prevent someone from inadvertently pushing with the wrong tag. For example, we could create another update to our code bringing it up to 1.0.2. We update the code, build the image, and push it — but we forget to change the tag to reflect the new version. Although it’s a small oversight, this action results in the following:

$ docker run --rm tagexample:1.0.1
This is version 1.0.2

Unfortunately, this happens all too frequently.

Using labels

Because we can’t trust tags, how should we ensure that we are able to identify our images? This is where the concept of adding metadata to our images becomes important.

The first attempt at using metadata to help manage images was the MAINTAINER instruction. This instruction sets the “Author” field (org.opencontainers.image.authors) in the generated image. However, this instruction has been deprecated in favor of the more powerful LABEL instruction. Unlike MAINTAINER, the LABEL instruction allows you to set arbitrary key/value pairs that can then be read with docker inspect as well as other tooling.

Unlike with tags, labels become part of the image, and when implemented properly, can provide a much better way to determine the version of an image. To return to our example above, let’s see how the use of a label would have made a difference.

To do this, we add the LABEL instruction to the Dockerfile, along with the key version and value 1.0.2.

FROM busybox:stable-glibc

LABEL version="1.0.2"

# Create a script that outputs the version
RUN echo -e "#!/bin/sh\n" > /test.sh && \
    echo "echo \"This is version 1.0.2\"" >> /test.sh && \
    chmod +x /test.sh

# Set the entrypoint to run the script
ENTRYPOINT ["/bin/sh", "/test.sh"]

Now, even if we make the same mistake and tag the image as version 1.0.1, we have a way to check which version we are using that does not involve running the container.

$ docker inspect --format='{{json .Config.Labels}}' tagexample:1.0.1
{"version":"1.0.2"}

Best practices

Although you can use any key/value as a LABEL, there are recommendations. The OCI provides a set of suggested labels within the org.opencontainers.image namespace, listed below:

org.opencontainers.image.created: The date and time on which the image was built (string, RFC 3339 date-time).
org.opencontainers.image.authors: Contact details of the people or organization responsible for the image (freeform string).
org.opencontainers.image.url: URL to find more information on the image (string).
org.opencontainers.image.documentation: URL to get documentation on the image (string).
org.opencontainers.image.source: URL to the source code for building the image (string).
org.opencontainers.image.version: Version of the packaged software (string).
org.opencontainers.image.revision: Source control revision identifier for the image (string).
org.opencontainers.image.vendor: Name of the distributing entity, organization, or individual (string).
org.opencontainers.image.licenses: License(s) under which contained software is distributed (string, SPDX License List).
org.opencontainers.image.ref.name: Name of the reference for a target (string).
org.opencontainers.image.title: Human-readable title of the image (string).
org.opencontainers.image.description: Human-readable description of the software packaged in the image (string).

Because LABEL takes any key/value, it is also possible to create custom labels. For example, labels specific to a team within a company could use the com.myorg.myteam namespace. Isolating these to a specific namespace ensures that they can easily be related back to the team that created the label.
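As an illustration, a Dockerfile that combines the suggested OCI labels with team-specific ones might look like the sketch below; the source URL, owner, and cost-center values are hypothetical placeholders:

FROM busybox:stable-glibc

# Suggested OCI labels
LABEL org.opencontainers.image.title="tagexample" \
      org.opencontainers.image.version="1.0.2" \
      org.opencontainers.image.source="https://github.com/myorg/tagexample"

# Custom, team-specific labels
LABEL com.myorg.myteam.owner="platform-team" \
      com.myorg.myteam.cost-center="CC-1234"

As before, docker inspect --format='{{json .Config.Labels}}' shows all of these labels together.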

Final thoughts

Image sprawl is a real problem for organizations, and, if not addressed, it can lead to confusion, rework, and potential production problems. By using tags and labels in a consistent manner, it is possible to eliminate these issues and provide a well-documented set of images that make work easier and not harder.

Learn more

Read the Docker Best Practices collection.

Subscribe to the Docker Newsletter. 

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Exploring Docker for DevOps: What It Is and How It Works

DevOps aims to dramatically improve the software development lifecycle by bringing together the formerly separated worlds of development and operations using principles that strive to make software creation more efficient. DevOps practices form a useful roadmap to help developers in every phase of the development lifecycle, from code planning to building, task automation, testing, monitoring, releasing, and deploying applications.

As DevOps use continues to expand, many developers and organizations find that the Docker containerization platform integrates well as a crucial component of DevOps practices. Using Docker, developers can collaborate in standardized environments, using local containers and remote container tools to write their code and share their work.

In this blog post, we will explore the use of Docker within DevOps practices and explain how the combination can help developers create more efficient and powerful workflows.

What is DevOps?

DevOps practices are beneficial in the world of developers and code creation because they encourage smart planning, collaboration, and orderly processes and management throughout the software development pipeline. Without unified DevOps principles, code is typically created in individual silos that can hamper creativity, efficient management, speed, and quality.

Bringing software developers, operations teams, and processes together under DevOps principles can improve both developer and organizational efficiency through increased collaboration, agility, and innovation. DevOps brings these positive changes to organizations by constantly integrating user feedback regarding application features, shortcomings, and code glitches and — by making changes as needed on the fly — reducing operational and security risks in production code.

CI/CD

In addition to collaboration, DevOps principles are built around procedures for continuous integration (CI) and continuous delivery/deployment (CD) of code, shortening the cycle between development and production. This CI/CD approach lets teams more quickly adapt to feedback and thus build better applications from code conception all the way through to end-user experiences.

Using CI, developers can frequently and automatically integrate their changes into the source code as they create new code, while the CD side tests and delivers those vetted changes to the production environment. By integrating CI/CD practices, developers can create cleaner and safer code and resolve bugs ahead of production through automation, collaboration, and strong QA pipelines. 
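In a containerized project, the CI stage of such a pipeline often reduces to a handful of Docker commands. The following is a minimal sketch; the registry, image name, commit variable, and test script are hypothetical placeholders:

# Build the image, tagged with the commit that triggered the pipeline
docker build -t registry.example.com/my-app:${GIT_COMMIT} .

# Run the test suite inside the freshly built image
docker run --rm registry.example.com/my-app:${GIT_COMMIT} ./run-tests.sh

# On success, push the vetted image so the CD stage can deploy it
docker push registry.example.com/my-app:${GIT_COMMIT}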

What is Docker?

The Docker containerization platform is a suite of tools, standards, and services that enable DevOps practices for application developers. Docker is used to develop, ship, and run applications within lightweight containers. This approach allows developers to separate their applications from their business infrastructure, giving them the power to deliver better code more quickly. 

The Docker platform enables developers to package and run their application code in lightweight, local, standardized containers, which provide a loosely isolated environment that contains everything needed to run the application — including tools, packages, and libraries. By using Docker containers on a Docker client, developers can run an application without worrying about what is installed on the host, giving them huge flexibility, security, and collaborative advantages over virtual machines. 

In this controlled environment, developers can use Docker to create, monitor, and push their applications into a test environment, run automated and manual tests as needed, correct bugs, and then validate the code before deploying it for use in production. 

Docker also allows developers to run many containers simultaneously on a host, while allowing those same containers to be shared with others. Such a collaborative workspace can foster healthy and direct communications between developers, allowing development processes to become easier, more accurate, and more secure. 

Containers vs. virtualization

Containers are an abstraction that packages application code and dependencies together. Instances of the container can then be created, started, stopped, moved, or deleted using the Docker API or command-line interface (CLI). Containers can be connected to one or more networks, be attached to storage, or create new images based on their current states. 

Containers differ from virtual machines, which use a software abstraction layer on top of computer hardware, allowing the hardware to be shared more efficiently in multiple instances that will run individual applications. Docker containers require fewer physical hardware resources than virtual machines, and they also offer faster startup times and lower overhead. This makes Docker ideal for high-velocity environments, where rapid software development cycles and scalability are crucial. 

Basic components of Docker 

The basic components of Docker include:

Docker images: Docker images are the blueprints for your containers. They are read-only templates that contain the instructions for creating a Docker container. You can think of a container image as a snapshot of a specific state of your application.

Containers: Containers are the instances of Docker images. They are lightweight and portable, encapsulating your application along with its dependencies. Containers can be created, started, stopped, moved, and deleted using simple Docker commands.

Dockerfiles: A Dockerfile is a text document containing a series of instructions on how to build a Docker image. It includes commands for specifying the base image, copying files, installing dependencies, and setting up the environment, as shown in the sketch after this list.

Docker Engine: Docker Engine is the core component of Docker. It’s a client-server application that includes a server with a long-running daemon process, APIs for interacting with the daemon, and a CLI client.

Docker Desktop: Docker Desktop is a commercial product sold and supported by Docker, Inc. It includes the Docker Engine and other open source components, proprietary components, and features like an intuitive GUI, synchronized file shares, access to cloud resources, debugging features, native host integration, governance, security features, and administrative settings management. 

Docker Hub: Docker Hub is a public registry where you can store and share Docker images. It serves as a central place to find official Docker images and user-contributed images. You can also use Docker Hub to automate your workflows by connecting it to your CI/CD pipelines.
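To make the Dockerfile component concrete, here is a minimal sketch for a hypothetical Python web application; the file names and dependencies are placeholders, not a prescribed layout:

# Start from an official base image
FROM python:3.12-slim

# Copy the application code into the image
WORKDIR /app
COPY . .

# Install the application's dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Define how a container built from this image starts
CMD ["python", "app.py"]

Building this file with docker build -t my-app . produces the my-app image referenced in the commands below.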

Basic Docker commands

Docker commands are simple and intuitive. For example:

docker run: Runs a Docker container from a specified image. For example, docker run hello-world will run a container from the “hello-world” image.

docker build: Builds an image from a Dockerfile. For example, docker build -t my-app . will build an image named “my-app” from the Dockerfile in the current directory.

docker pull: Pulls an image from Docker Hub. For example, docker pull nginx will download the latest NGINX image from Docker Hub.

docker ps: Lists all running containers. For example, docker ps -a will list all containers, including stopped ones.

docker stop: Stops a running Docker container. For example, docker stop <container_id> will stop the container with the specified ID.

docker rm: Removes a stopped container. For example, docker rm <container_id> will remove the container with the specified ID.
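Put together, a typical local loop with the hypothetical my-app image from the Dockerfile sketch above might look like this:

# Build the image from the Dockerfile in the current directory
docker build -t my-app .

# Start a container from it in the background, publishing port 8080
docker run -d -p 8080:8080 --name my-app-dev my-app

# Confirm it is running, then stop and remove it when finished
docker ps
docker stop my-app-dev
docker rm my-app-dev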

How Docker is used in DevOps

One of Docker’s most important benefits for developers is its critical role in facilitating CI/CD in the application development process. This makes it easier and more seamless for developers to work together to create better code.

Docker provides a build environment in which developers can build and test their applications inside containers, with consistent, reproducible results that are difficult to achieve in other development environments. Developers can use Dockerfiles to define the exact requirements for their build environments, including programming runtimes, operating systems, binaries, and more.

Using Docker as a build environment also makes application maintenance easier. For example, you can update to a new version of a programming runtime by just changing a tag or digest in a Dockerfile. That is easier than the process required on a virtual machine to manually reinstall a newer version and update the related configuration files.
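For example, upgrading the runtime of the hypothetical Python application above is a one-line change to its Dockerfile, followed by a rebuild; the version numbers here are illustrative:

# Before: the image is pinned to an older runtime
FROM python:3.11-slim

# After: bump the tag (or a pinned digest) and rebuild with docker build
FROM python:3.12-slim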

Automated testing is also easier using Docker Hub, which can automatically test changes to source code repositories using containers or push applications into a test environment and run automated and manual tests.

Docker can be integrated with DevOps tools including Jenkins, GitLab, Kubernetes, and others, simplifying DevOps processes by automating pipelines and scaling operations as needed. 

Benefits of using Docker for DevOps 

Because the Docker containers used for development are the same ones that are moved along for testing and production, the Docker platform provides consistency across environments and delivers big benefits to developer teams and operations managers. Each Docker container is isolated from others being run, eliminating conflicting dependencies. Developers are empowered to build, run, and test their code while collaborating with others and using all the resources available to them within the Docker platform environment. 

Other benefits to developers include speed and agility, resource efficiency, error reduction, integrated version control, standardization, and the ability to write code once and run it on any system. Additionally, applications built on Docker can be pushed easily to customers on any computing environment, assuring a quick, easy, and consistent delivery and deployment process.

4 Common Docker challenges in DevOps

Implementing Docker in a DevOps environment can offer numerous benefits, but it also presents several challenges that teams must navigate:

1. Learning curve and skills gap

Docker introduces new concepts and technologies that require teams to acquire new skills. This can be a significant hurdle, especially if the team lacks experience with containerization. Docker’s robust documentation, guides, and international community can help new users ramp up quickly.

2. Security concerns

Ensuring the security of containerized applications involves addressing vulnerabilities in container images, managing secrets, and implementing network policies. Misconfigurations and running containers with root privileges can lead to security risks. Docker does, however, provide security guardrails for both administrators and developers.

The Docker Business subscription provides security and management at scale. For example, administrators can enforce sign-ins across Docker products for developers and efficiently manage, scale, and secure Docker Desktop instances using DevOps security controls like Enhanced Container Isolation and Registry Access Management.

Additionally, Docker offers security-focused tools, like Docker Scout, which helps administrators and developers secure the software supply chain by proactively monitoring image vulnerabilities and implementing remediation strategies. Introduced in 2024, Docker Scout health scores rate the security and compliance status of container images within Docker Hub, providing a single, quantifiable metric to represent the “health” of an image. This feature addresses one of the key friction points in developer-led software security — the lack of security expertise — and makes it easier for developers to turn critical insights from tools into actionable steps.

3. Microservice architectures

Containers and the ecosystem around them are specifically geared towards microservice architectures. You can run a monolith in a container, but you will not be able to leverage all of the benefits and paradigms of containers in that way. Instead, containers can be a useful gateway to microservices. Users can start pulling out individual pieces from a monolith into more containers over time.

4. Image management

Image management in Docker can also be a challenge for developers and teams as they search private registries and community repositories for images to use in building their applications. Docker Image Access Management can help with this challenge, as it gives administrators control over which types of images — such as Docker Official Images, Docker Verified Publisher Images, or community images — their developers can pull from Docker Hub. Docker Hub also helps by curating Docker Official Images and verifying content from trusted publishers.

Using Image Access Management controls helps prevent developers from accidentally using an untrusted or malicious community image as a component of their application. Note that Docker Image Access Management is available only with Docker Business, the company’s top subscription tier.

Another important tool here is Docker Scout. It is built to help organizations better protect their software supply chain security when using container images, which consist of layers and software packages that may be susceptible to security vulnerabilities. Docker Scout helps with this issue by proactively analyzing container images and compiling a Software Bill of Materials (SBOM), which is a detailed inventory of code included in an application or container. That SBOM is then matched against a continuously updated vulnerability database to pinpoint and correct security weaknesses to make the code more secure.
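As a sketch of what this looks like day to day, the Docker Scout CLI can be pointed at any image; the image name below is a placeholder, and the exact output and subcommands may vary with your Scout version:

# Summarize the vulnerability status of an image
docker scout quickview myorg/my-app:1.0.0

# List the known CVEs found in the image's packages
docker scout cves myorg/my-app:1.0.0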

More information and help with using Docker can be found on the Docker Trainings page, which offers training webcasts and other resources to help developers and teams build their Docker skills and answer technical questions.

Examples of DevOps using Docker

Improving DevOps workflows is a major goal for many enterprises as they struggle to improve operations and developer productivity and to produce cleaner, more secure, and better code.

The Warehouse Group

At The Warehouse Group, New Zealand’s largest retail store chain with some 300 stores, Docker was introduced in 2016 to revamp its systems and processes after previous VMware deployments resulted in long setup times, inconsistent environments, and slow deployment cycles. 

“One of the key benefits we have seen from using Docker is that it enables a very flexible work environment,” said Matt Law, the chapter lead of DevOps for the company. “Developers can build and test applications locally on their own machines with consistency across environments, thanks to Docker’s containerization approach.”

Docker brought new autonomy to the company’s developers so they could test ideas and find new and better ways to solve bottlenecks, said Law. “That is a key philosophy that we have here — enabling developers to experiment with tooling to help them prove or disprove their philosophies or theories.”

Ataccama Corporation

Another Docker customer, Ataccama Corp., a Toronto-based data management software vendor, adopted Docker and DevOps practices when it moved to scale its business by moving from physical servers to cloud platforms like AWS and Azure to gain agility, scalability, and cost efficiencies using containerization. 

For Ataccama, Docker delivered rapid deployment, simplified application management, and seamless portability between environments, which brought accelerated feature development, increased efficiency and performance, valuable microservices capabilities, and required security and high availability. To boost the value of Docker for its developers and IT managers, Ataccama provided container and DevOps skills training and promoted collaboration to make Docker an integral tool and platform for the company and its operations.

“What makes Docker a class apart is its support for open standards like Open Container Initiative (OCI) and its amazing flexibility,” said Vladimir Mikhalev, senior DevOps engineer at Ataccama. “It goes far beyond just running containers. With Docker, we can build, share, and manage containerized apps seamlessly across infrastructure in a way that most tools can’t match.”

The most impactful feature of Docker is its ability to bundle an app, configuration, and dependencies into a single standardized unit, said Mikhalev. “This level of encapsulation has been a game-changer for eliminating environment inconsistencies.”

Wrapping up

Docker provides a transformative impact for enterprises that have adopted DevOps practices. The Docker platform enables developers to create, collaborate, test, monitor, ship, and run applications within lightweight containers, giving them the power to deliver better code more quickly. 

Docker simplifies and empowers development processes, enhancing productivity and improving the reliability of applications across different environments. 

Find the right Docker subscription to bolster your DevOps workflow. 

Learn more

Read Docker for Web Developers: Getting Started with the Basics.

Subscribe to the Docker Newsletter. 

Learn how your team can succeed with Docker. 

Find the perfect pricing for your team. 

Download the latest version of Docker Desktop. 

Visit Docker Resources to explore more materials.

Source: https://blog.docker.com/feed/

2024 Docker State of Application Development Survey: Share Your Thoughts on Development

Welcome to the third annual Docker State of Application Development survey!

Please help us better understand and serve the application development community with just 20-30 minutes of your time. We want to know where you’re focused, what you’re working on, and what is most important to you. Your thoughts and feedback will help us build the best products and experiences for you.

And, we don’t just keep this information for ourselves — we share it with you! [1] We hope you saw our recent report on the 2023 State of Application Development Survey. The engagement of our community allowed us to better understand where developers are facing challenges, what tools they like, and what they’re excited about. We’ve been using this information to give our community the tools and features they need.

Take the Docker State of Application Development survey now!

By participating in the survey, you can be entered into a raffle for a chance to win [2] one of the following prizes:

1 laptop computer (an Apple M3 MacBook Pro 16″)

2 game consoles with VR headsets: PlayStation Virtual Reality headset and a PlayStation 5

5 $300 Amazon.com gift cards 

50 exclusive Docker swag sets 

Additionally, the first 200 respondents to complete the survey will receive an exclusive pair of Docker socks!

The survey is open from September 23rd, 2024 (7AM PST) to November 20, 2024 (11:59PM PST). 

We’ll choose the winners randomly in accordance with the promotion’s official rules [2]. Winners will be notified via email by January 10, 2025.

The Docker State of Application Development Survey only takes about 20-30 minutes to complete. We appreciate every contribution and opinion. Your voice counts!

[1] Data will be reported publicly only in aggregate and without personally identifying information.
[2] Docker State of Application Development Promotion Official Rules.
Source: https://blog.docker.com/feed/