Docker Desktop for Mac: QEMU Virtualization Option to be Deprecated in 90 Days

We are announcing the upcoming deprecation of QEMU as a virtualization option for Docker Desktop on Apple Silicon Macs. After serving as our legacy virtualization solution during the early transition to Apple Silicon, QEMU will be fully deprecated 90 days from today, on July 14, 2025. Moving to Apple Virtualization Framework or Docker VMM ensures optimal performance going forward.

Why We’re Making This Change

Our telemetry shows that a very small percentage of users are still using the QEMU option. We’ve maintained QEMU support for backward compatibility, but both Docker VMM and Apple Virtualization Framework now offer:

Significantly better performance

Improved stability

Enhanced compatibility with macOS updates

Better integration with Apple Silicon architecture

What This Means For You

If you’re currently using QEMU as your Virtual Machine Manager (VMM) on Docker Desktop for Mac:

Your current installation will continue to work normally during the 90-day transition period

After July 14, 2025, Docker Desktop releases will automatically migrate your environment to Apple Virtualization Framework

You’ll experience improved performance and stability with the newer virtualization options

Migration Plan

The migration process will be smooth and straightforward:

Users on the latest Docker Desktop release will be automatically migrated to Apple Virtualization Framework after the 90-day period

During the transition period, you can manually switch to either Docker VMM (our fastest option for Apple Silicon Macs) or Apple Virtualization Framework through Settings > General > Virtual Machine Options

For 30 days after the deprecation date, the QEMU option will remain available in settings for users who encounter migration issues

After this extended period, the QEMU option will be fully removed

Note: This deprecation does not affect QEMU’s role in emulating non-native architectures for multi-platform builds.

What You Should Do Now

We recommend proactively switching to one of our newer VMM options before the automatic migration:

Update to the latest version of Docker Desktop for Mac

Open Docker Desktop Settings > General

Under “Choose Virtual Machine Manager (VMM)” select either:

Docker VMM (BETA) – Our fastest option for Apple Silicon Macs

Apple Virtualization Framework – A mature, high-performance alternative

Questions or Concerns?

If you have questions or encounter any issues during migration, please:

Visit our documentation

Reach out to us via GitHub issues

Join the conversation on the Docker Community Forums

We’re committed to making this transition as seamless as possible while delivering the best development experience on macOS.
Source: https://blog.docker.com/feed/

New Docker Extension for Visual Studio Code

Today, we are excited to announce the release of a new, open-source Docker Language Server and Docker DX VS Code extension. In a joint collaboration between Docker and the Microsoft Container Tools team, this new integration enhances the existing Docker extension with improved Dockerfile linting, inline image vulnerability checks, Docker Bake file support, and outlines for Docker Compose files. By working directly with Microsoft, we’re ensuring a native, high-performance experience that complements the existing developer workflow. It’s the next evolution of Docker tooling in VS Code — built to help you move faster, catch issues earlier, and focus on what matters most: building great software.

What’s the Docker DX extension?

The Docker DX extension is focused on providing developers with faster feedback as they edit. Whether you’re authoring a complex Compose file or fine-tuning a Dockerfile, the extension surfaces relevant suggestions, validations, and warnings in real time. 

Key features include:

Dockerfile linting: Get build warnings and best-practice suggestions directly from BuildKit and Buildx.

Image vulnerability remediation (experimental): Flags references to container images with known vulnerabilities directly in Dockerfiles.

Bake file support: Includes code completion, variable navigation, and inline suggestions for generating targets based on your Dockerfile stages.

Compose file outline: Easily navigate complex Compose files with an outline view in the editor.

If you’re already using the Docker VS Code extension, the new features are included — just update the extension and start using them!

Dockerfile linting and vulnerability remediation

The inline Dockerfile linting provides warnings and best-practice guidance for writing Dockerfiles from the experts at Docker, powered by Build Checks. Potential vulnerabilities are highlighted directly in the editor with context about their severity and impact, powered by Docker Scout.

Figure 1: Providing actionable recommendations for fixing vulnerabilities and optimizing Dockerfiles

Early feedback directly in Dockerfiles keeps you focused and saves you and your team time debugging and remediating later.
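To make that concrete, here is a small, hypothetical Dockerfile fragment of the kind the linter flags. Build checks such as FromAsCasing and JSONArgsRecommended would warn about the lowercase as keyword and the shell-form CMD:

# Hypothetical fragment: both instructions below trigger build-check warnings
FROM node:23.6.1-alpine as builder
CMD node index.js

Switching to FROM node:23.6.1-alpine AS builder and CMD ["node", "index.js"] clears both warnings.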

Docker Bake files

The Docker DX extension makes authoring and editing Docker Bake files quick and easy. It provides code completion, code navigation, and error reporting to make editing Bake files a breeze. The extension will also look at your Dockerfile and suggest Bake targets based on the build stages you have defined in your Dockerfile.
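If you haven't written a Bake file before, here is a minimal docker-bake.hcl sketch of the kind the extension helps you author (the target name, image tag, and the runtime stage it references are hypothetical placeholders):

# docker-bake.hcl (minimal sketch)
group "default" {
  targets = ["app"]
}

target "app" {
  dockerfile = "Dockerfile"
  target     = "runtime"                      # a build stage assumed to exist in your Dockerfile
  tags       = ["myorg/myapp:latest"]
  platforms  = ["linux/amd64", "linux/arm64"]
}

The extension provides completion and error reporting for files like this, and suggests targets based on the stages defined in your Dockerfile.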

Figure 2: Editing Bake files is simple and intuitive with the rich language features that the Docker DX extension provides.

Figure 3: Creating new Bake files is straightforward as your Dockerfile’s build stages are analyzed and suggested as Bake targets.

Compose outlines

Quickly navigate complex Compose files with the extension’s support for outlines available directly through VS Code’s command palette.

Figure 4: Navigate complex Compose files with the outline panel.

Don’t use VS Code? Try the Language Server!

The features offered by the Docker DX extension are powered by the brand-new Docker Language Server, built on the Language Server Protocol (LSP). This means the same smart editing experience — like real-time feedback, validation, and suggestions for Dockerfiles, Compose, and Bake files — is available in your favorite editor.

Wrapping up

Install the extension from Docker DX – Visual Studio Marketplace today! The functionality is also automatically installed with the existing Docker VS Code extension from Microsoft.

Share your feedback on how it’s working for you, and share what features you’d like to see next. If you’d like to learn more or contribute to the project, check out our GitHub repo.

Learn more

Install the Docker DX VS Code extension

Give us feedback or ask for features

Contribute to the extension project

Try the Docker Language Server or contribute to it

Subscribe to the Docker Navigator Newsletter.

New to Docker? Create an account.

Source: https://blog.docker.com/feed/

Introducing Docker Model Runner: A Better Way to Build and Run GenAI Models Locally

Generative AI is transforming software development, but building and running AI models locally is still harder than it should be. Today’s developers face fragmented tooling, hardware compatibility headaches, and disconnected application development workflows, all of which hinder iteration and slow down progress.  

That’s why we’re launching Docker Model Runner — a faster, simpler way to run and test AI models locally, right from your existing workflow. Whether you’re experimenting with the latest LLMs or deploying to production, Model Runner brings the performance and control you need, without the friction.

We’re also teaming up with some of the most influential names in AI and software development, including Google, Continue, Dagger, Qualcomm, HuggingFace, Spring AI, and VMware Tanzu AI Solutions, to give developers direct access to the latest models, frameworks, and tools. These partnerships aren’t just integrations, they’re a shared commitment to making AI innovation more accessible, powerful, and developer-friendly. With Docker Model Runner, you can tap into the best of the AI ecosystem from right inside your Docker workflow.

LLM development is evolving: We’re making it local-first 

Local development for applications powered by LLMs is gaining momentum, and for good reason. It offers several advantages on key dimensions such as performance, cost, and data privacy. But today, local setup is complex.  

Developers are often forced to manually integrate multiple tools, configure environments, and manage models separately from container workflows. Running a model varies by platform and depends on available hardware. Model storage is fragmented because there is no standard way to store, share, or serve models. 

The result? Rising cloud inference costs and a disjointed developer experience. With our first release, we’re focused on reducing that friction, making local model execution simpler, faster, and easier to fit into the way developers already build.

Docker Model Runner: The simple, secure way to run AI models locally

Docker Model Runner is designed to make AI model execution as simple as running a container. With this Beta release, we’re giving developers a fast, low-friction way to run models, test them, and iterate on application code that uses models locally, without all the usual setup headaches. Here’s how:

Running models locally 

With Docker Model Runner, running AI models locally is now as simple as running any other service in your inner loop. Docker Model Runner delivers this by including an inference engine as part of Docker Desktop, built on top of llama.cpp and accessible through the familiar OpenAI API. No extra tools, no extra setup, and no disconnected workflows. Everything stays in one place, so you can test and iterate quickly, right on your machine.

Enabling GPU acceleration (Apple silicon)

GPU acceleration on Apple silicon helps developers get fast inference and the most out of their local hardware. By using host-based execution, we avoid the performance limitations of running models inside virtual machines. This translates to faster inference, smoother testing, and better feedback loops.

Standardizing model packaging with OCI Artifacts

Model distribution today is messy. Models are often shared as loose files or behind proprietary download tools with custom authentication. With Docker Model Runner, we package models as OCI Artifacts, an open standard that allows you to distribute and version them through the same registries and workflows you already use for containers. Today, you can easily pull ready-to-use models from Docker Hub. Soon, you’ll also be able to push your own models, integrate with any container registry, connect them to your CI/CD pipelines, and use familiar tools for access control and automation.

Building momentum with a thriving GenAI ecosystem

To make local development seamless, it needs an ecosystem. That starts with meeting developers where they are, whether they’re testing model performance on their local machines or building applications that run these models. 

That’s why we’re launching Docker Model Runner with a powerful ecosystem of partners on both sides of the AI application development process. On the model side, we’re collaborating with industry leaders like Google and community platforms like HuggingFace to bring you high-quality, optimized models ready for local use. These models are published as OCI artifacts, so you can pull and run them using standard Docker commands, just like any container image.

But we aren’t stopping at models. We’re also working with application, language, and tooling partners like Dagger, Continue, and Spring AI and VMware Tanzu to ensure applications built with Model Runner integrate seamlessly into real-world developer workflows. Additionally, we’re working with hardware partners like Qualcomm to ensure high performance inference on all platforms.

As Docker Model Runner evolves, we’ll work to expand its ecosystem of partners, allowing for ample distribution and added functionality.

Where We’re Going

This is just the beginning. With Docker Model Runner, we’re making it easier for developers to bring AI model execution into everyday workflows, securely, locally, and with a low barrier of entry. Soon, you’ll be able to run models on more platforms, including Windows with GPU acceleration, customize and publish your own models, and integrate AI into your dev loop with even greater flexibility (including Compose and Testcontainers). With each Docker Desktop release, we’ll continue to unlock new capabilities that make GenAI development easier, faster, and way more fun to build with.

Try it out now! 

Docker Model Runner is now available as a Beta feature in Docker Desktop 4.40. To get started:

On a Mac with Apple silicon

Update to Docker Desktop 4.40

Pull models developed by our partners at Docker’s GenAI Hub and start experimenting

For more information, check out our documentation here.

Try it out and let us know what you think!

How can I learn more about Docker Model Runner?

Check out our available assets today! 

Turn your Mac into an AI playground (YouTube tutorial)

A Quickstart Guide to Docker Model Runner

Docker Model Runner on Docker Docs

Come meet us at Google Cloud Next! 

Swing by booth 1530 in the Mandalay Convention Center for hands-on demos and exclusive content.
Source: https://blog.docker.com/feed/

Run Gemma 3 with Docker Model Runner: Fully Local GenAI Developer Experience

The landscape of generative AI development is evolving rapidly but comes with significant challenges. API usage costs can quickly add up, especially during development. Privacy concerns arise when sensitive data must be sent to external services. And relying on external APIs can introduce connectivity issues and latency.

Enter Gemma 3 and Docker Model Runner, a powerful combination that brings state-of-the-art language models to your local environment, addressing these challenges head-on.

In this blog post, we’ll explore how to run Gemma 3 locally using Docker Model Runner. We’ll also walk through a practical case study: a Comment Processing System that analyzes user feedback about a fictional AI assistant named Jarvis.

The power of local GenAI development

Before diving into the implementation, let’s look at why local GenAI development is becoming increasingly important:

Cost efficiency: With no per-token or per-request charges, you can experiment freely without worrying about usage fees.

Data privacy: Sensitive data stays within your environment, with no third-party exposure.

Reduced network latency: Eliminates reliance on external APIs and enables offline use.

Full control: Run the model on your terms, with no intermediaries and full transparency.

Setting up Docker Model Runner with Gemma 3

Docker Model Runner provides an OpenAI-compatible API interface to run models locally. It is included in Docker Desktop for macOS, starting with version 4.40.0. Here’s how to set it up with Gemma 3:

docker desktop enable model-runner --tcp 12434
docker model pull ai/gemma3

Once setup is complete, the OpenAI-compatible API provided by the Model Runner is available at: http://localhost:12434/engines/v1
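You can verify the endpoint with any OpenAI-compatible client. As a minimal sanity check (the prompt here is just an example), a curl request to the standard chat completions route looks like this:

curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/gemma3",
        "messages": [{"role": "user", "content": "Say hello in one short sentence."}]
      }'

If everything is wired up correctly, you get back a standard OpenAI-style JSON response.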

Case study: Comment processing system

To demonstrate the power of local GenAI development, we’ve built a Comment Processing System that leverages Gemma 3 for multiple NLP tasks. This system:

Generates synthetic user comments about a fictional AI assistant

Categorizes comments as positive, negative, or neutral

Clusters similar comments together using embeddings

Identifies potential product features from the comments

Generates contextually appropriate responses

All tasks are performed locally with no external API calls.

Implementation details

Configuring the OpenAI SDK to use local models

To make this work, we configure the OpenAI SDK to point to the Docker Model Runner:

// config.js

export default {
  // Model configuration
  openai: {
    baseURL: "http://localhost:12434/engines/v1", // Base URL for Docker Model Runner
    apiKey: 'ignored',
    model: "ai/gemma3",
    // Each task has its own configuration, for example temperature is set to a high value when generating comments for creativity
    commentGeneration: {
      temperature: 0.3,
      max_tokens: 250,
      n: 1,
    },
    embedding: {
      model: "ai/mxbai-embed-large", // Model for generating embeddings
    },
  },
  // … other configuration options
};

import OpenAI from 'openai';
import config from './config.js';

// Initialize OpenAI client with local endpoint
const client = new OpenAI({
  baseURL: config.openai.baseURL,
  apiKey: config.openai.apiKey,
});

Task-specific configuration

One key benefit of running models locally is the ability to experiment freely with different configurations for each task without worrying about API costs or rate limits.

In our case:

Synthetic comment generation uses a higher temperature for creativity.

Categorization uses a lower temperature and a 10-token limit for consistency.

Clustering allows up to 20 tokens to improve semantic richness in embeddings.

This flexibility lets us iterate quickly, tune for performance, and tailor the model’s behavior to each use case.
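The clustering step relies on embeddings generated through the same OpenAI-compatible client. Here is a minimal sketch (the helper name and the shape of the comments array are hypothetical; the embeddings.create call and the ai/mxbai-embed-large model match the configuration shown above):

// Sketch: turn an array of comments into embedding vectors for clustering
async function embedComments(comments) {
  const response = await client.embeddings.create({
    model: config.openai.embedding.model, // "ai/mxbai-embed-large"
    input: comments.map((comment) => comment.text),
  });
  // One embedding per input comment, in the same order
  return response.data.map((item) => item.embedding);
}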

Generating synthetic comments

To simulate user feedback, we use Gemma 3’s ability to follow detailed, context-aware prompts.

/**
 * Create a prompt for comment generation
 * @param {string} type - Type of comment (positive, negative, neutral)
 * @param {string} topic - Topic of the comment
 * @returns {string} - Prompt for OpenAI
 */
function createPromptForCommentGeneration(type, topic) {
  let sentiment = '';

  switch (type) {
    case 'positive':
      sentiment = 'positive and appreciative';
      break;
    case 'negative':
      sentiment = 'negative and critical';
      break;
    case 'neutral':
      sentiment = 'neutral and balanced';
      break;
    default:
      sentiment = 'general';
  }

  return `Generate a realistic ${sentiment} user comment about an AI assistant called Jarvis, focusing on its ${topic}.

The comment should sound natural, as if written by a real user who has been using Jarvis.
Keep the comment concise (1-3 sentences) and focused on the specific topic.
Do not include ratings (like "5/5 stars") or formatting.
Just return the comment text without any additional context or explanation.`;
}

Examples:

"Honestly, Jarvis is just a lot of empty promises. It keeps suggesting irrelevant articles and failing to actually understand my requests for help with my work – it’s not helpful at all."

"Jarvis is seriously impressive – the speed at which it responds is incredible! I’ve never used an AI assistant that’s so quick and efficient, it’s a game changer.

The ability to produce realistic feedback on demand is incredibly useful for simulating user data with zero API cost.

Generating contextual responses

We also use Gemma 3 to simulate polite, on-brand support responses to user comments. Here’s the prompt logic:

const response = await client.chat.completions.create({
  model: config.openai.model,
  messages: [
    {
      role: "system",
      content: `You are a customer support representative for an AI assistant called Jarvis. Your task is to generate polite, helpful responses to user comments.

Guidelines for responses:
1. Be empathetic and acknowledge the user's feedback
2. Thank the user for their input
3. If the comment is positive, express appreciation
4. If the comment is negative, apologize for the inconvenience and assure them you're working on improvements
5. If the comment is neutral, acknowledge their observation
6. If relevant, mention that their feedback will be considered for future updates
7. Keep responses concise (2-4 sentences) and professional
8. Do not make specific promises about feature implementation or timelines
9. Sign the response as "The Jarvis Team"`
    },
    {
      role: "user",
      content: `User comment: "${comment.text}"
Comment category: ${comment.category || 'unknown'}

${featuresContext}

Generate a polite, helpful response to this user comment.`
    }
  ],
  temperature: 0.7,
  max_tokens: 200
});

Examples:

For a positive comment:

Thank you so much for your positive feedback regarding Jarvis’s interface! We’re thrilled to hear you find it clean and intuitive – that’s exactly what we’re aiming for. We appreciate you pointing out your desire for more visual customization options, and your feedback will definitely be considered as we continue to develop Jarvis.

The Jarvis Team

For a negative comment:

Thank you for your feedback – we appreciate you taking the time to share your observations about Jarvis. We sincerely apologize for the glitches and freezes you’ve experienced; we understand how frustrating that can be. Your input is valuable, and we’re actively working on improvements to enhance Jarvis’s reliability and accuracy.

The Jarvis Team

This approach ensures a consistent, human-like support experience generated entirely locally.

Extracting product features from user feedback

Beyond generating and responding to comments, we also use Gemma 3 to analyze user feedback and identify actionable insights. This helps simulate the role of a product analyst, surfacing recurring themes, user pain points, and opportunities for improvement.

Here, we provide a prompt instructing the model to identify up to three potential features or improvements based on a set of user comments. 

/**
 * Extract features from comments
 * @param {string} commentsText - Text of comments
 * @returns {Promise<Array>} - Array of identified features
 */
async function extractFeaturesFromComments(commentsText) {
  const response = await client.chat.completions.create({
    model: config.openai.model,
    messages: [
      {
        role: "system",
        content: `You are a product analyst for an AI assistant called Jarvis. Your task is to identify potential product features or improvements based on user comments.

For each set of comments, identify up to 3 potential features or improvements that could address the user feedback.

For each feature, provide:
1. A short name (2-5 words)
2. A brief description (1-2 sentences)
3. The type of feature (New Feature, Improvement, Bug Fix)
4. Priority (High, Medium, Low)

Format your response as a JSON array of features, with each feature having the fields: name, description, type, and priority.`
      },
      {
        role: "user",
        content: `Here are some user comments about Jarvis. Identify potential features or improvements based on these comments:

${commentsText}`
      }
    ],
    response_format: { type: "json_object" },
    temperature: 0.5
  });

  try {
    const result = JSON.parse(response.choices[0].message.content);
    return result.features || [];
  } catch (error) {
    console.error('Error parsing feature identification response:', error);
    return [];
  }
}

Here’s an example of what the model might return:

"features": [
{
"name": "Enhanced Visual Customization",
"description": "Allows users to personalize the Jarvis interface with more themes, icon styles, and display options to improve visual appeal and user preference.",
"type": "Improvement",
"priority": "Medium",
"clusters": [
"1"
]
},

And just like everything else in this project, it’s generated locally with no external services.

Conclusion

By combining Gemma 3 with Docker Model Runner, we’ve unlocked a local GenAI workflow that’s fast, private, cost-effective, and fully under our control. In building our Comment Processing System, we experienced firsthand the benefits of this approach:

Rapid iteration without worrying about API costs or rate limits

Flexibility to test different configurations for each task

Offline development with no dependency on external services

Significant cost savings during development

And this is just one example of what’s possible. Whether you’re prototyping a new AI product, building internal tools, or exploring advanced NLP use cases, running models locally puts you in the driver’s seat.

As open-source models and local tooling continue to evolve, the barrier to entry for building powerful AI systems keeps getting lower.

Don’t just consume AI; develop, shape, and own the process.

Try it yourself: clone the repository and start experimenting today.
Source: https://blog.docker.com/feed/

Run LLMs Locally with Docker: A Quickstart Guide to Model Runner

AI is quickly becoming a core part of modern applications, but running large language models (LLMs) locally can still be a pain. Between picking the right model, navigating hardware quirks, and optimizing for performance, it’s easy to get stuck before you even start building. At the same time, more and more developers want the flexibility to run LLMs locally for development, testing, or even offline use cases. That’s where Docker Model Runner comes in.

Now available in Beta with Docker Desktop 4.40 for macOS on Apple silicon, Model Runner makes it easy to pull, run, and experiment with LLMs on your local machine. No infrastructure headaches, no complicated setup. Here’s what Model Runner offers in our initial beta release.

Local LLM inference powered by an integrated engine built on top of llama.cpp, exposed through an OpenAI-compatible API.

GPU acceleration on Apple silicon by executing the inference engine directly as a host process.

A growing collection of popular, usage-ready models packaged as standard OCI artifacts, making them easy to distribute and reuse across existing Container Registry infrastructure.

Keep reading to learn how to run LLMs locally on your own computer using Docker Model Runner!

Enabling Docker Model Runner for LLMs

Docker Model Runner is enabled by default and shipped as part of Docker Desktop 4.40 for macOS on Apple silicon hardware. However, in case you’ve disabled it, you can easily enable it through the CLI with a single command:

docker desktop enable model-runner

In its default configuration, the Model Runner will only be accessible through the Docker socket on the host, or to containers via the special model-runner.docker.internal endpoint. If you want to interact with it via TCP from a host process (maybe because you want to point some OpenAI SDK within your codebase straight to it), you can also enable it via CLI by specifying the intended port:

docker desktop enable model-runner --tcp 12434

A first look at the command line interface (CLI)

The Docker Model Runner CLI will feel very similar to working with containers, but there are also some caveats regarding the execution model, so let’s check it out. For this guide, I’m going to use a very small model to ensure it runs on hardware with limited resources and provides a fast and responsive user experience. To be more specific, we’ll use the SmolLM model, published by HuggingFace in 2024.

We’ll want to start by pulling a model. As with Docker Images, you can omit the specific tag, and it’ll default to latest. But for this example, let’s be specific:

docker model pull ai/smollm2:360M-Q4_K_M

Here I am pulling the SmolLM2 model with 360M parameters and 4-bit quantization. Tags for models distributed by Docker follow this scheme with regard to model metadata:

{model}:{parameters}-{quantization}

After having the model pulled, let’s give it a spin by asking it a question:

docker model run ai/smollm2:360M-Q4_K_M "Give me a fact about whales."

Whales are magnificent marine animals that have fascinated humans for centuries. They belong to the order Cetacea and have a unique body structure that allows them to swim and move around the ocean. Some species of whales, like the blue whale, can grow up to 100 feet (30 meters) long and weigh over 150 tons (140 metric tons) each. They are known for their powerful tails that propel them through the water, allowing them to dive deep to find food or escape predators.

Is this whale fact actually true? Honestly, I have no clue; I’m not a marine biologist. But it’s a fun example to illustrate a broader point: LLMs can sometimes generate inaccurate or unpredictable information. As with anything, especially smaller local models with a limited number of parameters or small quantization values, it’s important to verify what you’re getting back.

So what actually happened when we ran the docker model run command? It makes sense to have a closer look at the technical underpinnings, since it differs from what you might expect after using docker container run commands for years. In the case of the Model Runner, this command won’t spin up any kind of container. Instead, it calls an Inference Server API endpoint, hosted by the Model Runner through Docker Desktop, which exposes an OpenAI-compatible API. The Inference Server uses llama.cpp as the inference engine, running as a native host process; it loads the requested model on demand and then performs inference on the received request. The model then stays in memory until another model is requested, or until a predefined inactivity timeout (currently 5 minutes) is reached.

That also means that there isn’t a need to perform a docker model run before interacting with a specific model from a host process or from within a container. Model Runner will transparently load the requested model on-demand, assuming it has been pulled beforehand and is locally available. 

Speaking of interacting with models from other processes, let’s have a look at how to integrate with Model Runner from within your application code.

Having fun with GenAI development

Model Runner exposes an OpenAI endpoint under http://model-runner.docker.internal/engines/v1 for containers, and under http://localhost:12434/engines/v1 for host processes (assuming you have enabled TCP host access on default port 12434). You can use this endpoint to hook up any OpenAI-compatible clients or frameworks. 

In this example, I’m using Java and LangChain4j. Since I develop and run my Java application directly on the host, all I have to do is configure the Model Runner OpenAI endpoint as the baseUrl and specify the model to use, following the Docker Model addressing scheme we’ve already seen in the CLI usage examples. 

And that’s all there is to it, pretty straightforward, right?

Please note that the model has to be already locally present for this code to work.

// From the langchain4j-open-ai module
import dev.langchain4j.model.openai.OpenAiChatModel;

OpenAiChatModel model = OpenAiChatModel.builder()
        .baseUrl("http://localhost:12434/engines/v1")
        .modelName("ai/smollm2:360M-Q4_K_M")
        .build();

String answer = model.chat("Give me a fact about whales.");
System.out.println(answer);

Finding more models

Now, you probably don’t just want to use SmolLM models, so you might wonder what other models are currently available to use with Model Runner. The easiest way to get started is by checking out the ai/ namespace on Docker Hub.

On Docker Hub, you can find a curated list of the most popular models that are well-suited for local use cases. They’re offered in different flavors to suit different hardware and performance needs. You can also find more details about each in the model card, available on the overview page of the model repository.

What’s next?

Of course, this was just a first look at what Docker Model Runner can do. We have many more features in the works and can’t wait to see what you build with it. For hands-on guidance, check out our latest YouTube tutorial on running LLMs locally with Model Runner. And be sure to keep an eye on the blog; we’ll be sharing more updates, tips, and deep dives as we continue to expand Model Runner’s capabilities.

Resources

Learn more on Docker Docs 

Subscribe to the Docker Navigator Newsletter.

New to Docker? Create an account. 

Have questions? The Docker community is here to help.

Source: https://blog.docker.com/feed/

Master Docker and VS Code: Supercharge Your Dev Workflow

Hey there, fellow developers and DevOps champions! I’m excited to share a workflow that has transformed my teams’ development experience time and time again: pairing Docker and Visual Studio Code (VS Code). If you’re looking to speed up local development, minimize those “it works on my machine” excuses, and streamline your entire workflow, you’re in the right place. 

I’ve personally encountered countless environment mismatches during my career, and every time, Docker plus VS Code saved the day — no more wasted hours debugging bizarre OS/library issues on a teammate’s machine.

In this post, I’ll walk you through how to get Docker and VS Code humming in perfect harmony, cover advanced debugging scenarios, and even drop some insider tips on security, performance, and ephemeral development environments. By the end, you’ll be equipped to tackle DevOps challenges confidently. 

Let’s jump in!

Why use Docker with VS Code?

Whether you’re working solo or collaborating across teams, Docker and Visual Studio Code work together to ensure your development workflow remains smooth, scalable, and future-proof. Let’s take a look at some of the most notable benefits to using Docker with VS Code.

Consistency Across Environments: Docker’s containers standardize everything from OS libraries to dependencies, creating a consistent dev environment. No more “it works on my machine!” fiascos.

Faster Development Feedback Loop: VS Code’s integrated terminal, built-in debugging, and Docker extension cut down on context-switching, making you more productive in real time.

Multi-Language, Multi-Framework: Whether you love Node.js, Python, .NET, or Ruby, Docker packages your app identically. Meanwhile, VS Code’s extension ecosystem handles language-specific linting and debugging.

Future-Proof: By 2025, ephemeral development environments, container-native CI/CD, and container security scanning (e.g., Docker Scout) will be table stakes for modern teams. Combining Docker and VS Code puts you on the fast track.

Together, Docker and VS Code create a streamlined, flexible, and reliable development environment.

How Docker transformed my development workflow

Let me tell you a quick anecdote:

A few years back, my team spent two days diagnosing a weird bug that only surfaced on one developer’s machine. It turned out they had an outdated system library that caused subtle conflicts. We wasted hours! In the very next sprint, we containerized everything using Docker. Suddenly, everyone’s environment was the same — no more “mismatch” drama. That was the day I became a Docker convert for life.

Since then, I’ve recommended Dockerized workflows to every team I’ve worked with, and thanks to VS Code, spinning up, managing, and debugging containers has become even more seamless.

How to set up Docker in Visual Studio Code

Setting up Docker in Visual Studio Code can be done in a matter of minutes. In two steps, you should be well on your way to leveraging the power of this dynamic duo. You can use the command line to manage containers and build images directly within VS Code. Also, you can create a new container with specific Docker run parameters to maintain configuration and settings.

But first, let’s make sure you’re using the right version!

Ensuring version compatibility

As of early 2025, here’s what I recommend to minimize friction:

Docker Desktop: v4.37+ or newer (Windows, macOS, or Linux)

Docker Engine: 27.5+ (if running Docker directly on Linux servers)

Visual Studio Code: 1.96+

Lastly, check the Docker release notes and VS Code updates to stay aligned with the latest features and patches. Trust me, staying current prevents a lot of “uh-oh” moments later.

1. Docker Desktop (or Docker Engine on Linux)

Download from Docker’s official site.

Install and follow prompts.

2. Visual Studio Code

Install from VS Code’s website.

Docker Extension: Press Ctrl+Shift+X (Windows/Linux) or Cmd+Shift+X (macOS) in VS Code → search for “Docker” → Install.

Explore: You’ll see a whale icon on the left toolbar, giving you a GUI-like interface for Docker containers, images, and registries.

Pro Tip: If you’re going fully DevOps, keep Docker Desktop and VS Code up to date. Each release often adds new features or performance/security enhancements.
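With both installed, a quick sanity check from VS Code’s integrated terminal confirms the CLI can reach the Docker engine (optional, but it catches setup issues early):

docker --version
docker run --rm hello-world

If the hello-world container prints its greeting, you’re good to go.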

Examples: Building an app with a Docker container

Node.js example

We’ll create a basic Node.js web server using Express, then build and run it in Docker. The server listens on port 3000 and returns a simple greeting.

Create a Project Folder:

mkdir node-docker-app
cd node-docker-app

Initialize a Node.js Application:

npm init -y
npm install express

‘npm init -y’ generates a default package.json.

‘npm install express’ pulls in the Express framework and saves it to ‘package.json’.

Create the Main App File (index.js):

# Quote the heredoc delimiter so the shell doesn't expand ${port} or the backticks in the template literal
cat <<'EOF' > index.js
const express = require('express');
const app = express();
const port = 3000;

app.get('/', (req, res) => {
  res.send('Hello from Node + Docker + VS Code!');
});

app.listen(port, () => {
  console.log(`App running on port ${port}`);
});
EOF

This code starts a server on port 3000. When a user hits http://localhost:3000, they see a “Hello from Node + Docker + VS Code!” message.

Add an Annotated Dockerfile:

# Use a lightweight Node.js 23.x image based on Alpine Linux
FROM node:23.6.1-alpine

# Set the working directory inside the container
WORKDIR /usr/src/app

# Copy only the package files first (for efficient layer caching)
COPY package*.json ./

# Install Node.js dependencies
RUN npm install

# Copy the entire project (including index.js) into the container
COPY . .

# Expose port 3000 for the Node.js server (metadata)
EXPOSE 3000

# The default command to run your app
CMD ["node", "index.js"]

Build and run the container image:

docker build -t node-docker-app .

Run the container, mapping port 3000 on the container to port 3000 locally:

docker run -p 3000:3000 node-docker-app

Open http://localhost:3000 in your browser. You’ll see “Hello from Node + Docker + VS Code!”

Python example

Here, we’ll build a simple Flask application that listens on port 3001 and displays a greeting. We’ll then package it into a Docker container.

Create a Project Folder:

mkdir python-docker-app
cd python-docker-app

Create a Basic Flask App (app.py):

cat <<EOF > app.py
from flask import Flask
app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello from Python + Docker!'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=3001)
EOF

This code defines a minimal Flask server that responds to requests on port 3001.

Add Dependencies:

echo "Flask==2.3.0" > requirements.txt

Lists your Python dependencies. In this case, just Flask.

Create an Annotated Dockerfile:

# Use Python 3.11 on Alpine for a smaller base image
FROM python:3.11-alpine

# Set the container's working directory
WORKDIR /app

# Copy only the requirements file first to leverage caching
COPY requirements.txt .

# Install Python libraries
RUN pip install -r requirements.txt

# Copy the rest of the application code
COPY . .

# Expose port 3001, where Flask will listen
EXPOSE 3001

# Run the Flask app
CMD ["python", "app.py"]

Build and Run:

docker build -t python-docker-app .
docker run -p 3001:3001 python-docker-app

Visit http://localhost:3001 and you’ll see “Hello from Python + Docker!”

Manage containers in VS Code with the Docker Extension

Once you install the Docker extension in Visual Studio Code, you can easily manage containers, images, and registries — bringing the full power of Docker into VS Code. You can also open a code terminal within VS Code to execute commands and manage applications seamlessly through the container’s isolated filesystem. And the context menu can be used to perform actions like starting, stopping, and removing containers.

With the Docker extension installed, open VS Code and click the Docker icon:

Containers: View running/stopped containers, view logs, stop, or remove them at a click. Each container is associated with a specific container name, which helps you identify and manage them effectively.

Images: Inspect your local images, tag them, or push to Docker Hub/other registries.

Registries: Quickly pull from Docker Hub or private repos.

No more memorizing container IDs or retyping long CLI commands. This is especially handy for visual folks or those new to Docker.

Advanced Debugging (Single & Multi-Service)

Debugging containerized applications can be challenging, but with Docker and Visual Studio Code, you can streamline the process. Here’s some advanced debugging that you can try for yourself!

Containerized Debug Ports

For Node, expose port 9229 (EXPOSE 9229) and run with --inspect.

In VS Code, create a “Docker: Attach to Node” debug config to step through code in real time.
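For reference, a minimal launch.json attach configuration might look like the sketch below; the remoteRoot assumes the /usr/src/app working directory used in the Node Dockerfile earlier, so adjust it to match your image:

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Docker: Attach to Node",
      "type": "node",
      "request": "attach",
      "port": 9229,
      "address": "localhost",
      "localRoot": "${workspaceFolder}",
      "remoteRoot": "/usr/src/app",
      "restart": true
    }
  ]
}

Remember to start the container with the inspector enabled and the debug port published, e.g. docker run -p 3000:3000 -p 9229:9229 node-docker-app node --inspect=0.0.0.0:9229 index.js.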

Microservices / Docker Compose

For multiple services, define them in compose.yml.

Spin them up with docker compose up.

Configure VS Code to attach debuggers to each service’s port (e.g., microservice A on 9229, microservice B on 9230).
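As an illustration, a compose.yml for two Node-based services wired for debugging might look like the sketch below (service directories and commands are hypothetical; the host ports mirror the 9229/9230 mapping described above):

services:
  web-service:
    build: ./web-service
    ports:
      - "3000:3000"
      - "9229:9229"   # debugger for service A
    command: node --inspect=0.0.0.0:9229 index.js
  auth-service:
    build: ./auth-service
    ports:
      - "3001:3001"
      - "9230:9229"   # service B's debugger, exposed on host port 9230
    command: node --inspect=0.0.0.0:9229 index.js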

Remote – Containers (Dev Containers)

Use VS Code’s Dev Containers extension for a fully containerized development environment.

Perfect for large teams: Everyone codes in the same environment with preinstalled tools/libraries. Onboarding new devs is a breeze.

Insider Trick: If you’re juggling multiple services, label your containers meaningfully (--name web-service, --name auth-service) so they’re easy to spot in VS Code’s Docker panel.

Pro Tips: Security & Performance

Security

Use Trusted Base Images: Official images like node:23.6.1-alpine, python:3.11-alpine, etc., reduce the risk of hidden vulnerabilities.

Scan Images: Tools like Docker Scout or Trivy spot vulnerabilities. Integrate them into CI/CD to catch issues early.

Secrets Management: Never hardcode tokens or credentials; use Docker secrets, environment variables, or external vaults.

Encryption & SSL: For production apps, consider a reverse proxy (Nginx, Traefik) or in-container SSL termination.

Performance

Resource Allocation: Docker Desktop on Windows/macOS allows you to tweak CPU/RAM usage. Don’t starve your containers if you run multiple services.

Multi-Stage Builds & .dockerignore: Keep images lean and build times short by ignoring unneeded files (node_modules, .git); see the sketch after this list.

Dev Containers: Offload heavy dependencies to a container, freeing your host machine.

Docker Compose v2: It’s the new standard, with faster, more intuitive commands. Great for orchestrating multi-service setups on your dev box.
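To make the multi-stage build advice concrete, here is one way the Node image from earlier could be split into a build stage and a slim runtime stage (a sketch under the assumption of the same app layout, not a drop-in replacement):

# Stage 1: install production dependencies (npm ci requires the package-lock.json created by npm install)
FROM node:23.6.1-alpine AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# Stage 2: copy only what the app needs at runtime
FROM node:23.6.1-alpine
WORKDIR /usr/src/app
COPY --from=build /usr/src/app ./
EXPOSE 3000
CMD ["node", "index.js"]

Pair it with a .dockerignore that lists node_modules and .git so those directories never enter the build context.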

Real-World Impact: In a recent project, containerizing builds and using ephemeral agents slashed our development environment setup time by 80%. That’s more time coding, and less time wrangling dependencies!

Going Further: CI/CD and Ephemeral Dev Environments

CI/CD Integration: Automate Docker builds in Jenkins, GitHub Actions, or Azure DevOps. Run your tests within containers for consistent results.

# Example GitHub Actions snippet
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build Docker Image
        run: docker build -t myorg/myapp:${{ github.sha }} .
      - name: Run Tests
        run: docker run --rm myorg/myapp:${{ github.sha }} npm test
      - name: Push Image
        run: |
          docker login -u $USER -p $TOKEN
          docker push myorg/myapp:${{ github.sha }}

Ephemeral Build Agents: Use Docker containers as throwaway build agents. Each CI job spins up a clean environment, with no leftover caches or config drift.

Docker in Kubernetes: If you need to scale horizontally, deploying containers to Kubernetes (K8s) can handle higher traffic or more complex microservice architectures.

Dev Containers & Codespaces: Tools like GitHub Codespaces or the “Remote – Containers” extension let you develop in ephemeral containers in the cloud. Perfect for distributed teams: everyone codes in the same environment, minus the “It’s fine on my laptop!” tension.

Conclusion

Congrats! You’ve just stepped through how Docker and Visual Studio Code can supercharge your development:

Consistent Environments: Docker ensures no more environment mismatch nightmares.

Speedy Debugging & Management: VS Code’s integrated Docker extension and robust debug tools keep you in the zone.

Security & Performance: Multi-stage builds, scans, ephemeral dev containers—these are the building blocks of a healthy, future-proofed DevOps pipeline.

CI/CD & Beyond: Extend the same principles to your CI/CD flows, ensuring smooth releases and easy rollbacks.

Once you see the difference in speed, reliability, and consistency, you’ll never want to go back to old-school local setups.

And that’s it! We’ve packed in real-world anecdotes, pro tips, and future-proof best practices for seasoned IT pros. Now go forth, embrace containers, and code with confidence. Happy shipping!

Learn more

Install or upgrade Docker Desktop & VS Code to the latest versions.

Try containerizing an app from scratch — your future self (and your teammates) will thank you.

Experiment with ephemeral dev containers, advanced debugging, or Docker security scanning (Docker Scout).

Join the Docker community on Slack or the forums. Swap stories or pick up fresh tips!

Source: https://blog.docker.com/feed/

Docker Desktop 4.40: Model Runner to run LLMs locally, more powerful Docker AI Agent, and expanded AI Tools Catalog

At Docker, we’re focused on making life easier for developers and teams building high-quality applications, including those powered by generative AI. That’s why, in the Docker Desktop 4.40 release, we’re introducing new tools that simplify GenAI app development and support secure, scalable development. 

Keep reading to find updates on new tooling like Model Runner and a more powerful Docker AI Agent with MCP capabilities. Plus, with the AI Tool Catalog, teams can now easily build smarter AI-powered applications and agents with MCPs. And with Docker Desktop Setting Reporting, admins now get greater visibility into compliance and policy enforcement.

Docker Model Runner (Beta): Bringing local AI model execution to developers 

Now in beta with Docker Desktop 4.40, Docker Model Runner makes it easier for developers to run AI models locally. No extra setup, no jumping between tools, and no need to wrangle infrastructure. This first iteration is all about helping developers quickly experiment and iterate on models right from their local machines.

The beta includes three core capabilities:

Local model execution, right out of the box

GPU acceleration on Apple Silicon for faster performance

Standardized model packaging using OCI Artifacts

Powered by llama.cpp and accessible via the OpenAI API, the built-in inference engine makes running models feel as simple as running a container. On Mac, Model Runner uses host-based execution to tap directly into your hardware — speeding things up with zero extra effort.

Models are also packaged as OCI Artifacts, so you can version, store, and ship them using the same trusted registries and CI/CD workflows you already use. Check out our docs for more detailed info!

Figure 1: Using Docker Model Runner and CLI commands to experiment with models locally

This release lays the groundwork for what’s ahead: support for additional platforms like Windows with GPU, the ability to customize and publish your own models, and deeper integration into the development loop. We’re just getting started with Docker Model Runner and look forward to sharing even more updates and enhancements in the coming weeks.

Docker AI Agent: Smarter and more powerful with MCP integration + AI Tool Catalog

Our vision for the Docker AI Agent is simple: be context-aware, deeply knowledgeable, and available wherever developers build. With this release, we’re one step closer! The Docker AI Agent is now even more capable, making it easier for developers to tap into the Docker ecosystem and streamline their workflows beyond Docker. 

Your trusted AI Agent for all things Docker 

The Docker AI agent now has built-in support for many new popular developer capabilities like:

Running shell commands

Performing Git operations

Downloading resources

Managing local files

Thanks to a Docker Scout integration, we also now support other tools from the Docker ecosystem, such as performing security analysis on your Dockerfiles or images. 

Expanding the Docker AI Agent beyond Docker 

The Docker AI Agent now fully embraces the Model Context Protocol (MCP). This new standard for connecting AI agents and models to external data and tools makes them more powerful and tailored to specific needs. In addition to acting as an MCP client, many of Docker AI Agent’s capabilities are now exposed as MCP Servers. This means you can interact with the agent in Docker Desktop GUI or CLI or your favorite client, such as Claude Desktop and Cursor.

Figure 2: Extending Docker AI Agent’s capabilities with many tools, including the MCP Catalog. 

AI Tool Catalog: Your launchpad for experimenting with MCP servers

Thanks to the AI Tool Catalog extension in Docker Desktop, you can explore different MCP servers and seamlessly connect the Docker AI agent to other tools, or connect other LLMs to the Docker ecosystem. No more manually configuring multiple MCP servers! We’ve also added secure handling and injection of MCP servers’ secrets, such as API keys, to simplify log-ins and credential management.

The AI Tool Catalog includes containerized servers that have been pushed to Docker Hub, and we’ll continue to expand them. If you’re working in this space or have an MCP server that you’d like to distribute, please reach out in our public GitHub repo. To install the AI Tool Catalog, go to the extensions menu of Docker Desktop or use this for installation.

Figure 3: Explore and discover MCP servers in the AI Tools Catalog extension in Docker Desktop

Bring compliance into focus with Docker Desktop Setting Reporting

Building on the Desktop Settings Management capabilities introduced in Docker Desktop 4.36, Docker Desktop 4.40 brings robust compliance reporting for Docker Business customers. This new powerful feature gives administrators comprehensive visibility into user compliance with assigned settings policies across the organization.

Key benefits

Real-time compliance tracking: Easily monitor which users are compliant with their assigned settings policies. This allows administrators to quickly identify and address non-compliant systems and users.

Streamlined troubleshooting: Detailed compliance status information helps administrators diagnose why certain users might be non-compliant, reducing resolution time and IT overhead.

Figure 4: Desktop settings reporting provides an overview of policy assignment and compliance status, helping organizations stay compliant. 

Get started with Docker Desktop Setting Reporting

The Desktop Setting Reporting dashboard is currently being rolled out through Early Access. Administrators can see which settings policies are assigned to each user and whether those policies are being correctly applied.

Soon, administrators will be able to access the reporting dashboard by navigating to the Admin Console > Docker Desktop > Reporting. The dashboard provides a clear view of all users’ compliance status, with options to:

Search by username or email address

Filter by assigned policies

Toggle visibility of compliant users to focus on potential issues

View detailed compliance information for specific users

Download comprehensive compliance data as a CSV file

The dashboard also provides targeted resolution steps for non-compliant users to help administrators quickly address issues and ensure organizational compliance.

This new reporting capability underscores Docker’s commitment to providing enterprise-grade management tools that simplify administration while maintaining security and compliance across diverse development environments. Learn more about Desktop settings reporting here.

Wrapping up 

Docker is expanding its AI tooling to simplify application development and improve team workflows. New additions like Model Runner, the Docker AI Agent with MCP server and client support, and the AI Tool Catalog extension in Docker Desktop help streamline how developers build with AI. We continue to make enterprise tools more useful and robust, giving admins better visibility into compliance and policy enforcement through Docker Desktop Settings Reporting. We can’t wait to see what you build next!

Learn more

Authenticate and update today to receive your subscription level’s newest Docker Desktop features.

Subscribe to the Docker Navigator Newsletter.

Learn about our sign-in enforcement options.

New to Docker? Create an account. 

Have questions? The Docker community is here to help.

Source: https://blog.docker.com/feed/

8 Ways to Empower Engineering Teams to Balance Productivity, Security, and Innovation

This post was contributed by Lance Haig, a solutions engineer at Docker.

In today’s fast-paced development environments, balancing productivity with security while rapidly innovating is a constant juggle for senior leaders. Slow feedback loops, inconsistent environments, and cumbersome tooling can derail progress. As a solutions engineer at Docker, I’ve learned from my conversations with industry leaders that a key focus for senior leaders is on creating processes and providing tools that let developers move faster without compromising quality or security. 

Let’s explore how Docker’s suite of products and Docker Business empowers industry leaders and their development teams to innovate faster, stay secure, and deliver impactful results.

1. Shorten feedback cycles to drive results

A recurring pain point I’ve heard from senior leaders is the delay between code commits and feedback. One leader described how their team’s feedback loops stretched to eight hours, causing delays, frustration, and escalating costs.

Optimizing feedback cycles often involves localizing testing environments and offloading heavy build tasks. Teams leveraging containerized test environments — like Testcontainers Cloud — reduce this feedback loop to minutes, accelerating developer output. Similarly, offloading complex builds to managed cloud services ensures infrastructure constraints don’t block developers. The time saved here is directly reinvested in faster iteration cycles.

Incorporating Docker’s suite of products can significantly enhance development efficiency by reducing feedback loops. For instance, The Warehouse Group, New Zealand’s largest retail chain, transformed its development process by adopting Docker. This shift enabled developers to test applications locally, decreasing feedback loops from days to minutes. Consequently, deployments that previously took weeks were streamlined to occur within an hour of code submission.

2. Create a foundation for reliable workflows

Inconsistent development environments continue to plague engineering organizations. These mismatches lead to wasted time troubleshooting “works-on-my-machine” errors or inefficiencies across CI/CD pipelines. Organizations achieve consistent environments across local, staging, and production setups by implementing uniform tooling, such as Docker Desktop.

For senior leaders, the impact isn’t just technical: predictable workflows simplify onboarding, reduce new hires’ time to productivity, and establish an engineering culture focused on output rather than firefighting. 

For example, Ataccama, a data management company, leveraged Docker to expedite its deployment process. With containerized applications, Ataccama reduced application deployment lead times by 75%, achieving a 50% faster transition from development to production. By reducing setup time and simplifying environment configuration, Docker lets the team spin up new containers instantly and focus more on delivering value and less on managing infrastructure.

3. Empower teams to collaborate in distributed workflows

Today’s hybrid and remote workforces make developer collaboration more complex. Secure, pre-configured environments help eliminate blockers when working across teams. Leaders who adopt centralized, standardized configurations — even in zero-trust environments — reduce setup time and help teams remain focused.

Docker Build Cloud further simplifies collaboration in distributed workflows by enabling developers to offload resource-intensive builds to a secure, managed cloud environment. Teams can leverage parallel builds, shared caching, and multi-architecture support to streamline workflows, ensuring that builds are consistent and fast across team members regardless of their location or platform. By eliminating the need for complex local build setups, Docker Build Cloud allows developers to focus on delivering high-quality code, not managing infrastructure.

Beyond tools, fostering collaboration requires a mix of practices: sharing containerized services, automating repetitive tasks, and enabling quick rollbacks. The right combination allows engineering teams to align better, focus on goals, and deliver outcomes quickly.

Empowering engineering teams with streamlined workflows and collaborative tools is only part of the equation. Leaders must also evaluate how these efficiencies translate into tangible cost savings, ensuring their investments drive measurable business value.

To learn more about how Docker simplifies the complex, read From Legacy to Cloud-Native: How Docker Simplifies Complexity and Boosts Developer Productivity.

4. Reduce costs

Every organization feels pressured to manage budgets effectively while delivering on demanding expectations. However, leaders can realize cost savings in unexpected areas, including hiring, attrition, and infrastructure optimization, by adopting consumption-based pricing models, streamlining operations, and leveraging modern tooling.

Easy access to all Docker products provides flexibility and scalability 

Updated Docker plans make it easier for development teams to access everything they need under one subscription. Consumption is included for each new product, and more can be added as needed. This allows organizations to scale resources as their needs evolve and effectively manage their budgets. 

Cost savings through streamlined operations

Organizations adopting Docker Business have reported significant reductions in infrastructure costs. For instance, a leading beauty company achieved a 25% reduction in infrastructure expenses by transitioning to a container-first development approach with Docker. 

Bitso, a leading financial services company powered by cryptocurrency, switched to Docker Business from an alternative solution and reduced onboarding time from two weeks to a few hours per engineer, saving an estimated 7,700 hours over eight months while scaling the team. Returning to Docker after spending almost two years with the alternative open-source solution proved more cost-effective, decreasing the time spent onboarding, troubleshooting, and debugging. Further, after transitioning back to Docker, Bitso has experienced zero new support tickets related to Docker, significantly reducing the platform support burden.

Read the Bitso case study to learn why Bitso returned to Docker Business.

Reducing infrastructure costs with modern tooling

Organizations that adopt Docker’s modern tooling realize significant infrastructure cost savings by optimizing resource usage, reducing operational overhead, and eliminating inefficiencies tied to legacy processes. 

By offloading resource-intensive builds to Docker Build Cloud, a managed cloud service with shared caching, teams can achieve builds up to 39 times faster, saving approximately one hour per day per developer. For example, one customer told us their overall build times improved considerably through the shared cache feature. Previously, builds on a local machine took 15-20 minutes. Now, with Docker Build Cloud, they’re down to 110 seconds — a massive improvement.

Check out our calculator to estimate your savings with Build Cloud.

5. Retain talent through frictionless environments

High developer turnover is expensive and often linked to frustration with outdated or inefficient tools. I’ve heard countless examples of developers leaving not because of the work but due to the processes and tooling surrounding it. Providing modern, efficient environments that allow experimentation while safeguarding guardrails improves satisfaction and retention.

Year after year, developers rank Docker as their favorite developer tool. For example, more than 65,000 developers participated in Stack Overflow’s 2024 Developer Survey, which recognized Docker as the most-used and most-desired developer tool for the second consecutive year, and as the most-admired developer tool.

Providing modern, efficient environments with Docker tools can enhance developer satisfaction and retention. While specific metrics vary, streamlined workflows and reduced friction are commonly cited as factors that improve team morale and reduce turnover. Retaining experienced developers not only preserves institutional knowledge but also reduces the financial burden of hiring and onboarding replacements.

6. Efficiently manage infrastructure 

Consolidating development and operational tooling reduces redundancy and lowers overall IT spend. Organizations that migrate to standardized platforms see a decrease in toolchain maintenance costs and fewer internal support tickets. Simplified workflows mean IT and DevOps teams spend less time managing environments and more time delivering strategic value.

Some leaders, however, attempt to build rather than buy solutions for developer workflows, seeing it as cost-saving. This strategy carries risks: reliance on a single person or small team to maintain open-source tooling can result in technical debt, escalating costs, and subpar security. By contrast, platforms like Docker Business offer comprehensive protection and support, reducing long-term risks.

Cost management and operational efficiency go hand-in-hand with another top priority: security. As development environments grow more sophisticated, ensuring airtight security becomes critical — not just for protecting assets but also for maintaining business continuity and customer trust.

7. Secure developer environments

Security remains a top priority for all senior leaders. As organizations transition to zero-trust architectures, the role of developer workstations within this model grows. Developer systems, while powerful, remain attractive targets for potential vulnerabilities. Securing developer environments without stifling productivity is an ongoing leadership challenge.

Tightening endpoint security without reducing autonomy

Endpoint security starts with visibility, and Docker makes it seamless. With Image Access Management, Docker ensures that only trusted and compliant images are used throughout your development lifecycle, reducing exposure to vulnerabilities. However, these solutions are only effective if they don’t create bottlenecks for developers.

Recently, a business leader told me that taking over a team without visibility into developer environments and security exposed significant risks. Developers were operating without clear controls, leaving the organization open to potential vulnerabilities and inefficiencies. By implementing better security practices and centralized oversight, the leader improved visibility and reduced operational risks, enabling a more secure and productive environment for developer teams. This shift also addressed compliance concerns by ensuring the organization could effectively meet regulatory requirements and demonstrate policy adherence.

Securing the software supply chain

From trusted content repositories to real-time SBOM insights, securing dependencies is critical for reducing attack surfaces. In conversations with security-focused leaders, the message is clear: Supply chain vulnerabilities are both a priority and a pain point. Leaders are finding success when embedding security directly into developer workflows rather than adding it as a reactive step. Tools like Docker Scout provide real-time visibility into vulnerabilities within your software supply chain, enabling teams to address risks before they escalate. 

Securing developer environments strengthens the foundation of your engineering workflows. But for many industries, these efforts must also align with compliance requirements, where visibility and control over processes can mean the difference between growth and risk.

Improving compliance

Compliance may feel like an operational requirement, but for senior leadership, it’s a strategic asset. In regulated industries, compliance enables growth. In less regulated sectors, it builds customer trust. Regardless of the driver, visibility and control are the cornerstones of effective compliance.

Proactive compliance, not reactive audits

Audits shouldn’t feel like fire drills. With the right processes in place — automated logging, integrated open-source software license checks, and clear policy enforcement — audit readiness becomes part of daily operations. This proactive approach keeps teams ahead of compliance risks while reducing unnecessary disruptions.

While compliance ensures a stable and trusted operational baseline, innovation drives competitive advantage. Forward-thinking leaders understand that fostering creativity within a secure and compliant framework is the key to sustained growth.

8. Accelerate innovation

Every senior leader seeks to balance operational excellence and fostering innovation. Enabling engineers to move fast requires addressing two critical tensions: reducing barriers to experimentation and providing guardrails that maintain focus.

Building a culture of safe experimentation

Experimentation thrives in environments where developers feel supported and unencumbered. By establishing trusted guardrails — such as pre-approved images and automated rollbacks — teams gain the confidence to test bold ideas without introducing unnecessary risks.

From MVP to market quickly

Reducing friction in prototyping accelerates the time-to-market for Minimum Viable Products (MVPs). Leaders prioritizing local testing environments and streamlined approval processes create conditions where engineering creativity translates directly into a competitive advantage.

Innovation is no longer just about moving fast; it’s about moving deliberately. Senior leaders must champion the tools, practices, and environments that unlock their teams’ full potential.

Unlock the full potential of your teams

As a senior leader, you have a unique position to balance productivity, security, and innovation within your teams. Reflect on your current workflows and ask: Are your developers empowered with the right tools to innovate securely and efficiently? How does your organization approach compliance and risk management without stifling creativity?

Tools like Docker Business can be a strategic enabler, helping you address these challenges while maintaining focus on your goals.

Learn more

Docker Scout: Integrates seamlessly into your development lifecycle, delivering vulnerability scans, image analysis, and actionable recommendations to address issues before they reach production.

Docker Health Scores: A security grading system for container images that offers teams clear insights into their image security posture.

Docker Hub: Access trusted, verified content, including Docker Official Images (DOI), to build secure and compliant software applications.

Docker Official Images (DOI): A curated set of high-quality images that provide a secure foundation for containerized applications.

Image Access Management (IAM): Enforce image-sharing policies and restrict access to sensitive components, ensuring only trusted team members access critical assets.

Hardened Docker Desktop: A tamper-proof, enterprise-grade development environment that aligns with security standards to minimize risks from local development.

Quelle: https://blog.docker.com/feed/

Leveraging Docker with TensorFlow Models & TensorFlow.js for a Snake AI Game

The emergence of containerization has brought about a significant transformation in software development and deployment by providing a consistent and scalable environment across many platforms. For developers trying to optimize their operations, Docker in particular has emerged as the preferred technology. By using containers as the foundation, developers gain the same key benefits — portability, scalability, efficiency, and security — that they rely on for other workloads, seamlessly extending them to ML and AI applications. Using a real-world example, a Snake AI game, we’ll examine in this article how TensorFlow.js can be used with Docker to run AI/ML in a web browser.

Why Docker for TensorFlow.js conversion?

With the help of TensorFlow.js, a robust toolkit, machine learning models can be executed in a web browser, opening up a plethora of possibilities for applications such as interactive demonstrations and real-time inference. Docker offers a sophisticated solution: by enclosing the conversion process inside a container, it guarantees consistency and user-friendliness throughout.

The Snake AI neural network game

The Snake AI game brings a modern twist to the classic Snake game by integrating artificial intelligence that learns and improves its gameplay over time. In this version, you can either play manually using arrow keys or let the AI take control. 

The AI running with TensorFlow continuously improves by making strategic movements to maximize the score while avoiding collisions. The game runs in a browser using TensorFlow.js, allowing you to test different trained models and observe how the AI adapts to various challenges. 

Whether you’re playing for fun or experimenting with AI models, this game is a great way to explore the intersection of gaming and machine learning. In our approach, we’ve used the neural network to play the traditional Snake game.

Before we dive into the AI in this Snake game, let’s understand the basics of a neural network.

What is a neural network?

A neural network is a type of machine learning model inspired by the way the human brain works. It’s made up of layers of nodes (or “neurons”), where each node takes some inputs, processes them, and passes an output to the next layer.

Key components of a neural network:

Input layer: Receives the raw data (in our game, the snake’s surroundings).

Hidden layers: Process the input and extract patterns.

Output layer: Gives the final prediction.

Figure 1: Key components of a neural network

Imagine each neuron as a small decision-maker. The more neurons and layers, the better the network can recognize patterns.

Types of neural networks

There are several types of neural networks, each suited for different tasks!

Feedforward Neural Networks (FNNs):

The simplest type, where data flows in one direction, from input to output.

Great for tasks like classification, regression, and pattern recognition.

Convolutional Neural Networks (CNNs):

Designed to work with image data.

Uses filters to detect spatial patterns, like edges or textures.

Recurrent Neural Networks (RNNs):

Good for sequence prediction (e.g., stock prices, text generation).

Remembers previous inputs, allowing it to handle time-series data.

Long Short-Term Memory Networks (LSTMs):

A specialized type of RNN that can learn long-term dependencies.

Generative Adversarial Networks (GANs):

Used for generating new data, such as creating images or deepfakes.

When to use each type:

CNNs: Image classification, object detection, facial recognition.

RNNs/LSTMs: Language models, stock market prediction, time-series data.

FNNs: Games like Snake, where the task is to predict the next action based on the current state.

How does the game work?

The snake game offers two ways to play:

Manual mode:

You control the snake using your keyboard’s arrow keys.

The goal is to eat the fruit (red square) without hitting the walls or yourself.

Every time you eat a fruit, the snake grows longer, and your score increases.

If the snake crashes, the game ends, and you can restart to try again.

AI mode:

The game plays itself using a neural network (a type of AI brain) built with TensorFlow.js.

The AI looks at the snake’s surroundings (walls, fruit location, and snake body) and predicts the best move: left, forward, or right.

After each game, the AI learns from its mistakes and becomes smarter over time.

With enough training, the AI can avoid crashing and improve its score.

Figure 2: The classic Nokia snake game with an AI twist

Getting started

Let’s go through how this game is built step by step. You’ll first need to install Docker to run the game in a web browser. Here’s a summary of the steps. 

Clone the repository

Install Docker Desktop

Create a Dockerfile

Build the Docker image

Run the container

Access the game using the web browser

Cloning the repository

git clone https://github.com/dockersamples/snake-game-tensorflow-docker

Install Docker Desktop

Prerequisites:

A supported version of Mac, Linux, or Windows

At least 4 GB RAM

Click here to download Docker Desktop. Select the version appropriate for your system (Apple Silicon or Intel chip for Mac users, Windows, or Linux distribution)

Figure 3: Download Docker Desktop from the Docker website for your machine

Quick run

After installing Docker Desktop, run the pre-built image by executing the following command in your terminal. It’ll pull the snake-game:v1 Docker image, start a new container from it, and expose port 8080 on the host machine.

Run the following command to bring up the application:

docker compose up

Next, open the browser and go to http://localhost:8080 to see the output of the snake game and start your first game.
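The docker compose up command relies on a compose file in the repository, which maps the container’s Nginx port 80 to host port 8080. A minimal sketch of what such a file might look like is shown below (the actual image reference in the repository may differ):

services:
  snake-game:
    image: snake-game:v1   # assumption: the repository's compose file defines the real image reference
    ports:
      - "8080:80"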

Why use Docker to run the snake game?

No need to install Nginx on your machine — Docker handles it.

The game runs the same way on any system that supports Docker.

You can easily share your game as a Docker image, and others can run it with a single command.

The game logic

The index.html file acts as the foundation of the game, defining the layout and structure of the webpage. It fetches the library TensorFlow.js, which powers the AI, along with script.js for handling gameplay logic and ai.js for AI-based movements. The game UI is simple yet functional, featuring a mode selector that lets players switch between manual control (using arrow keys) and AI mode. The scoreboard dynamically updates the score, high score, and generation count when the AI is training. Also, the game itself runs on an HTML <canvas> element, making it highly interactive. As we move forward, we’ll explore how the JavaScript files bring this game to life!

File : index.html

The HTML file sets up the structure of the game, like the game canvas and control buttons. It also fetches the TensorFlow.js library, which the code later uses to train the snake.

Canvas: Where the game is drawn.

Mode Selector: Lets you switch between manual and AI gameplay.

TensorFlow.js: The library that powers the AI brain!

File : script.js

This file handles everything in the game—drawing the board, moving the snake, placing the fruit, and keeping score.

const canvas = document.getElementById('gameCanvas');
const ctx = canvas.getContext('2d');

let snake = [{ x: 5, y: 5 }];
let fruit = { x: 10, y: 10 };
let direction = { x: 1, y: 0 };
let score = 0;

Snake Position: Where the snake starts.

Fruit Position: Where the fruit is.

Direction: Which way the snake is moving.

The game loop

The game loop keeps the game running, updating the snake’s position, checking for collisions, and handling the score.

function gameLoopManual() {
const head = { x: snake[0].x + direction.x, y: snake[0].y + direction.y };

if (head.x === fruit.x && head.y === fruit.y) {
score++;
fruit = placeFruit();
} else {
snake.pop();
}
snake.unshift(head);
}

Moving the Snake: Adds a new head to the snake and removes the tail (unless it eats a fruit).

Collision: If the head hits the wall or its own body, the game ends.
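The collision check itself isn’t shown in the snippet above. A minimal sketch (assuming a 20×20 grid; the real game derives the board size from the canvas) might look like this:

function hasCollision(head) {
  const gridSize = 20; // assumed board dimensions for illustration
  const hitWall = head.x < 0 || head.y < 0 || head.x >= gridSize || head.y >= gridSize;
  const hitBody = snake.some(segment => segment.x === head.x && segment.y === head.y);
  return hitWall || hitBody;
}

gameLoopManual would run a check like this on the new head before moving the snake and call gameOver() when it returns true.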

Switching between modes

document.getElementById('mode').addEventListener('change', function() {
gameMode = this.value;
});

Manual Mode: Use arrow keys to control the snake.

AI Mode: The neural network predicts the next move.
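The wiring for AI mode isn’t shown here. Conceptually, the AI-driven loop might look like the sketch below, where buildStateVector is a hypothetical helper, while computePrediction and gameLoopManual are the functions shown elsewhere in this article:

function gameLoopAI() {
  const state = buildStateVector(snake, fruit, direction, 20); // hypothetical encoder of the game state
  const move = computePrediction(state);                       // 'left', 'forward', or 'right'
  if (move === 'left')  direction = { x: direction.y,  y: -direction.x };  // rotate the direction vector
  if (move === 'right') direction = { x: -direction.y, y: direction.x };
  gameLoopManual();                                            // reuse the same movement and scoring logic
}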

Game over and restart

function gameOver() {
clearInterval(gameInterval);
alert('Game Over');
}

function resetGame() {
score = 0;
snake = [{ x: 5, y: 5 }];
fruit = placeFruit();
}

Game Over: Stops the game when the snake crashes.

Reset: Resets the game back to the beginning.

Training the AI

File : ai.js

This file creates and trains the neural network — the AI brain that learns how to play Snake!

var movementOptions = ['left', 'forward', 'right'];

const neuralNet = tf.sequential();
neuralNet.add(tf.layers.dense({units: 256, inputShape: [5]}));
neuralNet.add(tf.layers.dense({units: 512}));
neuralNet.add(tf.layers.dense({units: 256}));
neuralNet.add(tf.layers.dense({units: 3}));

Neural Network: This is a simulated brain with four layers of neurons.

Input: Information about the game state (walls, fruit position, snake body).

Output: One of three choices: turn left, go forward, or turn right.
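The article doesn’t list the exact five inputs, but given the network’s inputShape of [5], a hypothetical buildStateVector helper might encode the state along these lines (the specific features are assumptions for illustration, not the repository’s actual encoding):

function turnLeft(dir)  { return { x: dir.y,  y: -dir.x }; }
function turnRight(dir) { return { x: -dir.y, y: dir.x }; }

function isBlocked(head, dir, snake, gridSize) {
  const next = { x: head.x + dir.x, y: head.y + dir.y };
  if (next.x < 0 || next.y < 0 || next.x >= gridSize || next.y >= gridSize) return true; // wall ahead
  return snake.some(seg => seg.x === next.x && seg.y === next.y);                        // body ahead
}

function buildStateVector(snake, fruit, direction, gridSize) {
  const head = snake[0];
  return [
    isBlocked(head, turnLeft(direction), snake, gridSize) ? 1 : 0,  // danger to the left
    isBlocked(head, direction, snake, gridSize) ? 1 : 0,            // danger straight ahead
    isBlocked(head, turnRight(direction), snake, gridSize) ? 1 : 0, // danger to the right
    Math.sign(fruit.x - head.x),                                    // fruit direction on the x axis
    Math.sign(fruit.y - head.y)                                     // fruit direction on the y axis
  ];
}

With the inputs defined, the next snippet from ai.js configures the optimizer and loss function.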

const optAdam = tf.train.adam(.001);
neuralNet.compile({
optimizer: optAdam,
loss: 'meanSquaredError'
});

Optimizer: Helps the brain learn efficiently by adjusting its weights.

Loss Function: Measures how wrong the AI’s predictions are, helping it improve.

Every time the snake plays a game, it remembers its moves and trains itself.

async function trainNeuralNet(moveRecord) {
for (var i = 0; i < moveRecord.length; i++) {
const expected = tf.oneHot(tf.tensor1d([deriveExpectedMove(moveRecord[i])], 'int32'), 3).cast('float32');
const posArr = tf.tensor2d([moveRecord[i]]);
await neuralNet.fit(posArr, expected, { batchSize: 3, epochs: 1 });
expected.dispose();
posArr.dispose();
}
}

After each game, the AI looks at what happened, adjusts its internal connections, and tries to improve for the next game.

The movementOptions array defines the possible movement directions for the snake: ‘left’, ‘forward’, and ‘right’.

The model is compiled with an Adam optimizer (learning rate 0.001) and a mean squared error loss function. The trainNeuralNet function trains the neural network on a given moveRecord array: it iterates over the array, creates one-hot encoded tensors for the expected movement, and trains the model using the TensorFlow.js fit method.
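As a quick illustration of that one-hot step, a recorded move whose expected action is ‘forward’ (index 1 in movementOptions) produces the training target [0, 1, 0]:

// 'forward' is index 1 in movementOptions, so its one-hot training target is [0, 1, 0]
const expected = tf.oneHot(tf.tensor1d([1], 'int32'), 3).cast('float32');
expected.print();   // [[0, 1, 0]]
expected.dispose(); // free the tensor's memory, as trainNeuralNet does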

Predicting the next move

When playing, the AI predicts what the best move should be.

function computePrediction(input) {
let inputs = tf.tensor2d([input]);
const outputs = neuralNet.predict(inputs);
return movementOptions[outputs.argMax(1).dataSync()[0]];
}

The computePrediction function makes predictions using the trained neural network. It takes an input array, creates a tensor from the input, predicts the movement using the neural network, and returns the movement option based on the predicted output.

The code demonstrates the creation of a neural network model, training it with a given move record, and making predictions using the trained model. This approach can enhance the snake AI game’s performance and intelligence by learning from its history and making informed decisions.

File : Dockerfile

FROM nginx:latest
COPY . /usr/share/nginx/html

FROM nginx:latest

This tells Docker to use the latest version of Nginx as the base image.

Nginx is a web server that’s great for serving static files like HTML, CSS, and JavaScript (which is what the Snake game is made of).

Instead of creating a server from scratch, this line saves time by using an existing, reliable Nginx setup.

COPY . /usr/share/nginx/html

This line copies everything from your current project directory (the one with the Snake game files: index.html, script.js, ai.js, etc.) into the Nginx web server’s default folder for serving web content.

/usr/share/nginx/html is the default folder where Nginx looks for web files to display.
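If you want to build and run the image yourself rather than use the pre-built one, the commands would look roughly like this (the snake-game:v1 tag mirrors the image name mentioned earlier; adjust it as needed):

docker build -t snake-game:v1 .
docker run -d -p 8080:80 --name snake-game snake-game:v1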

Development setup

Here’s how you can set up the development environment for the Snake game using Docker!

To make development smoother, you can mount your project directory into the container (using the -v flag) to avoid rebuilding the Docker image every time you change the game files.

Run the command from the folder where Snake-AI-TensorFlow-Docker code exists:

docker run -it --rm -d -p 8080:80 --name web -v ./:/usr/share/nginx/html nginx

If you hit an error like the following:

docker: Error response from daemon: Mounts denied:
The path /Users/harsh/Downloads/Snake-AI-TensorFlow-Docker is not shared from the host and is not known to Docker.
You can configure shared paths from Docker -> Preferences… -> Resources -> File Sharing.
See https://docs.docker.com/desktop/settings/mac/#file-sharing for more info.

Open Docker Desktop, go to Settings -> Resources -> File Sharing -> Select the location where you cloned the repository code, and click on Apply & restart.

Figure 4: Using Docker Desktop to set up a development environment for the AI snake game

Run the command again; you won’t face any errors now:

docker run -it --rm -d -p 8080:80 --name web -v ./:/usr/share/nginx/html nginx

Check whether the container is running with the following command:

harsh@Harshs-MacBook-Air snake-game-tensorflow-docker % docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c47e2711b2db nginx "/docker-entrypoint.…" 3 seconds ago Up 2 seconds 0.0.0.0:8080->80/tcp web

Open the browser and go to http://localhost:8080, and you’ll see the snake game output. This setup is perfect for development because it keeps everything fast and dynamic.

Figure 5: Accessing the snake game via a web browser.

Changes in the code will be reflected right away in the browser without rebuilding the container. The -v ./:/usr/share/nginx/html flag is the magic part: it mounts your local directory (Snake-AI-TensorFlow-Docker) into the Nginx HTML directory (/usr/share/nginx/html) inside the container.

Any changes you make to HTML, CSS, or JavaScript files in the Snake-AI-TensorFlow-Docker directory immediately reflect in the running app without needing to rebuild the container.

Conclusion 

In conclusion, building a Snake AI game using TensorFlow.js and Docker demonstrates how seamlessly ML and AI can be integrated into interactive web applications. Through this project, we’ve not only explored the fundamentals of reinforcement learning but also seen firsthand how Docker can simplify the development and deployment process.

By containerizing the application, Docker ensures a consistent environment across different systems, eliminating the common “it works on my machine” problem. This consistency makes it easier to collaborate with others, deploy to production, and manage dependencies without worrying about version mismatches or local configuration issues, whether you’re building web applications, machine learning projects, or AI applications.
Quelle: https://blog.docker.com/feed/

Shift-Left Testing with Testcontainers: Catching Bugs Early with Local Integration Tests

Modern software development emphasizes speed and agility, making efficient testing crucial. DORA research reveals that elite teams thrive with both high performance and reliability: they achieve 127x faster lead times, 182x more deployments per year, 8x lower change failure rates, and, most impressively, 2,293x faster recovery times after incidents. The secret sauce is that they “shift left.”

Shift-Left is a practice that moves integration activities like testing and security earlier in the development cycle, allowing teams to detect and fix issues before they reach production. By incorporating local and integration tests early, developers can prevent costly late-stage defects, accelerate development, and improve software quality. 

In this article, you’ll learn how integration tests can help you catch defects earlier in the development inner loop and how Testcontainers can make them feel as lightweight and easy as unit tests. Finally, we’ll break down the impact that shifting integration tests left has on development velocity and lead time for changes, according to DORA metrics.

Real-world example: Case sensitivity bug in user registration

In a traditional workflow, integration and E2E tests are often executed in the outer loop of the development cycle, leading to delayed bug detection and expensive fixes. For example, if you are building a user registration service where users enter their email addresses, you must ensure that the emails are case-insensitive and not duplicated when stored. 

If case sensitivity is not handled properly and is assumed to be managed by the database, testing a scenario where users can register with duplicate emails differing only in letter case would only occur during E2E tests or manual checks. At that stage, it’s too late in the SDLC and can result in costly fixes.

By shifting testing earlier and enabling developers to spin up real services locally — such as databases, message brokers, cloud emulators, or other microservices — the testing process becomes significantly faster. This allows developers to detect and resolve defects sooner, preventing expensive late-stage fixes.

Let’s dive deep into this example scenario and how different types of tests would handle it.

Scenario

A new developer is implementing a user registration service and preparing for production deployment.

Code Example of the registerUser method

async registerUser(email: string, username: string): Promise<User> {
const existingUser = await this.userRepository.findOne({
where: {
email: email
}
});

if (existingUser) {
throw new Error("Email already exists");
}

}

The Bug

The registerUser method doesn’t handle case sensitivity properly and relies on the database or the UI framework to handle case insensitivity by default. So, in practice, users can register duplicate emails that differ only in letter case (e.g., user@example.com and USER@example.com).
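One straightforward fix (a sketch for illustration, not necessarily how the repository resolves it) is to normalize the email before both the lookup and the insert:

async registerUser(email: string, username: string): Promise<User> {
  const normalizedEmail = email.trim().toLowerCase(); // treat emails as case-insensitive

  const existingUser = await this.userRepository.findOne({
    where: {
      email: normalizedEmail
    }
  });

  if (existingUser) {
    throw new Error("Email already exists");
  }

  // ... create and persist the user with normalizedEmail (omitted, as in the original snippet)
}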

Impact

Authentication issues arise because email case mismatches cause login failures.

Security vulnerabilities appear due to duplicate user identities.

Data inconsistencies complicate user identity management.

Testing method 1: Unit tests. 

These tests only validate the code itself, so email case sensitivity verification relies on the database where SQL queries are executed. Since unit tests don’t run against a real database, they can’t catch issues like case sensitivity. 

Testing method 2: End-to-end test or manual checks. 

These verifications will only catch the issue after the code is deployed to a staging environment. While automation can help, detecting issues this late in the development cycle delays feedback to developers and makes fixes more time-consuming and costly.

Testing method 3: Using mocks to simulate database interactions with Unit Tests. 

One approach that could work and allow us to iterate quickly would be to mock the database layer and define a mock repository that responds with the error. Then, we could write a unit test that executes really fast:

test('should prevent registration with same email in different case', async () => {
const userService = new UserRegistrationService(new MockRepository());
await userService.registerUser({ email: 'user@example.com', password: 'password123' });
await expect(userService.registerUser({ email: 'USER@example.com', password: 'password123' }))
.rejects.toThrow('Email already exists');
});

In the above example, the User service is created with a mock repository that holds an in-memory representation of the database, i.e., a map of users. This mock repository detects when the same user is registered twice, probably using the email address as a case-insensitive key, and returns the expected error.

Here, we have to code the validation logic in the mock, replicating what the User service or the database should do. Whenever the user’s validation needs a change, e.g. not including special characters, we have to change the mock too. Otherwise, our tests will assert against an outdated state of the validations. If the usage of mocks is spread across the entire codebase, this maintenance could be very hard to do.

To avoid that, we prefer integration tests that use real representations of the services we depend on. In the above example, using the real database repository is much better than a mock because it gives us more confidence in what we are testing.

Testing method 4: Shift-left local integration tests with Testcontainers 

Instead of using mocks, or waiting for staging to run the integration or E2E tests, we can detect the issue earlier.  This is achieved by enabling developers to run the integration tests for the project locally in the developer’s inner loop, using Testcontainers with a real PostgreSQL database.

Benefits

Time Savings: Tests run in seconds, catching the bug early.

More Realistic Testing: Uses an actual database instead of mocks.

Confidence in Production Readiness: Ensures business-critical logic behaves as expected.

Example integration test

First, let’s set up a PostgreSQL container using the Testcontainers library and create a userRepository to connect to this PostgreSQL instance:

// Assumes the usual imports: PostgreSqlContainer and StartedPostgreSqlContainer from
// "@testcontainers/postgresql", DataSource from "typeorm", plus the project's own
// User entity and UserRegistrationService.
let container: StartedPostgreSqlContainer;
let dataSource: DataSource;
let userService: UserRegistrationService;

beforeAll(async () => {
container = await new PostgreSqlContainer("postgres:16")
.start();

dataSource = new DataSource({
type: "postgres",
host: container.getHost(),
port: container.getMappedPort(5432),
username: container.getUsername(),
password: container.getPassword(),
database: container.getDatabase(),
entities: [User],
synchronize: true,
logging: true,
connectTimeoutMS: 5000
});
await dataSource.initialize();
const userRepository = dataSource.getRepository(User);
userService = new UserRegistrationService(userRepository);
}, 30000);
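A matching teardown isn’t shown in the snippet; typically you would also close the connection and stop the container once the tests finish:

afterAll(async () => {
  await dataSource.destroy(); // close the TypeORM connection
  await container.stop();     // stop and remove the PostgreSQL container
});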

Now, with initialized userService, we can use the registerUser method to test user registration with the real PostgreSQL instance:

test('should prevent registration with same email in different case', async () => {
await userService.registerUser({ email: 'user@example.com', password: 'password123' });
await expect(userService.registerUser({ email: 'USER@example.com', password: 'password123' }))
.rejects.toThrow('Email already exists');
});

Why This Works

Uses a real PostgreSQL database via Testcontainers

Validates case-insensitive email uniqueness

Verifies email storage format
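The storage-format check isn’t included in the snippet above. Assuming the fix normalizes emails to lowercase before saving (an assumption about the implementation), it could be asserted along these lines:

test('should store emails in lowercase', async () => {
  const user = await userService.registerUser({ email: 'MixedCase@Example.com', password: 'password123' });
  expect(user.email).toBe('mixedcase@example.com');
});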

How Testcontainers helps

Testcontainers modules provide preconfigured implementations for the most popular technologies, making it easier than ever to write robust tests. Whether your application relies on databases, message brokers, cloud services like AWS (via LocalStack), or other microservices, Testcontainers has a module to streamline your testing workflow.

With Testcontainers, you can also mock and simulate service-level interactions or use contract tests to verify how your services interact with others. Combining this approach with local testing against real dependencies, Testcontainers provides a comprehensive solution for local integration testing and eliminates the need for shared integration testing environments, which are often difficult and costly to set up and manage. To run Testcontainers tests, you need a Docker context to spin up containers. Docker Desktop ensures seamless compatibility with Testcontainers for local testing. 

Testcontainers Cloud: Scalable Testing for High-Performing Teams

Testcontainers is a great solution to enable integration testing with real dependencies locally. If you want to take testing a step further — scaling Testcontainers usage across teams, monitoring images used for testing, or seamlessly running Testcontainers tests in CI — you should consider using Testcontainers Cloud. It provides ephemeral environments without the overhead of managing dedicated test infrastructure. Using Testcontainers Cloud locally and in CI ensures consistent testing outcomes, giving you greater confidence in your code changes. Additionally, Testcontainers Cloud allows you to seamlessly run integration tests in CI across multiple pipelines, helping to maintain high-quality standards at scale. Finally, Testcontainers Cloud is more secure and ideal for teams and enterprises who have more stringent requirements for containers’ security mechanisms.   

Measuring the business impact of shift-left testing

As we have seen, shift-left testing with Testcontainers significantly improves defect detection rate and time and reduces context switching for developers. Let’s take the example above and compare different production deployment workflows and how early-stage testing would impact developer productivity. 

Traditional workflow (shared integration environment)

Process breakdown:

The traditional workflow comprises writing feature code, running unit tests locally, committing changes, and creating pull requests for the verification flow in the outer loop. If a bug is detected in the outer loop, developers have to go back to their IDE and repeat the process of running the unit test locally and other steps to verify the fix. 

Figure 1: Workflow of a traditional shared integration environment broken down by time taken for each step.

Lead Time for Changes (LTC): It takes at least 1 to 2 hours to discover and fix the bug (more depending on CI/CD load and established practices). In the best-case scenario, it would take approximately 2 hours from code commit to production deployment. In the worst-case scenario, it may take several hours or even days if multiple iterations are required.

Deployment Frequency (DF) Impact: Since fixing a pipeline failure can take around 2 hours and there’s a daily time constraint (8-hour workday), you can realistically deploy only 3 to 4 times per day. If multiple failures occur, deployment frequency can drop further.

Additional associated costs: Pipeline workers’ runtime minutes and Shared Integration Environment maintenance costs.

Developer Context Switching: Since bug detection occurs about 30 minutes after the code commit, developers lose focus. This leads to increased cognitive load, as they have to constantly context switch, debug, and then context switch again.

Shift-left workflow (local integration testing with Testcontainers)

Process breakdown:

The shift-left workflow is much simpler and starts with writing code and running unit tests. Instead of running integration tests in the outer loop, developers can run them locally in the inner loop to troubleshoot and fix issues. The changes are verified again before proceeding to the next steps and the outer loop. 

Figure 2: Shift-Left Local Integration Testing with Testcontainers workflow broken down by time taken for each step. The feedback loop is much faster and saves developers time and headaches downstream.

Lead Time for Changes (LTC): It takes less than 20 minutes to discover and fix the bug in the developers’ inner loop. Therefore, local integration testing enables at least 65% faster defect identification than testing on a Shared Integration Environment.  

Deployment Frequency (DF) Impact: Since the defect was identified and fixed locally within 20 minutes, the pipeline would run to production, allowing for 10 or more deployments daily.

Additional associated costs: 5 Testcontainers Cloud minutes are consumed.  

Developer Context Switching: No context switching for the developer, as tests running locally provide immediate feedback on code changes and let the developer stay focused within the IDE and in the inner loop.

Key Takeaways

Faster Lead Time for Changes (LTC)
Traditional workflow (shared integration environment): Code changes validated in hours or days. Developers wait for shared CI/CD environments.
Shift-left workflow (local integration testing with Testcontainers): Code changes validated in minutes. Testing is immediate and local.
Improvement and further references: >65% faster Lead Time for Changes (LTC) – Microsoft reduced lead time from days to hours by adopting shift-left practices.

Higher Deployment Frequency (DF)
Traditional workflow: Deployment happens daily, weekly, or even monthly due to slow validation cycles.
Shift-left workflow: Continuous testing allows multiple deployments per day.
Improvement and further references: 2x higher Deployment Frequency – the 2024 DORA Report shows shift-left practices more than double deployment frequency. Elite teams deploy 182x more often.

Lower Change Failure Rate (CFR)
Traditional workflow: Bugs that escape into production can lead to costly rollbacks and emergency fixes.
Shift-left workflow: More bugs are caught earlier in CI/CD, reducing production failures.
Improvement and further references: Lower Change Failure Rate – IBM’s Systems Sciences Institute estimates defects found in production cost 15x more to fix than those caught early.

Faster Mean Time to Recovery (MTTR)
Traditional workflow: Fixes take hours, days, or weeks due to complex debugging in shared environments.
Shift-left workflow: Rapid bug resolution with local testing. Fixes verified in minutes.
Improvement and further references: Faster MTTR – DORA’s elite performers restore service in less than one hour, compared to weeks to a month for low performers.

Cost Savings
Traditional workflow: Expensive shared environments, slow pipeline runs, high maintenance costs.
Shift-left workflow: Eliminates costly test environments, reducing infrastructure expenses.
Improvement and further references: Significant cost savings – ThoughtWorks Technology Radar highlights shared integration environments as fragile and expensive.

Table 1: Summary of key metric improvements from adopting a shift-left workflow with local testing using Testcontainers

Conclusion

Shift-left testing improves software quality by catching issues earlier, reducing debugging effort, enhancing system stability, and increasing overall developer productivity. As we’ve seen, traditional workflows relying on shared integration environments introduce inefficiencies, increasing lead time for changes, deployment delays, and cognitive load due to frequent context switching. In contrast, by introducing Testcontainers for local integration testing, developers can achieve:

Faster feedback loops – Bugs are identified and resolved within minutes, preventing delays.

More reliable application behavior – Testing in realistic environments ensures confidence in releases.

Reduced reliance on expensive staging environments – Minimizing shared infrastructure cuts costs and streamlines the CI/CD process.

Better developer flow state – Easily setting up local test scenarios and re-running them fast for debugging helps developers stay focused on innovation.

Testcontainers provides an easy and efficient way to test locally and catch expensive issues earlier. To scale across teams, developers can consider using Docker Desktop and Testcontainers Cloud to run unit and integration tests locally, in CI, or in ephemeral environments without the complexity of maintaining dedicated test infrastructure. Learn more about Testcontainers and Testcontainers Cloud in our docs.

Further Reading

Sign up for a Testcontainers Cloud account.

Follow the guide: Mastering Testcontainers Cloud by Docker: streamlining integration testing with containers

Connect on the Testcontainers Slack.

Get started with the Testcontainers guide.

Learn about Testcontainers best practices.

Learn about Spring Boot Application Testing and Development with Testcontainers

Subscribe to the Docker Newsletter.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Quelle: https://blog.docker.com/feed/