Docker Desktop 4.40: Model Runner to run LLMs locally, more powerful Docker AI Agent, and expanded AI Tools Catalog

At Docker, we’re focused on making life easier for developers and teams building high-quality applications, including those powered by generative AI. That’s why, in the Docker Desktop 4.40 release, we’re introducing new tools that simplify GenAI app development and support secure, scalable development. 

Keep reading to find updates on new tooling like Model Runner and a more powerful Docker AI Agent with MCP capabilities. Plus, with the AI Tool Catalog, teams can now easily build smarter AI-powered applications and agents with MCP servers. And with Docker Desktop Settings Reporting, admins now get greater visibility into compliance and policy enforcement.

Docker Model Runner (Beta): Bringing local AI model execution to developers 

Now in beta with Docker Desktop 4.40, Docker Model Runner makes it easier for developers to run AI models locally. No extra setup, no jumping between tools, and no need to wrangle infrastructure. This first iteration is all about helping developers quickly experiment and iterate on models right from their local machines.

The beta includes three core capabilities:

Local model execution, right out of the box

GPU acceleration on Apple Silicon for faster performance

Standardized model packaging using OCI Artifacts

Powered by llama.cpp and accessible via the OpenAI API, the built-in inference engine makes running models feel as simple as running a container. On Mac, Model Runner uses host-based execution to tap directly into your hardware — speeding things up with zero extra effort.

Models are also packaged as OCI Artifacts, so you can version, store, and ship them using the same trusted registries and CI/CD workflows you already use. Check out our docs for more detailed info!
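If you prefer the terminal, the beta also ships a docker model CLI. Here's a quick sketch (the ai/smollm2 model name is just an illustrative example from Docker Hub's ai namespace, and the exact command set may evolve during the beta):

# Pull a model packaged as an OCI Artifact from Docker Hub
docker model pull ai/smollm2

# Run the model and send it a single prompt
docker model run ai/smollm2 "Give me a fact about whales."

# List the models available locally
docker model list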

Figure 1: Using Docker Model Runner and CLI commands to experiment with models locally

This release lays the groundwork for what’s ahead: support for additional platforms like Windows with GPU, the ability to customize and publish your own models, and deeper integration into the development loop. We’re just getting started with Docker Model Runner and look forward to sharing even more updates and enhancements in the coming weeks.

Docker AI Agent: Smarter and more powerful with MCP integration + AI Tool Catalog

Our vision for the Docker AI Agent is simple: be context-aware, deeply knowledgeable, and available wherever developers build. With this release, we’re one step closer! The Docker AI Agent is now even more capable, making it easier for developers to tap into the Docker ecosystem and streamline their workflows beyond Docker. 

Your trusted AI Agent for all things Docker 

The Docker AI Agent now has built-in support for popular developer capabilities, including:

Running shell commands

Performing Git operations

Downloading resources

Managing local files

Thanks to a Docker Scout integration, the agent now also supports other tools from the Docker ecosystem, such as performing security analysis on your Dockerfiles or images.

Expanding the Docker AI Agent beyond Docker 

The Docker AI Agent now fully embraces the Model Context Protocol (MCP). This new standard for connecting AI agents and models to external data and tools makes them more powerful and tailored to specific needs. In addition to acting as an MCP client, many of the Docker AI Agent's capabilities are now exposed as MCP servers. This means you can interact with the agent in the Docker Desktop GUI, in the CLI, or from your favorite MCP client, such as Claude Desktop or Cursor.

Figure 2: Extending Docker AI Agent’s capabilities with many tools, including the MCP Catalog. 

AI Tool Catalog: Your launchpad for experimenting with MCP servers

Thanks to the AI Tool Catalog extension in Docker Desktop, you can explore different MCP servers and seamlessly connect the Docker AI Agent to other tools, or connect other LLMs to the Docker ecosystem. No more manually configuring multiple MCP servers! We've also added secure handling and injection of MCP servers' secrets, such as API keys, to simplify logins and credential management.

The AI Tool Catalog includes containerized servers that have been pushed to Docker Hub, and we'll continue to expand the catalog. If you're working in this space or have an MCP server that you'd like to distribute, please reach out in our public GitHub repo. To install the AI Tool Catalog, go to the Extensions menu of Docker Desktop.

Figure 3: Explore and discover MCP servers in the AI Tools Catalog extension in Docker Desktop

Bring compliance into focus with Docker Desktop Settings Reporting

Building on the Desktop Settings Management capabilities introduced in Docker Desktop 4.36, Docker Desktop 4.40 brings robust compliance reporting for Docker Business customers. This powerful new feature gives administrators comprehensive visibility into user compliance with assigned settings policies across the organization.

Key benefits

Real-time compliance tracking: Easily monitor which users are compliant with their assigned settings policies. This allows administrators to quickly identify and address non-compliant systems and users.

Streamlined troubleshooting: Detailed compliance status information helps administrators diagnose why certain users might be non-compliant, reducing resolution time and IT overhead.

Figure 4: Desktop settings reporting provides an overview of policy assignment and compliance status, helping organizations stay compliant. 

Get started with Docker Desktop Settings Reporting

The Desktop Settings Reporting dashboard is currently being rolled out through Early Access. Administrators can see which settings policies are assigned to each user and whether those policies are being correctly applied.

Soon, administrators will be able to access the reporting dashboard by navigating to the Admin Console > Docker Desktop > Reporting. The dashboard provides a clear view of all users’ compliance status, with options to:

Search by username or email address

Filter by assigned policies

Toggle visibility of compliant users to focus on potential issues

View detailed compliance information for specific users

Download comprehensive compliance data as a CSV file

The dashboard also provides targeted resolution steps for non-compliant users to help administrators quickly address issues and ensure organizational compliance.

This new reporting capability underscores Docker’s commitment to providing enterprise-grade management tools that simplify administration while maintaining security and compliance across diverse development environments. Learn more about Desktop settings reporting here.

Wrapping up 

Docker is expanding its AI tooling to simplify application development and improve team workflows. New additions like Model Runner, the Docker AI Agent with MCP server and client support, and the AI Tool Catalog extension in Docker Desktop help streamline how developers build with AI. We continue to make enterprise tools more useful and robust, giving admins better visibility into compliance and policy enforcement through Docker Desktop Settings Reporting. We can’t wait to see what you build next!

Learn more

Authenticate and update today to receive your subscription level’s newest Docker Desktop features.

Subscribe to the Docker Navigator Newsletter.

Learn about our sign-in enforcement options.

New to Docker? Create an account. 

Have questions? The Docker community is here to help.

Source: https://blog.docker.com/feed/

8 Ways to Empower Engineering Teams to Balance Productivity, Security, and Innovation

This post was contributed by Lance Haig, a solutions engineer at Docker.

In today’s fast-paced development environments, balancing productivity with security while rapidly innovating is a constant juggle for senior leaders. Slow feedback loops, inconsistent environments, and cumbersome tooling can derail progress. As a solutions engineer at Docker, I’ve learned from my conversations with industry leaders that a key focus for senior leaders is on creating processes and providing tools that let developers move faster without compromising quality or security. 

Let’s explore how Docker’s suite of products and Docker Business empowers industry leaders and their development teams to innovate faster, stay secure, and deliver impactful results.

1. Shorten feedback cycles to drive results

A recurring pain point I’ve heard from senior leaders is the delay between code commits and feedback. One leader described how their team’s feedback loops stretched to eight hours, causing delays, frustration, and escalating costs.

Optimizing feedback cycles often involves localizing testing environments and offloading heavy build tasks. Teams leveraging containerized test environments — like Testcontainers Cloud — reduce this feedback loop to minutes, accelerating developer output. Similarly, offloading complex builds to managed cloud services ensures infrastructure constraints don’t block developers. The time saved here is directly reinvested in faster iteration cycles.

Incorporating Docker’s suite of products can significantly enhance development efficiency by reducing feedback loops. For instance, The Warehouse Group, New Zealand’s largest retail chain, transformed its development process by adopting Docker. This shift enabled developers to test applications locally, decreasing feedback loops from days to minutes. Consequently, deployments that previously took weeks were streamlined to occur within an hour of code submission.

2. Create a foundation for reliable workflows

Inconsistent development environments continue to plague engineering organizations. These mismatches lead to wasted time troubleshooting “works-on-my-machine” errors or inefficiencies across CI/CD pipelines. Organizations achieve consistent environments across local, staging, and production setups by implementing uniform tooling, such as Docker Desktop.

For senior leaders, the impact isn’t just technical: predictable workflows simplify onboarding, reduce new hires’ time to productivity, and establish an engineering culture focused on output rather than firefighting. 

For example, Ataccama, a data management company, leveraged Docker to expedite its deployment process. With containerized applications, Ataccama reduced application deployment lead times by 75%, achieving a 50% faster transition from development to production. By reducing setup time and simplifying environment configuration, Docker lets the team spin up new containers instantly and focus more on delivering value and less on managing infrastructure.

3. Empower teams to collaborate in distributed workflows

Today’s hybrid and remote workforces make developer collaboration more complex. Secure, pre-configured environments help eliminate blockers when working across teams. Leaders who adopt centralized, standardized configurations — even in zero-trust environments — reduce setup time and help teams remain focused.

Docker Build Cloud further simplifies collaboration in distributed workflows by enabling developers to offload resource-intensive builds to a secure, managed cloud environment. Teams can leverage parallel builds, shared caching, and multi-architecture support to streamline workflows, ensuring that builds are consistent and fast across team members regardless of their location or platform. By eliminating the need for complex local build setups, Docker Build Cloud allows developers to focus on delivering high-quality code, not managing infrastructure.

Beyond tools, fostering collaboration requires a mix of practices: sharing containerized services, automating repetitive tasks, and enabling quick rollbacks. The right combination allows engineering teams to align better, focus on goals, and deliver outcomes quickly.

Empowering engineering teams with streamlined workflows and collaborative tools is only part of the equation. Leaders must also evaluate how these efficiencies translate into tangible cost savings, ensuring their investments drive measurable business value.

To learn more about how Docker simplifies the complex, read From Legacy to Cloud-Native: How Docker Simplifies Complexity and Boosts Developer Productivity.

4. Reduce costs

Every organization feels pressured to manage budgets effectively while delivering on demanding expectations. However, leaders can realize cost savings in unexpected areas, including hiring, attrition, and infrastructure optimization, by adopting consumption-based pricing models, streamlining operations, and leveraging modern tooling.

Easy access to all Docker products provides flexibility and scalability 

Updated Docker plans make it easier for development teams to access everything they need under one subscription. Consumption is included for each new product, and more can be added as needed. This allows organizations to scale resources as their needs evolve and effectively manage their budgets. 

Cost savings through streamlined operations

Organizations adopting Docker Business have reported significant reductions in infrastructure costs. For instance, a leading beauty company achieved a 25% reduction in infrastructure expenses by transitioning to a container-first development approach with Docker. 

Bitso, a leading financial services company powered by cryptocurrency, switched to Docker Business from an alternative solution and reduced onboarding time from two weeks to a few hours per engineer, saving an estimated 7,700 hours over eight months while scaling the team. Returning to Docker after spending almost two years with the alternative open-source solution proved more cost-effective, decreasing the time spent onboarding, troubleshooting, and debugging. Further, after transitioning back to Docker, Bitso has experienced zero new support tickets related to Docker, significantly reducing the platform support burden.

Read the Bitso case study to learn why Bitso returned to Docker Business.

Reducing infrastructure costs with modern tooling

Organizations that adopt Docker’s modern tooling realize significant infrastructure cost savings by optimizing resource usage, reducing operational overhead, and eliminating inefficiencies tied to legacy processes. 

By using Docker Build Cloud to offload resource-intensive builds to a managed cloud service with a shared cache, teams can achieve builds up to 39 times faster, saving approximately one hour per day per developer. For example, one customer told us their overall build times improved considerably through the shared cache feature. Previously, builds on their local machines took 15-20 minutes. Now, with Docker Build Cloud, they're down to 110 seconds — a massive improvement.

Check out our calculator to estimate your savings with Build Cloud.

5. Retain talent through frictionless environments

High developer turnover is expensive and often linked to frustration with outdated or inefficient tools. I’ve heard countless examples of developers leaving not because of the work but due to the processes and tooling surrounding it. Providing modern, efficient environments that allow experimentation while safeguarding guardrails improves satisfaction and retention.

Year after year, developers rank Docker as their favorite developer tool. For example, more than 65,000 developers participated in Stack Overflow’s 2024 Developer Survey, which recognized Docker as the most-used and most-desired developer tool for the second consecutive year, and as the most-admired developer tool.

Providing modern, efficient environments with Docker tools can enhance developer satisfaction and retention. While specific metrics vary, streamlined workflows and reduced friction are commonly cited as factors that improve team morale and reduce turnover. Retaining experienced developers not only preserves institutional knowledge but also reduces the financial burden of hiring and onboarding replacements.

6. Efficiently manage infrastructure 

Consolidating development and operational tooling reduces redundancy and lowers overall IT spend. Organizations that migrate to standardized platforms see a decrease in toolchain maintenance costs and fewer internal support tickets. Simplified workflows mean IT and DevOps teams spend less time managing environments and more time delivering strategic value.

Some leaders, however, attempt to build rather than buy solutions for developer workflows, seeing it as cost-saving. This strategy carries risks: reliance on a single person or small team to maintain open-source tooling can result in technical debt, escalating costs, and subpar security. By contrast, platforms like Docker Business offer comprehensive protection and support, reducing long-term risks.

Cost management and operational efficiency go hand-in-hand with another top priority: security. As development environments grow more sophisticated, ensuring airtight security becomes critical — not just for protecting assets but also for maintaining business continuity and customer trust.

7. Secure developer environments

Security remains a top priority for all senior leaders. As organizations transition to zero-trust architectures, the role of developer workstations within this model grows. Developer systems, while powerful, are not exempt from being targets for potential vulnerabilities. Securing developer environments without stifling productivity is an ongoing leadership challenge.

Tightening endpoint security without reducing autonomy

Endpoint security starts with visibility, and Docker makes it seamless. With Image Access Management, Docker ensures that only trusted and compliant images are used throughout your development lifecycle, reducing exposure to vulnerabilities. However, these solutions are only effective if they don’t create bottlenecks for developers.

Recently, a business leader told me that taking over a team without visibility into developer environments and security revealed significant risks. Developers were operating without clear controls, exposing the organization to potential vulnerabilities and inefficiencies. By implementing better security practices and centralized oversight, the leaders improved visibility and reduced operational risks, enabling a more secure and productive environment for developer teams. This shift also addressed compliance concerns by ensuring the organization could effectively meet regulatory requirements and demonstrate policy adherence.

Securing the software supply chain

From trusted content repositories to real-time SBOM insights, securing dependencies is critical for reducing attack surfaces. In conversations with security-focused leaders, the message is clear: Supply chain vulnerabilities are both a priority and a pain point. Leaders are finding success when embedding security directly into developer workflows rather than adding it as a reactive step. Tools like Docker Scout provide real-time visibility into vulnerabilities within your software supply chain, enabling teams to address risks before they escalate. 

Securing developer environments strengthens the foundation of your engineering workflows. But for many industries, these efforts must also align with compliance requirements, where visibility and control over processes can mean the difference between growth and risk.

Improving compliance

Compliance may feel like an operational requirement, but for senior leadership, it’s a strategic asset. In regulated industries, compliance enables growth. In less regulated sectors, it builds customer trust. Regardless of the driver, visibility and control are the cornerstones of effective compliance.

Proactive compliance, not reactive audits

Audits shouldn’t feel like fire drills. With the right processes in place — automated logging, integrated open-source software license checks, and clear policy enforcement — audit readiness becomes part of daily operations. This proactive approach ensures teams stay ahead of compliance risks while reducing unnecessary disruptions.

While compliance ensures a stable and trusted operational baseline, innovation drives competitive advantage. Forward-thinking leaders understand that fostering creativity within a secure and compliant framework is the key to sustained growth.

8. Accelerate innovation

Every senior leader seeks to balance operational excellence and fostering innovation. Enabling engineers to move fast requires addressing two critical tensions: reducing barriers to experimentation and providing guardrails that maintain focus.

Building a culture of safe experimentation

Experimentation thrives in environments where developers feel supported and unencumbered. By establishing trusted guardrails — such as pre-approved images and automated rollbacks — teams gain the confidence to test bold ideas without introducing unnecessary risks.

From MVP to market quickly

Reducing friction in prototyping accelerates the time-to-market for Minimum Viable Products (MVPs). Leaders prioritizing local testing environments and streamlined approval processes create conditions where engineering creativity translates directly into a competitive advantage.

Innovation is no longer just about moving fast; it’s about moving deliberately. Senior leaders must champion the tools, practices, and environments that unlock their teams’ full potential.

Unlock the full potential of your teams

As a senior leader, you have a unique position to balance productivity, security, and innovation within your teams. Reflect on your current workflows and ask: Are your developers empowered with the right tools to innovate securely and efficiently? How does your organization approach compliance and risk management without stifling creativity?

Tools like Docker Business can be a strategic enabler, helping you address these challenges while maintaining focus on your goals.

Learn more

Docker Scout: Integrates seamlessly into your development lifecycle, delivering vulnerability scans, image analysis, and actionable recommendations to address issues before they reach production.

Docker Health Scores: A security grading system for container images that offers teams clear insights into their image security posture.

Docker Hub: Access trusted, verified content, including Docker Official Images (DOI), to build secure and compliant software applications.

Docker Official Images (DOI): A curated set of high-quality images that provide a secure foundation for containerized applications.

Image Access Management (IAM): Enforce image-sharing policies and restrict access to sensitive components, ensuring only trusted team members access critical assets.

Hardened Docker Desktop: A tamper-proof, enterprise-grade development environment that aligns with security standards to minimize risks from local development.

Source: https://blog.docker.com/feed/

Leveraging Docker with TensorFlow Models & TensorFlow.js for a Snake AI Game

The emergence of containerization has brought about a significant transformation in software development and deployment by providing a consistent and scalable environment on many platforms. For developers trying to optimize their operations, Docker in particular has emerged as the preferred technology. By using containers as the foundation, developers gain the same key benefits — portability, scalability, efficiency, and security — that they rely on for other workloads, seamlessly extending them to ML and AI applications. In this article, we'll use a real-world example involving a Snake AI game to examine how TensorFlow.js can be used with Docker to run AI/ML in a web browser.

Why Docker for TensorFlow.js conversion?

With the help of TensorFlow.js, a robust toolkit, machine learning models can be executed in a web browser, opening up a plethora of possibilities for applications such as interactive demonstrations and real-time inference. Docker offers a sophisticated solution here: by enclosing the conversion process inside a container, it guarantees consistency and user-friendliness throughout.

The Snake AI neural network game

The Snake AI game brings a modern twist to the classic Snake game by integrating artificial intelligence that learns and improves its gameplay over time. In this version, you can either play manually using arrow keys or let the AI take control. 

The AI running with TensorFlow continuously improves by making strategic movements to maximize the score while avoiding collisions. The game runs in a browser using TensorFlow.js, allowing you to test different trained models and observe how the AI adapts to various challenges. 

Whether you’re playing for fun or experimenting with AI models, this game is a great way to explore the intersection of gaming and machine learning. In our approach, we’ve used the neural network to play the traditional Snake game.

Before we dive into the AI in this Snake game, let’s understand the basics of a neural network.

What is a neural network?

A neural network is a type of machine learning model inspired by the way the human brain works. It’s made up of layers of nodes (or “neurons”), where each node takes some inputs, processes them, and passes an output to the next layer.

Key components of a neural network:

Input layer: Receives the raw data (in our game, the snake’s surroundings).

Hidden layers: Process the input and extract patterns.

Output layer: Gives the final prediction (in our game, the snake’s next move).

Figure 1: Key components of a neural network

Imagine each neuron as a small decision-maker. The more neurons and layers, the better the network can recognize patterns.

Types of neural networks

There are several types of neural networks, each suited for different tasks!

Feedforward Neural Networks (FNNs):

The simplest type, where data flows in one direction, from input to output.

Great for tasks like classification, regression, and pattern recognition.

Convolutional Neural Networks (CNNs):

Designed to work with image data.

Uses filters to detect spatial patterns, like edges or textures.

Recurrent Neural Networks (RNNs):

Good for sequence prediction (e.g., stock prices, text generation).

Remembers previous inputs, allowing it to handle time-series data.

Long Short-Term Memory Networks (LSTMs):

A specialized type of RNN that can learn long-term dependencies.

Generative Adversarial Networks (GANs):

Used for generating new data, such as creating images or deepfakes.

When to use each type:

CNNs: Image classification, object detection, facial recognition.

RNNs/LSTMs: Language models, stock market prediction, time-series data.

FNNs: Games like Snake, where the task is to predict the next action based on the current state.

How does the game work?

The snake game offers two ways to play:

Manual mode:

You control the snake using your keyboard’s arrow keys.

The goal is to eat the fruit (red square) without hitting the walls or yourself.

Every time you eat a fruit, the snake grows longer, and your score increases.

If the snake crashes, the game ends, and you can restart to try again.

AI mode:

The game plays itself using a neural network (a type of AI brain) built with TensorFlow.js.

The AI looks at the snake’s surroundings (walls, fruit location, and snake body) and predicts the best move: left, forward, or right.

After each game, the AI learns from its mistakes and becomes smarter over time.

With enough training, the AI can avoid crashing and improve its score.

Figure 2: The classic Nokia snake game with an AI twist

Getting started

Let’s go through how this game is built step by step. You’ll first need to install Docker to run the game in a web browser. Here’s a summary of the steps. 

Clone the repository

Install Docker Desktop

Create a Dockerfile

Build the Docker image

Run the container

Access the game using the web browser

Cloning the repository

git clone https://github.com/dockersamples/snake-game-tensorflow-docker

Install Docker Desktop

Prerequisites:

A supported version of Mac, Linux, or Windows

At least 4 GB RAM

Download Docker Desktop from the Docker website. Select the version appropriate for your system (Apple Silicon or Intel chip for Mac users, Windows, or Linux distribution).

Figure 3: Download Docker Desktop from the Docker website for your machine

Quick run

After installing Docker Desktop, run the pre-built Docker image by executing the following command in your terminal. It’ll pull the image and start a new container from the snake-game:v1 Docker image, exposing port 8080 on the host machine.

Run the following command to bring up the application:

docker compose up

Next, open the browser and go to http://localhost:8080 to see the output of the snake game and start your first game.

Why use Docker to run the snake game?

No need to install Nginx on your machine — Docker handles it.

The game runs the same way on any system that supports Docker.

You can easily share your game as a Docker image, and others can run it with a single command.

The game logic

The index.html file acts as the foundation of the game, defining the layout and structure of the webpage. It fetches the library TensorFlow.js, which powers the AI, along with script.js for handling gameplay logic and ai.js for AI-based movements. The game UI is simple yet functional, featuring a mode selector that lets players switch between manual control (using arrow keys) and AI mode. The scoreboard dynamically updates the score, high score, and generation count when the AI is training. Also, the game itself runs on an HTML <canvas> element, making it highly interactive. As we move forward, we’ll explore how the JavaScript files bring this game to life!

File : index.html

The HTML file sets up the structure of the game, like the game canvas and control buttons. It also fetches the TensorFlow.js library, which the code later uses to train the snake.

Canvas: Where the game is drawn.

Mode Selector: Lets you switch between manual and AI gameplay.

TensorFlow.js: The library that powers the AI brain!
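Put together, a stripped-down sketch of that structure could look like the following. The gameCanvas and mode element IDs match the ones referenced by script.js below; the dimensions, option values, and CDN URL are illustrative assumptions:

<!-- Minimal sketch of index.html; only the element IDs are taken from the actual code -->
<canvas id="gameCanvas" width="400" height="400"></canvas>
<select id="mode">
  <option value="manual">Manual</option>
  <option value="ai">AI</option>
</select>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<script src="script.js"></script>
<script src="ai.js"></script>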

File : script.js

This file handles everything in the game—drawing the board, moving the snake, placing the fruit, and keeping score.

const canvas = document.getElementById('gameCanvas');
const ctx = canvas.getContext('2d');

let snake = [{ x: 5, y: 5 }];
let fruit = { x: 10, y: 10 };
let direction = { x: 1, y: 0 };
let score = 0;

Snake Position: Where the snake starts.

Fruit Position: Where the apple is.

Direction: Which way the snake is moving.

The game loop

The game loop keeps the game running, updating the snake’s position, checking for collisions, and handling the score.

function gameLoopManual() {
  const head = { x: snake[0].x + direction.x, y: snake[0].y + direction.y };

  if (head.x === fruit.x && head.y === fruit.y) {
    score++;
    fruit = placeFruit();
  } else {
    snake.pop();
  }
  snake.unshift(head);
}

Moving the Snake: Adds a new head to the snake and removes the tail (unless it eats an apple).

Collision: If the head hits the wall or its own body, the game ends.
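That collision rule can be sketched as a small helper. The 20-cell grid size is an assumption; the actual game derives its board dimensions from the canvas:

// Returns true when the new head leaves the board or overlaps the body
function hitsWallOrSelf(head, gridSize = 20) {
  if (head.x < 0 || head.y < 0 || head.x >= gridSize || head.y >= gridSize) {
    return true;
  }
  return snake.some(seg => seg.x === head.x && seg.y === head.y);
}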

Switching between modes

document.getElementById('mode').addEventListener('change', function() {
  gameMode = this.value;
});

Manual Mode: Use arrow keys to control the snake.

AI Mode: The neural network predicts the next move.

Game over and restart

function gameOver() {
  clearInterval(gameInterval);
  alert('Game Over');
}

function resetGame() {
  score = 0;
  snake = [{ x: 5, y: 5 }];
  fruit = placeFruit();
}

Game Over: Stops the game when the snake crashes.

Reset: Resets the game back to the beginning.

Training the AI

File : ai.js

This file creates and trains the neural network — the AI brain that learns how to play Snake!

var movementOptions = ['left', 'forward', 'right'];

const neuralNet = tf.sequential();
neuralNet.add(tf.layers.dense({units: 256, inputShape: [5]}));
neuralNet.add(tf.layers.dense({units: 512}));
neuralNet.add(tf.layers.dense({units: 256}));
neuralNet.add(tf.layers.dense({units: 3}));

Neural Network: This is a simulated brain with four layers of neurons.

Input: Information about the game state (walls, fruit position, snake body).

Output: One of three choices: turn left, go forward, or turn right.

const optAdam = tf.train.adam(.001);
neuralNet.compile({
  optimizer: optAdam,
  loss: 'meanSquaredError'
});

Optimizer: Helps the brain learn efficiently by adjusting its weights.

Loss Function: Measures how wrong the AI’s predictions are, helping it improve.

Every time the snake plays a game, it remembers its moves and trains itself.

async function trainNeuralNet(moveRecord) {
  for (var i = 0; i < moveRecord.length; i++) {
    const expected = tf.oneHot(tf.tensor1d([deriveExpectedMove(moveRecord[i])], 'int32'), 3).cast('float32');
    posArr = tf.tensor2d([moveRecord[i]]);
    await neuralNet.fit(posArr, expected, { batchSize: 3, epochs: 1 });
    expected.dispose();
    posArr.dispose();
  }
}

After each game, the AI looks at what happened, adjusts its internal connections, and tries to improve for the next game.

The movementOptions array defines the possible movement directions for the snake: ‘left’, ‘forward’, and ‘right’.

An Adam optimizer with a learning rate of 0.001 compiles the model, and a mean squared error loss function is specified. The trainNeuralNet function is defined to train the neural network using a given moveRecord array. It iterates over the moveRecord array, creates one-hot encoded tensors for the expected movement, and trains the model using the TensorFlow.js fit method.

Predicting the next move

When playing, the AI predicts what the best move should be.

function computePrediction(input) {
  let inputs = tf.tensor2d([input]);
  const outputs = neuralNet.predict(inputs);
  return movementOptions[outputs.argMax(1).dataSync()[0]];
}

The computePrediction function makes predictions using the trained neural network. It takes an input array, creates a tensor from the input, predicts the movement using the neural network, and returns the movement option based on the predicted output.

The code demonstrates the creation of a neural network model, training it with a given move record, and making predictions using the trained model. This approach can enhance the snake AI game’s performance and intelligence by learning from its history and making informed decisions.

File : Dockerfile

FROM nginx:latest
COPY . /usr/share/nginx/html

FROM nginx:latest

This tells Docker to use the latest version of Nginx as the base image.

Nginx is a web server that’s great for serving static files like HTML, CSS, and JavaScript (which is what the Snake game is made of).

Instead of creating a server from scratch, the following line saves time by using an existing, reliable Nginx setup.

COPY . /usr/share/nginx/html

This line copies everything from your current project directory (the one with the Snake game files: index.html, script.js, ai.js, etc.) into the Nginx web server’s default folder for serving web content.

/usr/share/nginx/html is the default folder where Nginx looks for web files to display.
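If you’d rather build and run the image yourself instead of using the Compose-based quick run, two standard commands are enough (the snake-game:v1 tag is the one used earlier in this article):

# Build the image from the Dockerfile in the current directory
docker build -t snake-game:v1 .

# Serve the game on port 8080
docker run -d -p 8080:80 snake-game:v1

Once the container is up, the game is again available at http://localhost:8080.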

Development setup

Here’s how you can set up the development environment for the Snake game using Docker!

To make the development smoother, you can use Docker Volumes to avoid rebuilding the Docker image every time you change the game files. 

Run the command from the folder where Snake-AI-TensorFlow-Docker code exists:

docker run -it --rm -d -p 8080:80 --name web -v ./:/usr/share/nginx/html nginx

If you hit an error like this:

docker: Error response from daemon: Mounts denied:
The path /Users/harsh/Downloads/Snake-AI-TensorFlow-Docker is not shared from the host and is not known to Docker.
You can configure shared paths from Docker -> Preferences… -> Resources -> File Sharing.
See https://docs.docker.com/desktop/settings/mac/#file-sharing for more info.

Open Docker Desktop, go to Settings -> Resources -> File Sharing -> Select the location where you cloned the repository code, and click on Apply & restart.

Figure 4: Using Docker Desktop to set up a development environment for the AI snake game

Run the command again; you won’t face any errors now:

docker run -it --rm -d -p 8080:80 --name web -v ./:/usr/share/nginx/html nginx

Check if the container is running with the following command:

harsh@Harshs-MacBook-Air snake-game-tensorflow-docker % docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS                  NAMES
c47e2711b2db   nginx   "/docker-entrypoint.…"   3 seconds ago   Up 2 seconds   0.0.0.0:8080->80/tcp   web

Open the browser and go to http://localhost:8080, and you’ll see the snake game. This setup is perfect for development because it keeps everything fast and dynamic.

Figure 5: Accessing the snake game via a web browser.

Changes in the code will be reflected right away in the browser without rebuilding the container.

-v ./:/usr/share/nginx/html — This is the magic part! It mounts your local directory (Snake-AI-TensorFlow-Docker) into the Nginx HTML directory (/usr/share/nginx/html) inside the container.

Any changes you make to HTML, CSS, or JavaScript files in the Snake-AI-TensorFlow-Docker directory immediately reflect in the running app without needing to rebuild the container.

Conclusion 

In conclusion, building a Snake AI game using TensorFlow.js and Docker demonstrates how seamlessly ML and AI can be integrated into interactive web applications. Through this project, we’ve not only explored the fundamentals of reinforcement learning but also seen firsthand how Docker can simplify the development and deployment process.

By containerizing, Docker ensures a consistent environment across different systems, eliminating the common “it works on my machine” problem. This consistency makes it easier to collaborate with others, deploy to production, and manage dependencies without worrying about version mismatches or local configuration issues, whether for web applications, machine learning projects, or AI applications.
Source: https://blog.docker.com/feed/

Shift-Left Testing with Testcontainers: Catching Bugs Early with Local Integration Tests

Modern software development emphasizes speed and agility, making efficient testing crucial. DORA research reveals that elite teams thrive with both high performance and reliability. They can achieve 127x faster lead times, 182x more deployments per year, 8x lower change failure rates, and, most impressively, 2,293x faster recovery times after incidents. The secret sauce is that they “shift left.”

Shift-Left is a practice that moves integration activities like testing and security earlier in the development cycle, allowing teams to detect and fix issues before they reach production. By incorporating local and integration tests early, developers can prevent costly late-stage defects, accelerate development, and improve software quality. 

In this article, you’ll learn how integration tests can help you catch defects earlier in the development inner loop and how Testcontainers can make them feel as lightweight and easy as unit tests. Finally, we’ll break down the impact that shifting left integration tests has on the development process velocity and lead time for changes according to DORA metrics. 

Real-world example: Case sensitivity bug in user registration

In a traditional workflow, integration and E2E tests are often executed in the outer loop of the development cycle, leading to delayed bug detection and expensive fixes. For example, if you are building a user registration service where users enter their email addresses, you must ensure that the emails are case-insensitive and not duplicated when stored. 

If case sensitivity is not handled properly and is assumed to be managed by the database, testing a scenario where users can register with duplicate emails differing only in letter case would only occur during E2E tests or manual checks. At that stage, it’s too late in the SDLC and can result in costly fixes.

By shifting testing earlier and enabling developers to spin up real services locally — such as databases, message brokers, cloud emulators, or other microservices — the testing process becomes significantly faster. This allows developers to detect and resolve defects sooner, preventing expensive late-stage fixes.

Let’s dive deep into this example scenario and how different types of tests would handle it.

Scenario

A new developer is implementing a user registration service and preparing for production deployment.

Code Example of the registerUser method

async registerUser(email: string, username: string): Promise<User> {
  const existingUser = await this.userRepository.findOne({
    where: {
      email: email
    }
  });

  if (existingUser) {
    throw new Error("Email already exists");
  }

  // ... create and save the new user
}

The Bug

The registerUser method doesn’t handle case sensitivity properly and relies on the database or the UI framework to handle case insensitivity by default. So, in practice, users can register duplicate emails that differ only in letter case (e.g., user@example.com and USER@example.com).

Impact

Authentication issues arise because email case mismatches cause login failures.

Security vulnerabilities appear due to duplicate user identities.

Data inconsistencies complicate user identity management.

Testing method 1: Unit tests

These tests only validate the code itself, so email case sensitivity verification relies on the database where SQL queries are executed. Since unit tests don’t run against a real database, they can’t catch issues like case sensitivity. 

Testing method 2: End-to-end tests or manual checks

These verifications will only catch the issue after the code is deployed to a staging environment. While automation can help, detecting issues this late in the development cycle delays feedback to developers and makes fixes more time-consuming and costly.

Testing method 3: Using mocks to simulate database interactions with unit tests

One approach that could work and allow us to iterate quickly would be to mock the database layer and define a mock repository that responds with the error. Then, we could write a unit test that executes really fast:

test('should prevent registration with same email in different case', async () => {
  const userService = new UserRegistrationService(new MockRepository());
  await userService.registerUser({ email: 'user@example.com', password: 'password123' });
  await expect(userService.registerUser({ email: 'USER@example.com', password: 'password123' }))
    .rejects.toThrow('Email already exists');
});

In the above example, the User service is created with a mock repository that holds an in-memory representation of the database, i.e., a map of users. This mock repository detects when the same user is registered twice, probably using the email as a case-insensitive key, and returns the expected error.

Here, we have to code the validation logic in the mock, replicating what the User service or the database should do. Whenever the user validation needs a change, e.g., disallowing special characters, we have to change the mock too. Otherwise, our tests will assert against an outdated state of the validations. If the usage of mocks is spread across the entire codebase, this maintenance can be very hard to do.

To avoid that, we prefer integration tests with real representations of the services we depend on. In the above example, using a real database repository is much better than mocks, because it gives us more confidence in what we are testing.

Testing method 4: Shift-left local integration tests with Testcontainers 

Instead of using mocks or waiting for staging to run the integration or E2E tests, we can detect the issue earlier. This is achieved by enabling developers to run the project’s integration tests locally, in the developer’s inner loop, using Testcontainers with a real PostgreSQL database.

Benefits

Time Savings: Tests run in seconds, catching the bug early.

More Realistic Testing: Uses an actual database instead of mocks.

Confidence in Production Readiness: Ensures business-critical logic behaves as expected.

Example integration test

First, let’s set up a PostgreSQL container using the Testcontainers library and create a userRepository to connect to this PostgreSQL instance:

// Imports added for completeness (package names per the Testcontainers and TypeORM docs)
import { PostgreSqlContainer, StartedPostgreSqlContainer } from "@testcontainers/postgresql";
import { DataSource } from "typeorm";

let container: StartedPostgreSqlContainer;
let dataSource: DataSource;
let userService: UserRegistrationService;

beforeAll(async () => {
  container = await new PostgreSqlContainer("postgres:16").start();

  dataSource = new DataSource({
    type: "postgres",
    host: container.getHost(),
    port: container.getMappedPort(5432),
    username: container.getUsername(),
    password: container.getPassword(),
    database: container.getDatabase(),
    entities: [User],
    synchronize: true,
    logging: true,
    connectTimeoutMS: 5000
  });
  await dataSource.initialize();
  const userRepository = dataSource.getRepository(User);
  userService = new UserRegistrationService(userRepository);
}, 30000);

Now, with initialized userService, we can use the registerUser method to test user registration with the real PostgreSQL instance:

test('should prevent registration with same email in different case', async () => {
  await userService.registerUser({ email: 'user@example.com', password: 'password123' });
  await expect(userService.registerUser({ email: 'USER@example.com', password: 'password123' }))
    .rejects.toThrow('Email already exists');
});

Why This Works

Uses a real PostgreSQL database via Testcontainers

Validates case-insensitive email uniqueness

Verifies email storage format
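With the failing test in place, the fix itself is small. One possible approach, sketched below assuming the same TypeORM-style repository as in the earlier snippet, is to normalize the email before querying and storing:

async registerUser(email: string, username: string): Promise<User> {
  // Normalize once, then use the normalized value for lookups and storage
  const normalizedEmail = email.toLowerCase();

  const existingUser = await this.userRepository.findOne({
    where: { email: normalizedEmail }
  });

  if (existingUser) {
    throw new Error("Email already exists");
  }

  // ... create and save the user with normalizedEmail
}

Re-running the integration test then confirms that duplicate registration is rejected regardless of letter case.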

How Testcontainers helps

Testcontainers modules provide preconfigured implementations for the most popular technologies, making it easier than ever to write robust tests. Whether your application relies on databases, message brokers, cloud services like AWS (via LocalStack), or other microservices, Testcontainers has a module to streamline your testing workflow.

With Testcontainers, you can also mock and simulate service-level interactions or use contract tests to verify how your services interact with others. Combining this approach with local testing against real dependencies, Testcontainers provides a comprehensive solution for local integration testing and eliminates the need for shared integration testing environments, which are often difficult and costly to set up and manage. To run Testcontainers tests, you need a Docker context to spin up containers. Docker Desktop ensures seamless compatibility with Testcontainers for local testing. 

Testcontainers Cloud: Scalable Testing for High-Performing Teams

Testcontainers is a great solution to enable integration testing with real dependencies locally. If you want to take testing a step further — scaling Testcontainers usage across teams, monitoring images used for testing, or seamlessly running Testcontainers tests in CI — you should consider using Testcontainers Cloud. It provides ephemeral environments without the overhead of managing dedicated test infrastructure. Using Testcontainers Cloud locally and in CI ensures consistent testing outcomes, giving you greater confidence in your code changes. Additionally, Testcontainers Cloud allows you to seamlessly run integration tests in CI across multiple pipelines, helping to maintain high-quality standards at scale. Finally, Testcontainers Cloud is more secure and ideal for teams and enterprises that have more stringent requirements for containers’ security mechanisms.

Measuring the business impact of shift-left testing

As we have seen, shift-left testing with Testcontainers significantly improves defect detection rate and time and reduces context switching for developers. Let’s take the example above and compare different production deployment workflows and how early-stage testing would impact developer productivity. 

Traditional workflow (shared integration environment)

Process breakdown:

The traditional workflow comprises writing feature code, running unit tests locally, committing changes, and creating pull requests for the verification flow in the outer loop. If a bug is detected in the outer loop, developers have to go back to their IDE and repeat the process of running the unit test locally and other steps to verify the fix. 

Figure 1: Workflow of a traditional shared integration environment broken down by time taken for each step.

Lead Time for Changes (LTC): It takes at least 1 to 2 hours to discover and fix the bug (more depending on CI/CD load and established practices). In the best-case scenario, it would take approximately 2 hours from code commit to production deployment. In the worst-case scenario, it may take several hours or even days if multiple iterations are required.

Deployment Frequency (DF) Impact: Since fixing a pipeline failure can take around 2 hours and there’s a daily time constraint (8-hour workday), you can realistically deploy only 3 to 4 times per day. If multiple failures occur, deployment frequency can drop further.

Additional associated costs: Pipeline workers’ runtime minutes and Shared Integration Environment maintenance costs.

Developer Context Switching: Since bug detection occurs about 30 minutes after the code commit, developers lose focus. This leads to an increased cognitive load after they have to constantly context switch, debug, and then context switch again.

Shift-left workflow (local integration testing with Testcontainers)

Process breakdown:

The shift-left workflow is much simpler and starts with writing code and running unit tests. Instead of running integration tests in the outer loop, developers can run them locally in the inner loop to troubleshoot and fix issues. The changes are verified again before proceeding to the next steps and the outer loop. 

Figure 2: Shift-Left Local Integration Testing with Testcontainers workflow broken down by time taken for each step. The feedback loop is much faster and saves developers time and headaches downstream.

Lead Time for Changes (LTC): It takes less than 20 minutes to discover and fix the bug in the developers’ inner loop. Therefore, local integration testing enables at least 65% faster defect identification than testing on a Shared Integration Environment.  

Deployment Frequency (DF) Impact: Since the defect was identified and fixed locally within 20 minutes, the pipeline would run to production, allowing for 10 or more deployments daily.

Additional associated costs: 5 Testcontainers Cloud minutes are consumed.  

Developer Context Switching: No context switching for the developer, as tests running locally provide immediate feedback on code changes and let the developer stay focused within the IDE and in the inner loop.

Key Takeaways

| Metric | Traditional workflow (shared integration environment) | Shift-left workflow (local integration testing with Testcontainers) | Improvements and further references |
|---|---|---|---|
| Faster Lead Time for Changes (LTC) | Code changes validated in hours or days. Developers wait for shared CI/CD environments. | Code changes validated in minutes. Testing is immediate and local. | >65% faster LTC – Microsoft reduced lead time from days to hours by adopting shift-left practices. |
| Higher Deployment Frequency (DF) | Deployment happens daily, weekly, or even monthly due to slow validation cycles. | Continuous testing allows multiple deployments per day. | 2x higher DF – the 2024 DORA Report shows shift-left practices more than double deployment frequency. Elite teams deploy 182x more often. |
| Lower Change Failure Rate (CFR) | Bugs that escape into production can lead to costly rollbacks and emergency fixes. | More bugs are caught earlier in CI/CD, reducing production failures. | Lower CFR – IBM’s Systems Sciences Institute estimates defects found in production cost 15x more to fix than those caught early. |
| Faster Mean Time to Recovery (MTTR) | Fixes take hours, days, or weeks due to complex debugging in shared environments. | Rapid bug resolution with local testing. Fixes verified in minutes. | Faster MTTR – DORA’s elite performers restore service in less than one hour, compared to weeks to a month for low performers. |
| Cost savings | Expensive shared environments, slow pipeline runs, high maintenance costs. | Eliminates costly test environments, reducing infrastructure expenses. | Significant cost savings – ThoughtWorks Technology Radar highlights shared integration environments as fragile and expensive. |

Table 1: Summary of key metrics improvement by using shifting left workflow with local testing using Testcontainers

Conclusion

Shift-left testing improves software quality by catching issues earlier, reducing debugging effort, enhancing system stability, and overall increasing developer productivity. As we’ve seen, traditional workflows relying on shared integration environments introduce inefficiencies, increasing lead time for changes, deployment delays, and cognitive load due to frequent context switching. In contrast, by introducing Testcontainers for local integration testing, developers can achieve:

Faster feedback loops – Bugs are identified and resolved within minutes, preventing delays.

More reliable application behavior – Testing in realistic environments ensures confidence in releases.

Reduced reliance on expensive staging environments – Minimizing shared infrastructure cuts costs and streamlines the CI/CD process.

Better developer flow state – Easily setting up local test scenarios and re-running them fast for debugging helps developers stay focused on innovation.

Testcontainers provides an easy and efficient way to test locally and catch expensive issues earlier. To scale across teams,  developers can consider using Docker Desktop and Testcontainers Cloud to run unit and integration tests locally, in the CI, or ephemeral environments without the complexity of maintaining dedicated test infrastructure. Learn more about Testcontainers and Testcontainers Cloud in our docs. 

Further Reading

Sign up for a Testcontainers Cloud account.

Follow the guide: Mastering Testcontainers Cloud by Docker: streamlining integration testing with containers

Connect on the Testcontainers Slack.

Get started with the Testcontainers guide.

Learn about Testcontainers best practices.

Learn about Spring Boot Application Testing and Development with Testcontainers

Subscribe to the Docker Newsletter.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Desktop 4.39: Smarter AI Agent, Docker Desktop CLI in GA, and Effortless Multi-Platform Builds

Developers need a fast, secure, and reliable way to build, share, and run applications — and Docker makes that easy. With the Docker Desktop 4.39 release, we’re excited to announce a few developer productivity enhancements, including the Docker AI Agent with Model Context Protocol (MCP) and Kubernetes support, general availability of the Docker Desktop CLI, and `--platform` flag support for more seamless multi-platform image management.

Docker AI Agent: Smarter, more capable, and now with MCP & Kubernetes

In our last release, we introduced the Docker AI Agent in beta as an AI-powered, context-aware assistant built into Docker Desktop and the CLI. It simplifies container management, troubleshooting, and workflows with guidance and automation. And the response has been incredible: a 9x increase in weekly active users. With each Docker Desktop release, we’re making Docker AI Agent smarter, more helpful, and more versatile across developer container workflows. And if you’re using Docker for GitHub Copilot, you’ll get these upgrades automatically — so you’re always working with the latest and greatest.

Docker AI Agent now supports Model Context Protocol (MCP) and Kubernetes, along with usability upgrades like multiline prompts and easy copying. The agent can now also interact with the Docker Engine to list and clean up containers, images, and volumes. Plus, with access to the Kubernetes cluster, Docker AI Agent can list namespaces, deploy and expose, for example, an Nginx service, and analyze pod logs. 

How Docker AI Agent Uses MCP

MCP is a new standard for connecting AI agents and models to external data and tools. It lets AI-powered apps and agents retrieve data and information from external sources, perform operations with third-party services, and interact with local filesystems, unlocking new and expanded capabilities. MCP works by introducing the concept of MCP clients and MCP servers: clients request resources, and servers handle those requests and perform the requested actions.

The Docker AI Agent acts as an MCP client and can interact with MCP servers running as containers. When you run the docker ai command in the terminal, or ask a question in the Docker Desktop AI Agent window, the agent looks for a gordon-mcp.yml file in the working directory listing the MCP servers to use in that context. For example, as a specialist in all things Docker, Docker AI Agent can:

Access the internet via the MCP fetch server.

Create a project on GitHub with the MCP GitHub server.
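
gordon-mcp.yml is a Compose-style file, so a minimal sketch enabling the two servers above could look like this (image names follow the mcp namespace on Docker Hub; the GitHub token variable is illustrative):

services:
  fetch:
    image: mcp/fetch
  github:
    image: mcp/github
    environment:
      - GITHUB_PERSONAL_ACCESS_TOKEN=${GITHUB_PERSONAL_ACCESS_TOKEN}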

To make MCP adoption easier and more secure, Docker has collaborated with Anthropic to build container images for the reference implementations of MCP servers, available on Docker Hub under the mcp namespace. Check out our docs for examples of using MCP with Docker AI Agent. 

Containerizing apps in multiple popular languages: More coming soon

Docker AI Agent is also more capable and can now support containerizing applications in new programming languages, including:

JavaScript/TypeScript applications using npm, pnpm, yarn, and bun

Go applications using Go modules

Python applications using pip, poetry, and uv

C# applications using NuGet

Try it out — just ask, “Can you containerize my application?” 

Once the agent runs through steps such as determining the number of services in the project, the language, the package manager, and other information relevant to containerization, it’ll generate Docker-related assets. You’ll have an optimized Dockerfile, a Docker Compose file, a .dockerignore file, and a README to jumpstart your application with Docker.

More language and package manager support will be available soon!

Figure 1: Docker AI Agent helps with containerizing your app and shows steps of its work

No need to write scripts, just ask Docker AI Agent

The Docker AI Agent also comes with built-in capabilities such as interfacing with containers, images, and volumes. Instead of writing scripts, you can simply ask in natural language to perform complex operations, such as combining various MCP servers to find and clean up unused images.
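
For instance, a single natural-language prompt (wording illustrative) can stand in for a cleanup script:

docker ai "Find my unused images and volumes and tell me how much disk space I could reclaim"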

Figure 2: Finding and optimizing unused images storage with a simple ask to Docker AI Agent

Docker Desktop CLI: Now in GA

With the Docker Desktop 4.37 release, we introduced the Docker Desktop CLI controller in Beta, a command-line tool to manage Docker Desktop. In addition to performing tasks like starting, stopping, restarting, and checking the status of Docker Desktop directly from the command line, developers can also print logs and update to the latest version of Docker Desktop. 

Docker meets developers where they work — whether in the CLI or GUI. With the Docker Desktop CLI, developers can seamlessly switch between GUI and command-line workflows, tailoring their workflows to their needs. 

This feature lets you automate Docker Desktop operations in CI/CD pipelines, expedite troubleshooting directly from the terminal, and create a smoother, distraction-free workflow. IT admins also benefit from this feature; for example, they can use these commands in automation scripts to manage updates.
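
As a sketch of what that looks like in a script (subcommands as documented for the Docker Desktop CLI; availability of some commands may vary by platform):

# Start Docker Desktop from a script or CI job
docker desktop start
# Confirm it is running before kicking off container workloads
docker desktop status
# Restart after a settings change
docker desktop restart
# Update Docker Desktop to the latest release
docker desktop update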

Improve multi-platform image management with the new --platform flag

Containerized applications often need to run across multiple architectures, making efficient platform-specific image management essential. To simplify this, we’ve introduced a --platform flag for docker save, docker load, and docker history. This addition lets developers explicitly select and manage images for specific architectures like linux/amd64, linux/arm64, and more.

The new --platform flag gives you full control over platform variants when saving or loading. For example, exporting only the linux/arm64 version of an image is now as simple as running:

docker save --platform linux/arm64 -o my-image.tar my-app:latest

Similarly, docker load --platform linux/amd64 ensures that only the amd64 variant is imported from a multi-architecture archive, reducing ambiguity and improving cross-platform workflows. For debugging and optimization, docker history --platform provides detailed insights into the build history of a specific architecture.
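
For instance (archive and tag names illustrative):

# Import only the amd64 variant from a multi-architecture archive
docker load --platform linux/amd64 -i my-image.tar

# Inspect the layer history of the arm64 variant alone
docker history --platform linux/arm64 my-app:latest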

These enhancements streamline multi-platform development by giving developers full control over how they build, store, and distribute images. 

Head over to our history, load, and save documentation to learn more! 

Wrapping up 

Docker Desktop 4.39 reinforces our commitment to streamlining the developer experience. With Docker AI Agent’s expanded support for MCP and Kubernetes, built-in capabilities for interacting with containers, and more, developers can simplify and customize their workflows. They can also seamlessly switch between the GUI and the command line while creating automations with the Docker Desktop CLI. Plus, with the new --platform flag, developers now have full control over how they build, store, and distribute images.

Less friction, more flexibility — we can’t wait to see what you build next!

Authenticate and update today to receive your subscription level’s newest Docker Desktop features.

Learn more

Subscribe to the Docker Navigator Newsletter.

Learn about our sign-in enforcement options.

New to Docker? Create an account. 

Have questions? The Docker community is here to help.


Docker Engine v28: Hardening Container Networking by Default

Docker simplifies containerization by removing runtime complexity and making app development seamless. With Docker Engine v28, we’re taking another step forward in security by ensuring containers aren’t unintentionally accessible from local networks. This update isn’t about fixing a single vulnerability — it’s about security hardening so your containers stay safe. 

What happened?

When you run a container on the default Docker “bridge” network, Docker sets up NAT (Network Address Translation) rules using your system’s firewall (via iptables). For example, the following command forwards traffic from port 8080 on your host to port 80 in the container. 

docker run -d -p 8080:80 my-web-app

However, if the FORWARD chain of your host’s filter table is permissive (i.e., its policy is ACCEPT) and net.ipv4.ip_forward is enabled, unpublished ports could also be remotely accessible under certain conditions.

This only affects hosts on the same physical/link-layer network as the Docker host. In multi-tenant LAN environments or other shared local networks, someone connected on an RFC1918 subnet (such as 192.168.x.x or 10.x.x.x) could reach unpublished container ports if they knew (or guessed) a container’s IP address.
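
A quick way to check whether a host matches this profile:

# Is packet forwarding enabled? (1 means yes)
sysctl net.ipv4.ip_forward

# Print the FORWARD chain policy; "-P FORWARD ACCEPT" is the permissive case
sudo iptables -S FORWARD | head -n 1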

Who’s affected?

This behavior only affects Linux users running Docker versions earlier than 28.0.0 with iptables. Docker Desktop is not affected.

If you installed Docker on a single machine and used our defaults, without manually customizing your firewall settings, you’re unlikely to be affected by upgrading. However, you could be impacted if:

You deliberately set your host’s FORWARD chain to ACCEPT and rely on accessing containers by their IP from other machines on the LAN.

You bound containers to 127.0.0.1 or another loopback interface but still route external traffic to them using advanced host networking tricks.

You require direct container access from other subnets, VLANs, or broadcast domains without explicitly publishing ports (i.e., no -p flags).

If any of these exceptions apply to you, you might notice that containers previously reachable by direct IP now appear blocked unless you opt out or reconfigure your networking.

What’s the impact?

This exposure would’ve required being on the same local network or otherwise having route-level access to the container’s RFC1918 IP range. It did not affect machines across the public internet. However, within LAN settings or corporate environments, a malicious user could discover unpublished container ports and connect to them.

For instance, they might add a custom route to the container’s subnet, using the following command:

ip route add 172.17.0.0/16 via 192.168.0.10

From there, if 192.168.0.10 is your Docker host with a permissive firewall, the attacker could send packets straight to 172.17.0.x (the container’s IP).

What are my next steps?

If you’ve been impacted, we recommend taking the following three steps:

1. Upgrade to Docker Engine 28.0

Docker Engine 28.0 now drops traffic to unpublished ports by default. 

This “secure by default” approach prevents containers from being unexpectedly accessible on the LAN. Most users won’t notice a difference: published ports will continue to function as usual (-p 8080:80), and unpublished ports remain private as intended.

2. Decide if you need the old behavior (opt-out)

If you connect to containers over the LAN without publishing ports, you have a couple of options:

Option 1: Disable Docker’s DROP policy. In /etc/docker/daemon.json, add the following:

{
  "ip-forward-no-drop": true
}

Or you can run Docker with the --ip-forward-no-drop flag. This preserves a globally open FORWARD chain, but keep in mind that Docker still drops traffic for unpublished ports using separate rules. See Option 2 for a fully unprotected alternative.

Option 2: Create a “nat-unprotected” network

docker network create -d bridge \
  -o com.docker.network.bridge.gateway_mode_ipv4=nat-unprotected \
  my_unprotected_net

3. Consider custom iptables management

Advanced users with complex network setups can manually configure iptables to allow exactly the traffic they need. But this route is only recommended if you’re comfortable managing firewall rules.

Technical details

In previous Docker Engine versions, Docker’s reliance on a permissive FORWARD chain meant that containers on the default bridge network could be reached if:

net.ipv4.ip_forward was enabled (often auto-enabled by Docker if it wasn’t already).

The system-wide FORWARD chain was set to ACCEPT.

Another machine on the same LAN (or an attached subnet) routed traffic to the container’s IP address.

In Docker 28.0, we now explicitly drop unsolicited inbound traffic to each container’s internal IP unless that port was explicitly published (-p or --publish). This doesn’t affect local connections from the Docker host itself, but it does block remote LAN connections to unpublished ports.

Attacks: a real-world example

Suppose you have two hosts on the same RFC1918 subnet:

Attacker at 10.0.0.2

DockerHost at 10.0.0.3

DockerHost runs a container with IP 172.17.0.2, and Docker’s firewall policy is effectively “ACCEPT.” In this situation, an attacker could run the following commands:

ip route add 172.17.0.2/32 via 10.0.0.3
nc 172.17.0.2 3306

If MySQL was listening on port 3306 in the container (unpublished), Docker might still forward that traffic. The container sees a connection from 10.0.0.2, and no authentication is enforced by Docker’s network stack alone. 

Mitigations in Docker Engine 28.0

By enforcing a drop rule for unpublished ports, Docker 28.0 now prevents the above scenario by default.

1. Default drop for unpublished ports

Ensures that local traffic to container IPs is discarded unless explicitly published.

2. Targeted adjustments to FORWARD policy

Originally, if Docker had to enable IP forwarding, it would set the host’s FORWARD policy to DROP. Now, Docker only applies that policy when necessary, and it extends the same logic to IPv6. You can opt out with --ip-forward-no-drop or in the config file.

3. Unpublished ports stay private

The container remains accessible from the host itself, but not from other devices on the LAN (unless you intentionally choose “nat-unprotected”).

Why we’re making this change now

Some users have raised concerns about local-network exposure for years (see issues like #14041 and #22054). But there’s been a lot of change since then:

Docker was simpler: Use cases often revolved around single-node setups where any extra exposure was mitigated by typical dev/test workflows.

Ecosystem evolution: Overlay networks, multi-host orchestration, and advanced routing became mainstream. Suddenly, scenarios once relegated to specialized setups became common.

Security expectations: Docker now underpins critical workloads in complex environments. Everyone benefits from safer defaults, even if it means adjusting older assumptions.

By introducing these changes in Docker Engine 28.0, we align with today’s best practices: don’t expose anything without explicit user intent. It’s a shift away from relying on users to configure their own firewalls if they want to lock things down.

Backward compatibility and the path forward

These changes are not backward compatible for users who rely on direct container IP access from a local LAN. For the majority, the new defaults lock down containers more securely without breaking typical docker run -p scenarios.

Still, we strongly advise upgrading to 28.0.1 or later so you can benefit from:

Safer defaults: Unpublished ports remain private unless you publish or explicitly opt out.

Clearer boundaries: Published vs. unpublished ports are now unambiguously enforced by iptables rules.

Easier management: Users can adopt the new defaults without becoming iptables experts.

Quick upgrade checklist

Before you upgrade, here’s a recommended checklist to run through to make sure you’re set up for success with the latest release:

Examine current firewall settings: Run iptables -L -n and ip6tables -L -n. If you see ACCEPT in the FORWARD chain, and you depend on that for multi-subnet or direct container access, plan accordingly.

Test in a controlled environment: Spin up Docker Engine 28.0 in a staging environment and attempt to reach containers that you previously accessed directly, as in the sketch after this list. Then, verify which connections still work and which are now blocked.

Decide on opt-outs: If you do need the old behavior, set "ip-forward-no-drop": true or use a "nat-unprotected" network. Otherwise, enjoy the heightened security defaults.

Monitor logs and metrics: Watch for unexpected connection errors or service downtime. If something breaks, check whether it’s caused by Docker’s new drop rules before rolling back any changes.
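
A minimal version of that staging test, assuming a throwaway container and a second machine on the same LAN (names and addresses illustrative):

# On the Docker host: run a container without publishing any ports
docker run -d --name probe nginx
docker inspect -f '{{.NetworkSettings.IPAddress}}' probe   # e.g., 172.17.0.2

# From another machine on the LAN, with a route to the bridge subnet:
nc -vz 172.17.0.2 80   # expect this to be blocked on Engine 28.0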

Conclusion

By hardening default networking rules in Docker Engine 28.0, we’re reducing accidental container exposure on local networks. Most users can continue without disruption and can just enjoy the extra peace of mind. But if you rely on older behaviors, you can opt out or create specialized networks that bypass these rules.

Ready to upgrade? Follow our official installation and upgrade instructions to get Docker Engine 28.0 on your system today.

We appreciate the feedback from the community and encourage you to reach out if you have questions or run into surprises after upgrading. You can find us on GitHub issues, our community forums, or your usual support channels.

Learn more

Review the Docker Engine v28 release notes

Read Docker’s iptables documentation

Issues that shaped these changes:

#14041

#22054

#48815


Revisiting Docker Hub Policies: Prioritizing Developer Experience

At Docker, we are committed to ensuring that Docker Hub remains the best place for developers, engineering teams, and operations teams to build, share, and collaborate. As part of this, we previously announced plans to introduce image pull consumption fees and storage-based billing. After further evaluating how developers use Docker Hub and what will best support the ecosystem, we have refined our approach—one that prioritizes developer experience and enables developers to scale with confidence while reinforcing Docker Hub as the foundation of the cloud-native ecosystem.

What’s Changing?

We’re making important updates to our previously announced pull limits and storage policies to ensure Docker Hub remains a valuable resource for developers:

No More Pull Count Limits or Consumption Charges – We’re cancelling pull consumption charges entirely. Our focus is on making Docker Hub the best place for developers to build, share, and collaborate—ensuring teams can scale with confidence.

Unlimited Pull rates for Paid Users (As Announced Earlier) – Starting April 1, 2025, all paid Docker subscribers will have unlimited image pulls (with fair use limits) to ensure a seamless experience.

Updated Pull Rate Limits for Free & Unauthenticated Users – To ensure a reliable and seamless experience for all users, we are updating authenticated and free pull limits:

Unauthenticated users: Limited to 10 pulls per hour (as announced previously)

Free authenticated users: Increased to 100 pulls per hour (up from 40 pulls / hour)

System accounts & automation: As previously shared, automated systems and service accounts can easily authenticate using Personal Access Tokens (PATs) or Organizational Access Tokens (OATs), ensuring access to higher pull limits and a more reliable experience for automated authenticated pulls (see the sketch after this list).

Storage Charges Delayed Indefinitely – Previously, we announced plans to introduce storage-based billing, but we have decided to indefinitely delay any storage charges. Instead, we are focusing on delivering new tools that will allow users to actively manage their storage usage. Once these tools are available, we will assess storage policies in the best interest of our users. If and when storage charges are introduced, we will provide a six-month notice, ensuring teams have ample time to adjust.
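
For automated systems, authenticating is a one-liner; a minimal CI sketch, assuming the PAT is stored in a DOCKER_PAT environment variable or secret (names illustrative):

# Log in non-interactively so subsequent pulls count against the higher authenticated limits
echo "$DOCKER_PAT" | docker login -u your-docker-username --password-stdin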

Why This Matters

The Best Place to Build and Share – Docker Hub remains the world’s leading container registry, trusted by over 20 million developers and organizations. We’re committed to keeping it the best place to distribute and consume software.

Growing the Ecosystem – We’re making these changes to support more developers, teams, and businesses as they scale, reinforcing Docker Hub as the foundation of the cloud-native world.

Investing in the Future – Our focus is on delivering more capabilities that help developers move faster, from better storage management to strengthening security to better protect the software supply chain.

Committed to Developers – Every decision we make is about strengthening the platform and enabling developers to build, share, and innovate without unnecessary barriers.

We appreciate your feedback, and we’re excited to keep evolving Docker Hub to meet the needs of developers and teams worldwide. Stay tuned for more updates, and as always—happy building! 

Powered by Docker: Streamlining Engineering Operations as a Platform Engineer

Powered by Docker is a series of blog posts featuring use cases and success stories from Docker partners and practitioners. This story was contributed by Neal Patel from Siimpl.io. Neal has more than ten years of experience developing software and is a Docker Captain.

Background

As a platform engineer at a mid-size startup, I’m responsible for identifying bottlenecks and developing solutions to streamline engineering operations to keep up with the velocity and scale of the engineering organization. In this post, I outline some of the challenges we faced with one of our clients, how we addressed them, and provide guides on how to tackle these challenges at your company.

One of our clients faced critical engineering challenges, including poor synchronization between development and CI/CD environments, slow incident response due to inadequate rollback mechanisms, and fragmented telemetry tools that delayed issue resolution. Siimpl implemented strategic solutions to enhance development efficiency, improve system reliability, and streamline observability, turning obstacles into opportunities for growth.

Let’s walk through the primary challenges we encountered.

Inefficient development and deployment

Problem: We lacked parity between developer tooling and CI/CD tooling, which made it difficult for engineers to test changes confidently.

Goal: We needed to ensure consistent environments across development, testing, and production.

Unreliable incident response

Problem: If a rollback was necessary, we did not have the proper infrastructure to accomplish this efficiently.

Goal: We wanted to revert to stable versions in case of deployment issues easily.

Lack of comprehensive telemetry

Problem: Our SRE team created tooling to simplify collecting and publishing telemetry, but distribution and upgradability were poor. Also, we found adoption to be extremely low.

Goal: We needed to standardize how we configure telemetry collection, and simplify the configuration of auto-instrumentation libraries so the developer experience is turnkey.

Solution: Efficient development and deployment

CI/CD configuration with self-hosted GitHub runners and Docker Buildx

We had a requirement for multi-architecture support (arm64/amd64), which we initially implemented in CI/CD with Docker Buildx and QEMU. However, we noticed an extreme dip in performance due to the emulated architecture build times.

We were able to reduce build times by almost 90% by ditching QEMU (emulated builds), and targeting arm64 and amd64 self-hosted runners. This gave us the advantage of blazing-fast native architecture builds, but still allowed us to support multi-arch by publishing the manifest after-the-fact. 
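
The after-the-fact manifest step is a single Buildx command; a sketch, with registry and tag names illustrative:

# Each runner pushes a single-arch tag; this stitches them into one multi-arch manifest
docker buildx imagetools create \
  -t registry.example.com/my-app:latest \
  registry.example.com/my-app:latest-amd64 \
  registry.example.com/my-app:latest-arm64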

Here’s a working example of the solution we will walk through: https://github.com/siimpl/multi-architecture-cicd

If you’d like to deploy this yourself, there’s a guide in the README.md.

Prerequisites

This project uses the following tools:

Docker Build Cloud (included in all paid Docker subscriptions)

DBC cloud driver

GitHub/GitHub Actions

A managed container orchestration service like Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), or Google Kubernetes Engine (GKE)

Terraform

Helm

Because this project uses industry-standard tooling like Terraform, Kubernetes, and Helm, it can be easily adapted to any CI/CD or cloud solution you need.

Key features

The secret sauce of this solution is provisioning the self-hosted runners in a way that allows our CI/CD to specify which architecture to execute the build on.

The first step is to provision two node pools, an amd64 node pool and an arm64 node pool, which can be found in aks.tf. In this example, node_count is fixed at 1 for both node pools, but for better scalability and flexibility you can also enable autoscaling for a dynamic pool.

resource "azurerm_kubernetes_cluster_node_pool" "amd64" {
  name                  = "amd64pool"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.cicd.id
  vm_size               = "Standard_DS2_v2" # amd64 (x86_64) instance
  node_count            = 1
  os_type               = "Linux"
  tags = {
    environment = "dev"
  }
}

resource "azurerm_kubernetes_cluster_node_pool" "arm64" {
  name                  = "arm64pool"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.cicd.id
  vm_size               = "Standard_D4ps_v5" # ARM-based instance
  node_count            = 1
  os_type               = "Linux"
  tags = {
    environment = "dev"
  }
}

Next, we need to update the self-hosted runners’ values.yaml to have a configurable nodeSelector. This will allow us to deploy one runner scale set to the arm64pool and one to the amd64pool.
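
A sketch of that values.yaml override, assuming the runner scale-set chart exposes a standard pod template (key names may differ between chart versions):

# values for the arm64 runner scale set
template:
  spec:
    nodeSelector:
      kubernetes.io/arch: arm64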

Once the Terraform resources are successfully created, the runners should be registered to the organization or repository you specified in the GitHub config URL. We can now update the REGISTRY values for the emulated-build and the native-build.

After creating a pull request with those changes, navigate to the Actions tab to witness the results.

You should see two jobs kick off: one using the emulated build path with QEMU, and the other using the self-hosted runners for native node builds. Depending on cache hits or the Dockerfile being built, the performance improvement can be up to 90%. Even beyond this substantial improvement, utilizing Docker Build Cloud can improve performance by 95%. More importantly, you can reap the benefits during development builds! Take a look at the docker-build-cloud.yml workflow for more details. All you need is a Docker Build Cloud subscription and a cloud driver to take advantage of the improved pipeline.

Getting Started

1. Generate GitHub PAT

2. Update the variables.tf

3. Initialize the Azure CLI

4. Deploy Cluster

5. Create a PR to validate pipelines

README.md for reference

Reliable Incident Response

Leveraging SemVer Tagged Containers for Easy Rollback

Recognizing that deployment issues can arise unexpectedly, we needed a mechanism to quickly and reliably roll back production deployments. Below is an example workflow for properly rolling back a deployment based on the tagging strategy we implemented above.

Rollback Process:

In case of a problematic build, deployment was rolled back to a previous stable version using the tagged images.

AWS CLI commands were used to point the ECS service at the task definition revision carrying the desired image tag:

on:
  workflow_call:
    inputs:
      image-version:
        required: true
        type: string

jobs:
  rollback:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Rollback to previous version
        run: |
          # aws ecs update-service has no --image flag; the image is pinned in the
          # task definition. So we register a revision pointing at the rollback
          # tag, then update the service to pick it up (standard ECS pattern;
          # assumes JSON CLI output and AWS credentials already configured).
          aws ecs describe-task-definition --task-definition my-task \
            --query taskDefinition > td.json
          jq --arg img "${{ secrets.REGISTRY }}/myapp:${{ inputs.image-version }}" \
            '.containerDefinitions[0].image = $img
             | del(.taskDefinitionArn, .revision, .status, .requiresAttributes,
                   .compatibilities, .registeredAt, .registeredBy)' td.json > td-rollback.json
          aws ecs register-task-definition --cli-input-json file://td-rollback.json
          aws ecs update-service --cluster my-cluster --service my-service \
            --task-definition my-task --force-new-deployment

Comprehensive Telemetry

Configuring Sidecar Containers in ECS for Aggregating/Publishing Telemetry Data (OTEL)

As we adopted OpenTelemetry to standardize observability, we quickly realized that adoption was one of the toughest hurdles. As a team, we decided to bake as much configuration as possible into the infrastructure (Terraform modules) so that we could easily distribute and maintain observability instrumentation.

Sidecar Container Setup:

Sidecar containers were defined in the ECS task definitions to run OpenTelemetry collectors.

The collectors were configured to aggregate and publish telemetry data from the application containers.

Task Definition Example:

{
  "containerDefinitions": [
    {
      "name": "myapp",
      "image": "myapp:1.0.0",
      "essential": true,
      "portMappings": [{ "containerPort": 8080 }]
    },
    {
      "name": "otel-collector",
      "image": "otel/opentelemetry-collector:latest",
      "essential": false,
      "portMappings": [{ "containerPort": 4317 }],
      "environment": [
        { "name": "OTEL_RESOURCE_ATTRIBUTES", "value": "service.name=myapp" }
      ]
    }
  ],
  "family": "my-task"
}

Configuring Multi-Stage Dockerfiles for OpenTelemetry Auto-Instrumentation Libraries (Node.js)

At the application level, configuring auto-instrumentation posed a challenge, since most applications varied in their build process. By leveraging multi-stage Dockerfiles, we were able to standardize the way we initialized the auto-instrumentation libraries across microservices. We were primarily a Node.js shop, so below is an example Dockerfile for that.

Multi-Stage Dockerfile:

The Dockerfile is divided into stages to separate the build environment from the final runtime environment, ensuring a clean and efficient image.

OpenTelemetry libraries are installed in the build stage and copied to the runtime stage:

# Stage 1: Build stage
FROM node:20 AS build
WORKDIR /app
COPY package.json package-lock.json ./
# package.json defines the OTel libs (e.g., @opentelemetry/node, @opentelemetry/tracing)
RUN npm install
COPY . .
RUN npm run build

# Stage 2: Runtime stage
FROM node:20
WORKDIR /app
COPY --from=build /app /app
CMD ["node", "dist/index.js"]

Results

By addressing these challenges, we were able to reduce build times by ~90%, which alone dropped our DORA metrics for lead time for changes and time to restore service by ~50%. With the rollback strategy and telemetry changes, we were able to reduce our mean time to detect (MTTD) and mean time to resolve (MTTR) by ~30%. We believe these could reach 50-60% with tuning of alerts and the addition of runbooks (automated and manual).

Enhanced Development Efficiency: Consistent environments across development, testing, and production stages sped up the development process, and the native-architecture solution delivered roughly 90% faster build times.

Reliable Rollbacks: Quick and efficient rollbacks minimized downtime and maintained system integrity.

Comprehensive Telemetry: Sidecar containers enabled detailed monitoring of system health and security without impacting application performance, and telemetry collection was baked right into the infrastructure developers were deploying. Auto-instrumentation of application code was simplified drastically with the adoption of our standardized Dockerfiles.

Siimpl: Transforming Enterprises with Cloud-First Solutions

With Docker at the core, Siimpl.io’s solutions demonstrate how teams can build faster, more reliable, and scalable systems. Whether you’re optimizing CI/CD pipelines, enhancing telemetry, or ensuring secure rollbacks, Docker provides the foundation for success. Try Docker today to unlock new levels of developer productivity and operational efficiency.

Learn more from our website or contact us at solutions@siimpl.io

Introducing the Beta Launch of Docker’s AI Agent, Transforming Development Experiences

For years, Docker has been an essential partner for developers, empowering everyone from small startups to the world’s largest enterprises. Today, AI is transforming organizations across industries, creating opportunities for those who embrace it to gain a competitive edge. Yet, for many teams, the question of where to start and how to effectively integrate AI into daily workflows remains a challenge. True to its developer-first philosophy, Docker is here to bridge that gap.

We’re thrilled to introduce the beta launch of Docker AI Agent (also known as Project: Gordon), an embedded, context-aware assistant seamlessly integrated into the Docker suite. Available within Docker Desktop and the CLI, this innovative agent delivers tailored guidance for tasks like building and running containers, authoring Dockerfiles, and Docker-specific troubleshooting, eliminating disruptive context-switching. By addressing challenges precisely when and where developers encounter them, Docker AI Agent ensures a smoother, more productive workflow.

As the AI Agent evolves, enterprise teams will unlock even greater capabilities, including customizable features that streamline collaboration, enhance security, and help developers work smarter. With the Docker AI Agent, we’re making Docker even easier and more effective to use than it has ever been — AI accessible, actionable, and indispensable for developers everywhere.

How Docker’s AI Agent Simplifies Development Challenges  

Developing in today’s fast-paced tech landscape is increasingly complex, with developers having to learn an ever-growing number of tools, libraries, and technologies.

By integrating a GenAI Agent into Docker’s ecosystem, we aim to provide developers with a powerful assistant that can help them navigate these complexities. 

The Docker AI Agent helps developers accelerate their work, providing real-time assistance, actionable suggestions, and automations that remove many of the manual tasks associated with containerized application development. Delivering the most helpful, expert-level guidance on Docker-related questions and technologies, Gordon serves as a powerful support system for developers, meeting them exactly where they are in their workflow. 

If you’re a developer who favors graphical interfaces, the Docker Desktop AI UI will help you navigate container runtime issues, image size management, and more general Dockerfile-oriented questions. If you’re a command-line user, you can call the agent and share context with it directly in your favorite terminal.

So what can Docker’s AI Agent do today? 

We’re delivering an expert assistant for every Docker-related concept and technology, whether it’s getting started, optimizing an existing Dockerfile or Compose file, or understanding Docker technologies in general. With Docker AI Agent, you also have the ability to delegate actions while maintaining full control and review over the process.

As a first example, if you want to run a container from an image, our agent can suggest the most appropriate docker run command tailored to your needs. This eliminates the guesswork and the need to search Docker Hub, saving you time and effort. The result combines a custom prompt, live data from Docker Hub, Docker container expertise, and private usage insights unique to Docker, Inc.
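
As a sketch of that interaction, using the docker ai terminal command (prompt wording illustrative):

docker ai "Suggest a docker run command for postgres with a persistent volume and port 5432 published"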

We’ve intentionally designed the output to be concise and actionable, avoiding the overwhelming verbosity often associated with AI-generated commands. We also provide sources for most of the AI agent recommendations, pointing directly to our documentation website. Our goal is to continuously refine this experience, ensuring that Docker’s AI Agent always provides the best possible command based on your specific local context.

Beside helping you run containers, the Docker AI Agent can today:

Explain, rate, and optimize Dockerfiles, leveraging the latest version of Docker.

Help you run containers in an effective, concise way, leveraging local context (checking for ports already in use or existing volumes).

Answer any Docker-related question using the latest version of our documentation across the whole tool suite, covering all Docker tools and technologies.

Containerize a software project, helping you run your software in containers.

Help with Docker-related GitHub Actions.

Suggest fixes when a container fails to start in Docker Desktop.

Provide contextual help for containers, images, and volumes.

Augment its answers with per-directory MCP servers (see docs).

For the Node.js expert: in the screenshot above, the AI recommends Node 20.12, which is not the latest version but the one it found in the project’s package.json.

With every future version of Docker Desktop, and thanks to the feedback you provide, the agent will be able to do much more.

How can you try Docker AI Agent? 

This first beta release of Docker AI Agent is now progressively available for all signed-in users*. By default, the Docker AI Agent is disabled; here’s how to enable it and get started:

Install or update to the latest release of Docker Desktop 4.38

Enable Docker AI in Docker Desktop under Settings → Features in Development

For the best experience, ensure the Docker terminal is enabled by going to Settings → General

Apply Changes 

* If you’re a business subscriber, your administrator needs to enable the Docker AI Agent for the organization first. This can be done through Settings Management. If this is your case, feel free to contact us through support for further information.

Docker Agent’s Vision for 2025

By 2025, we aim to expand the agent’s capabilities with features like customizing your experience with more context from your registry, enhanced GitHub Copilot integrations, and deeper presence across the development tools you already use. With regular updates and your feedback, Docker AI Agent is being built to become an indispensable part of your development process.

For now, this beta is the start of an exciting evolution in how we approach developer productivity. Stay tuned for more updates as we continue to shape a smarter, more streamlined way to build, secure, and ship applications. We want to hear from you: if you have feedback or want more information, you can contact us.

Learn more

Learn more about Docker’s AI Agent in our docs.
