Introducing the Beta Launch of Docker’s AI Agent, Transforming Development Experiences

For years, Docker has been an essential partner for developers, empowering everyone from small startups to the world’s largest enterprises. Today, AI is transforming organizations across industries, creating opportunities for those who embrace it to gain a competitive edge. Yet, for many teams, the question of where to start and how to effectively integrate AI into daily workflows remains a challenge. True to its developer-first philosophy, Docker is here to bridge that gap.

We’re thrilled to introduce the beta launch of Docker AI Agent (also known as Project: Gordon)—an embedded, context-aware assistant seamlessly integrated into the Docker suite. Available within Docker Desktop and CLI, this innovative agent delivers tailored guidance for tasks like building and running containers, authoring Dockerfiles and Docker-specific troubleshooting—eliminating disruptive context-switching. By addressing challenges precisely when and where developers encounter them, Docker AI Agent ensures a smoother, more productive workflow.

As the AI Agent evolves, enterprise teams will unlock even greater capabilities, including customizable features that streamline collaboration, enhance security, and help developers work smarter. With the Docker AI Agent, we’re making Docker even easier and more effective to use than ever before, while making AI accessible, actionable, and indispensable for developers everywhere.

How Docker’s AI Agent Simplifies Development Challenges  

Developing in today’s fast-paced tech landscape is increasingly complex, with developers having to learn an ever-growing number of tools, libraries, and technologies.

By integrating a GenAI Agent into Docker’s ecosystem, we aim to provide developers with a powerful assistant that can help them navigate these complexities. 

The Docker AI Agent helps developers accelerate their work, providing real-time assistance, actionable suggestions, and automations that remove many of the manual tasks associated with containerized application development. Delivering the most helpful, expert-level guidance on Docker-related questions and technologies, Gordon serves as a powerful support system for developers, meeting them exactly where they are in their workflow. 

If you’re a developer who favors graphical interfaces, the Docker Desktop AI UI will help you work through container runtime issues, image size management, and more general Dockerfile-oriented questions. If you prefer the command line, you can invoke the agent and share context with it directly in your favorite terminal.
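
For example, from the terminal you might put a question to the agent directly. This is a minimal sketch that assumes the beta exposes the agent as a docker ai subcommand (check docker --help in your installation to confirm):

# Ask the agent a question from your shell (hypothetical prompt text)
docker ai "Why is my container exiting immediately after it starts?"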

So what can Docker’s AI Agent do today? 

We’re delivering an expert assistant for every Docker-related concept and technology, whether it’s getting started, optimizing an existing Dockerfile or Compose file, or understanding Docker technologies in general. With Docker AI Agent, you also have the ability to delegate actions while maintaining full control and review over the process.

As a first example, if you want to run a container from an image, the agent can suggest the most appropriate docker run command tailored to your needs. This eliminates guesswork and the need to search Docker Hub, saving you time and effort. The result combines a custom prompt, live data from Docker Hub, Docker container expertise, and private usage insights unique to Docker Inc.
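
For illustration only (this is not actual agent output), a suggested command for a typical web image might look something like the following, with the host port and bind mount chosen to avoid clashing with what is already running locally:

# Hypothetical suggestion for running an nginx-based image on a free host port
docker run -d --name web -p 8081:80 -v "$(pwd)/site:/usr/share/nginx/html:ro" nginx:latest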

We’ve intentionally designed the output to be concise and actionable, avoiding the overwhelming verbosity often associated with AI-generated commands. We also provide sources for most of the AI agent recommendations, pointing directly to our documentation website. Our goal is to continuously refine this experience, ensuring that Docker’s AI Agent always provides the best possible command based on your specific local context.

Besides helping you run containers, the Docker AI Agent can currently:

Explain, rate, and optimize Dockerfiles, leveraging the latest version of Docker.

Help you run containers in an effective, concise way, leveraging local context (for example, checking which ports are already in use or which volumes exist).

Answer any Docker-related question using the latest version of our documentation for the whole tool suite, covering all Docker tools and technologies.

Containerize a software project, helping you run your software in containers.

Help with Docker-related GitHub Actions.

Suggest fixes when a container fails to start in Docker Desktop.

Provide contextual help for containers, images, and volumes.

Augment its answers with per-directory MCP servers (see the documentation).

For the Node.js experts: in the screenshot above, the AI recommends Node 20.12, which is not the latest version but the one it found in the project’s package.json.

With every future version of Docker Desktop, and thanks to the feedback you provide, the agent will be able to do much more.

How can you try Docker AI Agent? 

This first beta release of Docker AI Agent is now progressively rolling out to all signed-in users*. The Docker AI Agent is disabled by default; here’s how to enable it and get started:

Install or update to the latest release of Docker Desktop 4.38

Enable Docker AI in Docker Desktop under Settings → Features in Development

For the best experience, ensure the Docker terminal is enabled by going to Settings → General

Apply Changes 

* If you’re a Business subscriber, your administrator needs to enable the Docker AI Agent for the organization first. This can be done through Settings Management. If this is your case, feel free to contact us through support for further information.

Docker Agent’s Vision for 2025

In 2025, we aim to expand the agent’s capabilities with features like customizing your experience with more context from your registry, enhanced GitHub Copilot integrations, and deeper presence across the development tools you already use. With regular updates and your feedback, Docker AI Agent is being built to become an indispensable part of your development process.

For now, this beta is the start of an exciting evolution in how we approach developer productivity. Stay tuned for more updates as we continue to shape a smarter, more streamlined way to build, secure, and ship applications. We want to hear from you: if you have feedback or want more information, you can contact us.

Learn more

Learn more about Docker’s AI Agent in our docs.

Source: https://blog.docker.com/feed/

Docker Desktop 4.38: New AI Agent, Multi-Node Kubernetes, and Bake in GA

At Docker, we’re committed to simplifying the developer experience and empowering enterprises to scale securely and efficiently. With the Docker Desktop 4.38 release, teams can look forward to improved developer productivity and enterprise governance. 

We’re excited to announce the General Availability of Bake, a powerful feature for optimizing build performance and multi-node Kubernetes testing to help teams “shift left.” We’re also expanding availability for several enterprise features designed to boost operational efficiency. And last but not least, Docker AI Agent (formerly Project: Agent Gordon) is now in Beta, delivering intelligent, real-time Docker-related suggestions across Docker CLI, Desktop, and Hub. It’s here to help developers navigate Docker concepts, fix errors, and boost productivity.

Docker’s AI Agent boosts developer productivity  

We’re thrilled to introduce Docker AI Agent (also known as Project: Agent Gordon) — an embedded, context-aware assistant seamlessly integrated into the Docker suite. Available within Docker Desktop and CLI, this innovative agent delivers real-time, tailored guidance for tasks like container management and Docker-specific troubleshooting — eliminating disruptive context-switching. Docker AI agent can be used for every Docker-related concept and technology, whether you’re getting started, optimizing an existing Dockerfile or Compose file, or understanding Docker technologies in general. By addressing challenges precisely when and where developers encounter them, Docker AI Agent ensures a smoother, more productive workflow. 

The first iteration of Docker’s AI Agent is now available in Beta for all signed-in users. The agent is disabled by default, so user activation is required. Read more about Docker’s New AI Agent and how to use it to accelerate developer velocity here. 

Figure 1: Asking questions to Docker AI Agent in Docker Desktop

Simplify build configurations and boost performance with Docker Bake

Docker Bake is an orchestration tool that simplifies and speeds up Docker builds. After launching as an experimental feature, we’re thrilled to make it generally available with exciting new enhancements.

While Dockerfiles are great for defining build steps, teams often juggle docker build commands with various options and arguments — a tedious and error-prone process. Bake changes the game by introducing a declarative file format that consolidates all options and image dependencies (also known as targets) in one place. No more passing flags to every build command! Plus, Bake’s ability to parallelize and deduplicate work ensures faster and more efficient builds.
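
As a rough sketch of the difference (the target names and flags here are illustrative, not taken from a specific project), a multi-image build can go from several flag-heavy commands to a single invocation:

# Before: one flag-laden command per image
docker build -t myorg/api:latest --build-arg GO_VERSION=1.22 -f api/Dockerfile .
docker build -t myorg/web:latest --build-arg NODE_ENV=production -f web/Dockerfile .

# After: the options and targets live in docker-bake.hcl (or a Compose file),
# and one command builds them all in parallel
docker buildx bake api web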

Key benefits of Docker Bake

Simplicity: Abstract complex build configurations into one simple command.

Flexibility: Write build configurations in a declarative syntax, with support for custom functions, matrices, and more.

Consistency: Share and maintain build configurations effortlessly across your team.

Performance: Bake parallelizes multi-image workflows, enabling faster and more efficient builds.

Developers can simplify multi-service builds by integrating Bake directly into their Compose files — Bake supports Compose files natively. It enables easy, efficient building of multiple images from a single repository with shared configurations. Plus, it works seamlessly with Docker Build Cloud locally and in CI. With Bake-optimized builds as the foundation, developers can achieve more efficient Docker Build Cloud performance and faster builds.

Learn more about streamlining build configurations, boosting performance, and improving team workflows with Bake in our announcement blog. 

Shift Left with Multi-Node Kubernetes testing in Docker Desktop

In today’s complex production environments, “shifting left” is more essential than ever. By addressing concerns earlier in the development cycle, teams reduce costs and simplify fixes, leading to more efficient workflows and better outcomes. That’s why we continue to bring new features and enhancements to integrate feedback directly into the developer’s inner loop.

Docker Desktop now includes Multi-Node Kubernetes integration, enabling easier and more extensive testing directly on developers’ machines. While single-node clusters allow for quick verification of app deployments, they fall short when it comes to testing resilience and handling the complex, unpredictable issues of distributed systems. To tackle this, we’re updating our Kubernetes distribution with kind — a lightweight, fast, and user-friendly solution for local testing and multi-node cluster simulation.

Figure 2: Selecting Kubernetes version and cluster number for testing
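
Once a multi-node cluster is enabled in Docker Desktop, you can check it from the terminal with standard kubectl commands and start exercising failure scenarios (the node name below is a placeholder):

# List the control-plane and worker nodes of the local cluster
kubectl get nodes -o wide

# Simulate losing a worker by cordoning and draining it
kubectl cordon <worker-node-name>
kubectl drain <worker-node-name> --ignore-daemonsets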

Key Benefits:

Multi-node cluster support: Replicate a more realistic production environment to test critical features like node affinity, failover, and networking configurations.

Multiple Kubernetes versions: Easily test across different Kubernetes versions, which is a must for validating migration paths.

Up-to-date maintenance: Since kind is an actively maintained open-source project, developers can update to the latest version on demand without waiting for the next Docker Desktop release.

Head over to our documentation to discover how to use multi-node Kubernetes clusters for local testing and simulation.

General availability of administration features for Docker Business subscription

With the Docker Desktop 4.36 release, we introduced Beta enterprise admin tools to streamline administration, improve security, and enhance operational efficiency. And the feedback from our Early Access Program customers has been overwhelmingly positive.

For instance, enforcing sign-in with macOS configuration files and across multiple organizations makes deployment easier and more flexible for large enterprises. Also, the PKG installer simplifies managing large-scale Docker Desktop deployments on macOS by eliminating the need to convert DMG files into PKG first.

Today, the features below are available to all Docker Business customers.

Enforce sign-in with macOS configuration profiles 

Enforce sign-in for more than one organization at a time 

Deploy Docker Desktop for Mac in bulk with the PKG installer 

Looking ahead, Docker is dedicated to continuing to expand enterprise administration capabilities. Stay tuned for more announcements!

Wrapping up 

Docker Desktop 4.38 reinforces our commitment to simplifying the developer experience while equipping enterprises with robust tools. 

With Bake now in GA, developers can streamline complex build configurations into a single command. The new Docker AI Agent offers real-time, on-demand guidance within their preferred Docker tools. Plus, with Multi-node Kubernetes testing in Docker Desktop, they can replicate realistic production environments and address issues earlier in the development cycle. Finally, we made a few new admin tools available to all our Business customers, simplifying deployment, management, and monitoring. 

We look forward to how these innovations accelerate your workflows and supercharge your operations! 

Learn more

Authenticate and update to receive your subscription level’s newest Docker Desktop features.

Subscribe to the Docker Navigator Newsletter.

Learn about our sign-in enforcement options.

New to Docker? Create an account. 

Have questions? The Docker community is here to help.

Source: https://blog.docker.com/feed/

Docker Bake is Now Generally Available in Docker Desktop 4.38!

We’re excited to announce the General Availability of Docker Bake with Docker Desktop 4.38! This powerful build orchestration tool takes the hassle out of managing complex builds and offers simplicity, flexibility, and performance for teams of all sizes.

What is Docker Bake?

Docker Bake is an orchestration tool that streamlines Docker builds, similar to how Compose simplifies managing runtime environments. With Bake, you can define build stages and deployment environments in a declarative file, making complex builds easier to manage. It also leverages BuildKit’s parallelization and optimization features to speed up build times.

While Dockerfiles are excellent for defining image build steps, teams often need to build multiple images and execute helper tasks like testing, linting, and code generation. Traditionally, this meant juggling numerous docker build commands with their own options and arguments – a tedious and error-prone process.

Bake changes the game by introducing a declarative file format that encapsulates all options and image dependencies, referred to as targets. Additionally, Bake’s ability to parallelize and deduplicate work ensures faster and more efficient builds.

Why should you use Bake?

Challenges with complex Docker Build configuration:

Managing long, complex build commands filled with countless flags and environment variables.

Tedious workflows for building multiple images.

Difficulty declaring builds for specific targets or environments.

Requiring a script or third-party tool to make things manageable.

Docker Bake tackles these challenges by providing a better way to manage complex builds through a simple, declarative approach.

Key benefits of Docker Bake

Simplicity: Replace complex chains of Docker build commands and scripts with a single docker buildx bake command while maintaining clear, version-controlled configuration files that are easy to understand and modify.

Flexibility: Express sophisticated build logic through HCL syntax and matrix builds, enabling dynamic configurations that adapt to different environments and requirements while supporting custom functions for advanced use cases.

Consistency: Maintain standardized build configurations across teams and environments through version-controlled files and inheritance patterns, eliminating environment-specific build issues and reducing configuration drift.

Performance: Automatically parallelize independent builds and eliminate redundant operations through context deduplication and intelligent caching, dramatically reducing build times for complex multi-image workflows.

Figure 1: One simple Docker buildx bake command to replace all the flags and environment variables.

Use cases for Docker Bake

1. Monorepo and Image Bakery

Docker Bake can help developers efficiently manage and build multiple related Docker images from a single source repository. Plus, they can leverage shared configurations and automated dependency handling to enforce organizational standards.

Development Efficiency: Teams can maintain consistent build logic across dozens or hundreds of microservices in a single repository, reducing configuration drift and maintenance overhead.

Resource Optimization: Shared base images and contexts are automatically deduplicated, dramatically reducing build times and storage costs.

Standardization: Enforce organizational standards through inherited configurations, ensuring all services follow security, tagging, and testing requirements.

Change Management: A single source of truth for build configurations makes it easier to implement organization-wide changes like base image updates or security patches.

2. Compose users

Docker Bake provides seamless compatibility with existing docker-compose.yml files, allowing direct use of your current configurations. Existing Compose users are able to get started using Bake with minimal effort.
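
As a small sketch (the service name here is made up), pointing Bake at an existing Compose file is enough to build the services it defines:

# Build all services defined in the Compose file with Bake
docker buildx bake -f docker-compose.yml

# Or build a single service from that file
docker buildx bake -f docker-compose.yml frontend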

Gradual Adoption: Teams can incrementally adopt advanced build features while still leveraging their existing compose workflows and knowledge.

Development Consistency: Use the same configuration for both local development (via compose) and production builds (via Bake), eliminating “works on my machine” issues.

Enhanced Capabilities: Access powerful features like matrix builds and HCL expressions while maintaining compatibility with familiar compose syntax.

CI/CD Integration: Seamlessly integrate with existing CI/CD pipelines that already understand compose files while adding Bake’s advanced build capabilities.

3. Complex build configurations

Developers can use targets, groups, variables, functions, matrix targets, and many other tools in Bake to simplify their build configurations across projects and teams.

Cross-Platform Compatibility: Matrix builds enable teams to efficiently manage builds across multiple architectures, OS versions, and dependency combinations from a single configuration.

Dynamic Adaptation: HCL expressions allow builds to adapt to different environments, git branches, or CI variables without maintaining multiple configurations.

Build Optimization: Custom functions enable sophisticated logic for things like version calculation, tag generation, and conditional builds based on git history.

Quality Control: Variable validation and inheritance ensure consistent configuration across complex build scenarios, reducing errors and maintenance burden.

Scale Management: Groups and targets help organize large-scale build systems with dozens or hundreds of permutations, making them manageable and maintainable.

4. Docker Build Cloud

With Bake-optimized builds as the foundation, developers can achieve more efficient Docker Build Cloud performance and faster builds.

Enhanced Docker Build Cloud Performance: Instantly parallelize matrix builds across cloud infrastructure, turning hour-long build pipelines into minutes without managing build infrastructure.

Resource Optimization: Leverage Build Cloud’s distributed caching and deduplication to dramatically reduce bandwidth usage and build times, which is especially valuable for remote teams.

Cost Management: Save costs with Docker Build Cloud; Bake’s precise target definitions mean you only consume cloud resources for exactly what needs to be built.

Developer Experience: Teams can run complex multi-architecture builds without powerful local machines, enabling development from any device while maintaining build performance.

CI/CD Enhancement: Offload resource-intensive builds from CI runners to Build Cloud, reducing CI costs and queue times while improving reliability.

What’s New in Bake for GA?

Docker Bake has been an experimental feature for several years, allowing us to refine and improve it based on user feedback. So, there is already a strong set of ingredients that users love, such as targets and groups, variables, HCL Expression Support, inheritance capabilities, matrix targets, and additional contexts. With this GA release, Bake is now ready for production use, and we’ve added several enhancements to make it more efficient, secure, and easier to use:

Deduplicated Context Transfers: Significantly speeds up build pipelines by eliminating redundant file transfers when multiple targets share the same build context.

Entitlements: Enhances security and resource management by providing fine-grained control over what capabilities and resources builders can access during the build process.

Composable Attributes: Simplifies configuration management by allowing teams to define reusable attribute sets that can be mixed, matched, and overridden across different targets.

Variable Validation: Prevents wasted time and resources by catching configuration errors before the actual build process begins.

Deduplicate context transfers

When you build targets concurrently using groups, build contexts are loaded independently for each target. If the same context is used by multiple targets in a group, that context is transferred once for each time it’s used. This can significantly impact build time, depending on your build configuration.

Previously, the workaround required users to define a named context that loads the context files and then have each target reference that named context. With this release, Bake handles this automatically.

Bake now automatically deduplicates context transfers for targets that share the same context: when you build targets concurrently using groups, a shared context is transferred only once instead of once per target that uses it. This more efficient approach leads to much faster build times.

Read more about how to speed up your build time in our docs. 

Entitlements

Bake now includes entitlements to control access to privileged operations, aligning with docker build. This prevents unintended side effects and security risks. If Bake detects a potential issue — like a privileged access request or an attempt to access files outside the current directory — the build will fail unless explicitly allowed.

For consistency, the Bake command now supports the --allow=ENTITLEMENT flag to grant access to additional entitlements. The following entitlements are currently supported for Bake.

Build equivalents

--allow network.host — Allows execution with host networking.

--allow security.insecure — Allows execution without a sandbox (i.e., --privileged).

File system: Grant filesystem access for builds that need to access files outside the working directory. This impacts context, output, cache-from, cache-to, dockerfile, and secret.

--allow fs=<path|*> — Grant read and write access to files outside the working directory.

--allow fs.read=<path|*> — Grant read access to files outside the working directory.

--allow fs.write=<path|*> — Grant write access to files outside the working directory.

ssh

--allow ssh — Allows exposing the SSH agent.
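
For example, a build that needs to read a file outside the working directory could be granted just that entitlement on the command line (the path and target name here are placeholders):

# Allow read-only access to a directory outside the build's working directory
docker buildx bake --allow fs.read=/etc/shared-certs app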

Composable attributes

Several attributes previously had to be defined in CSV (e.g. type=provenance,mode=min). These were challenging to read and couldn’t be easily overridden. The following can now be defined as structured objects:

target "app" {
  attest = [
    { type = "provenance", mode = "max" },
    { type = "sbom", disabled = true }
  ]

  cache-from = [
    { type = "registry", ref = "user/app:cache" },
    { type = "local", src = "path/to/cache" }
  ]

  cache-to = [
    { type = "local", dest = "path/to/cache" },
  ]

  output = [
    { type = "oci", dest = "../out.tar" },
    { type = "local", dest = "../out" }
  ]

  secret = [
    { id = "mysecret", src = "path/to/secret" },
    { id = "mysecret2", env = "TOKEN" },
  ]

  ssh = [
    { id = "default" },
    { id = "key", paths = ["path/to/key"] },
  ]
}

As such, the attributes are now composable. Teams can mix, match, and override attributes across different targets, which simplifies configuration management.

target "app-dev" {
  attest = [
    { type = "provenance", mode = "min" },
    { type = "sbom", disabled = true }
  ]
}

target "app-prod" {
  inherits = ["app-dev"]

  attest = [
    { type = "provenance", mode = "max" },
  ]
}

Variable validation

Bake now supports validation for variables, similar to Terraform, to help developers catch and resolve configuration errors early. The GA release of Bake supports the following use cases.

Basic validation

To verify that the value of a variable conforms to an expected type, value range, or other condition, you can define custom validation rules using the validation block.

variable "FOO" {
  validation {
    condition = FOO != ""
    error_message = "FOO is required."
  }
}

target "default" {
  args = {
    FOO = FOO
  }
}

Multiple validations

To evaluate more than one condition, define multiple validation blocks for the variable. All conditions must be true.

variable "FOO" {
  validation {
    condition = FOO != ""
    error_message = "FOO is required."
  }
  validation {
    condition = strlen(FOO) > 4
    error_message = "FOO must be longer than 4 characters."
  }
}

target "default" {
  args = {
    FOO = FOO
  }
}

Dependency on other variables

You can reference other Bake variables in your condition expression, enabling validations that enforce dependencies between variables. This ensures that dependent variables are set correctly before proceeding.

variable "FOO" {}
variable "BAR" {
  validation {
    condition = FOO != ""
    error_message = "BAR requires FOO to be set."
  }
}

target "default" {
  args = {
    BAR = BAR
  }
}

New Bake options

In addition to updating the Bake configuration, we’ve added a new --list option. Previously, if you were unfamiliar with a project or wanted a reminder of the supported targets and variables, you would have to read through the file. Now, the --list option lets you quickly query them. It also supports a JSON format option if you need programmatic access.

List target

Quickly get a list of the targets available in your Bake configuration.

docker buildx bake --list targets

docker buildx bake --list type=targets,format=json

List variables

Get a list of variables available for your Bake configuration.

docker buildx bake --list variables

docker buildx bake --list type=variables,format=json

These improvements build on a powerful feature set, ensuring Bake is both reliable and future-ready.

Get started with Docker Bake

Ready to simplify your builds? Update to Docker Desktop 4.38 today and start using Bake. With its declarative syntax and advanced features, Docker Bake is here to help you build faster, more efficiently, and with less effort.

Explore the documentation to learn how to create your first Bake file and experience the benefits of streamlined builds firsthand.

Let’s bake something amazing together!

Source: https://blog.docker.com/feed/

How Docker Streamlines the Onboarding Process and Sets Up Developers for Success

Nearly half (45%) of developers say they don’t have enough time for learning and development, according to a developer experience research study by Harness and Wakefield Research. Additionally, developer onboarding is a slow and painful process, with 71% of executive buyers saying that onboarding new developers takes at least two months. 

To accelerate innovation and bring products to market faster, organizations must empower developers with robust support and intuitive guardrails, enabling them to succeed within a structured yet flexible environment. That’s where Docker fits in: We help developers onboard quickly and help organizations set up the right guardrails to give developers the flexibility to innovate within the boundaries of company policies. 

Setting up developer teams for success 

Docker is recognized as one of the most used, desired, and admired developer tools, making it an essential component of any development team’s toolkit. For developers who are new to Docker, you can quickly get them up and running with Docker’s integrated development workflows, verified secure content, and accessible learning resources and community support.

Streamlined developer onboarding

When new developers join a team, Docker Desktop can significantly reduce the time and effort required to set up their development environments. Docker Desktop integrates seamlessly with popular IDEs, such as Visual Studio Code, allowing developers to containerize directly within familiar tools, accelerating learning within their usual workflows. Docker Extensions expand Docker Desktop’s capabilities and establish new functionalities, integrating developers’ favorite development tools into their application development and deployment workflows. 

Developers can also use Docker for GitHub Copilot for seamless onboarding with assistance for containerizing applications, generating Docker assets, and analyzing project vulnerabilities. In fact, the Docker extension is a top choice among developers in GitHub Copilot’s extension leaderboard, as highlighted by Visual Studio Magazine.

Docker Build Cloud integrates with Docker Compose and CI workflows, making it a seamless transition for dev teams. Verified content on Docker Hub gives developers preconfigured, trusted images, reducing setup time and ensuring a secure foundation as they onboard onto projects. 

Docker Scout provides actionable insights and recommendations, allowing developers to enhance their container security awareness, scan for vulnerabilities, and improve security posture with real-time feedback. And, Testcontainers Cloud lets developers run reliable integration tests, with real dependencies defined in code. With these tools, developers can be confident about delivering high-quality and reliable apps and experiences in production.  

Continuous learning with accessible knowledge resources

Continuous learning is a priority for Docker, with a wide range of accessible resources and tools designed to help developers deepen their knowledge and stay current in their containerization journey.

Docker Docs offers beginner-friendly guides, tutorials, and AI tools to guide developers through foundational concepts, empowering them to quickly build their container skills. Our collection of guides takes developers step by step to learn how Docker can optimize development workflows and how to use it with specific languages, frameworks, or technologies.

Docker Hub’s AI Catalog empowers developers to discover, pull, and integrate AI models into their workflows, bridging the gap between innovation and implementation. 

Docker also offers regular webinars and tech talks that help developers stay updated on new features and best practices and provide a platform to discuss real-world challenges. If you’re a Docker Business customer, you can even request additional, customized training from our Docker experts. 

Docker’s partnerships with educational platforms and organizations, such as Udemy Training and LinkedIn Learning, ensure developers have access to comprehensive training — from beginner tutorials to advanced containerization topics.

Docker’s global developer community

One of Docker’s greatest strengths is its thriving global developer community, offering organizations a unique advantage by connecting them with a wealth of shared expertise, resources, and real-world solutions.

With more than 20 million monthly active users, Docker’s community forums and events foster vibrant collaboration, giving developers access to a collective knowledge base that spans industries and expertise levels. Developers can ask questions, solve challenges, and gain insights from a diverse range of peers — from beginners to seasoned experts. Whether you’re troubleshooting an issue or exploring best practices, the Docker community ensures you’re never working in isolation.

A key pillar of this ecosystem is the Docker Captains program — a network of experienced and passionate Docker advocates who are leaders in their fields. Captains share technical knowledge through blog posts, videos, webinars, and workshops, giving businesses and teams access to curated expertise that accelerates onboarding and productivity.

Beyond forums and the Docker Captains program, Docker’s community-driven events, such as meetups and virtual workshops (Figure 1), provide developers with direct access to real-world use cases, innovative workflows, and emerging trends. These interactions foster continuous learning and help developers and their organizations keep pace with the ever-evolving software development landscape.

Figure 1: Docker DevTools Day 1.0 Meetup in Singapore.

For businesses, tapping into Docker’s extensive community means access to a vast pool of knowledge, support, and inspiration, which is a critical asset in driving developer productivity and innovation.

Empowering developers with enhanced user management and security

In previous articles, we looked at how Docker simplifies complexity and boosts developer productivity (the right tool) and how to unlock efficiency with Docker for AI and cloud-native development (the right process).

To scale and standardize app development processes across the entire company, you also need to have the right guardrails in place for governance, compliance, and security, which are often handled through enterprise controls and admin management tools. Ideally, organizations provide guardrails without being overly prescriptive or slowing developer productivity and innovation.

Modern enterprises require a layered security approach, beginning with trusted content as the foundation for building robust and compliant applications. This approach gives your dev teams a good foundation for building securely from the start. 

Throughout the software development process, you need a secure platform. For regulated industries like finance and public sectors, this means fortified dev environments. Security vulnerability analysis and policy evaluation tools also help inform improvements and remediation. 

Additionally, you need enterprise controls and dashboards that ensure enterprise IT and security teams can confidently monitor and manage risk. 

Setting up the right guardrails 

Docker provides a number of admin tools to safeguard your software with integrated container security in the Docker Business plan. Our goal is to improve security and compliance of developer environments with minimal impact on developer experience or productivity. 

Centralized settings for improved dev environments security 

Docker provides developer teams with access to a vast library of trusted and certified application content, including Docker Official Images, Docker Verified Publisher, and Docker Trusted Open Source content. Coupled with advanced image and registry management rules — with tools like Image Access Management and Registry Access Management — you can ensure that your developers only use software that satisfies your company’s security policies. 

With a solid foundation to build securely from the start, your organization can further enhance security throughout the software development process. Docker ensures software supply chain integrity through vulnerability scanning and image analysis with Docker Scout. Rapid remediation capabilities paired with detailed CVE reporting help developers quickly find and fix vulnerabilities, resulting in speedy time to resolution.

Although containers are generally secure, container development tools still must be properly secured to reduce the risk of security breaches in the developer’s environment. Hardened Docker Desktop is an example of Docker’s fortified development environments with enhanced container isolation. It lets you enforce strict security settings and prevent developers and their containers from bypassing these controls. With air-gapped containers, you can further restrict containers from accessing network resources, limiting where data can be uploaded to or downloaded from.

Continuous monitoring and managing risks

With the Admin Console and Docker Desktop Insights, IT administrators and security teams can visualize and understand how Docker is used within their organizations and manage the implementation of organizational configurations and policies (Figure 2). 

These insights help teams streamline processes and improve efficiency. For example, you can enforce sign-in for developers who don’t sign in to an account associated with your organization. This step ensures that developers receive the benefits of your Docker subscription and work within the boundaries of the company policies. 

Figure 2: Docker Desktop Insights Dashboard provides information on product usage.

For business and engineering leaders, full visibility and governance over the development process help ensure compliance and mitigate risk while driving developer productivity. 

Unlock innovation with Docker’s development suite

Docker is the leading suite of tools purpose-built for cloud-native development, combining a best-in-class developer experience with enterprise-grade security and governance. With Docker, your organization can streamline onboarding, foster innovation, and maintain robust compliance — all while empowering your teams to deliver impactful solutions to market faster and more securely. 

Explore the Docker Business plan today and unlock the full potential of your development processes.

Learn more

Subscribe to the Docker Newsletter. 

Get the latest release of Docker Desktop.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Protecting the Software Supply Chain: The Art of Continuous Improvement

Without continuous improvement in software security, you’re not standing still — you’re walking backward into oncoming traffic. Attack vectors multiply, evolve, and look for the weakest link in your software supply chain daily. 

Cybersecurity Ventures forecasts that the global cost of software supply chain attacks will reach nearly $138 billion by 2031, up from $60 billion in 2025 and $46 billion in 2023. A single overlooked vulnerability isn’t just a flaw; it’s an open invitation for compromise, potentially threatening your entire system. The cost of a breach doesn’t stop with your software — it extends to your reputation and customer trust, which are far harder to rebuild. 

Docker’s suite of products offers your team peace of mind. With tools like Docker Scout, you can expose vulnerabilities before they expose you. Continuous image analysis doesn’t just find the cracks; it empowers your teams to seal them from code to production. But Docker Scout is just the beginning. Tools like Docker Hub’s trusted content, Docker Official Images (DOI), Image Access Management (IAM), and Hardened Docker Desktop work together to secure every stage of your software supply chain. 

In this post, we’ll explore how these tools provide built-in security, governance, and visibility, helping your team innovate faster while staying protected. 

Securing the supply chain

Your software supply chain isn’t just an automated sequence of tools and processes. It’s a promise — to your customers, team, and future. Promises are fragile. The cracks can start to show with every dependency, third-party integration, and production push. Tools like Image Access Management help protect your supply chain by providing granular control over who can pull, share, or modify images, ensuring only trusted team members access sensitive assets. Meanwhile, Hardened Docker Desktop ensures developers work in a secure, tamper-proof environment, giving your team confidence that development is aligned with enterprise security standards. The solution isn’t to slow down or second-guess; it’s to continuously improve on securing your software supply chain, such as automated vulnerability scans and trusted content from Docker Hub.

A breach is more than a line item in the budget. Customers ask themselves, “If they couldn’t protect this, what else can’t they protect?” Downtime halts innovation as fines for compliance failures and engineering efforts re-route to forensic security analysis. The brand you spent years perfecting could be reduced to a cautionary tale. Regardless of how innovative your product is, it’s not trusted if it’s not secure. 

Organizations must stay prepared by regularly updating their security measures and embracing new technologies to outpace evolving threats. As highlighted in the article Rising Tide of Software Supply Chain Attacks: An Urgent Problem, software supply chain attacks are increasingly targeting critical points in development workflows, such as third-party dependencies and build environments. High-profile incidents like the SolarWinds attack have demonstrated how adversaries exploit trust relationships and weaknesses in widely used components to cause widespread damage. 

Preventing security problems from the start

Preventing attacks like the SolarWinds breach requires prioritizing code integrity and adopting secure software development practices. Tools like Docker Scout seamlessly integrate security into developers’ workflows, enabling proactive identification of vulnerabilities in dependencies and ensuring that trusted components form the backbone of your applications.

Docker Hub’s trusted content and Docker Scout’s policy evaluation features help ensure that your organization uses compliant and secure images. Docker Official Images (DOI) provide a robust foundation for deployments, mitigating risks from untrusted components. To extend this security foundation, Image Access Management allows teams to enforce image-sharing policies and restrict access to sensitive components, preventing accidental exposure or misuse. For local development, Hardened Docker Desktop ensures that developers operate in a secure, enterprise-grade environment, minimizing risks from the outset. This combination of tools enables your engineering team to put out fires and, more importantly, prevent them from starting in the first place.

Building guardrails

Governance isn’t a roadblock; it’s the blueprint for progress. The problem is that some companies treat security like a fire extinguisher — something you grab when things go wrong. That is not a viable strategy in the long run. Real innovation happens when security guardrails are so well-designed that they feel like open highways, empowering teams to move fast without compromising safety. 

A structured policy lifecycle loop — mapping connections, planning changes, deploying cleanly, and retiring the dead weight — turns governance into your competitive edge. Automate it, and you’re not just checking boxes; you’re giving your teams the freedom to move fast and trust the road ahead. 

Continuous improvement on security policy management doesn’t have to feel like a bureaucratic chokehold. Docker provides a streamlined workflow to secure your software supply chain effectively. Docker Scout integrates seamlessly into your development lifecycle, delivering vulnerability scans, image analysis, and detailed reports and recommendations to help teams address issues before code reaches production. 
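
A quick way to see this in practice is the Docker Scout CLI, which is bundled with recent Docker Desktop releases. A minimal sketch (the image name is just an example):

# High-level health overview of an image
docker scout quickview myorg/myapp:latest

# Detailed CVE listing for the same image
docker scout cves myorg/myapp:latest

# Suggested base image updates and remediation options
docker scout recommendations myorg/myapp:latest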

With the introduction of Docker Health Scores — a security grading system for container images — teams gain a clear and actionable snapshot of their image security posture. These scores empower developers to prioritize remediation efforts and continuously improve their software’s security from code to production.

Keeping up with continuous improvement

Security threats aren’t slowing down. New attack vectors and vulnerabilities grow every day. With cybercrime costs expected to rise from $9.22 trillion in 2024 to $13.82 trillion by 2028, organizations face a critical choice: adapt to this evolving threat landscape or risk falling behind, exposing themselves to escalating costs and reputational damage. Continuous improvement in software security isn’t a luxury. Building and maintaining trust with your customers is essential so they know that every fresh deployment is better than the one that came before. Otherwise, expect high costs due to imminent software supply chain attacks. 

Best practices for securing the software supply chain involve integrating vulnerability scans early in the development lifecycle, leveraging verified content from trusted sources, and implementing governance policies to ensure consistent compliance standards without manual intervention. Continuous monitoring of vulnerabilities and enforcing runtime policies help maintain security at scale, adapting to the dynamic nature of modern software ecosystems.

Start today

Securing your software supply chain is a journey of continuous improvement. With Docker’s tools, you can empower your teams to build and deploy software securely, ensuring vulnerabilities are addressed before they become liabilities.

Don’t wait until vulnerabilities turn into liabilities. Explore Docker Hub, Docker Scout, Hardened Docker Desktop, and Image Access Management to embed security into every stage of development. From granular control over image access to tamper-proof local environments, Docker’s suite of tools helps safeguard your innovation, protect your reputation, and empower your organization to thrive in a dynamic ecosystem.

Learn more

Docker Scout: Integrates seamlessly into your development lifecycle, delivering vulnerability scans, image analysis, and actionable recommendations to address issues before they reach production.

Docker Health Scores: A security grading system for container images, offering teams clear insights into their image security posture.

Docker Hub: Access trusted, verified content, including Docker Official Images (DOI), to build secure and compliant software applications.

Docker Official Images (DOI): A curated set of high-quality images that provide a secure foundation for your containerized applications.

Image Access Management (IAM): Enforce image-sharing policies and restrict access to sensitive components, ensuring only trusted team members access critical assets.

Hardened Docker Desktop: A tamper-proof, enterprise-grade development environment that aligns with security standards to minimize risks from local development.

Source: https://blog.docker.com/feed/

Mastering Docker and Jenkins: Build Robust CI/CD Pipelines Efficiently

Hey there, fellow engineers and tech enthusiasts! I’m excited to share one of my favorite strategies for modern software delivery: combining Docker and Jenkins to power up your CI/CD pipelines. 

Throughout my career as a Senior DevOps Engineer and Docker Captain, I’ve found that these two tools can drastically streamline releases, reduce environment-related headaches, and give teams the confidence they need to ship faster.

In this post, I’ll walk you through what Docker and Jenkins are, why they pair perfectly, and how you can build and maintain efficient pipelines. My goal is to help you feel right at home when automating your workflows. Let’s dive in.

Brief overview of continuous integration and continuous delivery

Continuous integration (CI) and continuous delivery (CD) are key pillars of modern development. If you’re new to these concepts, here’s a quick rundown:

Continuous integration (CI): Developers frequently commit their code to a shared repository, triggering automated builds and tests. This practice prevents conflicts and ensures defects are caught early.

Continuous delivery (CD): With CI in place, organizations can then confidently automate releases. That means shorter release cycles, fewer surprises, and the ability to roll back changes quickly if needed.

Leveraging CI/CD can dramatically improve your team’s velocity and quality. Once you experience the benefits of dependable, streamlined pipelines, there’s no going back.

Why combine Docker and Jenkins for CI/CD?

Docker allows you to containerize your applications, creating consistent environments across development, testing, and production. Jenkins, on the other hand, helps you automate tasks such as building, testing, and deploying your code. I like to think of Jenkins as the tireless “assembly line worker,” while Docker provides identical “containers” to ensure consistency throughout your project’s life cycle.

Here’s why blending these tools is so powerful:

Consistent environments: Docker containers guarantee uniformity from a developer’s laptop all the way to production. This consistency reduces errors and eliminates the dreaded “works on my machine” excuse.

Speedy deployments and rollbacks: Docker images are lightweight. You can ship or revert changes at the drop of a hat — perfect for short delivery process cycles where minimal downtime is crucial.

Scalability: Need to run 1,000 tests in parallel or support multiple teams working on microservices? No problem. Spin up multiple Docker containers whenever you need more build agents, and let Jenkins orchestrate everything with Jenkins pipelines.

For a DevOps junkie like me, this synergy between Jenkins and Docker is a dream come true.

Setting up your CI/CD pipeline with Docker and Jenkins

Before you roll up your sleeves, let’s cover the essentials you’ll need:

Docker Desktop (or a Docker server environment) installed and running. You can get Docker for various operating systems.

Jenkins downloaded from Docker Hub or installed on your machine; see the example run command after this list. These days, you’ll want jenkins/jenkins:lts (the long-term support image) rather than the deprecated library/jenkins image.

Proper permissions for Docker commands and the ability to manage Docker images on your system.

A GitHub or similar code repository where you can store your Jenkins pipeline configuration (optional, but recommended).
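
On the Jenkins prerequisite above, a common way to run the controller for local experimentation is to launch the LTS image in a container. This is a minimal sketch; adjust the ports and volume name to your environment:

# Run the Jenkins LTS controller with a persistent home volume
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts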

Pro tip: If you’re planning a production setup, consider a container orchestration platform like Kubernetes. This approach simplifies scaling Jenkins, updating Jenkins, and managing additional Docker servers for heavier workloads.

Building a robust CI/CD pipeline with Docker and Jenkins

After prepping your environment, it’s time to create your first Jenkins-Docker pipeline. Below, I’ll walk you through common steps for a typical pipeline — feel free to modify them to fit your stack.

1. Install necessary Jenkins plugins

Jenkins offers countless plugins, so let’s start with a few that make configuring Jenkins with Docker easier:

Docker Pipeline Plugin

Docker

CloudBees Docker Build and Publish

How to install plugins:

Open Manage Jenkins > Manage Plugins in Jenkins.

Click the Available tab and search for the plugins listed above.

Install them (and restart Jenkins if needed).

Code example (plugin installation via CLI):

# Install plugins using Jenkins CLI
java -jar jenkins-cli.jar -s http://<jenkins-server>:8080/ install-plugin docker-pipeline
java -jar jenkins-cli.jar -s http://<jenkins-server>:8080/ install-plugin docker
java -jar jenkins-cli.jar -s http://<jenkins-server>:8080/ install-plugin docker-build-publish

Pro tip (advanced approach): If you’re aiming for a fully infrastructure-as-code setup, consider using Jenkins configuration as code (JCasC). With JCasC, you can declare all your Jenkins settings — including plugins, credentials, and pipeline definitions — in a YAML file. This means your entire Jenkins configuration is version-controlled and reproducible, making it effortless to spin up fresh Jenkins instances or apply consistent settings across multiple environments. It’s especially handy for large teams looking to manage Jenkins at scale.

Reference:

Jenkins Plugin Installation Guide.

Jenkins Configuration as Code Plugin.

2. Set up your Jenkins pipeline

In this step, you’ll define your pipeline. A Jenkins “pipeline” job uses a Jenkinsfile (stored in your code repository) to specify the steps, stages, and environment requirements.

Example Jenkinsfile:

pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                git branch: 'main', url: 'https://github.com/your-org/your-repo.git'
            }
        }
        stage('Build') {
            steps {
                script {
                    dockerImage = docker.build("your-org/your-app:${env.BUILD_NUMBER}")
                }
            }
        }
        stage('Test') {
            steps {
                // Double quotes so Groovy interpolates env.BUILD_NUMBER
                sh "docker run --rm your-org/your-app:${env.BUILD_NUMBER} ./run-tests.sh"
            }
        }
        stage('Push') {
            steps {
                script {
                    docker.withRegistry('https://index.docker.io/v1/', 'dockerhub-credentials') {
                        dockerImage.push()
                    }
                }
            }
        }
    }
}

Let’s look at what’s happening here:

Checkout: Pulls your repository.

Build: Builds a Docker image (your-org/your-app) tagged with the build number.

Test: Runs your test suite inside a fresh container, ensuring Docker containers create consistent environments for every test run.

Push: Pushes the image to your Docker registry (e.g., Docker Hub) if the tests pass.

Reference: Jenkins Pipeline Documentation.

3. Configure Jenkins for automated builds

Now that your pipeline is set up, you’ll want Jenkins to run it automatically:

Webhook triggers: Configure your source control (e.g., GitHub) to send a webhook whenever code is pushed. Jenkins will kick off a build immediately.

Poll SCM: Jenkins periodically checks your repo for new commits and starts a build if it detects changes.

Which trigger method should you choose?

Webhook triggers are ideal if you want near real-time builds. As soon as you push to your repo, Jenkins is notified, and a new build starts almost instantly. This approach is typically more efficient, as Jenkins doesn’t have to continuously check your repository for updates. However, it requires that your source control system and network environment support webhooks.

Poll SCM is useful if your environment can’t support incoming webhooks — for example, if you’re behind a corporate firewall or your repository isn’t configured for outbound hooks. In that case, Jenkins routinely checks for new commits on a schedule you define (e.g., every five minutes), which can add a small delay and extra overhead but may simplify setup in locked-down environments.

Personal experience: I love webhook triggers because they keep everything as close to real-time as possible. Polling works fine if webhooks aren’t feasible, but you’ll see a slight delay between code pushes and build starts. It can also generate extra network traffic if your polling interval is too frequent.

4. Build, test, and deploy with Docker containers

Here comes the fun part — automating the entire cycle from build to deploy:

Build Docker image: After pulling the code, Jenkins calls docker.build to create a new image.

Run tests: Automated unit or acceptance tests run inside a container spun up from that image, ensuring consistency.

Push to registry: Assuming tests pass, Jenkins pushes the tagged image to your Docker registry — this could be Docker Hub or a private registry.

Deploy: Optionally, Jenkins can then deploy the image to a remote server or a container orchestrator (Kubernetes, etc.).

This streamlined approach ensures every step — build, test, deploy — lives in one cohesive pipeline, preventing those “where’d that step go?” mysteries.

5. Optimize and maintain your pipeline

Once your pipeline is up and running, here are a few maintenance tips and enhancements to keep everything running smoothly:

Clean up images: Routine cleanup of Docker images can reclaim space and reduce clutter; see the example commands after this list.

Security updates: Stay on top of updates for Docker, Jenkins, and any plugins. Applying patches promptly helps protect your CI/CD environment from vulnerabilities.

Resource monitoring: Ensure Jenkins nodes have enough memory, CPU, and disk space for builds. Overworked nodes can slow down your pipeline and cause intermittent failures.
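
For the image cleanup mentioned above, a couple of standard commands go a long way. A sketch; tune the filters to your retention needs:

# Remove dangling images older than a week
docker image prune -f --filter "until=168h"

# Also reclaim space from stopped containers, unused networks, and build cache
docker system prune -f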

Pro tip: In large projects, consider separating your build agents from your Jenkins controller by running them in ephemeral Docker containers (also known as Jenkins agents). If an agent goes down or becomes stale, you can quickly spin up a fresh one — ensuring a clean, consistent environment for every build and reducing the load on your main Jenkins server.

Why use Declarative Pipelines for CI/CD?

Although Jenkins supports multiple pipeline syntaxes, Declarative Pipelines stand out for their clarity and resource-friendly design. Here’s why:

Simplified, opinionated syntax: Everything is wrapped in a single pipeline { … } block, which minimizes “scripting sprawl.” It’s perfect for teams who want a quick path to best practices without diving deeply into Groovy specifics.

Easier resource allocation: By specifying an agent at either the pipeline level or within each stage, you can offload heavyweight tasks (builds, tests) onto separate worker nodes or Docker containers. This approach helps prevent your main Jenkins controller from becoming overloaded.

Parallelization and matrix builds: If you need to run multiple test suites or support various OS/browser combinations, Declarative Pipelines make it straightforward to define parallel stages or set up a matrix build. This tactic is incredibly handy for microservices or large test suites requiring different environments in parallel.

Built-in “escape hatch”: Need advanced Groovy features? Just drop into a script block. This lets you access Scripted Pipeline capabilities for niche cases, while still enjoying Declarative’s streamlined structure most of the time.

Cleaner parameterization: Want to let users pick which tests to run or which Docker image to use? The parameters directive makes your pipeline more flexible. A single Jenkinsfile can handle multiple scenarios — like unit vs. integration testing — without duplicating stages.

Declarative Pipeline examples

Below are sample pipelines to illustrate how declarative syntax can simplify resource allocation and keep your Jenkins controller healthy.

Example 1: Basic Declarative Pipeline

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building…'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing…'
            }
        }
    }
}

Runs on any available Jenkins agent (worker).

Uses two stages in a simple sequence.

Example 2: Stage-level agents for resource isolation

pipeline {
    agent none // Avoid using a global agent at the pipeline level
    stages {
        stage('Build') {
            agent { docker 'maven:3.9.3-eclipse-temurin-17' }
            steps {
                sh 'mvn clean package'
            }
        }
        stage('Test') {
            agent { docker 'openjdk:17-jdk' }
            steps {
                sh 'java -jar target/my-app-tests.jar'
            }
        }
    }
}

Each stage runs in its own container, preventing any single node from being overwhelmed.

agent none at the top ensures no global agent is allocated unnecessarily.

Example 3: Parallelizing test stages

pipeline {
    agent none
    stages {
        stage('Test') {
            parallel {
                stage('Unit Tests') {
                    agent { label 'linux-node' }
                    steps {
                        sh './run-unit-tests.sh'
                    }
                }
                stage('Integration Tests') {
                    agent { label 'linux-node' }
                    steps {
                        sh './run-integration-tests.sh'
                    }
                }
            }
        }
    }
}

Splits tests into two parallel stages.

Each stage can run on a different node or container, speeding up feedback loops.

Example 4: Parameterized pipeline

pipeline {
    agent any

    parameters {
        choice(name: 'TEST_TYPE', choices: ['unit', 'integration', 'all'], description: 'Which test suite to run?')
    }

    stages {
        stage('Build') {
            steps {
                echo 'Building…'
            }
        }
        stage('Test') {
            when {
                expression { return params.TEST_TYPE == 'unit' || params.TEST_TYPE == 'all' }
            }
            steps {
                echo 'Running unit tests…'
            }
        }
        stage('Integration') {
            when {
                expression { return params.TEST_TYPE == 'integration' || params.TEST_TYPE == 'all' }
            }
            steps {
                echo 'Running integration tests…'
            }
        }
    }
}

Lets you choose which tests to run (unit, integration, or both).

Only executes relevant stages based on the chosen parameter, saving resources.

Example 5: Matrix builds

pipeline {
    agent none

    stages {
        stage('Build and Test Matrix') {
            matrix {
                agent {
                    label "${PLATFORM}-docker"
                }
                axes {
                    axis {
                        name 'PLATFORM'
                        values 'linux', 'windows'
                    }
                    axis {
                        name 'BROWSER'
                        values 'chrome', 'firefox'
                    }
                }
                stages {
                    stage('Build') {
                        steps {
                            echo "Build on ${PLATFORM} with ${BROWSER}"
                        }
                    }
                    stage('Test') {
                        steps {
                            echo "Test on ${PLATFORM} with ${BROWSER}"
                        }
                    }
                }
            }
        }
    }
}

Defines a matrix of PLATFORM x BROWSER, running each combination in parallel.

Perfect for testing multiple OS/browser combinations without duplicating pipeline logic.

Additional resources:

Jenkins Pipeline Syntax: Official reference for sections, directives, and advanced features like matrix, parallel, and post conditions.

Jenkins Pipeline Steps Reference: Comprehensive list of steps you can call in your Jenkinsfile.

Jenkins Configuration as Code Plugin (JCasC): Ideal for version-controlling your Jenkins configuration, including plugin installations and credentials.

Using Declarative Pipelines helps ensure your CI/CD setup is easier to maintain, scalable, and secure. By properly configuring agents — whether Docker-based or label-based — you can spread workloads across multiple worker nodes, minimize resource contention, and keep your Jenkins controller humming along happily.

Best practices for CI/CD with Docker and Jenkins

Ready to supercharge your setup? Here are a few tried-and-true habits I’ve cultivated:

Leverage Docker’s layer caching: Optimize your Dockerfiles so stable (less frequently changing) layers appear early. This drastically reduces build times.

Run tests in parallel: Jenkins can run multiple containers for different services or microservices, letting you test them side by side. Declarative Pipelines make it easy to define parallel stages, each on its own agent.

Shift left on security: Integrate security checks early in the pipeline. Tools like Docker Scout let you scan images for vulnerabilities, while Jenkins plugins can enforce compliance policies. Don’t wait until production to discover issues (see the example stage after this list).

Optimize resource allocation: Properly configure CPU and memory limits for Jenkins and Docker containers to avoid resource hogging. If you’re scaling Jenkins, distribute builds across multiple worker nodes or ephemeral agents for maximum efficiency.

Configuration management: Store Jenkins jobs, pipeline definitions, and plugin configurations in source control. Tools like Jenkins Configuration as Code simplify versioning and replicating your setup across multiple Docker servers.
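
To make the shift-left idea concrete, here's a sketch of a dedicated scan stage using Docker Scout. The image name and stage layout are placeholders, and it's worth double-checking the docker scout cves flags against the CLI version you have installed:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t my-app:latest .'
            }
        }
        stage('Scan') {
            steps {
                // Fail the build when critical or high-severity CVEs are found
                sh 'docker scout cves my-app:latest --exit-code --only-severity critical,high'
            }
        }
    }
}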

With these strategies — plus a healthy dose of Declarative Pipelines — you’ll have a lean, high-octane CI/CD pipeline that’s easier to maintain and evolve.

Troubleshooting Docker and Jenkins Pipelines

Even the best systems hit a snag now and then. Here are a few hurdles I’ve seen (and conquered):

Handling environment variability: Keep Docker and Jenkins versions synced across different nodes. If multiple Jenkins nodes are in play, standardize Docker versions to avoid random build failures.

Troubleshooting build failures: Use docker logs -f <container-id> to see exactly what happened inside a container. Often, the logs reveal missing dependencies or misconfigured environment variables.

Networking challenges: If your containers need to talk to each other — especially across multiple hosts — make sure you configure Docker networks or an orchestration platform properly. Read Docker’s networking documentation for details, and check out the Jenkins diagnosing issues guide for more troubleshooting tips.
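
For a quick single-host check of container-to-container connectivity, a user-defined bridge network is usually enough. A minimal sketch, where the network name, image, and /health endpoint are placeholders:

docker network create ci-net
docker run -d --name app --network ci-net my-app:latest
docker run --rm --network ci-net curlimages/curl curl http://app:8000/health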

Conclusion

Pairing Docker and Jenkins offers a nimble, robust approach to CI/CD. Docker locks down consistent environments and lightning-fast rollouts, while Jenkins automates key tasks like building, testing, and pushing your changes to production. When these two are in harmony, you can expect shorter release cycles, fewer integration headaches, and more time to focus on developing awesome features.

A healthy pipeline also means your team can respond quickly to user feedback and confidently roll out updates — two crucial ingredients for any successful software project. And if you’re concerned about security, there are plenty of tools and best practices to keep your applications safe.

I hope this guide helps you build (and maintain) a high-octane CI/CD pipeline that your team will love. If you have questions or need a hand, feel free to reach out on the community forums, join the conversation on Slack, or open a ticket on GitHub issues. You’ll find plenty of fellow Docker and Jenkins enthusiasts who are happy to help.

Thanks for reading, and happy building!

Learn more

Subscribe to the Docker Newsletter. 

Take a deeper dive in Docker’s official Jenkins integration documentation.

Explore Docker Business for comprehensive CI/CD security at scale.

Get the latest release of Docker Desktop.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Simplify AI Development with the Model Context Protocol and Docker

This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

In December, we published The Model Context Protocol: Simplifying Building AI apps with Anthropic Claude Desktop and Docker. Along with the blog post, we also created Docker versions for each of the reference servers from Anthropic and published them to a new Docker Hub mcp namespace.

This provides lots of ways for you to experiment with new AI capabilities using nothing but Docker Desktop.

For example, to extend Claude Desktop to use Puppeteer, update your claude_desktop_config.json file with the following snippet:

"puppeteer": {
  "command": "docker",
  "args": ["run", "-i", "--rm", "--init", "-e", "DOCKER_CONTAINER=true", "mcp/puppeteer"]
}

After restarting Claude Desktop, you can ask Claude to take a screenshot of any URL using a Headless Chromium browser running in Docker.

You can do the same thing for a Model Context Protocol (MCP) server that you’ve written. You will then be able to distribute this server to your users without requiring them to have anything besides Docker Desktop.

How to create an MCP server Docker Image

An MCP server can be written in any language. However, most of the examples, including the set of reference servers from Anthropic, are written in either Python or TypeScript and use one of the official SDKs documented on the MCP site.

For typical uv-based Python projects (projects with a pyproject.toml and uv.lock in the root), or npm TypeScript projects, it’s simple to distribute your server as a Docker image.

If you don’t already have Docker Desktop, sign up for a free Docker Personal subscription so that you can push your images to others.

Run docker login from your terminal.

Copy either this npm Dockerfile or this Python Dockerfile template into the root of your project. The Python Dockerfile will need at least one update to the last line.

Run the build with the Docker CLI (instructions below).

The two Dockerfiles shown above are just templates. If your MCP server includes other runtime dependencies, you can update the Dockerfiles to include these additions. The runtime of your MCP server should be self-contained for easy distribution.
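
If you want a feel for what such a Dockerfile looks like before grabbing a template, here's a rough sketch for a uv-based Python project. This is not the linked template; the base image, entry point name, and uv commands reflect one reasonable setup and should be adapted to your project:

FROM python:3.12-slim

# Install uv for dependency management (assumes a pyproject.toml and uv.lock in the project root)
RUN pip install --no-cache-dir uv

WORKDIR /app
COPY . /app

# Install the project and its locked dependencies into a local virtual environment
RUN uv sync --frozen

# Local MCP clients talk to the server over stdio, so just start the server process
# ("my-mcp-server" is a hypothetical console script defined in pyproject.toml)
CMD ["uv", "run", "my-mcp-server"]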

If you don’t have an MCP server ready to distribute, you can use a simple mcp-hello-world project to practice. It’s a small Python codebase containing a server with one tool call. Get started by forking the repo, cloning it to your machine, and then following the instructions below to build the MCP server image.

Building the image

Most sample MCP servers are still designed to run locally (on the same machine as the MCP client, communication over stdio). Over the next few months, you’ll begin to see more clients supporting remote MCP servers but for now, you need to plan for your server running on at least two different architectures (amd64 and arm64). This means that you should always distribute what we call multi-platform images when your target is local MCP servers. Fortunately, this is easy to do.

Create a multi-platform builder

The first step is to create a local builder that will be able to build both platforms. Don’t worry; this builder will use emulation to build the platforms that you don’t have. See the multi-platform documentation for more details.

docker buildx create \
  --name mcp-builder \
  --driver docker-container \
  --bootstrap

Build and push the image

In the command below, substitute <your-docker-account> and mcp-server-name with your own values, then run the build and push the image to your account.

docker buildx build \
  --builder=mcp-builder \
  --platform linux/amd64,linux/arm64 \
  -t <your-docker-account>/mcp-server-name \
  --push .

Extending Claude Desktop

Once the image is pushed, your users will be able to attach your MCP server to Claude Desktop by adding an entry to claude_desktop_config.json that looks something like:

"your-server-name": {
  "command": "docker",
  "args": ["run", "-i", "--rm", "--pull=always",
           "your-account/your-server-name"]
}

This is a minimal set of arguments. You may want to pass in additional command-line arguments, environment variables, or volume mounts.

Next steps

The MCP protocol gives us a standard way to extend AI applications. Make sure your extension is easy to distribute by packaging it as a Docker image. Check out the Docker Hub mcp namespace for examples that you can try out in Claude Desktop today.

As always, feel free to follow along in our public repo.

For more on what we’re doing at Docker, subscribe to our newsletter.

Learn more

Subscribe to the Docker Newsletter. 

Learn about accelerating AI development with the Docker AI Catalog.

Read the Docker Labs GenAI series.

Get the latest release of Docker Desktop.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Meet Gordon: An AI Agent for Docker

This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

In previous articles, we focused on how AI-based tools can help developers streamline tasks and offered ideas for enabling agentic workflows, like reviewing branches and understanding code changes.

In this article, we’ll explore our experiments around the idea of creating a Docker AI Agent — something that could both help new users learn about our tools and products and help power users get things done faster.

During our explorations around this Docker Agent and AI-based tools, we noticed that the main pain points we encountered were often the same:

LLMs need good context to provide good answers (garbage in -> garbage out).

Using AI tools often requires context switching (moving to another app, to a different website, etc.).

We’d like agents to be able to suggest and perform actions on behalf of the users.

Direct product integrations with AI are often more satisfying to use than chat interfaces.

At first, we tried to see what’s possible using off-the-shelf services like ChatGPT or Claude. 

By using testing prompts such as “optimize the following Dockerfile, following all best practices” and providing the model with a sub-par but common Dockerfile, we could sometimes get decent answers. Often, though, the resulting Dockerfile had subtle bugs, hallucinations, or simply wasn’t optimized or didn’t use many of the best practices we would’ve hoped for. Thus, this approach was not reliable enough.

Data ended up being the main issue. Training data for LLMs is always outdated by some amount of time, and the bad Dockerfiles you can find online vastly outnumber the up-to-date ones that follow current best practices.

After doing proof-of-concept tests using a RAG approach, including some documents with lots of useful advice for creating good Dockerfiles, we realized that the AI Agent idea was definitely possible. However, setting up all the things required for a good RAG would’ve taken too much bandwidth from our small team.

Because of this, we opted to use kapa.ai for that specific part of our agent. Docker already uses them to provide the AI docs assistant on Docker docs, so most of our high-quality documentation is already available for us to reference as part of our LLM usage through their service. Using kapa.ai allowed us to experiment more, getting high-quality results faster, and allowing us to try different ideas around the AI agent concept.

Enter Gordon

Out of this experimentation came a new product that you can try: Gordon. With Gordon, we’d like to tackle these pain points. By integrating Gordon into Docker Desktop and the Docker CLI (Figure 1), we can:

Access much more context that can be used by the LLMs to best understand the user’s questions and provide better answers or even perform actions on the user’s behalf.

Be where the users are. If you launch a container via Docker Desktop and it fails, you can quickly debug with Gordon. If you’re in the terminal hacking away, Docker AI will be there, too.

Avoid being a purely chat-based agent by providing Gordon-based features directly as part of Docker Desktop UI elements. If Gordon detects certain scenarios, like a container that failed to start, a button will appear in the UI to directly get suggestions, or run actions, etc. (Figure 2).

Figure 1: Gordon icon on Docker Desktop.

Figure 2: Ask Gordon (beta).

What Gordon can do

We want to start with Gordon by optimizing for Docker-related tasks — not general-purpose questions — but we are not excluding expanding the scope to more development-related tasks as work on the agent continues.

Work on Gordon is at an early stage and its capabilities are constantly evolving, but it’s already really good at some things (Figure 3). Here are things to definitely try out:

Ask general Docker-related questions. Gordon knows Docker well and has access to all of our documentation.

Get help debugging container build or runtime errors.

Remediate policy deviations from Docker Scout.

Get help optimizing Docker-related files and configurations.

Ask it how to run specific containers (e.g., “How can I run MongoDB?”).

Figure 3: Using Gordon to understand a Dockerfile.

How Gordon works

The Gordon backend lives on Docker servers, while the client is a CLI that lives on the user’s machine and is bundled with Docker Desktop. Docker Desktop uses the CLI to access the local machine’s files, asking the user for the directory each time it needs that context to answer a question. When using the CLI directly, it has access to the working directory it’s executed in. For example, if you are in a directory with a Dockerfile and you run “Docker AI, rate my Dockerfile”, it will find the one present in that directory.

Currently, Gordon does not have write access to any files, so it will not edit any of your files. We’re hard at work on future features that will allow the agent to do the work for you, instead of only suggesting solutions. 

Figure 4 shows a rough overview of how we are thinking about things behind the scenes.

Figure 4: Overview of Gordon.

The first step of this pipeline, “Understand the user’s input and figure out which action to perform”, is done using “tool calling” (also known as “function calling”) with the OpenAI API. 

Although this is a popular approach, we noticed that the documentation online isn’t very good, and general best practices aren’t well defined yet. This led us to experiment a lot with the feature and try to figure out what works for us and what doesn’t.
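
As a generic illustration of the mechanism (not Gordon's actual implementation), here is a minimal tool-calling sketch with the OpenAI Python SDK; the tool name, schema, and model are placeholder assumptions:

from openai import OpenAI

client = OpenAI()

# Describe the tool the model is allowed to call; in-depth descriptions help the model pick correctly
tools = [{
    "type": "function",
    "function": {
        "name": "rate_dockerfile",  # hypothetical tool name
        "description": "Analyze a Dockerfile and return best-practice suggestions.",
        "parameters": {
            "type": "object",
            "properties": {
                "dockerfile": {"type": "string", "description": "Contents of the Dockerfile"}
            },
            "required": ["dockerfile"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": "Rate my Dockerfile"}],
    tools=tools,
)

# If the model decided a tool should run, the call (name plus JSON arguments) is returned here
print(response.choices[0].message.tool_calls)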

Things we noticed:

Tool descriptions are important, and we should prefer more in-depth descriptions with examples.

Testing around tool-detection code is also important. Adding new tools to a request could confuse the LLM and cause it to no longer trigger the expected tool.

The LLM model used influences how the whole tool calling functionality should be implemented, as different models might prefer descriptions written in a certain way, behave better/worse under certain scenarios (e.g. when using lots of tools), etc.

Try Gordon for yourself

Gordon is available as an opt-in Beta feature starting with Docker Desktop version 4.37. To participate in the closed beta, all you need to do is fill out the form on the site.

Initially, Gordon will be available for use both in Docker Desktop and the Docker CLI, but our idea is to surface parts of this tech in various other parts of our products as well.

For more on what we’re doing at Docker, subscribe to our newsletter.

Learn more

Subscribe to the Docker Newsletter. 

Learn about accelerating AI development with the Docker AI Catalog.

Read the Docker Labs GenAI series.

Get the latest release of Docker Desktop.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Unlocking Efficiency with Docker for AI and Cloud-Native Development

The need for secure and high quality software becomes more critical every day as the impact of vulnerabilities increases and related costs continue to rise. For example, flawed software cost the U.S. economy $2.08 trillion in 2020 alone, according to the Consortium for Information and Software Quality (CISQ). And, a software defect that might cost $100 to fix if found early in the development process can grow exponentially to $10,000 if discovered later in production. 

Docker helps you deliver secure, efficient applications by providing consistent environments and fast, reliable container management, building on best practices that let you discover and resolve issues earlier in the software development life cycle (SDLC).

Shifting left to ensure fewer defects

In a previous blog post, we talked about using the right tools, including Docker’s suite of products to boost developer productivity. Besides having the right tools, you also need to implement the right processes to optimize your software development and improve team productivity. 

The software development process is typically broken into two distinct loops, the inner and the outer loops. At Docker, we believe that investing in the inner loop is crucial. This means shifting security left and identifying problems as soon as you can. This approach improves efficiency and reduces costs by helping teams find and fix software issues earlier.

Using Docker tools to adopt best practices

Docker’s products help you adopt these best practices — we are focused on enhancing the software development lifecycle, especially around refining the inner loop. Products like Docker Desktop allow your dev team in the inner loop to run, test, code, and build everything fast and consistently. This consistency eliminates the “it works on my machine” issue, meaning applications behave the same in both development and production.  

Shifting left lets your dev team identify problems earlier in your software project lifecycle. When you detect issues sooner, you increase efficiency and help ensure secure builds and compliance. By shifting security left with Docker Scout, your dev teams can identify vulnerabilities sooner and help avoid issues down the road. 

Another example of shifting left involves testing — doing testing earlier in the process leads to more robust software and faster release cycles. This is when Testcontainers Cloud comes in handy because it enables developers to run reliable integration tests, with real dependencies defined in code. 

Accelerate development within the hybrid inner loop

We see more and more companies adopting the so-called hybrid inner loop, which combines the best of two worlds — local and cloud. The results provide greater flexibility for your dev teams and encourage better collaboration. For example, Docker Build Cloud uses the power of the cloud to speed up build time without sacrificing the local development experience that developers love. 

By using these Docker products across the software development life cycle, teams get quick feedback loops and faster issue resolution, ensuring a smooth development flow from inception to deployment. 

Simplifying AI application development

When you’re using the right tools and processes to accelerate your application delivery and maximize efficiency throughout your SDLC, processes that were once cumbersome become your new baseline, freeing up time for true innovation. 

Docker also helps accelerate innovation by simplifying AI/ML development. We are continually investing in AI to help your developers deliver AI-backed applications that differentiate your business and enhance competitiveness.

Docker AI tools

Docker’s GenAI Stack accelerates the incorporation of large language models (LLMs) and AI/ML into your code, enabling the delivery of AI-backed applications. All containers work harmoniously and are managed directly from Docker Desktop, allowing your team to monitor and adjust components without leaving their development environment. Deploying the GenAI Stack is quick and easy, and leveraging Docker’s containerization technology helps speed setup and simplify scaling as applications grow.

Earlier this year, we announced the preview of Docker Extension for GitHub Copilot. By standardizing best practices and enabling integrations with tools like GitHub Copilot, Docker empowers developers to focus on innovation, closing the gap from the first line of code to production.

And, more recently, we launched the Docker AI Catalog in Docker Hub. This new feature simplifies the process of integrating AI into applications by providing trusted and ready-to-use content supported by comprehensive documentation. Your dev team will benefit from shorter development cycles, improved productivity, and a more streamlined path to integrating AI into both new and existing applications.

Wrapping up

Docker products help you establish sound processes and practices related to shifting left and discovering issues earlier to avoid headaches down the road. This approach ultimately unlocks developer productivity, giving your dev team more time to code and innovate. Docker also allows you to quickly use AI to close knowledge gaps and offers trusted tools to build AI/ML applications and accelerate time to market. 

To see how Docker continues to empower developers with the latest innovations and tools, check out our Docker 2024 Highlights.

Learn about Docker’s updated subscriptions and find the ideal plan for your team’s needs.

Learn more

Subscribe to the Docker Navigator Newsletter. 

Get the latest release of Docker Desktop.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

How to Dockerize a Django App: Step-by-Step Guide for Beginners

One of the best ways to make sure your web apps work well in different environments is to containerize them. Containers let you work in a more controlled way, which makes development and deployment easier. This guide will show you how to containerize a Django web app with Docker and explain why it’s a good idea.

We will walk through creating a Docker container for your Django application. Docker gives you a standardized environment, which makes it easier to get up and running and more productive. This tutorial is aimed at those new to Docker who already have some experience with Django. Let’s get started!

Why containerize your Django application?

Django apps can be put into containers to help you work more productively and consistently. Here are the main reasons why you should use Docker for your Django project:

Creates a stable environment: Containers provide a stable environment with all dependencies installed, so you don’t have to worry about “it works on my machine” problems. This ensures that you can reproduce the app and use it on any system or server. Docker makes it simple to set up local environments for development, testing, and production.

Ensures reproducibility and portability: A Dockerized app bundles all the environment variables, dependencies, and configurations, so it always runs the same way. This makes it easier to deploy, especially when you’re moving apps between environments.

Facilitates collaboration between developers: Docker lets your team work in the same environment, so there’s less chance of conflicts from different setups. Shared Docker images make it simple for your team to get started with fewer setup requirements.

Speeds up deployment processes: Docker makes it easier for developers to get started with a new project quickly. It removes the hassle of setting up development environments and ensures everyone is working in the same place, which makes it easier to merge changes from different developers.

Getting started with Django and Docker

Setting up a Django app in Docker is straightforward. You don’t need to do much more than add in the basic Django project files.

Tools you’ll need

To follow this guide, make sure you first:

Install Docker Desktop and Docker Compose on your machine.

Use a Docker Hub account to store and access Docker images.

Make sure Django is installed on your system.

If you need help with the installation, you can find detailed instructions on the Docker and Django websites.

How to Dockerize your Django project

The following six steps include code snippets to guide you through the process.

Step 1: Set up your Django project

1. Initialize a Django project. 

If you don’t have a Django project set up yet, you can create one with the following commands:

django-admin startproject my_docker_django_app
cd my_docker_django_app

2. Create a requirements.txt file. 

In your project, create a requirements.txt file to store dependencies:

pip freeze > requirements.txt

3. Update key environment settings.

You need to change some sections in the settings.py file to enable them to be set using environment variables when the container is started. This allows you to change these settings depending on the environment you are working in.

# The secret key
SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY")

DEBUG = bool(os.environ.get("DEBUG", default=0))

ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS","127.0.0.1").split(",")

Step 2: Create a Dockerfile

A Dockerfile is a script that tells Docker how to build your Docker image. Put it in the root directory of your Django project. Here’s a basic Dockerfile setup for Django:

# Use the official Python runtime image
FROM python:3.13

# Create the app directory
RUN mkdir /app

# Set the working directory inside the container
WORKDIR /app

# Set environment variables
# Prevents Python from writing pyc files to disk
ENV PYTHONDONTWRITEBYTECODE=1
# Prevents Python from buffering stdout and stderr
ENV PYTHONUNBUFFERED=1

# Upgrade pip
RUN pip install --upgrade pip

# Copy the Django project and install dependencies
COPY requirements.txt /app/

# run this command to install all dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy the Django project to the container
COPY . /app/

# Expose the Django port
EXPOSE 8000

# Run Django’s development server
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

Each line in the Dockerfile serves a specific purpose:

FROM: Selects the image with the Python version you need.

WORKDIR: Sets the working directory of the application within the container.

ENV: Sets the environment variables needed to build the application

RUN and COPY commands: Install dependencies and copy project files.

EXPOSE and CMD: Expose the Django server port and define the startup command.

You can build the Django Docker container with the following command:

docker build -t django-docker .

To see your image, you can run:

docker image list

The result will look something like this:

REPOSITORY TAG IMAGE ID CREATED SIZE
django-docker latest ace73d650ac6 20 seconds ago 1.55GB

Although this is a great start in containerizing the application, you’ll need to make a number of improvements to get it ready for production.

The CMD runs Django’s development server (manage.py runserver), which is only meant for development and should be replaced with a production WSGI server.

Reduce the size of the image by using a smaller base image.

Optimize the image by using a multistage build process.

Let’s get started with these improvements.

Update requirements.txt

Make sure to add gunicorn to your requirements.txt. It should look like this:

asgiref==3.8.1
Django==5.1.3
sqlparse==0.5.2
gunicorn==23.0.0
psycopg2-binary==2.9.10

Make improvements to the Dockerfile

The Dockerfile below has changes that solve the three items on the list. The changes to the file are as follows:

Updated the FROM python:3.13 image to FROM python:3.13-slim. This change reduces the size of the image considerably, as the image now only contains what is needed to run the application.

Added a multi-stage build process to the Dockerfile. When you build applications, there are usually many files left on the file system that are only needed during build time and are not needed once the application is built and running. By adding a build stage, you use one image to build the application and then move the built files to the second image, leaving only the built code. Read more about multi-stage builds in the documentation.

Added the Gunicorn WSGI server to enable a production-ready deployment of the application.

# Stage 1: Base build stage
FROM python:3.13-slim AS builder

# Create the app directory
RUN mkdir /app

# Set the working directory
WORKDIR /app

# Set environment variables to optimize Python
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# Upgrade pip and install dependencies
RUN pip install --upgrade pip

# Copy the requirements file first (better caching)
COPY requirements.txt /app/

# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Stage 2: Production stage
FROM python:3.13-slim

RUN useradd -m -r appuser && \
    mkdir /app && \
    chown -R appuser /app

# Copy the Python dependencies from the builder stage
COPY --from=builder /usr/local/lib/python3.13/site-packages/ /usr/local/lib/python3.13/site-packages/
COPY --from=builder /usr/local/bin/ /usr/local/bin/

# Set the working directory
WORKDIR /app

# Copy application code
COPY --chown=appuser:appuser . .

# Set environment variables to optimize Python
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# Switch to non-root user
USER appuser

# Expose the application port
EXPOSE 8000

# Start the application using Gunicorn
CMD ["gunicorn", "–bind", "0.0.0.0:8000", "–workers", "3", "my_docker_django_app.wsgi:application"]

Build the Docker container image again.

docker build -t django-docker .

After making these changes, we can run a docker image list again:

REPOSITORY TAG IMAGE ID CREATED SIZE
django-docker latest 3c62f2376c2c 6 seconds ago 299MB

You can see a significant improvement in the size of the container.

The size was reduced from 1.6 GB to 299 MB, which leads to faster deployments when images are downloaded and lower storage costs.

You can also use the docker init command to generate a Dockerfile and compose.yml file for your application to get started.

Step 3: Configure the Docker Compose file

A compose.yml file allows you to manage multi-container applications. Here, we’ll define both a Django container and a PostgreSQL database container.

The compose file makes use of an environment file called .env, which will make it easy to keep the settings separate from the application code. The environment variables listed here are standard for most applications:

services:
  db:
    image: postgres:17
    environment:
      POSTGRES_DB: ${DATABASE_NAME}
      POSTGRES_USER: ${DATABASE_USERNAME}
      POSTGRES_PASSWORD: ${DATABASE_PASSWORD}
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    env_file:
      - .env

  django-web:
    build: .
    container_name: django-docker
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      DJANGO_SECRET_KEY: ${DJANGO_SECRET_KEY}
      DEBUG: ${DEBUG}
      DJANGO_LOGLEVEL: ${DJANGO_LOGLEVEL}
      DJANGO_ALLOWED_HOSTS: ${DJANGO_ALLOWED_HOSTS}
      DATABASE_ENGINE: ${DATABASE_ENGINE}
      DATABASE_NAME: ${DATABASE_NAME}
      DATABASE_USERNAME: ${DATABASE_USERNAME}
      DATABASE_PASSWORD: ${DATABASE_PASSWORD}
      DATABASE_HOST: ${DATABASE_HOST}
      DATABASE_PORT: ${DATABASE_PORT}
    env_file:
      - .env

volumes:
  postgres_data:

And the example .env file:

DJANGO_SECRET_KEY=your_secret_key
DEBUG=True
DJANGO_LOGLEVEL=info
DJANGO_ALLOWED_HOSTS=localhost
DATABASE_ENGINE=postgresql_psycopg2
DATABASE_NAME=dockerdjango
DATABASE_USERNAME=dbuser
DATABASE_PASSWORD=dbpassword
DATABASE_HOST=db
DATABASE_PORT=5432

Step 4: Update Django settings and configuration files

1. Configure database settings. 

Update settings.py to use PostgreSQL:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.{}'.format(
            os.getenv('DATABASE_ENGINE', 'sqlite3')
        ),
        'NAME': os.getenv('DATABASE_NAME', 'polls'),
        'USER': os.getenv('DATABASE_USERNAME', 'myprojectuser'),
        'PASSWORD': os.getenv('DATABASE_PASSWORD', 'password'),
        'HOST': os.getenv('DATABASE_HOST', '127.0.0.1'),
        'PORT': os.getenv('DATABASE_PORT', 5432),
    }
}

2. Set ALLOWED_HOSTS to read from environment files. 

In settings.py, set ALLOWED_HOSTS to:

# 'DJANGO_ALLOWED_HOSTS' should be a single string of hosts separated by commas.
# For example: 'DJANGO_ALLOWED_HOSTS=localhost,127.0.0.1,[::1]'
ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS","127.0.0.1").split(",")

3. Set the SECRET_KEY to read from environment files.

In settings.py, set SECRET_KEY to:

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY")

4. Set DEBUG to read from environment files.

In settings.py, set DEBUG to:

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = bool(os.environ.get("DEBUG", default=0))

Step 5: Build and run your new Django project

To build and start your containers, run:

docker compose up --build

This command will download any necessary Docker images, build the project, and start the containers. Once complete, your Django application should be accessible at http://localhost:8000.

Step 6: Test and access your application

Once the app is running, you can test it by navigating to http://localhost:8000. You should see Django’s welcome page, indicating that your app is up and running. To verify the database connection, try running a migration:

docker compose run django-web python manage.py migrate

Troubleshooting common issues with Docker and Django

Here are some common issues you might encounter and how to solve them:

Database connection errors: If Django can’t connect to PostgreSQL, verify that your database service name matches in compose.yml and settings.py.

File synchronization issues: Use the volumes directive in compose.yml to sync changes from your local files to the container (see the snippet after this list).

Container restart loops or crashes: Use docker compose logs to inspect container errors and determine the cause of the crash.
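
For the file synchronization tip above, a bind mount on the django-web service is usually all you need during development. A minimal sketch, assuming the /app working directory used earlier in this guide:

django-web:
  build: .
  volumes:
    - .:/app  # mount the local project into the container so edits show up without a rebuild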

Optimizing your Django web application

To improve your Django Docker setup, consider these optimization tips:

Automate and secure builds: Use Docker’s multi-stage builds to create leaner images, removing unnecessary files and packages for a more secure and efficient build.

Optimize database access: Configure database pooling and caching to reduce connection time and boost performance.

Efficient dependency management: Regularly update and audit dependencies listed in requirements.txt to ensure efficiency and security.

Take the next step with Docker and Django

Containerizing your Django application with Docker is an effective way to simplify development, ensure consistency across environments, and streamline deployments. By following the steps outlined in this guide, you’ve learned how to set up a Dockerized Django app, optimize your Dockerfile for production, and configure Docker Compose for multi-container setups.

Docker not only helps reduce “it works on my machine” issues but also fosters better collaboration within development teams by standardizing environments. Whether you’re deploying a small project or scaling up for enterprise use, Docker equips you with the tools to build, test, and deploy reliably.

Ready to take the next step? Explore Docker’s powerful tools, like Docker Hub and Docker Scout, to enhance your containerized applications with scalable storage, governance, and continuous security insights.

Learn more 

Subscribe to the Docker Newsletter. 

Learn more about Docker commands, Docker Compose, and security in the Docker Docs. 

Find Dockerized Django projects for inspiration and guidance in GitHub. 

Discover Docker plugins that improve performance, logging, and security.

Get the latest release of Docker Desktop.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/