Using an AI Assistant to Read Tool Documentation

This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real-time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

Using new tools on the command line can be frustrating. Even if we are confident that we’ve found the right tool, we might not know how to use it.

Telling an agent to RT(F)M

A typical workflow might look something like the following.

Install tool.

Read the documentation.

Run the command.

Repeat.

Can we improve this flow using LLMs?

Install tool

Docker provides us with isolated environments to run tools. Instead of requiring that commands be installed, we have created minimal Docker images for each tool so that using the tool does not impact the host system. Leave no trace, so to speak.

Read the documentation

Man pages are one of the ways that authors of tools ship content about how to use that tool. This content also comes with standard retrieval mechanisms (the man tool). A tool might also support a command-line option like --help. Let’s start with the idealistic notion that we should be able to retrieve usage information from the tool itself.

In this experiment, we’ve created two entry points for each tool. The first entry point is the obvious one. It is a set of arguments passed directly to a command-line program. The OpenAI-compatible description that we generate for this entry point is shown below. We are using the same interface for every tool.

{"name": "run_my_tool",
 "description": "Run the my_tool command.",
 "parameters": {
   "type": "object",
   "properties": {
     "args": {
       "type": "string",
       "description": "The arguments to pass to my_tool"}}},
 "container": {"image": "namespace/my_tool:latest"}}

The second entry point gives the agent the ability to read the man page and, hopefully, improve its ability to run the first entry point. The second entry point is simpler, because it only does one thing (it asks a tool how to use it).

{"name": "my_tool_manual",
"description": "Read the man page for my_tool",
"container": {"image": "namespace/my_tool:latest", "command": ["man"]}}

Run the command

Let’s start with a simple example. We want to use a tool called qrencode to generate a QR code for a link. We have used our image generation pipeline to package this tool into a minimal image for qrencode. We will now pass this prompt to a few different LLMs; we are using LLMs that have been trained for tool calling (e.g., GPT 4, Llama 3.1, and Mistral). Here’s the prompt that we are testing:

Generate a QR code for the content https://github.com/docker/labs-ai-tools-for-devs/blob/main/prompts/qrencode/README.md. Save the generated image to qrcode.png.
If the command fails, read the man page and try again.

Note the optimism in this prompt. Because it’s hard to predict what different LLMs have already seen in their training sets, and many command-line tools use common names for arguments, it’s interesting to see what an LLM will infer before adding the man page to the context.

The output of the prompt is shown below. Grab your phone and check it out.

Figure 1: Content QR code generated by AI assistant.

Repeat

When an LLM generates a description of how to run something, it will usually format that output in such a way that it will be easy for a user to cut and paste the response into a terminal:

qrencode -o qrcode.png 'my content'

However, if the LLM is generating tool calls, we’ll see output that is instead formatted to be easier to run:

[{"function": {"name": "qrencode",
               "arguments": "{\"args\": \"-o qrcode.png 'my content'\"}"},
  "id": "call_Vdw2gDFMAwaEUMgxLYBTg8MB"}]

We respond to this by spinning up a Docker container.
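
In practice, this amounts to translating the tool call into a docker run invocation: the image comes from the tool description, the args string comes from the LLM, and a mounted working directory lets outputs such as qrcode.png land back on the host. Here is a rough sketch of that translation; the image name follows the placeholder naming used above, and the volume mount is an assumption about how outputs are returned, not the project's exact wiring:

# rough sketch: map the tool call above onto a container run
docker run --rm -v "$PWD:/work" -w /work namespace/qrencode:latest \
  -o qrcode.png 'my content'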

Running the tool as part of the conversation loop is useful even when the command fails. In Unix, there are standard ways to communicate failures. For example, we have exit codes, and stderr streams. This is how tools create feedback loops and correct our behavior while we’re iterating at the terminal. This same mechanism can be used in a conversation loop involving an LLM.

To illustrate, here is another simple example. We’ll try running a tool that writes ASCII art with the following prompt.

Write ascii art with the message "Docker" using the font "helvetica". Read the man page for figlet if you detect any errors and then try again.

In our test, this did fail. However, it also described the apparent issue on the stderr output stream.

Error: error: could not load font helvetica

By including this message in the conversation loop, the assistant can suggest different courses of action. Different LLMs produced different results here. For example, Llama 3.1 gives instructions for how to install the missing font. On the other hand, GPT 4 re-ran the command, but only after having made the “executive” decision to try a different font.

I'm sorry, it seems the font Helvetica is not available. Let's try it with the default font.

Here is the output of the figlet command with the default font (term):

 ____             _
|  _ \  ___   ___| | _____ _ __
| | | |/ _ \ / __| |/ / _ \ '__|
| |_| | (_) | (__|   <  __/ |
|____/ \___/ \___|_|\_\___|_|

We are very early in understanding how to take advantage of this apparent capacity to try different approaches. But this is another reason why quarantining these tools in Docker containers is useful. It limits their blast radius while we encourage experimentation.

Results

We started by creating a pipeline to produce minimal Docker images for each tool. The set of tools was selected based on whether they have outputs useful for developer-facing workflows. We continue to add new tools as we think of new use cases. The initial set is listed below.

gh pylint commitlint scalafix gitlint yamllint checkmake gqlint sqlint golint golangci-lint hadolint markdownlint-cli2 cargo-toml-lint ruff dockle clj-kondo selene tflint rslint yapf puppet-lint oxlint kube-linter csslint cpplint ansible-lint actionlint black checkov jfmt datefmt rustfmt cbfmt yamlfmt whatstyle rufo fnlfmt shfmt zprint jet typos docker-ls nerdctl diffoci dive kompose git-test kubectl fastly infracost sops curl fzf ffmpeg babl unzip jq graphviz pstree figlet toilet tldr qrencode clippy go-tools ripgrep awscli2 azure-cli luaformatter nixpkgs-lint hclfmt fop dnstracer undocker dockfmt fixup_yarn_lock github-runner swiftformat swiftlint nix-linter go-critic regal textlint formatjson5 commitmsgfmt

There was a set of initial problems with context extraction.

Missing manual pages

Only about 60% of the tools we selected have man pages. However, even in those cases, there are usually other ways to get help content. The following steps show the final procedure we used:

Try to run the man page.

Try to run the tool with the argument --help.

Try to run the tool with the argument -h.

Try to run the tool with --broken args and then read stderr.

Using this procedure, every tool in the list above eventually produced documentation.
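
In shell terms, that fallback chain looks roughly like the following; each step only runs if the previous one failed, and the image and tool names are placeholders rather than the project's exact wiring:

# rough sketch of the documentation fallback chain (namespace/my_tool is a placeholder)
docker run --rm --entrypoint man namespace/my_tool:latest my_tool 2>/dev/null \
  || docker run --rm namespace/my_tool:latest --help \
  || docker run --rm namespace/my_tool:latest -h \
  || docker run --rm namespace/my_tool:latest --broken-flag 2>&1   # usage text often lands on stderr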

Long manual pages

Limited context lengths impacted some of the longer manual pages, so it was still necessary to employ standard RAG techniques to summarize verbose man pages. Our tactic was to focus on descriptions of command-line arguments and sections that had sample usage. These had the largest impact on the quality of the agent’s output. The structure of Unix man pages helped with the chunking, because we were able to rely on standard sections to chunk the content.
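
Because formatted man pages mark their sections with conventional all-caps headings, a crude version of this section-based chunking can be done with standard text tools. The sketch below keeps only the OPTIONS and EXAMPLES sections; which sections exist varies from page to page, and the tool name is just an example:

# rough sketch: keep only the OPTIONS and EXAMPLES sections of a man page
# (col -b strips overstrike formatting; section names vary by tool)
man qrencode | col -b | awk '
  /^[A-Z][A-Z ]+$/ { keep = ($0 == "OPTIONS" || $0 == "EXAMPLES") }
  keep'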

Subcommands

For a small set of tools, it was necessary to traverse a tree of help menus. However, these were all relatively popular tools, and the LLMs we deployed already knew about this command structure. It’s easy to check this out for yourself. Ask an LLM, for example: “What are the subcommands of Git?” or “What are the subcommands of Docker?” Maybe only popular tools get big enough that they start to be broken up into subcommands.

Summary

We should consider the active role that agents can play when determining how to use a tool. The Unix model has given us standards such as man pages, stderr streams, and exit codes, and we can take advantage of these conventions when asking an assistant to learn a tool. Beyond distribution, Docker also provides us with process isolation, which is useful when creating environments for safe exploration.

Whether or not an AI can successfully generate tool calls may also become a metric for whether or not a tool has been well documented.

To follow along with this effort, check out the GitHub repository for this project.

Learn more

Read the Docker Labs GenAI series.

Subscribe to the Docker Newsletter. 

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.


10 Docker Myths Debunked

Containers might seem like a relatively recent technological breakthrough, but their origins trace back to the 1970s when Unix systems first used container-like concepts to isolate applications. Fast-forward to 2013, and Docker revolutionized this idea by introducing a portable, user-friendly container platform, sparking widespread adoption. In 2015, Docker was instrumental in creating the Open Container Initiative (OCI) to promote open standards within the container ecosystem. With the stability provided by the OCI, container technology spread throughout the tech world.

Although Docker Desktop is the leading tool for creating containerized applications, Docker remains surrounded by numerous misconceptions. In this article, we’ll debunk the top Docker myths and explain the capabilities and benefits of this transformative technology.

Myth #1: Docker is no longer open source

Docker consists of multiple components, most of which are open source. The core Docker Engine is open source and licensed under the Apache 2.0 license, so developers can continue to use and contribute to it freely. Other vital parts of the Docker ecosystem, like the Docker CLI and Docker Compose, also remain open source. This allows the community to maintain transparency, contribute improvements, and customize their container solutions.

Docker’s commitment to open source is best illustrated by the Moby Project. In 2017, Moby was spun out of the then-monolithic Docker codebase to provide a set of “building blocks” to create containerized solutions and platforms. Docker uses the Moby Project for the free Docker Engine and our commercial Docker Desktop.

Users can also find Trusted Open Source Content on Docker Hub. These Docker-Sponsored Open Source and Docker Official Images offer trusted versions of open source projects and reliable building blocks for better development.

Docker is a founder and remains a crucial contributor to the OCI, which defines container standards. This initiative ensures that Docker and other container technologies remain interoperable and maintain a commitment to open source principles.

Myth #2: Docker containers are virtual machines 

Docker containers are often mistaken for virtual machines (VMs), but the technologies operate quite differently. Unlike VMs, Docker containers don’t include an entire operating system (OS). Instead, they share the host operating system kernel, making them more lightweight and efficient. VMs require a hypervisor to create virtual hardware for the guest OS, which introduces significant overhead. Docker only packages the application and its dependencies, allowing for faster startup times and minimal performance overhead.

By utilizing the host operating system’s resources efficiently, Docker containers use fewer resources overall than VMs, which need substantial resources to run multiple operating systems concurrently. Docker’s architecture efficiently runs numerous isolated applications on a single host, optimizing infrastructure and development workflows. Understanding this distinction is crucial for maximizing Docker’s lightweight and scalable potential.
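
One quick way to see this for yourself on a Linux host is to compare kernel versions; because there is no guest OS in between, the container reports the host's kernel:

# on a Linux host, a container reports the host's kernel version
uname -r
docker run --rm alpine uname -r   # prints the same version as the host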

However, when running on non-Linux systems, Docker needs to emulate a Linux environment. For example, Docker Desktop uses a fully managed VM to provide a consistent experience across Windows, Mac, and Linux by running its Linux components inside this VM.

Myth #3: Docker Engine vs. Docker Desktop vs. Docker Enterprise Edition — They’re all the same

Considerable confusion surrounds the different Docker options that are available, which include:

Mirantis Container Runtime: Docker Enterprise Edition (Docker EE) was sold to Mirantis in 2019 and rebranded as Mirantis Container Runtime. This software, which is managed and sold by Mirantis, is designed for production container deployments and offers a lightweight alternative to existing orchestration tools.

Docker Engine: Docker Engine is the fully open source version built from the Moby Project, providing the Docker Engine and CLI.

Docker Desktop: Docker Desktop is a commercial offering sold by Docker that combines Docker Engine with additional features to enhance developer productivity. The Docker Business subscription includes advanced security and governance features for enterprises.

All of these variants are OCI-compliant, differing mainly in features and experiences. Docker Engine caters to the open source community, Docker Desktop elevates developer workflows with a comprehensive suite of tools for building and scaling applications, and Mirantis Container Runtime provides a specialized solution for enterprise production environments with advanced management and support. Understanding these distinctions is crucial for selecting the appropriate Docker variant to meet specific project requirements and organizational goals.

Myth #4: Docker is the same thing as Kubernetes

This myth arises from the fact that both Docker and Kubernetes are associated with containerized environments. Although they are both key players in the container ecosystem, they serve different roles.

Kubernetes (K8s) is an orchestration system for managing container instances at scale. This container orchestration tool automates the deployment, scaling, and operations of multiple containers across clusters of hosts. Other orchestration technologies include Nomad, serverless frameworks, Docker’s Swarm mode, and Apache Mesos. Each offers different features for managing containerized workloads.

Docker is primarily a platform for developing, shipping, and running containerized applications. It focuses on packaging applications and their dependencies in a portable container and is often used for local development where scaling is not required. Docker Desktop includes Docker Compose, which is designed to orchestrate multi-container deployments locally.

In many organizations, Docker is used to develop applications, and the resulting Docker images are then deployed to Kubernetes for production. To support this workflow, Docker Desktop includes an embedded Kubernetes installation and the Compose Bridge tool for translating Compose format into Kubernetes-friendly code.

Myth #5: Docker is not secure

The belief that Docker is not secure is often a result of misunderstandings around how security is implemented within Docker. To help reduce security vulnerabilities and minimize the attack surface, Docker offers the following measures:

Opt-in security configuration 

Except for a few components, Docker operates on an opt-in basis for security. This approach removes friction for new users, while still allowing Docker to be configured for stricter security to meet enterprise requirements and the needs of security-conscious users with sensitive data.

“Rootless” mode capabilities 

Docker Engine can run in rootless mode, where the Docker daemon runs without root permissions. This capability reduces the potential blast radius of malicious code escaping a container and gaining root permissions on the host. Docker Desktop takes security further by offering Enhanced Container Isolation (ECI), which provides advanced isolation features beyond what rootless mode can offer.
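
At a high level, enabling rootless mode looks something like the following sketch; prerequisites and package names vary by Linux distribution, so treat this as an outline rather than the full procedure from the Docker documentation:

# rough sketch of enabling rootless mode on a Linux host (prerequisites vary by distro)
dockerd-rootless-setuptool.sh install          # set up a per-user daemon
docker context use rootless                    # point the CLI at the rootless daemon
docker info --format '{{.SecurityOptions}}'    # output should include name=rootless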

Built-in security features

Additionally, Docker security includes built-in features such as namespaces, control groups (cgroups), and seccomp profiles that provide isolation and limit the capabilities of containers.

SOC 2 Type 2 Attestation and ISO 27001 Certification

It’s important to note that, as an open source tool, Docker Engine is not in scope for SOC 2 Type 2 Attestation or ISO 27001 Certification. These certifications pertain to Docker, Inc.’s paid products, which offer additional enterprise-grade security and compliance features. These paid features, outlined in a Docker security blog post, focus on enhancing security and simplifying compliance for SOC 2, ISO 27001, FedRAMP, and other standards.  

Along with these security measures, Docker also provides best practices in the Docker documentation and training materials to help users learn how to secure their containers effectively. Recognizing and implementing these features reduces security risks and ensures that Docker can be a secure platform for containerized applications.

Myth #6: Docker is dead

This myth stems from the rapid growth and changes within the container ecosystem over the past decade. To keep pace with these changes, Docker is actively developed and is also widely adopted. In fact, the Stack Overflow community chose Docker as the most-used and most-desired developer tool in the 2024 Developer Survey for the second year in a row and recognized it as the most-admired developer tool. 

Docker Hub is one of the world’s largest repositories of container images. According to the 2024 Docker State of Application Development Report, tools like Docker Desktop, Docker Scout, Docker Build Cloud, and Docker Debug are integral to more than two-thirds of container development workflows. And, as a founding member of the OCI and steward of the Moby project, Docker continues to play a guiding role in containerization.

In the automation space, Docker is crucial for building OCI images and creating lightweight runners for build queues. With the rise of data science and AI/ML, Docker images facilitate the exchange of models, notebooks, and applications, supported by GPU workload capabilities in Docker Desktop. Additionally, Docker is widely used for quickly and cost-effectively mocking up test scenarios as an alternative to deploying actual hardware or VMs.

Myth #7: Docker is hard to learn

The belief that Docker is difficult to learn often comes from the perceived complexity of container concepts and Docker’s many features. However, Docker is a foundational technology used by more than 20 million developers worldwide, and countless resources are available to make learning Docker accessible.

Docker, Inc. is committed to the developer experience, creating intuitive and user-friendly product design for Docker Desktop and supporting products. Documentation, workshops, training, and examples are accessible through Docker Desktop, the Docker website and blog, and the Docker Navigator newsletter. Additionally, the Docker documentation site offers comprehensive guides and learning paths, and Udemy courses co-produced with Docker help new users understand containerization and Docker usage.

The thriving Docker community also contributes a wealth of content and resources, including video tutorials, how-tos, and in-person talks.

Myth #8: Docker and container technology are only for developers

The idea that Docker is only for developers is a common misconception. Docker and containers are used across various fields beyond development. Docker Desktop’s ability to run containerized workloads on Windows, macOS, or Linux requires minimal technical knowledge from users. Its integration features — synchronized host filesystems, network proxy support, air-gapped containers, and resource controls — ensure administrators can enforce governance and security. A few examples from other fields:

Data science: Docker provides consistent environments, enabling data scientists to share models, datasets, and development setups seamlessly.

Healthcare: Docker deploys scalable applications for managing patient data and running simulations, such as medical imaging software across different hospital systems.

Education: Educators and students use Docker to create reproducible research environments, which facilitate collaboration and simplify coding project setups.

Docker’s versatility extends beyond development, providing consistent, scalable, and secure environments for various applications.

Myth #9: Docker Desktop is just a GUI

The myth that Docker Desktop is merely a graphical user interface (GUI) overlooks its extensive features designed to enhance developer experience, streamline container management, and accelerate productivity, such as:

Cross-platform support

Docker is Linux-based, but most developer workstations run Windows or macOS. Docker Desktop enables these platforms to run Docker tooling inside a fully managed VM integrated with the host system’s networking, filesystem, and resources.

Developer tools

Docker Desktop includes built-in Kubernetes, Docker Scout for supply chain management, Docker Build Cloud for faster builds, and Docker Debug for container debugging.

Security and governance

For administrators, Docker Desktop offers Registry Access Management and Image Access Management, Enhanced Container Isolation, single sign-on (SSO) for authorization, and Settings Management, making it an essential tool for enterprise deployment and management.

Myth #10: Docker containers are for microservices only

Although Docker containers are popular for microservices architectures, they can be used for any type of application. For example, monolithic applications can be containerized, allowing them and their dependencies to be isolated into a versioned image that can run across different environments. This approach enables gradual refactoring into microservices if desired.

Additionally, Docker is excellent for rapid prototyping, allowing quick deployment of minimum viable products (MVPs). Containerized prototypes are easier to manage and refactor compared to those deployed on VMs or bare metal.

Now you know

Now that you have the facts, it’s clear that adopting Docker can significantly enhance productivity, scalability, and security for a variety of use cases. Docker’s versatility, combined with extensive learning resources and robust security features, makes it an indispensable tool in modern software development and deployment. Adopting Docker and its true capabilities can significantly enhance productivity, scalability, and security for your use case.

For more detailed insights, refer to the 2024 Docker State of Application Development Report or dive into Docker Desktop now to start your Docker journey today. 

Learn more

Subscribe to the Docker Newsletter. 

Get the latest release of Docker Desktop.

Get started with Testcontainers Cloud by creating a free account.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.


Docker for Web Developers: Getting Started with the Basics

Docker is known worldwide as a popular application containerization platform. But it also has a lesser-known and intriguing alter ego. It’s a popular go-to platform among web developers for its speed, flexibility, broad user base, and collaborative capabilities. 

Docker has been growing as a modern solution that brings innovation to web development using containerization. With containers, developers and web development projects can become more efficient, save time, and drive fresh creativity. Web developers use Docker for development because it ensures consistency across different environments, eliminating the “it works on my machine” problem. Docker also simplifies dependency management, enhances resource efficiency, supports scalable microservices architectures, and allows for rapid deployment and rollback, making it an indispensable tool for modern web development projects.

In this post, we dive into the benefits of using Docker in businesses from small to large, and review Docker’s broad capabilities, strengths, and features for bolstering web development and developer productivity. 

What is Docker?

Docker is secure, out-of-the-box containerization software offering developers and teams a robust, hybrid toolkit to develop, test, monitor, ship, deploy, and run enterprise and web applications. Containerization lets developers separate their applications from infrastructure so they can run them without worrying about what is installed on the host, giving development teams flexibility and collaborative advantages over virtual machines, while delivering better source code faster. 

The Docker suite enables developers to package and run their application code in lightweight, local, standardized containers that have everything needed to run the application — including an operating system and required services. Docker allows developers to run many containers simultaneously on a host, while also allowing the containers to be shared with others. By working within this collaborative workspace, productive and direct communications can thrive and development processes become easier, more accurate, and more secure. Many of the components in Docker are open source, including Docker Compose, BuildKit, the Docker command-line interface (Docker CLI), containerd, and more. 

As the #1 containerization software for developers and teams, Docker is well-suited for all flavors of development. Highlights include: 

Docker Hub: The world’s largest repository of container images, which helps developers and open source contributors find, use, and share their Docker-inspired container images.

Docker Compose: A tool for defining and running multi-container applications.

Docker Engine: An open source containerization technology for building and containerizing applications.

Docker Desktop: Includes the Docker Engine and other open source components; proprietary components; and features such as an intuitive GUI, synchronized file shares, access to cloud resources, debugging features, native host integration, governance, and security features that support Enhanced Container Isolation (ECI), air-gapped containers, and administrative settings management.

Docker Build Cloud: A Docker service that lets developers build their container images on a cloud infrastructure that ensures fast builds anywhere for all team members. 

What is a container?

Containers are lightweight, standalone, executable packages of software that include everything needed to run an application: code, runtime, libraries, environment variables, and configuration files. Containers are isolated from each other, can be connected to networks or storage, and can be used to create new images based on their current states.

Docker containers are faster and more efficient for software creation than virtualization, which uses a resource-heavy software abstraction layer on top of computer hardware. Additionally, Docker containers require fewer physical hardware resources than virtual machines and communicate with their host systems through well-defined channels.

Why use Docker for web applications?

Docker is a popular choice for developers building enterprise applications for various reasons, including consistent environments, efficient resource usage, speed, container isolation, scalability, flexibility, and portability. And, Docker is popular for web development for these same reasons. 

Consistent environments

Using Docker containers, web developers can build web applications that provide consistent environments from development all the way through to production. By including all the components needed to run an application within an isolated container, Docker sidesteps environment drift, allowing developers to produce and package their containers and then run them through various development, testing, and production environments to ensure their quality, security, and performance. This approach helps developers prevent the common and frustrating “but it works on my machine” conundrum, ensuring that the code will run and perform well anywhere, from development through deployment.

Efficiency in using resources

With its lightweight architecture, Docker uses system resources more efficiently than virtual machines, allowing developers to run more applications on the same hardware. Multiple containers can run on a single host and gain resource efficiency thanks to the isolation and allocation features that containers incorporate. Additionally, containers require less memory and disk space to perform their tasks, saving on hardware costs and making resource management easier. Docker also saves development time by allowing container images to be reused as needed.

Speed

Docker’s design and components also give developers significant speed advantages in setting up and tearing down container environments, allowing needed processes to be completed in seconds due to its lightweight and flexible application architecture. This allows developers to rapidly iterate their containerized applications, increasing their productivity for writing, building, testing, monitoring, and deploying their creations.  

Isolation

Docker’s application isolation capabilities provide huge benefits for developers, allowing them to write code and build their containers and applications simultaneously, with changes made in one not affecting the others. For developers, these capabilities allow them to find and isolate any bad code before using it elsewhere, improving security and manageability.

Scalability, flexibility, and portability

Docker’s flexible platform design also gives developers broad capabilities to easily scale applications up or down based on demand, while also allowing them to be deployed across different servers. These features give developers the ability to manage different workloads and system resources as needed. And, its portability features mean that developers can create their applications once and then use them in any environment, further ensuring their reliability and proper operation through the development cycle to production.

How web developers use Docker

There is a wide range of Docker use cases for today’s web developers, including its flexibility as a local development environment that can be quickly set up to match desired production environments; as an important partner for microservices architectures, where each service can be developed, tested, and deployed independently; or as an integral component in continuous integration and continuous deployment (CI/CD) pipelines for automated testing and deployment.

Other important Docker use cases include the availability of a strong and knowledgeable user community to help drive developer experiences and skills around containerization; its importance and suitability for vital cross-platform production and testing; and deep resources and availability for container images that are usable for a wide range of application needs. 

Get started with Docker for web development (in 6 steps)

So, you want to get a Docker container up and running quickly? Let’s dive in using the Docker Desktop GUI. In this example, we will use the Docker version for Microsoft Windows, but there are also Docker versions for use on Mac and many flavors of Linux. 

Step 1: Install Docker Desktop

Start by downloading the installer from the docs or from the release notes.

Double-click Docker Desktop for Windows Installer.exe to run the installer. By default, Docker Desktop is installed at C:\Program Files\Docker\Docker.

When prompted, choose whether to use the WSL 2 or Hyper-V backend on the configuration page. If your system only supports one of the two options, you will not be able to select which backend to use.

Follow the instructions on the installation wizard to authorize the installer and proceed with the installation. When the installation is successful, select Close to complete the installation process.

Step 2: Create a Dockerfile

A Dockerfile is a text-based file containing a script of instructions that details how to build a Docker container image. The file uses no extension; you create it by adding a file named Dockerfile to your project directory (in Docker’s getting-started example, the getting-started-app directory, which is also where the package.json file is found).

A Dockerfile contains details about the container’s operating system, file locations, environment, dependencies, configuration, and more. Check out the useful Docker best practices documentation for creating quality Dockerfiles. 

Here is a basic Dockerfile example for setting up an Apache web server. 

Create a Dockerfile in your project:

FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/

Next, run the commands to build and run the Docker image:

$ docker build -t my-apache2 .
$ docker run -dit --name my-running-app -p 8080:80 my-apache2

Visit http://localhost:8080 to see it working.

Step 3: Build your Docker image

The Dockerfile that was just created allows us to start building our first Docker container image. The docker build command shown in the previous step builds a new Docker image using the Dockerfile and the related “context,” which is the set of files located in the specified PATH or URL. The build process can refer to any of the files in the context. Docker images begin with a base image that must be downloaded from a repository to start a new image project.
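
For example, the trailing "." in the build command from Step 2 is the context, and a .dockerignore file keeps unneeded files out of it. A small sketch (the ignore entries below are only examples):

# the final "." is the build context sent to the builder; .dockerignore trims it
$ echo ".git" >> .dockerignore
$ echo "*.log" >> .dockerignore
$ docker build -t my-apache2 .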

Step 4: Run your Docker container

To run a new container, start with the docker run command, which runs a command in a new container. The command pulls an image if needed and then starts the container. By default, when you create or run a container using docker create or docker run, the container does not expose any of its ports to the outside world. To make a port available to services outside of Docker, you must use the --publish or -p flag. This creates a firewall rule in the host, mapping a container port to a port on the Docker host so it is reachable from the outside world.

Step 5: Access your web application


To access a web application running inside a Docker container, you need to publish the container’s port to the host. This can be done using the docker run command with the --publish or -p flag. The format of the --publish value is [host_port]:[container_port].

Here is an example of how to run a container and publish its port using the Docker CLI:

$ docker run -d -p 8080:80 docker/welcome-to-docker

In this command, the first 8080 refers to the host port. This is the port on your local machine that will be used to access the application running inside the container. The second 80 refers to the container port. This is the port that the application inside the container listens on for incoming connections. Hence, the command binds port 8080 on the host to port 80 in the container.

After running the container with the published port, you can access the web application by opening a web browser and visiting http://localhost:8080.

You can also use Docker Compose to run the container and publish its port. Here is an example of a compose.yaml file that does this:

services:
  app:
    image: docker/welcome-to-docker
    ports:
      - 8080:80

After creating this file, you can start the application with the docker compose up command. Then, you can access the web application at http://localhost:8080.

Step 6: Make changes and update

Updating a containerized application takes several steps. From the command-line interface, use the docker stop command to stop the container, then remove it with the docker rm (remove) command. Next, start a new, updated container with a fresh docker run command. The old container must be stopped and removed before it is replaced because it is already using the host’s port 3000, and only one process on the machine (containers included) can listen on a specific port at a time.
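
As a rough sketch, the update cycle from the command line looks like this; the container and image names follow Docker's getting-started example and are placeholders:

$ docker stop getting-started              # stop the old container (it holds port 3000)
$ docker rm getting-started                # remove it so the name and port are free
$ docker build -t getting-started-app .    # rebuild the image with your changes
$ docker run -d -p 3000:3000 --name getting-started getting-started-app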

Conclusion

In this blog post, we learned about how Docker brings valuable benefits to web developers to speed up and improve their operations and creativity, and we touched on how web developers can get started with the platform on Day One, including basic instructions on setting up Docker quickly to start using it for web development.

Docker delivers streamlined workflows for web development due to its lightweight architecture and broad collaboration, application design, scalability, and other benefits. Docker expands the capabilities of web application developers, giving them flexible tools for everything from building better code to testing, monitoring, and deploying reliable code more quickly. 

Subscribe to our newsletter to stay up-to-date about Docker and its latest uses and innovations. 

Learn more

Subscribe to the Docker Newsletter. 

Get the latest release of Docker Desktop.

Continue learning with Docker training. 

Visit Docker Resources to explore more materials.

Check out our documentation guides. 

Have questions? The Docker community is here to help.


Secure by Design for AI: Building Resilient Systems from the Ground Up

As artificial intelligence (AI) has erupted, Secure by Design for AI has emerged as a critical paradigm. AI is integrating into every aspect of our lives — from healthcare and finance to developer tools, autonomous vehicles, and smart cities — and its integration into critical infrastructure has necessitated that we move quickly to understand and combat threats.

Necessity of Secure by Design for AI

AI’s rapid integration into critical infrastructure has accelerated the need to understand and combat potential threats. Security measures must be embedded into AI products from the beginning and evolve as the model evolves. This proactive approach ensures that AI systems are resilient against emerging threats and can adapt to new challenges as they arise. In this article, we will explore two polarizing examples — the developer industry and the healthcare industry.

Complexities of threat modeling in AI

AI brings forth new challenges and conundrums when working on an accurate threat model. Traditional systems can rely on simple edit and validation checks that are programmed systematically; AI validation checks instead need to learn along with the system and focus on data manipulation, corruption, and extraction.

Data poisoning: Data poisoning is a significant risk in AI, where the integrity of the data used by the system can be compromised. This can happen intentionally or unintentionally and can lead to severe consequences. For example, bias and discrimination in AI systems have already led to issues, such as the wrongful arrest of a man in Detroit due to a false facial recognition match. Such incidents highlight the importance of unbiased models and diverse data sets. Testing for bias and involving a diverse workforce in the development process are critical steps in mitigating these risks.

In healthcare, for example, bias may be simpler to detect. You can examine data fields based on areas such as gender, race, etc. 

In development tools, bias is less clear-cut. Bias could result from the underrepresentation of certain development languages, such as Clojure. Bias may even result from code samples based on regional differences in coding preferences and teachings. In developer tools, you likely won’t have the information available to detect this bias. IP addresses may give you information about where a person is living currently, but not about where they grew up or learned to code. Therefore, detecting bias will be more difficult. 

Data manipulation: Attackers can manipulate data sets with malicious intent, altering how AI systems behave. 

Privacy violations: Without proper data controls, personal or sensitive information could unintentionally be introduced into the system, potentially leading to privacy violations. Establishing strong data management practices to prevent such scenarios is crucial.

Evasion and abuse: Malicious actors may attempt to alter inputs to manipulate how an AI system responds, thereby compromising its integrity. There’s also the potential for AI systems to be abused in ways developers did not anticipate. For example, AI-driven impersonation scams have led to significant financial losses, such as the case where an employee transferred $26 million to scammers impersonating the company’s CFO.

These examples underscore the need for controls at various points in the AI data lifecycle to identify and mitigate “bad data” and ensure the security and reliability of AI systems.

Key areas for implementing Secure by Design in AI

To effectively secure AI systems, implementing controls in three major areas is essential (Figure 1):

Figure 1: Key areas for implementing security controls.

1. Data management

The key to data management is to understand what data needs to be collected to train the model, to identify the sensitive data fields, and to prevent the collection of unnecessary data. Data management also involves ensuring you have the correct checks and balances to prevent the collection of unneeded data or bad data.

In healthcare, sensitive data fields are easy to identify. Doctors’ offices often collect national identifiers, such as driver’s licenses, passports, and Social Security numbers. They also collect date of birth, race, and many other sensitive data fields. If the tool is aimed at helping doctors identify potential conditions faster based on symptoms, you would need anonymized data but would still need to collect certain factors such as age and race. You would not need to collect national identifiers.

In developer tools, sensitive data may not be as clearly defined. For example, an environment variable may be used to pass secrets or pass confidential information, such as the image name from the developer to the AI tool. There may be secrets in fields you would not suspect. Data management in this scenario involves blocking the collection of fields where sensitive data could exist and/or ensuring there are mechanisms to scrub sensitive data built into the tool so that data does not make it to the model. 
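
As a rough illustration of scrubbing before data reaches the model, an upstream step might drop environment variables whose names suggest secrets before a containerized AI tool ever sees them. The name patterns and image name below are only examples, not a complete rule set:

# filter out likely secrets before handing the environment to a containerized AI tool
env | grep -viE '^[^=]*(token|secret|password|key|credential)[^=]*=' > sanitized.env
docker run --rm --env-file sanitized.env example/ai-tool:latest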

Data management should include the following:

Implementing checks for unexpected data: In healthcare, this process may involve “allow” lists for certain data fields to prevent collecting irrelevant or harmful information. In developer tools, it’s about ensuring the model isn’t trained on malicious code, such as unsanitized inputs that could introduce vulnerabilities.

Evaluating the legitimacy of users and their activities: In healthcare tools, this step could mean verifying that users are licensed professionals, while in developer tools, it might involve detecting and mitigating the impact of bot accounts or spam users.

Continuous data auditing: This process ensures that unexpected data is not collected and that the data checks are updated as needed. 

2. Alerting and monitoring 

With AI, alerting and monitoring is imperative to ensuring the health of the data model. Controls must be both adaptive and configurable to detect anomalous and malicious activities. As AI systems grow and adapt, so too must the controls. Establish thresholds for data, automate adjustments where possible, and conduct manual reviews where necessary.

In a healthcare AI tool, you might set a threshold before new data is surfaced to ensure its accuracy. For example, if patients begin reporting a new symptom that is believed to be associated with diabetes, you may not report this to doctors until it is reported by a certain percentage (15%) of total patients. 

In a developer tool, this might involve determining when new code should be incorporated into the model as a prompt for other users. The model would need to be able to log and analyze user queries and feedback, track unhandled or poorly handled requests, and detect new patterns in usage. Data should be analyzed for high frequencies of unhandled prompts, and alerts should be generated to ensure that additional data sets are reviewed and added to the model.

3. Model tuning and maintenance

Producers of AI tools should regularly review and adjust AI models to ensure they remain secure. This includes monitoring for unexpected data, adjusting algorithms as needed, and ensuring that sensitive data is scrubbed or redacted appropriately.

For healthcare, model tuning may be more intensive. Results may be compared to published medical studies to ensure that patient conditions are in line with other baselines established across the world. Audits should also be conducted to ensure that doctors with reported malpractice claims or doctors whose medical license has been revoked are scrubbed from the system to ensure that potentially compromised data sets are not influencing the model. 

In a developer tool, model tuning will look very different. You may look at hyperparameter optimization using techniques such as grid search, random search, and Bayesian search. You may study subsets of data; for example, you may perform regular reviews of the most recent data looking for new programming languages, frameworks, or coding practices. 

Model tuning and maintenance should include the following:

Perform data audits to ensure data integrity and that unnecessary data is not inadvertently being collected. 

Review whether “allow” lists and “deny” lists need to be updated.

Regularly audit and monitor alerts for algorithms to determine if adjustments need to be made; consider the population of your user base and how the model is being trained when adjusting these parameters.

Ensure you have the controls in place to isolate data sets for removal if a source has become compromised; consider unique identifiers that allow you to identify a source without providing unnecessary sensitive data.

Regularly back up data models so you can return to a previous version without heavy loss of data if the source becomes compromised.

AI security begins with design

Security must be a foundational aspect of AI development, not an afterthought. By identifying data fields upfront, conducting thorough AI threat modeling, implementing robust data management controls, and continuously tuning and maintaining models, organizations can build AI systems that are secure by design. 

This approach protects against potential threats and ensures that AI systems remain reliable, trustworthy, and compliant with regulatory requirements as they evolve alongside their user base.

Learn more

Read our Security series.

Subscribe to the Docker Newsletter. 

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.


Announcing Upgraded Docker Plans: Simpler, More Value, Better Development and Productivity 

At Docker, our mission is to empower development teams by providing the tools they need to ship secure, high-quality apps — FAST. Over the past few years, we’ve continually added value for our customers, responding to the evolving needs of individual developers and organizations alike. Today, we’re excited to announce significant updates to our Docker subscription plans that will deliver even more value, flexibility, and power to your development workflows.

Docker accelerating the inner loop

We’ve listened closely to our community, and the message is clear: Developers want tools that meet their current needs and evolve with new capabilities to meet their future needs. 

That’s why we’ve revamped our plans to include access to ALL the tools our most successful customers are leveraging — Docker Desktop, Docker Hub, Docker Build Cloud, Docker Scout, and Testcontainers Cloud. Our new unified suite makes it easier for development teams to access everything they need under one subscription with included consumption for each new product and the ability to add more as they need it. This gives every paid user full access, including consumption-based options, allowing developers to scale resources as their needs evolve. Whether customers are individual developers, members of small teams, or work in large enterprises, the refreshed Docker Personal, Docker Pro, Docker Team, and Docker Business plans ensure developers have the right tools at their fingertips.

These changes increase access to Docker Hub across the board, bring more value into Docker Desktop, and grant access to the additional value and new capabilities we’ve delivered to development teams over the past few years. From Docker Scout’s advanced security and software supply chain insights to Docker Build Cloud’s productivity-generating cloud build capabilities, Docker provides developers with the tools to build, deploy, and verify applications faster and more efficiently.

Areas we’ve invested in during the past year include:

The world’s largest container registry. To date, Docker has invested more than $100 million in Docker Hub, which currently stores over 60 petabytes of data and handles billions of pulls each month. We have improved content discoverability, in-depth image analysis, image lifecycle management, and an even broader range of verified high-assurance content on Docker Hub. 

Improved insights. From Builds View to inspecting GitHub Actions builds to Build Checks to Scout health scores, we’re providing teams with more visibility into their usage and providing insights to improve their development outcomes. We have additional Docker Desktop insights coming later this year.

Securing the software supply chain. In October 2023, we launched Docker Scout, allowing developers to continuously address security issues before they hit production through policy evaluation and recommended remediations, and track the SBOM of their software. We later introduced new ways for developers to quickly assess image health and accelerate application security improvements across the software supply chain.

Container-based testing automation. In December 2023, we acquired AtomicJar, makers of Testcontainers, adding container-based testing automation to our portfolio. Testcontainers Cloud offers enterprise features and a scalable, cloud-based infrastructure that provides a consistent Testcontainers experience across the org and centralizes monitoring.

Powerful cloud-based builders. In January 2024, we launched Docker Build Cloud, combining powerful, native ARM & AMD cloud builders with shared cache that accelerates build times by up to 39x.

Security, control, and compliance for businesses. For our Docker Business subscribers, we’ve enhanced security and compliance features, ensuring that large teams can work securely and efficiently. Role-based access control (RBAC), SOC 2 Type 2 compliance, centralized management, and compliance reporting tools are just a few of the features that make Docker Business the best choice for enterprise-grade development environments. And soon, we are rolling out organizational access tokens to make developer access easier at the organizational level, enhancing security and efficiency.

Empowering developers to build AI applications. From introducing a new GenAI Stack to our extension for GitHub Copilot and our partnership with NVIDIA to our series of AI tips content, Docker is simplifying AI application development for our community. 

As we introduce new features and continue to provide — and improve on — the world’s largest container registry, the resources to do so also grow. With the rollout of our unified suites, we’re also updating our pricing to reflect the additional value. Here’s what’s changing at a high level: 

Docker Business pricing stays the same but gains the additional value and features announced today.

Docker Personal remains — and will always remain — free. This plan will continue to be improved upon as we work to grant access to a container-first approach to software development for all developers. 

Docker Pro will increase from $5/month to $9/month, and Docker Team will increase from $9/user/month to $15/user/month (annual pricing). Docker Business pricing remains the same.

We’re introducing image pull and storage limits for Docker Hub. This will impact less than 3% of accounts, the highest commercial consumers. For many of our Docker Team and Docker Business customers with Service Accounts, the new higher image pull limits will eliminate previously incurred fees.   

Docker Build Cloud minutes and Docker Scout analyzed repos are now included, providing enough minutes and repos to enhance the productivity of a development team throughout the day.  

Implementing consumption-based pricing for all integrated products, including Docker Hub, to provide flexibility and scalability beyond the plans.  

More value at every level

Our updated plans are packed with more features, higher usage limits, and simplified pricing, offering greater value at every tier. Our updated plans include: 

Docker Desktop: We’re expanding on Docker Desktop as the industry-leading container-first development solution with advanced security features, seamless cloud-native compatibility, and tools that accelerate development while supporting enterprise-grade administration.

Docker Hub: Docker subscriptions cover Hub essentials, such as private and public repo usage. To ensure that Docker Hub remains sustainable and continues to grow as the world’s largest container registry, we’re introducing consumption-based pricing for image pulls and storage. This update also includes enhanced usage monitoring tools, making it easier for customers to understand and manage usage.

The Pulls Usage dashboard is now live on Docker Hub, allowing customers to see an organization’s Hub pull data.

Docker Build Cloud: We’ve removed the per-seat licenses for Build Cloud and increased the included build minutes for Pro, Team, and Business plans — enabling faster, more efficient builds across projects. Customers will have the option to add build minutes as their needs grow, but they will be surprised at how much time they save with our speedy builders. For customers using CI tools, Build Cloud’s speed can even help save on CI bills. 

Docker Scout: Docker Team and Docker Business plans will offer continuous vulnerability analysis for an unlimited number of Scout-enabled repositories. The integration of Docker Scout’s health scores into Docker Pro, Team, and Business plans helps customers maintain security and compliance with ease.

Testcontainers Cloud: Testcontainers Cloud helps customers streamline testing workflows, saving time and resources. We’ve removed the per-seat licenses for Testcontainers Cloud under the new plans and included cloud runtime minutes for Docker Pro, Docker Team, and Docker Business, available to use for Docker Desktop or in CI workflows. Customers will have the option to add runtime minutes as their needs grow.

Looking ahead

Docker continues to innovate and invest in our products, and Docker has been recognized most recently as developers’ most used, desired, and admired developer tool in the 2024 Stack Overflow Developer Survey.  

These updates are just the beginning of our ongoing commitment to providing developers with the best tools in the industry. As we continue to invest in our tools and technologies, development teams can expect even more enhancements that will empower them to achieve their development goals. 

New plans take effect starting November 15, 2024. The Docker Hub plan limits will take effect on Feb 1, 2025. No charges on Docker Hub image pulls or storage will be incurred between November 15, 2024, and January 31, 2025. For existing annual and month-to-month customers, these new plan entitlements will take effect at their next renewal date that occurs on or after November 15, 2024, giving them ample time to review and understand the new offerings. Learn more about the new Docker subscriptions and see a detailed breakdown of features in each plan. We’re committed to ensuring a smooth transition and are here to support customers every step of the way. 

Stay tuned for more updates or reach out to learn more. And as always, thank you for being a part of the Docker community. 

FAQ  

I’m a Docker Business customer, what is new in my plan? 

Docker Business list pricing remains the same, but you will now have access to more of Docker’s products:  

Instead of paying an additional per-seat fee, Docker Build Cloud is now available to all users in your Docker plan. Learn how to use Build Cloud. 

Docker Build Cloud included minutes are increasing from 800/mo to 1500/mo. 

Docker Scout now includes unlimited repos with continuous vulnerability analysis, an increase from 3. Get started with Docker Scout quickstart. 

1500 Testcontainers Cloud runtime minutes are now included for use either in Docker Desktop or for CI.

Docker Hub image pull rate limits have been removed.

1M Docker Hub pulls per month are included. 

If you require additional Build Cloud minutes, Testcontainers Cloud runtime minutes, or Hub pulls or storage, you can add these to your plan with consumption-based pricing. See the pricing page for more details. 

I’m a Docker Team customer, what is new in my plan? 

Docker Team will now include the following benefits:  

Instead of paying an additional per-seat fee, Docker Build Cloud is now available to all users in your Docker plan. Learn how to use Build Cloud. 

Docker Build Cloud minutes are increasing from 400/mo to 500/mo.

Docker Scout now includes unlimited repos with continuous vulnerability analysis, an increase from 3. Get started with Docker Scout quickstart. 

500 Testcontainers Cloud runtime minutes are now included for use either in Docker Desktop or for CI.  

Docker Hub image pull rate limits will be removed.

100K Docker Hub pulls per month are included.

The minimum number of users is 1 (lowered from 5).

Docker Team price will increase from $9/user/month (annual) to $15/user/mo (annual) and from $11/user/month (monthly) to $16/user/month (monthly). If you require additional Build Cloud minutes, Testcontainers Cloud runtime minutes, or Hub pulls or storage, you can add these to your plan with consumption-based pricing, or reach out to sales for invoice pricing. See the pricing page for more details. 

I’m a Docker Pro customer, what is new in my plan? 

Docker Pro will now include: 

Docker Build Cloud minutes increased from 100/month to 200/month and no monthly fee. Learn how to use Build Cloud.

2 included repos with continuous vulnerability analysis in Docker Scout. Get started with Docker Scout quickstart.  

100 Testcontainers Cloud runtime minutes are now included for use either in Docker Desktop or for CI.

Docker Hub image pull rate limits will be removed. 

25K Docker Hub pulls per month are included.

Docker Pro plans will increase from $5/month (annual) to $9/month (annual) and from $7/month (monthly) to $11/month (monthly). If you require additional Build Cloud minutes, Docker Scout repos, Testcontainers Cloud runtime minutes, or Hub pulls or storage, you can add these to your plan with consumption-based pricing. See the pricing page for more details. 

I’m a Docker Personal user, what is included in my plan? 

Docker Personal plans remain free.

When you are logged into your account, you will see additional features and entitlements: 

1 included repo with continuous vulnerability analysis in Docker Scout. Get started with Docker Scout quickstart.

Unlimited public Docker Hub repos. 

1 private Docker Hub repo with 2GB storage. 

Updated Docker Hub image pull rate limit of 40 pulls/hr/user.

Unauthenticated users will be limited to 10 Docker Hub pulls/hr/IP address.  

Docker Personal users who want to start or continue using Docker Build Cloud may trial the service for seven days, or upgrade to a Docker Pro plan. Docker Personal users may trial Testcontainers Cloud for 30 days. 

Where do I learn more about Docker Hub rate limits and storage changes? 

Check your plan’s details on the new plans overview page. For now, see the new Docker Hub Pulls Usage dashboard to understand your current usage.  

When will new pricing go into effect? 

New pricing will go into effect on November 15, 2024, for all new customers. 

For all existing customers, new pricing will take effect on your next renewal date after November 15, 2024. When you renew, you will receive the benefits and entitlements of the new plans. Between now and your renewal date, your existing plan details will apply. 

Can I keep my existing plan? 

If you are on an annual contract, you will keep your current plan and pricing until your next renewal date that falls after November 15, 2024. 

If you are a month-to-month customer, you may convert to an annual contract before November 14 to stay on your existing plan. You may choose between staying on your existing plan entitlements or the new comprehensive plans. After November 15, all month-to-month renewals will be on the new plans. 

I have a regulatory constraint, is it possible to disable individual services? 

While most organizations will see reduced build times and improved supply chain security, some organizations may have constraints that prevent them from using all of Docker’s services. 

After November 15, Docker Desktop, Docker Hub, Docker Build Cloud, and Docker Scout will be enabled by default for all users. The default configuration for Testcontainers Cloud is disabled. To change your organization’s configuration, the org owner or one of your org admins can disable Docker Scout or Build Cloud in the admin console.

Can I get a refund on individual products I pay for today (Build Cloud, Scout repos, Testcontainers Cloud)? 

Your current plan will remain in effect until your first renewal date on or after November 15, 2024, for annual customers. At that time, your plan will automatically reflect your new entitlements for Docker Build Cloud and Docker Scout. If you are a current Testcontainers Cloud customer in addition to being a Docker Pro, Docker Team, or Docker Business customer, let your account manager know your org ID so that your included minutes can be applied starting November 15.  

How do I get more help? 

If you have additional questions not addressed in the FAQ, contact your Docker Account Executive or CSM.  

If you need help identifying those contacts or need technical assistance, contact support.

Source: https://blog.docker.com/feed/

Getting Started with the Labs AI Tools for Devs Docker Desktop Extension

This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real-time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

We’ve released a simple way to run AI tools in Docker Desktop. With the Labs AI Tools for Devs Docker Desktop extension, people who want a simple way to run prompts can easily get started. 

If you’re a prompt author, this approach also allows you to build, run, and share your prompts more easily. Here’s how you can get started.

Get the extension

You can download the extension from Docker Hub. Once it’s installed, enter an OpenAI key.
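
If you prefer the command line, the Docker Extensions CLI can install it as well. The image reference below is illustrative; check the extension’s page on Docker Hub for the exact name:

# Install the extension via the Docker Extensions CLI (image name assumed; verify on Docker Hub)
docker extension install docker/labs-ai-tools-for-devs:latest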

Import a project

With our approach, the information a prompt needs should be extractable from a project. Add projects here that you want to run SDLC tools inside (Figure 1).

Figure 1: Add projects.

Inputting prompts

A prompt can be a git ref or a git URL, which will convert to a ref. You can also import your own local prompt files, which allows you to quickly iterate on building custom prompts.

Sample prompts

(copy + paste the ref)

Tool | Git Ref | Link | Description
Docker | github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/docker | https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/docker | Generates a runbook for any Docker project
Dockerfiles | github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/dockerfiles | https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/dockerfiles | Generate multi-stage Dockerfiles for NPM projects
Lazy Docker | github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/lazy_docker | https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/lazy_docker | Generates a runbook for Lazy Docker
NPM | github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/npm | https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/npm | Responds with helpful information about NPM projects
ESLint | github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/eslint | https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/eslint | Runs ESLint in your project
ESLint Fix | github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/eslint_fix | https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/eslint_fix | Runs ESLint in your project and responds with a fix for the first violation it finds
Pylint | github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/pylint | https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/pylint | Runs Pylint in your project, and responds with a fix for the first violation it finds

Figure 2: Enter a GitHub ref or URL.

Writing and testing your own prompt

Create a prompt file

A prompt file is a markdown file. Here’s an example: prompt.md

# prompt system
You are an assistant who can write comedic monologues in the style of Stephen Colbert.

# prompt user
Tell me about my project.

Now, we need to add information about the project. We do this with mustache templates:

# prompt system
You are an assistant who can write comedic monologues in the style of Stephen Colbert.

# prompt user
Tell me about my project.

My project uses the following languages:
{{project.languages}}

My project has the following files:
{{project.files}}

Leverage tools in your project

Just like extractors, which can be used to render prompts, we define tools in the form of Docker images. A function image follows the same spec as extractors but in reverse. 

The Docker image is automatically bind-mounted to the project.

The Docker image entry point is automatically run within the project using --workdir.

The first argument will be a JSON payload. This payload is generated when the LLM tries to call our function.

- name: write_files
  description: Write a set of files to my project
  parameters:
    type: object
    properties:
      files:
        type: array
        items:
          type: object
          properties:
            path:
              type: string
              description: the relative path to the file that should be written
            content:
              type: string
              description: the content that should be written to a file
            executable:
              type: boolean
              description: whether to make the file executable
  container:
    image: vonwig/function_write_files:latest
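
When the LLM calls this function, the container receives a JSON payload as its first argument, shaped by the parameters schema above. A hypothetical call (the path and content are purely illustrative) might look like:

{"files":
 [{"path": "scripts/hello.sh",
   "content": "#!/bin/sh\necho hello",
   "executable": true}]}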

Test your prompt

1. Add the file to a Git repository and push to a public remote.

2. Paste the URL to reference the file on GitHub.

Alternatively, import a local prompt and select the file on your computer.

Figure 3: Add the URL for the folder.

3. Run.

## ROLE assistant

Don’t even get me started on the files, I mean, have you ever sat down and really looked at a list of files? This project has got more layers than that seven-layer bean dip I had at last weekend’s potluck. This project isn’t just files on files, its files within files, its dot something after dot something else – and before you ask: Yes, all of these are REQUIRED!

Coming down to Dockerfile. Now, I’ve seen some Dockerfiles but our Dockerfile, folks, it’s something else. It lifts, it grinds, it effectively orchestrates our code like a veteran conductor at the symphony. We also have multiple templates because who doesn’t love a good template, right?

Oh, and did I mention the walkthroughs and the resources? Let’s just say this isn’t a “teach a man to fish” situation. This is more of a “teach a man to create an entire fishing corporation” scenario. Now THAT’S dedication.

Finally we’ve got the main.js, and let’s be real, is it even a project without a main.js anymore?

As always, feel free to follow along in our new public repo. Everything we’ve discussed in this blog post is available for you to try out on your own projects.

For more on what we’re doing at Docker, subscribe to our newsletter.

Learn more

Subscribe to the Docker Newsletter. 

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Join Docker CEO Scott Johnston at SwampUP 2024 in Austin

We are excited to announce Docker’s participation in JFrog’s flagship event, SwampUP 2024, which will take place September 9 – 11, in Austin, Texas. In his SwampUP keynote talk, Docker CEO Scott Johnston will discuss how the Docker and JFrog collaboration boosts secure software and AI application development.

Keynote highlights

Johnston will discuss Docker’s approach to managing secure software supply chains by providing developer teams with trusted content, reducing and limiting exposure to malicious content in the early development stages. He will explore how Docker Desktop, Docker Hub, and Docker Scout play critical roles in ensuring that the building blocks developers rely on are deployed securely. By bringing security to the root of the software development lifecycle, highlighting vulnerabilities, and bringing trusted container images to the inner loop, Docker empowers development teams to safeguard their process, ensuring the delivery of higher quality, more secure applications, faster. 

Attendees will get insights into how Docker innovations, including Docker Business capabilities and Docker Hub benefits, are transforming software development. Johnston will walk through the practical benefits of integrating Docker’s products within JFrog’s ecosystem, showcasing real-world examples of how companies use these combined tools to streamline their development pipelines and accelerate delivering applications, many of which are powered by ML and AI. This combination enables a more comprehensive approach to managing software supply chains, ensuring that security is embedded throughout the development lifecycle.

Better together 

Docker and JFrog’s partnership is more than just a collaboration: It’s a commitment to providing developers with the tools and resources they need to build secure, efficient, and scalable applications. This connection between Docker’s expertise in container-first software development and JFrog’s comprehensive DevOps platform empowers development teams to manage their software supply chains with precision. By bringing together Docker’s trusted content and JFrog’s robust artifact management, developers can ensure their applications are built on a foundation of security and reliability.

Our mutual customers with Docker Business subscriptions can leverage features like Registry Access Management and Image Access Management to ensure developers only access verified registries and image repositories, such as specific instances of JFrog Artifactory or JFrog Container Registry.

Looking ahead, Docker and JFrog are committed to continuing their joint efforts in advancing secure software supply chain practices. Upcoming initiatives include expanding the availability of trusted content, enabling deeper integrations between Docker Scout and JFrog’s products, and introducing new features that will further enhance developer productivity and security. These developments will help organizations navigate the complexities of modern software development with greater confidence and control.

See you in Austin

As we prepare for SwampUP, we invite you to explore the integrations between Docker and JFrog that are already transforming development workflows. Whether you’re looking to manage your on-premise images with JFrog Artifactory or leverage Docker’s advanced security analytics and automated image management capabilities, this partnership offers resources to help developers successfully deploy cloud-native and hybrid applications with containerization best practices at their core.

Catch Scott Johnston’s keynote at SwampUP and learn more about how our partnership with JFrog can elevate your development processes. We’re excited to work together to build a more secure, efficient, and innovative software development ecosystem. See you in Austin!

Learn more

Find us at SwampUP 2024.

Explore the integration between Docker Scout and JFrog Artifactory.

Subscribe to the Docker Newsletter.

Source: https://blog.docker.com/feed/

Why We Need More Gender Diversity in the Cybersecurity Space

What does it mean to be diverse? At the root of diversity is the ability to bring people together with different perspectives, experiences, and ideas. It’s about enriching the work environment to lead to more innovative solutions, better decision-making, and a more inclusive environment.

For me, it’s about ensuring that my daughter one day knows that it really is okay for her to be whatever she wants to be in life. That she isn’t bound by a gender stereotype or what is deemed appropriate based on her sex.  

This is why building a more diverse workforce in technology is so critical. I want the children of the world, my children, to be able to see themselves in the people they admire, in the fields they are interested in, and to know that the world is accepting of the path that they choose.

Monday, August 26th, was Women’s Equality Day, and while I recognize that women have come a long way, there is still work to be done. Diversity is not just a buzzword — it’s a necessity. When diverse perspectives converge, they create a rich ground for innovation. 

Women in cybersecurity

Despite progress in many areas, women are still underrepresented in cybersecurity. Let’s look at key statistics. According to data published in the ISC2 Cybersecurity Workforce Study published in 2023:

Women make up 26% of the cybersecurity workforce globally. 

The average global salary of women who participated in the ISC2 survey was US$109,609 compared to $115,003 for men. For US women, the average salary was $141,066 compared to $148,035 for men. 

Making progress

We should recognize where we have had wins in cybersecurity diversity, too.

The 2024 Cybersecurity Skills Gap global research report highlights significant progress in improving diversity within the cybersecurity industry. According to the report, 83% of companies have set diversity hiring goals for the next few years, with a particular focus on increasing the representation of women and minority groups. Additionally, structured programs targeting women have remained a priority, with 73% of IT decision-makers implementing initiatives specifically aimed at recruiting more women into cybersecurity roles. These efforts suggest a growing commitment to enhancing diversity and inclusion within the field, which is essential for addressing the global cybersecurity skills shortage.

Women hold approximately 25% of the cybersecurity jobs globally, and that number is growing. This representation has seen a steady increase from about 10% in 2013 to 20% in 2019, and it’s projected to reach 30% by 2025, reflecting ongoing efforts to enhance gender diversity in this field. 

Big tech companies are playing a pivotal role in increasing the number of women in cybersecurity by launching large-scale initiatives aimed at closing the gender gap. Microsoft, for instance, has committed to placing 250,000 people into cybersecurity roles by 2025, with a specific focus on underrepresented groups, including women. Similarly, Google and IBM are investing billions into cybersecurity training programs that target women and other underrepresented groups, aiming to equip them with the necessary skills to succeed in the industry.

This progress is crucial as diverse teams are often better equipped to tackle complex cybersecurity challenges, bringing a broader range of perspectives and innovative solutions to the table. As organizations continue to emphasize diversity in hiring, the cybersecurity industry is likely to see improvements not only in workforce composition but also in the overall effectiveness of cybersecurity strategies.

Good for business

This imbalance is not just a social issue — it’s a business one. There are not enough cybersecurity professionals joining the workforce, resulting in a shortage. As of the ISC2’s 2022 report, there is a worldwide gap of 3.4 million cybersecurity professionals. In fact, most organizations feel at risk because they do not have enough cybersecurity staffing. 

Cybersecurity roles are also among the fastest growing roles in the United States. The Cybersecurity and Infrastructure Security Agency (CISA) introduced the Diverse Cybersecurity Workforce Act of 2024 to promote the cybersecurity field to underrepresented and disadvantaged communities. 

Here are a few ideas for how we can help accelerate gender diversity in cybersecurity:

Mentorship and sponsorship: Experienced professionals should actively mentor and sponsor women in these fields, helping them navigate the challenges and seize opportunities. Unfortunately, this year the cybersecurity industry has seen major losses in organizations that support women. Women Who Code (WWC) and Girls in Tech shut their doors due to shortages in funds. Other organizations are still available, including:

Girls Who Code

Women in CyberSecurity (WiCyS)

Black Girls in Cyber

Women in Security and Privacy (WISP)

Women in Technology International (WITI)

Women4Cyber (Europe)

Companies may also consider internal mentorship programs or working with partners to allow cross-company mentorship opportunities.

Women within the cybersecurity field should also consider guest lecture positions or even teaching. Young girls who do not get to see women in the field are statistically less likely to choose that as a profession.

Inclusive work environments: Companies must create cultures where diversity is celebrated, not just tolerated or a means to an end. This means fostering environments where women feel empowered to share their ideas and take risks. This could include:

Provide training to employees at all levels. At Docker, every employee receives an annual training budget. Additionally, our Employee Resource Groups (ERGs) are provided with budgets to facilitate educational initiatives to support under-represented groups. Teams can also add additional training as part of the annual budgeting process.

Ensure there is an established career ladder for cybersecurity roles within the organization. Work with team members to understand their wishes for career advancement and create internal development plans to support those achievements. Make sure results are measurable. 

Provide transparency around promotions and pay, reducing the gender gaps in these areas. 

Ensure recruiters and managers are trained on diversity and identifying diverse candidate pools. At Docker, we invest in sourcing diverse candidates and ensuring our interview panels have a diverse team so candidates can learn about different perspectives regarding life at Docker.

Ensure diverse recruitment panels. This is important for recruiting new diverse talent and allows people to understand the culture from multiple perspectives.

Policy changes: Companies should implement policies that support work-life balance, such as flexible working hours and parental leave, making it easier for women to thrive in these demanding fields. Companies could consider the following programs:

Generous paid parental leave.

Ramp-back programs for parents returning from parental leave.

Flexible working hours, remote working options, condensed workdays, etc. 

Manager training to ensure managers are being inclusive and can navigate diverse direct report needs.

Employee Resource Groups (ERGs): Establishing allyship groups and/or employee resource groups (ERGs) helps ensure that employees feel supported and have a mechanism to report needs to the organization. For example, a Caregivers ERG can help advocate for women who need flexibility in their schedules to allow for caregiving responsibilities. 

Better together

As we reflect on the progress made in gender diversity, especially in the cybersecurity industry, it’s clear that while we’ve come a long way, there is still much more to achieve. The underrepresentation of women in cybersecurity is not just a diversity issue — it’s a business imperative. Diverse teams bring unique perspectives that drive innovation, foster creativity, and enhance problem-solving capabilities. The ongoing efforts by companies, coupled with supportive policies and inclusive cultures, are critical steps toward closing the gender gap.

The cybersecurity landscape is evolving, and so must our approach to diversity. It’s encouraging to see big tech companies and organizations making strides in this direction, but the journey is far from over. As we commemorate Women’s Equality Day, let’s commit to not just acknowledging the need for diversity but actively working toward it. The future of cybersecurity — and the future of technology — depends on our ability to embrace and empower diverse voices.

Let’s make this a reality, not just for the sake of our daughters but for our entire industry.

Learn more

Subscribe to the Docker Newsletter. 

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Docker Desktop 4.34: MSI Installer GA, Upgraded Host Networking, and Powerful Enhancements for Boosted Productivity & Administration

Key GA features of the Docker Desktop 4.34 release include: 

MSI installer GA and Windows registry key 

Host networking support to Docker Desktop 

Automatic reclamation of disk space in Docker Desktop for WSL2

Authentication enhancements 

NVIDIA AI Workbench Docker Desktop Integration 

Multi-platform UX improvements and the containerd image store

Docker Desktop 4.34 introduces key features to enhance security, scalability, and productivity for all development team sizes, making deploying and managing environments more straightforward. With the general availability (GA) of the MSI installer for bulk deployment, managing installations across Windows environments becomes even simpler. Enhanced authentication features offer an improved administration experience while reinforcing security. Automatically reclaim valuable disk space with Docker Desktop’s new smart compaction feature, streamlining storage management for WSL2 users. Additionally, the integration with NVIDIA AI Workbench provides developers with a seamless connection between model training and local development. Explore how these innovations simplify your workflows and foster a culture of innovation and reliability in your development practices.

Deploy Docker Desktop in bulk with the MSI installer

We’re excited to announce that the MSI installer for Docker Desktop is now generally available to all our Docker Business customers. This powerful tool allows you to customize and deploy Docker Desktop across multiple users or machines in an enterprise environment, making it easier to manage Docker at scale. 

Features include:

Interactive and silent installations: Choose between an interactive setup process or deploy silently across your organization without interrupting your users.

Customizable installation paths: Tailor the installation location to fit your organization’s needs.

Desktop shortcuts and automatic startup: Simplify access for users with automatic creation of desktop shortcuts and Docker Desktop starting automatically after installation.

Set usage to specific Docker Hub organizations: Control which Docker Hub organizations your users are tied to during installation.

Docker administrators can download the MSI installer directly from the Docker Admin Console.

One of the standout features of this installer is the --allowed-org flag. This option enables the creation of a Windows registry key during installation, enforcing sign-in to a specified organization. By requiring sign-in, you ensure that your developers are using Docker Desktop with their corporate credentials, fully leveraging your Docker Business subscription. This also adds an extra layer of security, protecting your software supply chain.

Additionally, this feature paves the way for Docker to provide you with valuable usage insights across your organization and enable cloud-based control over application settings for every user in your organization in the future.

Figure 1: Docker admins can download the MSI installer directly from the Docker Admin Console.

What’s next

We’re also working on releasing a PKG enterprise installer for macOS, config profiles for macOS, and supporting multiple organizations in all supported sign-in enforcement mechanisms. 

Refer to our docs to learn about MSI configuration and discover more about sign-in enforcement via Windows registry key.

Host networking support to Docker Desktop 

Previously, Docker Desktop lacked seamless host networking capability, complicating the integration between host and container network services. Developers had to take time to set up and enable communication between the host and containers. Docker Desktop now supports host networking directly. 

Host networking allows containers that are started with --net=host to use localhost to connect to TCP and UDP services on the host. It will automatically allow software on the host to use localhost to connect to TCP and UDP services in the container. This simplifies the setup for scenarios in which close integration between host and container network services is required. Additionally, we’re driving cross-platform consistency and simplifying configuration by reducing the need for additional steps, such as setting up port forwarding or bridge networks. 
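
As a quick sketch, assuming a service is already listening on port 8000 on the host (the image and port below are only examples), a container started with host networking can reach it through localhost:

# Fetch a page from a host-local service from inside a container using host networking
docker run --rm --net=host alpine wget -qO- http://localhost:8000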

While this has previously been available in the Docker Engine, we’re now extending this capability to Docker Desktop for Windows, macOS, and Linux. We’re dedicated to improving developer productivity, and this is another way we help developers spend less time configuring network settings and more time building and testing applications, accelerating development cycles. 

This new capability is available for all users logged into Docker Desktop. To enable this feature, navigate to Settings > Resources > Network. Learn more about this feature on Docker Docs. 

Figure 2: Enable the host networking support feature in the Settings menu.

Automatic reclamation of disk space in Docker Desktop for WSL2 

Previously, when customers using Docker Desktop for WSL2 deleted Docker objects such as containers, images, or builds (for example via a docker system prune), the freed storage space was not automatically reclaimed on their host. Instead, they had to use external tools to “compact” the virtual disk/distribution backing Docker Desktop.

Starting with Docker Desktop 4.34, we are rolling out automatic reclamation of disk space. When you quit the app, Docker Desktop will automatically check whether there is storage space that can be returned to the host. It will then scan the virtual disk used for Docker storage and compact it by returning all zeroed blocks to the operating system. Currently, Docker Desktop will only start the scan when it estimates that at least 16 GB of space can be returned. In the future, we plan to make this threshold adaptive and configurable by the user.

The feature is now enabled for all customers running the Mono distribution architecture for Docker Desktop on WSL2. This new architecture, which was rolled out starting with Docker Desktop 4.30 for all fresh installations of Docker Desktop, removed the need for a dedicated docker-desktop-data WSL2 distribution to store docker data. We will be rolling out the new architecture to all customers in the upcoming Docker Desktop releases.

Customers with installations still using the docker-desktop-data WSL2 distribution can compact storage manually via VHDX compaction tools, or change the WSL2 configuration to enable the experimental WSL2 feature for disk cleanup.
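
For reference, one such experimental WSL2 option is sparse VHD support, configured in %UserProfile%\.wslconfig. This is only a sketch; confirm the current setting names in Microsoft’s WSL documentation before relying on them:

[experimental]
# Allow the virtual disk to shrink so freed space is returned to the host
sparseVhd=true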

(Pro tip: Did you know you can use the Disk Usage extension to see how Docker Desktop is using your storage and use it to prune dangling objects with a single click?)

Authentication enhancements 

Previously, authenticating via the CLI required developers to either type their password into the command-line interface — which should generally be avoided by the security-minded — or manually create a personal access token (PAT) by navigating to their Docker account settings, generating the token, and then copying it into the CLI for authentication. This process was time-consuming and forced developers to switch contexts between the CLI and the web portal.

In this latest Docker Desktop release, we’re streamlining the CLI authentication flow. Now, users can authenticate through a seamless browser-based process, similar to the experience in CLIs like GitHub’s gh or Amazon’s AWS CLI. With this improved flow, typing docker login in the CLI will print a confirmation code and open your browser for authentication, automating PAT creation behind the scenes and eliminating the need for manual PAT provisioning. This enhancement saves time, reduces complexity, and delivers a smoother and more secure user experience. Additionally, when you authenticate using this workflow, you’ll be logged in across both Docker CLI and Docker Desktop. 

This new flow also supports developers in organizations that require single sign-on (SSO), ensuring a consistent and secure authentication process.

Figure 3: When you log in via the new workflow, you’ll be logged in across both Docker CLI and Docker Desktop.

Enterprise-grade AI application development with Docker Desktop and NVIDIA AI Workbench  

AI development is a complex journey, often hindered by the challenge of connecting the dots between model training, local development, and deployment. Developers frequently encounter a fragmented and inconsistent development environment and toolchain, making it difficult to move seamlessly from training models in the cloud to running them locally. This fragmentation slows down innovation, introduces errors, and complicates the end-to-end development process.

To solve this, we’re proud to announce the integration of Docker Desktop with NVIDIA AI Workbench, a collaboration designed to streamline every stage of AI development. This solution brings together the power of Docker’s containerization with NVIDIA’s leading AI tools, providing a unified environment that bridges the gap between model training and local development.

With this integration, you can now train models in the cloud using NVIDIA’s robust toolkit and effortlessly transition to local development on Docker Desktop. This eliminates the friction of managing different environments and configurations, enabling a smoother, more efficient workflow from start to finish. 

To learn more about this collaboration and how Docker Business supports enterprise-grade AI application development, read our blog post. 

Multi-platform UX improvements and the containerd image store  

In February 2024, we announced the general availability of the containerd image store in Docker Desktop. Since then, we’ve been working on improving the output of our commands to make multi-platform images easier to view and manage. 

We are happy to announce that the docker image list CLI command now supports an experimental --tree flag. This offers a completely new tree view of the image list, which is more suitable for describing multi-platform images.

Figure 4: New CLI tree view of the image list.
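
To try it yourself (the flag is experimental, so its output and behavior may change between releases):

# Display local images as a tree, grouping platform variants under each tag
docker image list --tree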

If you’re looking for multi-platform support, you need to ensure that you have the containerd image store enabled in Docker Desktop (see General settings in Docker Desktop, select Use containerd for pulling and storing images). As of the Docker Desktop 4.34 release, fresh installs or factory resets of Docker Desktop will now default to using the containerd image store, meaning that you get multi-platform building capability out of the box. 

Figure 5: You can enable the containerd image store in the Docker Desktop general settings.

To learn more about the containerd image store, check out our containerd documentation. 

Wrapping up 

Docker Desktop 4.34 marks a significant milestone in our commitment to providing an industry-leading container development suite. With key features such as the MSI installer for bulk deployment, enhanced authentication mechanisms, and the integration with NVIDIA AI Workbench, Docker Desktop is transforming how teams manage deployments, protect their environments, and accelerate their development workflows. 

These advancements simplify your development processes and help drive a culture of innovation and reliability. Stay tuned for more exciting updates and enhancements as we continue to deliver solutions designed to empower your development teams and secure your operations at scale. 

Upgrade to Docker Desktop 4.34 today and experience the future of container development. 

Learn more

Authenticate and update to receive your subscription level’s newest Docker Desktop features.

New to Docker? Create an account. 

Subscribe to the Docker Newsletter.

Read New Docker Desktop Enterprise Admin Features: MSI Installer and Login Enforcement Alternative. 

Read Optimizing AI Application Development with Docker Desktop and NVIDIA AI Workbench. 

Learn more about MSI configuration.

Discover more about sign-in enforcement via Windows registry key.

Learn more about host networking support.

Source: https://blog.docker.com/feed/

New Docker Desktop Enterprise Admin Features: MSI Installer and Login Enforcement Alternative

At Docker, we continuously strive to enhance the ease and security of our platform for all users. We’re excited to launch the general availability for two significant updates: the Docker Desktop MSI installer and a new sign-in enforcement alternative. These updates aim to streamline administration, improve security, and ensure users can take full advantage of Docker Business subscription features.

Docker Desktop MSI installer

Replacing an EXE installer with an MSI installer for desktop applications offers numerous advantages, particularly for enterprise customers:

Enhanced deployment features: MSI installers provide the ability to manage installations through Group Policy and offer more comprehensive installation and uninstallation control.

Easier and more secure mass deployment: Facilitates secure, efficient deployment across multiple devices, enhancing IT administration efficiency.

Widely accepted: MSI installers are recognized in both home and enterprise environments.

Supports standardized silent install parameters: Aligns with industry standards for silent installations.

Ideal for large-scale deployment: MSI files can be customized to include specific options, such as silent installs or custom installation paths, making them perfect for corporate environments.

For customers using Intune MDM, we have detailed documentation to assist with integration: Intune MDM Documentation.

To access the installer, navigate to the Docker Admin Console > Security and Access > Deploy Docker Desktop.

Sign-in enforcement: Streamlined alternative for organizations

Figure 1: Sign-in enforcement.

Recognizing the need for more streamlined and familiar ways to enforce sign-in protocols, Docker is introducing a new sign-in enforcement mechanism for Windows OS (Figure 1). This update brings several business benefits, including increased user logins and better seat allocation awareness, ultimately helping customers maximize their business subscription features and manage license costs more effectively.

We now offer integration with the Windows Registry, allowing admins to add approved organizations directly within familiar Windows system settings. Find out more.

By moving away from the traditional registry.json method and adopting universally recognized settings, Docker simplifies the process for IT admins already familiar with these systems. This change means:

Easier integration: Organizations can seamlessly integrate Docker sign-in enforcement into their existing configuration management workflows.

Simplified administration: Reduces the learning curve and eliminates the need for additional internal approvals for new file types.

These changes are designed to offer quick deployment and familiar processes to IT administrators. We are committed to refining these mechanisms based on user feedback and evolving requirements. 

Note that the legacy registry.json method will continue to work, ensuring support for existing customers, but it should now be considered a legacy method. If you roll out a registry key, it will take precedence over any pre-existing registry.json.
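
For reference, creating the key manually might look like the sketch below. The key path and value name follow Docker’s published guidance at the time of writing, but treat them as illustrative and confirm against the sign-in enforcement documentation:

rem Creates the multi-string value Docker Desktop reads to enforce sign-in (names assumed from Docker's docs)
reg add "HKLM\Software\Policies\Docker\Docker Desktop" /v allowedOrgs /t REG_MULTI_SZ /d "docker.com" /f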

Roll out the registry key sign-in enforcement at Docker install time via the --allowed-org flag. 

For example, to deploy the MSI installer with sign-in enforcement, run the following: 

msiexec /i "DockerDesktop.msi" /L*V ".msi.log" /quiet /norestart ALLOWEDORG="docker.com"

The above command installs Docker Desktop silently with verbose logging, without restarting the system, and it allows only the specified organization (in this case, “docker.com”) to use Docker Desktop by enforcing sign-in.

Check our full step-by-step installation documentation.

Roadmap

We’re also working on several related administrative improvements, such as:

PKG enterprise installer for macOS.

macOS configuration profiles for enforcing sign-in.

Supporting multiple organizations in all available sign-in enforcement mechanisms.

Stay tuned for these exciting updates!

Wrapping up

These updates reflect our ongoing commitment to improving the Docker platform for our users. By introducing the Docker Desktop MSI installer and new sign-in enforcement alternatives, we aim to simplify deployment, enhance security, and streamline administration for organizations of all sizes. We encourage IT teams and administrators to start planning for these changes to enhance their Docker experience.

Learn more

See what else is new in Docker Desktop.

Install Docker Desktop with MSI.

Enforce sign-in with Windows Registry Key.

Subscribe to the Docker Newsletter.

Authenticate and update to receive your subscription level’s newest Docker Desktop features.

New to Docker? Create an account. 

Source: https://blog.docker.com/feed/