Experimental Windows Containers Support for BuildKit Released in v0.13.0

We are excited to announce that the latest BuildKit release, v0.13.0, contains experimental Windows Containers support. BuildKit has been around for many years and has been the default build engine on Linux since Docker Engine 23.0.0. 

BuildKit is a toolkit for converting source code to build artifacts (like container images) in an efficient, expressive, and repeatable manner. Compared with the legacy Docker builder, BuildKit introduced the following benefits:

Parallelize building independent build stages and skip any unused stages.

Incrementally transfer only the changed files in your build context between builds, and skip transferring unused files altogether.

Use Dockerfile frontend implementations with many new features.

Avoid side effects with the rest of the API (intermediate images and containers).

Prioritize your build cache for automatic pruning.
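As an illustration of the first benefit (this example is not from the original post; the stage names and file layout are hypothetical), a multi-stage Dockerfile with independent stages lets BuildKit build them in parallel and skip any stage the final target never references:

```dockerfile
# syntax=docker/dockerfile:1

# These two stages share no dependencies, so BuildKit can build them in parallel.
FROM golang:1.21 AS server
WORKDIR /src
COPY server/ .
RUN go build -o /out/server .

FROM node:20 AS assets
WORKDIR /src
COPY web/ .
RUN npm ci && npm run build

# Only stages reachable from the final stage are built; unreferenced ones are skipped.
FROM alpine
COPY --from=server /out/server /usr/local/bin/server
COPY --from=assets /src/dist /var/www
CMD ["server"]
```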

Since 2018, Windows Container customers have been asking for Windows support for BuildKit, as seen in the BuildKit repo and Windows Containers repo, with hundreds of reactions and comments. We have listened to our users and focused resources in the past year to light up Windows Containers support on BuildKit.

Until now, we have shipped only the Buildx client on Windows, for building Linux images and some very limited Windows images using cross-compilation. Today, we are introducing experimental support for Windows Containers in BuildKit, with the aim of making this available soon in your standard Docker Build.

What’s next?

In the upcoming months, we will work toward further improvements, including:

General Availability (GA) ready: Improving release materials, including guides and documentation.

Integration with Docker Engine: So you can just run docker build.

OCI worker support: On Linux, there is an option to run BuildKit with only runc using the OCI worker. Currently, only the containerd worker is supported for Windows.

Container driver: Add support for running in the container driver.

Image outputs: Some image outputs supported by Linux may not work on Windows and need to be tested and assessed. These include exporting an image to multiple registries, checking if keys for image output are supported, and testing multi-platform image-building support.

Building other artifacts: BuildKit can be used to build other artifacts beyond container images. Work needs to be done in this area to cross-check whether other artifacts, such as binaries, libraries, and documentation, are also supported on Windows as they are on Linux.

Running buildkitd doesn’t require Admin: Currently, running buildkitd on Windows requires admin privileges. We will be looking into running buildkitd with low privileges, aka “rootless”.

Export cache: Investigations need to be done to confirm whether specific cache exporters (inline, registry, local, gha [GitHub Actions], s3, azblob) are also supported on Windows.

Linux parity: Identifying, accessing, and closing the feature parity gap between Windows and Linux.

Walkthrough — Build a basic “Hello World” image with BuildKit and Windows Containers 

Let’s walk through the process of setting up BuildKit, including the necessary dependencies, and show how to build a basic Windows image. For feedback and issues, file a ticket at Issues · moby/buildkit (github.com) tagged with area/windows. 

The platform requirements are listed below. In our scenario, we will be running a nanoserver:ltsc2022 base image on AMD64.

Architecture: AMD64, Arm64 (binaries available but not officially tested yet). 

Supported operating systems: Windows Server 2019, Windows Server 2022, Windows 11. 

Base images: servercore:ltsc2019, servercore:ltsc2022, nanoserver:ltsc2022. See the compatibility map.

The workflow will cover the following steps:

Enable Windows Containers.

Install containerd.

Install BuildKit.

Build a simple “Hello World” image.

1. Enable Windows Containers 

Start a PowerShell terminal in admin privilege mode. Run the following command to ensure the Containers feature is enabled:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V, Containers -All

If you see RestartNeeded as True on your setup, restart your machine and reopen an Administrator PowerShell terminal (Figure 1). Otherwise, continue to the next step.

Figure 1: Enabling Windows Containers in PowerShell.

2. Install containerd

Next, we need to install containerd, which is used as the container runtime for managing containers and images.

Note: We currently only support the containerd worker. In the future, we plan to add support for the OCI worker, which uses runc and will therefore remove this dependency.

Run the following script to install the latest containerd release. If you have containerd already installed, skip the script below and run Start-Service containerd to start the containerd service. 

Note: containerd v1.7.7+ is required.

# If containerd was previously installed, stop the service:
Stop-Service containerd

# Download and extract the desired containerd Windows binaries
$Version="1.7.13" # update to your preferred version
curl.exe -L https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-amd64.tar.gz -o containerd-windows-amd64.tar.gz
tar.exe xvf .\containerd-windows-amd64.tar.gz

# Copy the binaries (containerd.exe, ctr.exe) into Program Files
Copy-Item -Path ".\bin\" -Destination "$Env:ProgramFiles\containerd" -Recurse -Container:$false -Force

# Add the binaries to $Env:Path
$Path = [Environment]::GetEnvironmentVariable("PATH", "Machine") + [IO.Path]::PathSeparator + "$Env:ProgramFiles\containerd"
[Environment]::SetEnvironmentVariable("Path", $Path, "Machine")
# Reload PATH, so you don't have to open a new PS terminal later if needed
$Env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")

# Generate the default configuration
containerd.exe config default | Out-File "$Env:ProgramFiles\containerd\config.toml" -Encoding ascii
# Review the configuration. Depending on your setup you may want to adjust:
# - the sandbox_image (Kubernetes pause image)
# - the cni bin_dir and conf_dir locations
Get-Content "$Env:ProgramFiles\containerd\config.toml"

# Register and start the containerd service
containerd.exe --register-service
Start-Service containerd

3. Install BuildKit

Note: Ensure you have updated to the latest version of Docker Desktop.

Run the following script to download and extract the latest BuildKit release.

$version = "v0.13.0" # specify the release version, v0.13+
$arch = "amd64" # arm64 binary available too
curl.exe -LO https://github.com/moby/buildkit/releases/download/$version/buildkit-$version.windows-$arch.tar.gz
# there could be another `bin` directory left over from the containerd
# instructions; you can move it out of the way first
mv bin bin2
tar.exe xvf .\buildkit-$version.windows-$arch.tar.gz
## x bin/
## x bin/buildctl.exe
## x bin/buildkitd.exe

Next, run the following commands to add the BuildKit binaries to your Program Files directory, then add them to the PATH so they can be called directly.

# After the binaries are extracted into the bin directory,
# move them to an appropriate path in your $Env:PATH directories or:
Copy-Item -Path ".\bin\" -Destination "$Env:ProgramFiles\buildkit" -Recurse -Force
# Add the `buildkitd.exe` and `buildctl.exe` binaries to $Env:PATH
$Path = [Environment]::GetEnvironmentVariable("PATH", "Machine") + `
[IO.Path]::PathSeparator + "$Env:ProgramFiles\buildkit"
[Environment]::SetEnvironmentVariable("Path", $Path, "Machine")
$Env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + `
[System.Environment]::GetEnvironmentVariable("Path","User")

Run buildkitd.exe. You should expect to see something as shown in Figure 2:

Figure 2: Successfully starting buildkitd without any errors in the logs.

Now we can set up buildx (the BuildKit client) to use our BuildKit instance. Here we will create a builder that points to the BuildKit instance we just started, by running:

docker buildx create --name buildkit-exp --use --driver=remote npipe:////./pipe/buildkitd

Here we are creating a new instance of a builder and pointing it to our BuildKit instance. BuildKit will listen on npipe:////./pipe/buildkitd.

Notice that we also name the builder; here, we call it buildkit-exp, but you can name it whatever you want. Just remember to add --use to set this as the current builder.

Let’s test our connection by running docker buildx inspect (Figure 3):

Figure 3: Docker buildx inspect shows that our new builder is connected.

All good!

You can also list and manage your builders. Run docker buildx ls (Figure 4).

Figure 4: Run docker buildx ls to return a list of all builders and nodes. Here we can see our new builder added to the list.

4. Build “Hello World” image 

We will be building a simple “hello world” image as shown in the following Dockerfile.

FROM mcr.microsoft.com/windows/nanoserver:ltsc2022
COPY hello.txt C:/
CMD ["cmd", "/C", "type C:\\hello.txt"]

Run the following commands to create a directory and change directory to sample_dockerfile.

mkdir sample_dockerfile
cd sample_dockerfile

Run the following script to add the Dockerfile shown above and hello.txt to the sample_dockerfile directory.

Set-Content Dockerfile @"
FROM mcr.microsoft.com/windows/nanoserver:ltsc2022
USER ContainerAdministrator
COPY hello.txt C:/
RUN echo "Goodbye!" >> hello.txt
CMD ["cmd", "/C", "type C:\\hello.txt"]
"@

Set-Content hello.txt @"
Hello from buildkit!
This message shows that your installation appears to be working correctly.
"@

Now we can use buildx to build our image and push it to the registry (see Figure 5):

docker buildx build --builder buildkit-exp --push -t <your_username>/hello-buildkit .

Figure 5: Here we can see our build running to a successful completion.

If you are utilizing Docker Hub as your registry, run docker login before running buildx build (Figure 6).

Figure 6: Successful login to Docker Hub so we can publish our images.

Congratulations! You can now run containers with standard docker run:

docker run <your_username>/hello-buildkit

Get started with BuildKit

We encourage you to test out the experimental Windows Containers support released in BuildKit v0.13.0. To start, follow the documentation or this blog post, which will walk you through building a simple Windows image with BuildKit. File feedback and issues at Issues · moby/buildkit (github.com) tagged with area/windows.

Learn more

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Thank you

A big thanks to @gabriel-samfira, @TBBle, @tonistiigi, @AkihiroSuda, @crazy-max, @jedevc, @thaJeztah, @profnandaa, @iankingori, and many other key community members who have contributed to enabling Windows Containers support on BuildKit. We also thank Windows Container developers who continue to provide valuable feedback and insights.
Source: https://blog.docker.com/feed/

Announcing Docker Desktop Support for Windows on Arm: New AI Innovation Opportunities

Docker Desktop now supports running on Windows on Arm (WoA) devices. This exciting development was unveiled during Microsoft’s “Introducing the Next Generation of Windows on Arm” session at Microsoft Build. Docker CTO, Justin Cormack, highlighted how this strategic move will empower developers with even more rapid development capabilities, leveraging Docker Desktop on Arm-powered Windows devices.

The Windows on Arm platform is redefining performance and user experience for applications. With this integration, Docker Desktop extends its reach to a new wave of hardware architectures, broadening the horizons for containerized application development.

Justin Cormack announcing Docker Desktop support for Windows on Arm devices with Microsoft Principal TPM Manager Jamshed Damkewala in the Microsoft Build session “Introducing the next generation of Windows on Arm.” 

Docker Desktop support for Windows on Arm

Read on to learn why Docker Desktop support for Windows on Arm is a game changer for developers and organizations.

Broader accessibility

By supporting Arm devices, Docker Desktop becomes accessible to a wider audience, including users of popular Arm-based Windows devices such as Microsoft’s latest hardware. This inclusivity fosters a larger, more diverse Docker community, enabling more developers to harness the power of containerization on their preferred devices.

Enhanced developer experience

Developers can seamlessly work on the newest Windows on Arm devices, streamlining the development process and boosting productivity. Docker Desktop’s consistent, cross-platform experience ensures that development workflows remain smooth and efficient, regardless of the underlying hardware architecture.

Future-proofing development

As the tech industry gradually shifts toward Arm architecture for its efficiency and lower power consumption, Docker Desktop’s support for WoA devices ensures we remain at the forefront of innovation. This move future-proofs Docker Desktop, keeping it relevant and competitive as this transition accelerates.

Innovation and experimentation

With Docker Desktop on a new architecture, developers and organizations have more opportunities to innovate and experiment. Whether designing applications for traditional x64 or the emerging Arm ecosystems, Docker Desktop offers a versatile platform for creative exploration.

Market expansion

Furthering compatibility in the Windows on Arm space opens new markets and opportunities for Docker, including new relationships with device manufacturers and increased adoption in sectors that prioritize energy efficiency and portability. It also supports Docker’s users and customers in leveraging the development environments that best fit their goals.

Accelerating developer innovation with Microsoft’s investment in WoA dev tooling

Windows on Arm is arguably as successful as it has ever been. Today, multiple Arm-powered Windows laptops and tablets are available, capable of running nearly the entire range of Windows apps thanks to x86-to-Arm code translation. While Windows on Arm still represents a small fraction of the entire Windows ecosystem, the development of native Arm apps provides a wealth of fresh opportunities for AI innovation.

Microsoft’s investments align with Docker’s strategic goals of cross-platform compatibility and user-centric development, ensuring Docker remains at the forefront of containerization technologies in a diversifying hardware landscape.

Expand your development landscape with Docker Desktop on Windows Arm devices. Update to Docker Desktop 4.31 or consider upgrading to Pro or Business subscriptions to unlock the full potential of cross-platform containerization. Embrace the future of development with Docker, where innovation, efficiency, and cross-platform compatibility drive progress.

Learn more

Watch the Docker Breakout Session Optimizing the Microsoft developer experience with Docker to learn more about Docker and Microsoft better together opportunities.

Authenticate and update to receive the newest Docker Desktop features per your subscription level.

New to Docker? Create an account.

Learn about Docker Build Cloud and how you can leverage cloud resources directly from Docker Desktop.

Subscribe to the Docker Newsletter.


The Strategic Imperative of AI in 2024

The winds of change are sweeping across industries, propelled by the transformative power of generative artificial intelligence (GenAI). In 2024, AI has become a strategic imperative for enterprises seeking to stay ahead of the curve. Although some organizations may view AI with hesitation, the reality is that ignoring its potential puts them at risk of falling behind. 

In this article, we examine the incredible growth of AI and explore its potential power to transform industries and help enterprises accelerate innovation.

Download the white paper: Docker, Putting the AI in Containers

The Cambrian explosion of artificial intelligence

You are probably familiar with chatbots for desktop users, such as ChatGPT and Google Gemini. However, the landscape of enterprise applications is teeming with examples of AI driving differentiation and success. Consider healthcare, where AI algorithms can aid in early disease detection and personalized treatment plans, or finance, where AI-powered fraud detection systems and algorithmic trading are reshaping the industry. In manufacturing, AI-driven robots can optimize production lines, and predictive maintenance can help minimize downtime. 

We are seeing an even more significant expansion as new types of AI systems provide solutions to problems previously not attainable with machine learning. New GenAI systems offer capabilities to solve organizations’ most pressing issues faster and more efficiently than ever.

In 2023, IBM reported that 42% of IT professionals at large organizations report that they have actively deployed AI, while an additional 40% are actively exploring using the technology. Across the board, businesses are leveraging AI to innovate, gain market share, and secure a competitive edge.

The landscape of AI models has undergone a fascinating shift in a very short time. We have witnessed the initial explosion of behemoths like OpenAI’s GPT-3, boasting billions of parameters and impressive capabilities. These large language models (LLMs) captivated the world with their ability to generate human-quality text, translate languages, and answer complex questions.

Shift in scale

The sheer scale of these LLMs, however, has presented challenges in terms of computational resources, training costs, and environmental impact. As sustainability concerns have intensified and accessibility has become a priority, a new breed of AI models has emerged: the small and robust models.

These smaller models, exemplified by projects like Mixtral, Microsoft’s Phi, Google’s Gemini, and others, operate with significantly fewer parameters, often a few billion rather than hundreds of billions. This reduction in size does not equate to a decrease in capability. These models leverage innovative architectures and training techniques to achieve impressive performance metrics, sometimes rivaling their larger counterparts.

As the number and type of models have increased, there has also been growth of open source ethos in AI. Hugging Face, a repository for open source AI software, datasets, and development tools, has seen its list of models grow to more than 500,000 models of all shapes and sizes suited for various applications (Figure 1). Many of these models are ideally suited for deployment in containers that can be developed locally or in the data center.

Figure 1: Hugging Face provides a repository of open source models and tools to help test and develop large language models.

This shift toward smaller, more efficient models signifies a crucial change in focus. The emphasis is no longer solely on raw power but also on practicality, resourcefulness, and accessibility. These models help democratize AI by lowering the barrier to entry for researchers, enterprise software developers, and even small and medium businesses with limited resources. They pave the way for deployment on edge devices, fostering advancements in areas like AI at the edge and ubiquitous computing.

These models will also provide the foundation for enterprises to adapt and fine-tune them for their own usage. They will do so using existing containerization practices and will need tools that let them move quickly through each phase of the software development lifecycle. As the industry’s de facto development and deployment environment for enterprise applications, Docker containerization offers an ideal approach.

The arrival of these small yet powerful models also signals a new era in AI development. This change is a testament to the ingenuity of researchers and represents a shift towards responsible and sustainable AI advancement. Although large models will likely continue to play a vital role, the future of AI will increasingly be driven by these smaller, more impactful models.

Operational drivers

Beyond the competitive landscape, AI presents a compelling value proposition through its operational benefits. Imagine automating repetitive tasks, extracting actionable insights from massive datasets, and delivering more personalized experiences. AI facilitates data-driven decision-making as users push projects to completion, improving efficiency, cost reduction, and resource optimization.

Alignment with business goals

Users must align AI initiatives with specific business goals and objectives, however, rather than simply deploying AI as a technology standalone. Whether driving revenue growth, expanding market share, or enhancing operational excellence, AI-driven projects can be powerful when directed toward strategic priorities. For instance, AI-powered recommendation engines can help boost sales, while chatbots can improve customer service, ultimately contributing to overall business success.

Digital transformation

Moreover, AI has become a cornerstone of digital transformation initiatives. Businesses are undergoing a fundamental shift toward data-driven, interconnected operations, and AI plays a critical role in unlocking new opportunities and accelerating this transformation. From personalized marketing campaigns to hyper-efficient supply chains, AI empowers organizations to adapt to ever-changing market dynamics and achieve sustainable growth.

The AI imperative

As competitors leverage AI to fuel innovation and gain a competitive edge, businesses that fail to embrace this transformative technology risk being left behind. AI has the potential to revolutionize a variety of industries, from manufacturing to healthcare, and can provide enterprises with a host of benefits, including:

Enhanced decision-making: AI algorithms can analyze vast amounts of data to identify patterns, trends, and insights beyond human analysis capabilities. This capability enables businesses to make informed decisions, optimize operations, and minimize risks.

Streamlined and automated processes: AI-powered automation can handle repetitive and time-consuming tasks precisely and efficiently, freeing up valuable human resources for more strategic and creative endeavors. This approach can increase productivity, cost savings, and improve customer satisfaction.

Enhanced customer experience: AI-driven chatbots and virtual assistants can provide seamless and personalized customer support, resolving queries promptly and efficiently. AI can also analyze customer data to tailor marketing campaigns, product recommendations, and offers, thereby creating a more engaging and satisfying customer experience.

Innovation and product development: AI can accelerate innovation by allowing businesses to explore new ideas, test hypotheses, and rapidly prototype solutions. This approach can lead to the development of innovative products and services that meet changing customer needs.

The adoption of AI also comes with challenges that businesses must carefully navigate. For example, hurdles that enterprises must address include ethical considerations, data privacy concerns, and the need for skilled AI professionals.

Conclusion

In 2024 and beyond, AI is poised to reshape the business landscape. Enterprises that recognize the strategic imperative of AI and embrace it will stay ahead of the curve, while those that lag may struggle to remain competitive. Businesses need to consider how best to invest in AI, develop a clear AI strategy, and adopt this transformative technology. 

To learn more, read the whitepaper Docker, Putting the AI in Containers, which aims to equip you with the knowledge and tools to unlock the transformative potential of AI, starting with the powerful platform of Docker containerization.

Read the white paper: Docker, Putting the AI in Containers

Learn more

Read Docker, Putting the AI in Containers.

Get started with Artificial Intelligence and Machine Learning With Docker.

Read Effortlessly Build Machine Learning Apps with Hugging Face’s Docker Spaces.

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

This post was contributed by Mark Hinkle, CEO and Founder of Peripety Labs.

Docker Documentation Gets an AI-Powered Assistant

We recently launched a new tool to enhance Docker documentation: an AI-powered documentation assistant incorporating kapa.ai. Docker Docs AI is designed to get you the information you need by providing instant, accurate answers to your Docker-related questions directly within our documentation pages.

Docker Docs AI

Docker documentation caters to a diverse range of users, from beginner users eager to learn the basics to advanced users keen on exploring Docker’s new functionalities and CLI options (Figure 1).

Figure 1: Docker Docs AI in action.

Navigating a large documentation website can be daunting, especially when you’re in a hurry to solve specific issues or implement new features. Context-switching, trying to locate the right information, and piecing together information from different sections are all examples of pain points users face when looking up a complex command or configuration file. 

The AI assistant addresses these pain points by simplifying the search process, interpreting your questions, and guiding you to the precise information you need when you need it (Figure 2).

Figure 2: Docker Docs AI text box for asking questions.

Find what you’re looking for

Docker documentation consists of more than 1,000 pages of content covering various topics, products, and services. The docs get about 13 million views every month, and most of those views originate from search engines. Although search engines are great, it isn’t always easy to string the right keywords together to get the result you’re looking for. That’s where we think an AI-powered search can help:

It’s better at recognizing your intent and personalizing the results.

It lets you search in a more conversational style.

More importantly, kapa.ai is a Retrieval-Augmented Generation (RAG) system that uses the Docker technical documentation as a knowledge source for answering questions. This makes it capable of handling highly specific questions, contextual to Docker, with high accuracy, and with backlinks to the relevant content for additional reading.
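kapa.ai’s internal pipeline is not public, but the general RAG idea described above (retrieve the most relevant documentation passage, then prepend it to the model prompt) can be sketched with a toy example. The documents, the bag-of-words retriever, and all names below are purely illustrative:

```python
# Toy sketch of Retrieval-Augmented Generation: retrieve the best-matching
# documentation snippet, then build a context-grounded prompt for the LLM.
from collections import Counter
import math

# Stand-in "documentation pages" (illustrative, not real Docker docs content)
docs = {
    "install": "Install Docker Desktop by downloading the installer.",
    "compose": "Docker Compose lets you define multi-container applications.",
}

def similarity(a: str, b: str) -> float:
    """Cosine similarity over punctuation-stripped word counts."""
    wa = Counter(w.strip(".,?!") for w in a.lower().split())
    wb = Counter(w.strip(".,?!") for w in b.lower().split())
    dot = sum(wa[w] * wb[w] for w in wa)
    na = math.sqrt(sum(v * v for v in wa.values()))
    nb = math.sqrt(sum(v * v for v in wb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str) -> str:
    """Return the key of the documentation page closest to the question."""
    return max(docs, key=lambda k: similarity(question, docs[k]))

def build_prompt(question: str) -> str:
    """Prepend the retrieved page so the model answers from real docs."""
    page = retrieve(question)
    return f"Context: {docs[page]}\n\nQuestion: {question}"

print(retrieve("How do I define a multi-container application with Compose?"))  # -> compose
```

A production system would use learned embeddings and a vector index instead of word counts, but the retrieve-then-generate structure is the same, and it is what enables answers with backlinks to the source pages.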

Language options

Additionally, the new docs AI search can answer user questions in your preferred language. For example, when a user asks a question about Docker in Simplified Chinese, the AI search detects the language of the query, processes the question to understand the context and intent, and then translates the response into Simplified Chinese (Figure 3). 

This multilingual capability allows users to interact with the AI search seamlessly in their native language, thereby improving accessibility and enhancing the overall user experience.

Figure 3: Docker Docs AI can answer questions in your preferred language.

Using the Docker Docs AI

We’re thrilled to see that our users are highly engaged with the AI search since its launch, and we’re processing around 1,000 queries per day! Users can vote on answers and optionally leave comments, which provides us with great insights into the types of questions asked and allows us to improve responses.

The following section shows interesting ways that people are using Docker Docs AI.

Answers from multiple sources

Sometimes, the answer you need requires digging into multiple pages, extracting information from each page, and piecing it together. In the following example, the user instructs the agent to generate an inline Dockerfile in a Compose file. 

This specific example doesn’t exist in the Docker documentation, but the AI assistant generates a file using different sources (Figure 4):

Figure 4: Docker Docs AI can generate answers containing information from multiple sources.

In this case, the AI derived the answer from the following sources:

Building multi-platform images / cross-compilation

Compose Build Specification / dockerfile_inline

Multi-stage builds
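For illustration (this snippet is ours, not the assistant’s output), the Compose Build Specification’s `dockerfile_inline` attribute referenced above lets a service embed its Dockerfile directly in the Compose file; the service name and image below are hypothetical:

```yaml
services:
  app:
    build:
      context: .
      dockerfile_inline: |
        FROM alpine:latest
        RUN echo "hello" > /hello.txt
        CMD ["cat", "/hello.txt"]
```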

Debugging commands

Often, you need to consult the documentation when you’re faced with a specific problem in building or running your application. Docker docs cannot cover every possible error case for every type of application, so finding the right information to debug your problem can be time-consuming. 

The AI assistant comes in handy here as a debugging tool (Figure 5):

Figure 5: Docker Docs AI can help with debugging.

Here, the question contains a specific error message of a failed build. Given the error message, the AI can deduce the problematic line of code in the Dockerfile that caused this error, and suggest ways to solve it, including links to the relevant documentation for additional reading.

Contextual help

One of the most important capabilities unlocked with AI search is the ability to provide contextual help for your application and source code. The conversational user interface lets you provide additional context to your questions that just isn’t possible with a traditional search tool (Figure 6):

Figure 6: You can provide additional context to help Docker Docs AI generate an answer.

Dive into Docker documentation

The new AI search capability within Docker documentation has emerged as an indispensable resource. The tool streamlines access to essential information to a wide range of users, ensuring a smoother developer experience. 

We invite you to try it out, use it to debug your Dockerfiles, Compose files, and docker run commands, and let us know what you think by leaving a comment using the feedback feature in the AI widget.

Explore new Docker concept guides

What is a container? This guide includes a video, explanation, and hands-on module so you can learn all about the basics of building with Docker. 

Building images: Get started with the guide for understanding the image layers.

Running containers: Learn about publishing and exposing ports.

GenAI video transcription and chat: Our new GenAI guide presents a project on video transcription and analysis using a set of technologies related to the GenAI Stack.

Administration overview: Administrators can manage companies and organizations using Docker Hub or the Docker Admin Console. Check out the administration manual to learn the right setup for your organization.

Data science with JupyterLab: A new use-case guide explains how to use Docker and JupyterLab to create and run reproducible data science environments.

Learn more

Authenticate and update to receive the newest Docker Desktop features per your subscription level.

New to Docker? Create an account. 

Learn how Docker Build Cloud in Docker Desktop can accelerate builds.

Secure your supply chain with Docker Scout in Docker Desktop.

Subscribe to the Docker Newsletter.

Have questions? The Docker community is here to help.


“@docker can you help me…”: An Early Look at the Docker Extension for GitHub Copilot

At this point, every developer has probably heard about GitHub Copilot. Copilot has quickly become an indispensable tool for many developers, helping novice to seasoned developers become more productive by improving overall efficiency and expediting learning. 

Today, we are thrilled to announce that we are joining GitHub’s Partner Program and have shipped an experience as part of their limited public beta. 

At Docker, we want to make it easy for anyone to reap the benefits of containers without all the overhead of getting started. We aim to meet developers wherever they are, whether that’s their favorite editor, their terminal, Docker Desktop, or now even GitHub.

What is the Docker Copilot extension?

In short, the Docker extension for GitHub Copilot (@docker) is an integration that extends GitHub Copilot’s technology to assist developers in working with Docker. 

What can I use @docker for? 

The initial scope of the Docker extension aims to take any developer end-to-end, from learning about containerization to validating and using generated Docker assets for inner loop workflows (Figure 1). Here’s a quick overview of what’s possible today:

Initiate a conversation with the Docker extension: In GitHub Copilot Chat, get in the extension context by using “@docker” at the beginning of your session.

Learn about containerization: Ask the Docker extension for GitHub Copilot to give you an overview of containerization with a question like “@docker, What does containerizing an application mean?”

Generate the correct Docker assets for your project: Get help containerizing your application and watch it generate the Dockerfiles, docker-compose.yml, and .dockerignore files tailored to your project’s languages and file structure: “@docker How would I use Docker to containerize this project?” 

Open a pull request with the assets to save you time: With your consent, the Docker extension can even ask if you want to open a PR with these generated Docker assets on GitHub, allowing you to review and merge them at your convenience.

Find project vulnerabilities with Docker Scout: The Docker extension also integrates with Docker Scout to surface a high-level summary of detected vulnerabilities and provide the next steps to continue using Scout in your terminal via CLI: “@docker can you help me find vulnerabilities in my project?”

From there, you can quickly jump into an editor, like Codespaces, VS Code, or JetBrains IDEs, and start building your app using containers. The Docker Copilot extension currently supports Node, Python, and Java-based projects (single-language or multi-root/multi-language projects).

Figure 1: Docker extension for GitHub Copilot in action.

How do I get access to @docker?

The Docker extension for GitHub Copilot is currently in a limited public beta and is accessible by invitation only. The Docker extension was developed through the GitHub Copilot Partner Program, which invites industry leaders to integrate their tools and services into GitHub Copilot to enrich the ecosystem and provide developers with even more powerful, context-aware tools to accelerate their projects. 

Developers invited to the limited public beta can install the Docker extension on the GitHub Marketplace as an application in their organization and invoke @docker from any context where GitHub Copilot is available (for example, on GitHub or in your favorite editor).

What’s coming to @docker?

During the limited public beta, we’ll be working on adding capabilities to help you get the most out of your Docker subscription. Look for deeper integrations that help you debug your running containers with Docker Debug, fix detected CVEs with Docker Scout, speed up your build with Docker Build Cloud, learn about Docker through our documentation, and more coming soon!

Help shape the future of @docker

We’re excited to continue expanding on @docker during the limited public beta. We would love to hear if you’re using the Docker extension in your organization or are interested in using it once it becomes publicly available. 

If you have a feature request or any issues, we invite you to file an issue on the Docker extension for GitHub Copilot tracker. Your feedback will help us shape the future of Docker tooling.

Thank you for your interest and support. We’re excited to see what you build with GitHub and @docker!

Learn more

Authenticate and update to receive the newest Docker Desktop features per your subscription level.

New to Docker? Create an account.

Learn about Docker Build Cloud and how you can leverage cloud resources directly from Docker Desktop.

Subscribe to the Docker Newsletter.

Learn more about GitHub’s Copilot Partner program. 

Source: https://blog.docker.com/feed/

How to Check Your Docker Installation: Docker Desktop vs. Docker Engine

Docker has become a leader in providing software development solutions, offering tooling that simplifies the process of developing, testing, deploying, and running applications using containers. As such, understanding Docker’s different products, like Docker Desktop, and components, like Docker Engine, and understanding how they work together is essential for developers looking to maximize their productivity and ensure compliance with Docker’s licensing terms. 

This post will clarify the distinctions and similarities between Docker Desktop and Docker Engine, and provide guidance on verifying which one you are currently using so you can make the most out of your experience.

Read on to explore how to distinguish between Docker Desktop and Docker Engine installations, identify when additional Docker tools are in use, understand Docker contexts, and review your Docker usage to ensure it complies with the Docker Licensing Agreement.

Background

The word “Docker” has become synonymous with containerization to the degree that containerizing an application is increasingly referred to as “dockerizing” an application. Although Docker didn’t create containerization, the company was the first to bring this technology to the developer community in an easily understood set of tooling and artifacts. 

In 2015, Docker took the next step and created the Open Container Initiative (OCI) to define and specify how to build a container image, how to run a container image, and how to share container images. By donating the OCI to the Linux Foundation, Docker provided a level playing field for any application of container technology.

An open source effort headed up by Docker is the Moby Project. Docker created this open framework to assemble specialized container systems without reinventing the wheel. It provides a building block set of dozens of standard components and a framework for assembling them into custom platforms. 

Moby comprises several components, including a container engine, container runtime, networking, storage, and an orchestration system. Both the standalone free Docker Engine (also known as Docker Community Edition or Docker CE) and the commercial Docker Desktop originated from the Moby Project. However, Docker Desktop has evolved beyond the Moby Project, with a full product team investing in the features and technology to support individual developers, small teams, and the requirements of large development teams.

Docker Engine vs. Docker Desktop

Docker Desktop is a commercial product sold and supported by Docker, Inc. It includes the Docker Engine and other open source components; proprietary components; and features like an intuitive GUI, synchronized file shares, access to cloud resources, debugging features, native host integration, governance, and security features that support Enhanced Container Isolation (ECI), air-gapped containers, and administrative settings management. To provide a consistent user experience across different operating systems, Docker Desktop uses the host system’s native virtualization to run and manage a VM for the Docker Engine. This offers developers a turnkey solution for running a containerization toolset on any device or operating system. Users can leverage the Docker Engine at its core on any platform by downloading Docker Desktop.

Docker Engine is free to download and run as a standalone component, independent of Docker Desktop. It can run on any supported Linux distribution and includes the Docker CLI for running commands. Docker Engine does not run natively on Windows or macOS, and it does not come with a GUI or any of the advanced features provided by Docker Desktop.

How can I tell If I’m running Docker Desktop or just the Docker Engine?

You can determine if you’re using Docker Desktop or Docker Engine in a number of different ways. The following section provides guidance for checking from the filesystem and from within the Docker CLI tooling, which is a component in Docker Desktop as well.

1. GUI or icon

If you are using Docker Desktop, you will have either a windowed GUI or a menubar/taskbar icon of a whale (Figure 1).

Figure 1: The macOS menu.

2. Check for an installation

The easiest way to check for Docker Desktop is to look for the installation; this can be automated through scripting or an MDM solution.

Note that the following instructions assume that Docker Desktop is installed in the default location, which may result in false negatives if other installation paths are used.
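A minimal shell sketch of such an automated check (assuming the default macOS and Linux install paths described below; a real MDM script would also cover Windows and custom locations):

```shell
# Sketch: report whether Docker Desktop exists in its default install
# location. Paths are the documented defaults for macOS and Linux.
check_docker_desktop() {
  if [ -d "/Applications/Docker.app" ] || [ -d "/opt/docker-desktop" ]; then
    echo "installed"
  else
    echo "not-found"
  fi
}

check_docker_desktop
```

On Windows, the equivalent check would test for C:\Program Files\Docker (for example, with PowerShell’s Test-Path).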

Docker Desktop on macOS

On macOS, the Docker Desktop application is installed under the /Applications directory and is named Docker (Figure 2).

$ ls -alt /Applications/Docker.app/
total 0
drwxrwxr-x 49 root admin 1568 Oct 13 09:54 ..
drwxr-xr-x@ 9 jschmidt admin 288 Sep 28 15:36 Contents
drwxr-xr-x@ 3 jschmidt admin 96 Sep 8 02:35 .

Figure 2: Docker application installed on macOS.

Docker Desktop on Windows

On Windows, the Docker Desktop application is installed under the C:\Program Files folder and is named Docker (Figure 3).

C:\Users\qdzlug>dir "C:\Program Files\Docker"
Volume in drive C has no label.
Volume Serial Number is DEFE-FC15

Directory of C:\Program Files\Docker

09/28/2023 02:22 PM <DIR> .
09/28/2023 02:22 PM <DIR> ..
09/28/2023 02:22 PM <DIR> cli-plugins
09/28/2023 02:21 PM <DIR> Docker
0 File(s) 0 bytes
4 Dir(s) 52,964,356,096 bytes free

C:\Users\qdzlug>

Figure 3: Docker application installed on Windows.

Docker Desktop on Linux

On Linux, the Docker Desktop application is installed under /opt/docker-desktop.

$ ls -lat /opt/docker-desktop/
total 208528
drwxr-xr-x 7 root root 4096 Sep 29 10:58 .
drwxr-xr-x 2 root root 4096 Sep 29 10:58 locales
drwxr-xr-x 5 root root 4096 Sep 29 10:58 resources
drwxr-xr-x 2 root root 4096 Sep 29 10:58 share
drwxr-xr-x 2 root root 4096 Sep 29 10:58 linuxkit
drwxr-xr-x 2 root root 4096 Sep 29 10:58 bin
drwxr-xr-x 7 root root 4096 Sep 29 10:57 ..
-rw-r--r-- 1 root root 5313018 Sep 27 12:10 resources.pak
-rw-r--r-- 1 root root 273328 Sep 27 12:10 snapshot_blob.bin
-rw-r--r-- 1 root root 588152 Sep 27 12:10 v8_context_snapshot.bin
-rw-r--r-- 1 root root 107 Sep 27 12:10 vk_swiftshader_icd.json
-rw-r--r-- 1 root root 127746 Sep 27 12:10 chrome_100_percent.pak
-rw-r--r-- 1 root root 179160 Sep 27 12:10 chrome_200_percent.pak
-rwxr-xr-x 1 root root 1254728 Sep 27 12:10 chrome_crashpad_handler
-rwxr-xr-x 1 root root 54256 Sep 27 12:10 chrome-sandbox
-rw-r--r-- 1 root root 398 Sep 27 12:10 componentsVersion.json
-rwxr-xr-x 1 root root 166000248 Sep 27 12:10 'Docker Desktop'
-rw-r--r-- 1 root root 10544880 Sep 27 12:10 icudtl.dat
-rwxr-xr-x 1 root root 252920 Sep 27 12:10 libEGL.so
-rwxr-xr-x 1 root root 2877248 Sep 27 12:10 libffmpeg.so
-rwxr-xr-x 1 root root 6633192 Sep 27 12:10 libGLESv2.so
-rwxr-xr-x 1 root root 4623704 Sep 27 12:10 libvk_swiftshader.so
-rwxr-xr-x 1 root root 6402632 Sep 27 12:10 libvulkan.so.1
-rw-r--r-- 1 root root 1096 Sep 27 12:10 LICENSE.electron.txt
-rw-r--r-- 1 root root 8328249 Sep 27 12:10 LICENSES.chromium.html

Note that the launch icon and location for Docker will depend on the Linux distribution being used.

3. Check a running installation

You can also check a running installation to determine which version of Docker is being used. To do this, use the docker version command; the Server line indicates which product is in use.
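If you want to script this check, here is a small sketch that simply pattern-matches the Server line in captured docker version output, as shown in the examples that follow:

```shell
# Sketch: classify `docker version` output, read from stdin, as coming
# from Docker Desktop or a standalone Docker Engine install.
classify_docker_server() {
  input=$(cat)
  case "$input" in
    *"Server: Docker Desktop"*) echo "desktop" ;;
    *"Server: Docker Engine"*)  echo "engine" ;;
    *)                          echo "unknown" ;;
  esac
}

# Example with captured output; in practice: docker version | classify_docker_server
printf 'Server: Docker Desktop 4.24.0 (122432)\n' | classify_docker_server  # prints: desktop
```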

Docker Desktop on macOS Arm64

The Server: Docker Desktop 4.24.0 (122432) line indicates that Docker Desktop is in use.

$ docker version
Client:
Cloud integration: v1.0.35+desktop.5
Version: 24.0.6
API version: 1.43
Go version: go1.20.7
Git commit: ed223bc
Built: Mon Sep 4 12:28:49 2023
OS/Arch: darwin/arm64
Context: default

Server: Docker Desktop 4.24.0 (122432)
Engine:
Version: 24.0.6
API version: 1.43 (minimum version 1.12)
Go version: go1.20.7
Git commit: 1a79695
Built: Mon Sep 4 12:31:36 2023
OS/Arch: linux/arm64
Experimental: true
containerd:
Version: 1.6.22
GitCommit: 8165feabfdfe38c65b599c4993d227328c231fca
runc:
Version: 1.1.8
GitCommit: v1.1.8-0-g82f18fe
docker-init:
Version: 0.19.0
GitCommit: de40ad0

Docker Desktop on Windows

The Server: Docker Desktop 4.24.0 (122432) line indicates that Docker Desktop is in use.

C:\Users\qdzlug>docker version
Client:
Cloud integration: v1.0.35+desktop.5
Version: 24.0.6
API version: 1.43
Go version: go1.20.7
Git commit: ed223bc
Built: Mon Sep 4 12:32:48 2023
OS/Arch: windows/amd64
Context: default

Server: Docker Desktop 4.24.0 (122432)
Engine:
Version: dev
API version: 1.44 (minimum version 1.12)
Go version: go1.20.8
Git commit: HEAD
Built: Tue Sep 26 11:52:32 2023
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.22
GitCommit: 8165feabfdfe38c65b599c4993d227328c231fca
runc:
Version: 1.1.8
GitCommit: v1.1.8-0-g82f18fe
docker-init:
Version: 0.19.0
GitCommit: de40ad0

C:\Users\qdzlug>

Docker Desktop on Windows (docker info)

C:\Users\qdzlug>docker info
Client:
Version: 24.0.6
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.11.2-desktop.5
Path: C:\Program Files\Docker\cli-plugins\docker-buildx.exe
compose: Docker Compose (Docker Inc.)
Version: v2.2.3
Path: C:\Users\qdzlug\.docker\cli-plugins\docker-compose.exe
dev: Docker Dev Environments (Docker Inc.)
Version: v0.1.0
Path: C:\Program Files\Docker\cli-plugins\docker-dev.exe
extension: Manages Docker extensions (Docker Inc.)

Docker Engine on Linux

The Server: Docker Engine - Community line indicates that the community edition is in use.

$ docker version
Client: Docker Engine - Community
Cloud integration: v1.0.35+desktop.5
Version: 24.0.6
API version: 1.43
Go version: go1.20.7
Git commit: ed223bc
Built: Mon Sep 4 12:31:44 2023
OS/Arch: linux/amd64
Context: default

Server: Docker Engine - Community
Engine:
Version: 24.0.6
API version: 1.43 (minimum version 1.12)
Go version: go1.20.7
Git commit: 1a79695
Built: Mon Sep 4 12:31:44 2023
OS/Arch: linux/amd64
Experimental: true
containerd:
Version: 1.6.24
GitCommit: 61f9fd88f79f081d64d6fa3bb1a0dc71ec870523
runc:
Version: 1.1.9
GitCommit: v1.1.9-0-gccaecfc
docker-init:
Version: 0.19.0
GitCommit: de40ad0

Docker contexts

Note that multiple contexts can be present on a system; this is most often seen on Linux, where Docker Desktop and Docker Engine are installed on the same host. To switch between the two, use the docker context use command. When you are in a context, you communicate with the daemon for that context; thus, in a dual installation, you would be switching between the Docker Desktop install and the host install.

To view contexts, use docker context ls, then switch via docker context use CONTEXTNAME. The following example shows a Linux system with both installed.

$ docker context ls
NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default * moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock
desktop-linux moby Docker Desktop unix:///home/jschmidt/.docker/desktop/docker.sock
$ docker version
Client: Docker Engine - Community
Cloud integration: v1.0.35+desktop.5
Version: 24.0.6
API version: 1.43
Go version: go1.20.7
Git commit: ed223bc
Built: Mon Sep 4 12:31:44 2023
OS/Arch: linux/amd64
Context: default

Server: Docker Engine - Community
Engine:
Version: 24.0.6
API version: 1.43 (minimum version 1.12)
Go version: go1.20.7
Git commit: 1a79695
Built: Mon Sep 4 12:31:44 2023
OS/Arch: linux/amd64
Experimental: true
containerd:
Version: 1.6.24
GitCommit: 61f9fd88f79f081d64d6fa3bb1a0dc71ec870523
runc:
Version: 1.1.9
GitCommit: v1.1.9-0-gccaecfc
docker-init:
Version: 0.19.0
GitCommit: de40ad0
$ docker context use desktop-linux
desktop-linux
Current context is now "desktop-linux"
$ docker version
Client: Docker Engine - Community
Cloud integration: v1.0.35+desktop.5
Version: 24.0.6
API version: 1.43
Go version: go1.20.7
Git commit: ed223bc
Built: Mon Sep 4 12:31:44 2023
OS/Arch: linux/amd64
Context: desktop-linux

Server: Docker Desktop 4.24.0 (122432)
Engine:
Version: 24.0.6
API version: 1.43 (minimum version 1.12)
Go version: go1.20.7
Git commit: 1a79695
Built: Mon Sep 4 12:32:16 2023
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.22
GitCommit: 8165feabfdfe38c65b599c4993d227328c231fca
runc:
Version: 1.1.8
GitCommit: v1.1.8-0-g82f18fe
docker-init:
Version: 0.19.0
GitCommit: de40ad0

Other OCI tooling

Because both Docker Engine and Docker Desktop are OCI compliant, a number of solutions are presented and installed as “direct replacements” for Docker. This process usually involves creating helper aliases, scripts, or batch programs to emulate docker commands. 

You can check for aliases by running the command alias docker to see if there is an alias in place. This holds true for Linux and macOS, or a Linux distribution inside WSL2 on Windows.

$ alias docker # Docker aliased to podman
docker='podman'
$ alias docker # No alias present

You can also list the docker binary from the CLI to ensure that it is the official Docker binary:

$ ls -l `which docker` # Docker supplied by Homebrew on the Mac
lrwxr-xr-x 1 jschmidt admin 34 Apr 2 12:03 /opt/homebrew/bin/docker -> ../Cellar/docker/26.0.0/bin/docker

$ ls -l `which docker` # Official Docker binary on the Mac
lrwxr-xr-x 1 root wheel 54 Jan 10 16:06 /usr/local/bin/docker -> /Applications/Docker.app/Contents/Resources/bin/docker

Conclusion

To wrap up our exploration, note that there are also several offerings generically referred to as “Docker” available to use as part of your containerization journey. This post focused on Docker Engine and Docker Desktop.

At this point, you should be comfortable distinguishing between a Docker Desktop installation and a Docker Engine installation and be able to identify when other OCI tooling is being used under the docker command name. You should also have a high-level understanding of Docker contexts as they relate to this topic. Finally, you should be able to review your usage against the Docker Licensing Agreement to ensure compliance, or simply log in with your company credentials to get access to your procured entitlements.

Learn more

Docker Engine

Docker Desktop

Docker Desktop Windows Install

Docker Desktop Linux Install

Docker Desktop Macintosh Install

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Source: https://blog.docker.com/feed/

Streamline the Development of Real-Time AI Applications with MindsDB Docker Extension

This post was contributed by Martyna Slawinska, Software Engineer at MindsDB, in collaboration with Ajeet Singh, Developer Advocate at Docker.

AI technology has seen several challenges that undoubtedly hinder its progress. Building an AI-powered application requires significant resources, including qualified professionals, cost, and time. Prominent obstacles include:

Bringing (real-time) data to AI models through data pipelines is complex and requires constant maintenance.

Testing different AI/ML frameworks requires dedicated setups.

Customizing AI with dynamic data and making the AI system improve itself automatically sounds like a major undertaking.

These difficulties put AI systems out of reach for small and large enterprises alike. The MindsDB platform, however, helps solve these challenges, and it’s now available in the Extensions Marketplace of Docker Desktop.

In this article, we’ll show how MindsDB can streamline the development of AI-powered applications and how easily you can set it up via the Docker Desktop Extension.

How does MindsDB facilitate the development of AI-powered apps?

MindsDB is a platform for customizing AI from dynamic data. With its nearly 200 integrations to data sources and AI/ML frameworks, any developer can use their own data to customize AI for their purposes, faster and more securely.

Let’s solve the problems as defined one by one:

MindsDB integrates with numerous data sources, including databases, vector stores, and applications. To make your data accessible to many popular AI/ML frameworks, all you have to do is execute a single statement to connect your data to MindsDB.

MindsDB integrates with popular AI/ML frameworks, including LLMs and AutoML. So once you connect your data to MindsDB, you can pass it to different models to pick the best one for your use case and deploy it within MindsDB.

With MindsDB, you can manage models and data seamlessly, implement custom automation flows, and make your AI systems improve themselves with continuous fine-tuning.

With MindsDB, you can build AI-powered applications easily, even with no AI/ML experience. You can interact with MindsDB through SQL, MongoDB-QL, REST APIs, Python, and JavaScript.

Follow along to learn how to set up MindsDB in Docker Desktop.

How does MindsDB work?

With MindsDB, you can connect your data from a database, a vector store, or an application, to various AI/ML models, including LLMs and AutoML models (Figure 1). By doing so, MindsDB brings data and AI together, enabling the intuitive implementation of customized AI systems.

Figure 1: Architecture diagram of MindsDB.

MindsDB enables you to easily create and automate AI-powered applications. You can deploy, serve, and fine-tune models in real-time, utilizing data from databases, vector stores, or applications, to build AI-powered apps — using universal tools developers already know.

Find out more about MindsDB and its features, as well as use cases, on the MindsDB website.

Why run MindsDB as a Docker Desktop Extension?

MindsDB can be easily installed on your machine via Docker Desktop: MindsDB provides a Docker Desktop Extension that lets you run MindsDB within the Docker Desktop environment.

As MindsDB integrates with numerous data sources and AI frameworks, each integration requires a specific set of dependencies. With MindsDB running in Docker Desktop, you can easily install only the required dependencies to keep the image lightweight and less prone to issues.

Running MindsDB as a Docker Desktop Extension gives you the flexibility to:

Set up your MindsDB environment easily by installing the extension.

Customize your MindsDB environment by installing only the required dependencies.

Monitor your MindsDB environment via the logs accessible through Docker Desktop.

Next, we’ll walk through setting up MindsDB in Docker Desktop. For more information, refer to the documentation.

Getting started

MindsDB setup in Docker Desktop

To get started, you’ll need to download and set up Docker Desktop on your computer. Then, follow the steps below to install MindsDB in Docker Desktop:

First, go to the Extensions page in Docker Desktop, search for MindsDB, and install the MindsDB extension (Figure 2).

Figure 2: Installing the MindsDB Extension in Docker Desktop.

Then, access MindsDB inside Docker Desktop (Figure 3).

Figure 3: Accessing the MindsDB editor in Docker Desktop.

This setup of MindsDB uses the mindsdb/mindsdb:latest Docker image, which is a lightweight Docker image of MindsDB that comes with a set of integrations preloaded.

Now that you’ve installed MindsDB in Docker Desktop, think of a use case you want to run and list all the integrations you want to use. For example, if you want to use data from your PostgreSQL database and one of the models from Anthropic to analyze your data, you need to install the dependencies for Anthropic (dependencies for PostgreSQL are installed by default).

You can find more use cases on the MindsDB website.

Here is how to install dependencies (Figure 4):

In the MindsDB editor, go to Settings and Manage Integrations.

Select the integrations you want to use and choose Install.

Figure 4: Installing dependencies via the MindsDB editor.

We customized the MindsDB image by installing only the required dependencies. Visit the documentation to learn more.

AI Agents deployment with MindsDB

In this section, we’ll showcase the AI Agents feature developed by MindsDB. AI Agents come with an underlying large language model and a set of skills to answer questions about your data stored in databases, files, or websites (Figure 5).

Figure 5: Diagram of AI Agents.

Agents require a model running in conversational mode. Currently, MindsDB supports such models via the LangChain handler.

There are two types of skills, as follows:

The Text-to-SQL skill translates questions asked in natural language into SQL code to fetch correct data and answer the question.

The Knowledge Base skill stores and searches data assigned to it utilizing embedding models and vector stores.

Let’s get started.

Step 1. Connect your data source to MindsDB.

Here, we use our sample PostgreSQL database and connect it to MindsDB:

CREATE DATABASE example_db
WITH ENGINE = "postgres",
PARAMETERS = {
"user": "demo_user",
"password": "demo_password",
"host": "samples.mindsdb.com",
"port": "5432",
"database": "demo",
"schema": "demo_data"
};

Let’s preview the table of interest:

SELECT *
FROM example_db.car_sales;

This table stores details of cars sold in recent years. This data will be used to create a skill in the next step.

Step 2. Create a skill.

Here, we create a Text-to-SQL skill using data from the car_sales table:

CREATE SKILL my_skill
USING
type = 'text_to_sql',
database = 'example_db',
tables = ['car_sales'],
description = 'car sales data of different car types';

The skill description should be accurate because the model uses it to decide which skill to choose to answer a given question. This skill is one of the components of an agent.

Step 3. Create a conversational model.

As noted earlier, AI Agents require a model running in conversational mode, which MindsDB supports via the LangChain handler.

Note that if you choose one of the OpenAI models, the following configuration of an engine is required:

CREATE ML_ENGINE langchain_engine
FROM langchain
USING
openai_api_key = 'your-openai-api-key';

Now you can create a model using this engine:

CREATE MODEL my_conv_model
PREDICT answer
USING
engine = 'langchain_engine',
input_column = 'question',
model_name = 'gpt-4',
mode = 'conversational',
user_column = 'question',
assistant_column = 'answer',
max_tokens = 100,
temperature = 0,
verbose = True,
prompt_template = 'Answer the user input in a helpful way';

You can adjust the parameter values, such as prompt_template, to fit your use case. This model is one of the components of an agent.

Step 4. Create an agent.

Now that we have a skill and a conversational model, let’s create an AI Agent:

CREATE AGENT my_agent
USING
model = 'my_conv_model',
skills = ['my_skill'];

You can query this agent directly to get answers about data from the car_sales table that has been assigned to the skill (my_skill) that in turn has been assigned to an agent (my_agent).

Let’s ask some questions:

SELECT *
FROM my_agent
WHERE question = 'what is the most commonly sold model?';

Figure 6 shows the output generated by the agent:

Figure 6: Output generated by agent.

Furthermore, you can connect this agent to a chat app, like Slack, using the chatbot object.
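As a rough sketch of that last step (the connection name slack_conn is hypothetical, and the exact parameters follow MindsDB’s chatbot syntax, which may evolve between releases):

```sql
CREATE CHATBOT my_chatbot
USING
    database = 'slack_conn',  -- a previously connected Slack data source (hypothetical name)
    agent = 'my_agent';
```

The chatbot then answers questions in the connected chat app using the skills and model bundled into my_agent.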

Conclusion

MindsDB streamlines data and AI integration for developers, offering seamless connections with various data sources and AI frameworks, enabling users to customize AI workflows and obtain predictions for their data in real time. 

Leveraging Docker Desktop not only simplifies dependency management for MindsDB deployment but also provides broader benefits for developers by ensuring consistent environments across different systems and minimizing setup complexities.

Learn more

Explore the Docker Extension Marketplace.

Install the MindsDB Extension.

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

A Quick Guide to Containerizing Llamafile with Docker for AI Applications

This post was contributed by Sophia Parafina.

Keeping pace with the rapid advancements in artificial intelligence can be overwhelming. Every week, new Large Language Models (LLMs), vector databases, and innovative techniques emerge, potentially transforming the landscape of AI/ML development. Our extensive collaboration with developers has uncovered numerous creative and effective strategies to harness Docker in AI development. 

This quick guide shows how to use Docker to containerize llamafile, an executable that brings together all the components needed to run an LLM chatbot in a single file. It walks you through the process of containerizing llamafile and getting a functioning chatbot running for experimentation.

Llamafile’s concept of bringing together LLMs and local execution has sparked a high level of interest in the GenAI space, as it aims to simplify the process of getting a functioning LLM chatbot running locally. 

Containerize llamafile

Llamafile is a Mozilla project that runs open source LLMs, such as Llama-2-7B, Mistral 7B, or any other models in the GGUF format. The Dockerfile builds and containerizes llamafile, then runs it in server mode. It uses Debian trixie as the base image to build llamafile. The final or output image uses debian:stable as the base image.

To get started, copy, paste, and save the following in a file named Dockerfile.

# Use debian trixie for gcc13
FROM debian:trixie as builder

# Set work directory
WORKDIR /download

# Configure build container and build llamafile
RUN mkdir out && \
    apt-get update && \
    apt-get install -y curl git gcc make && \
    git clone https://github.com/Mozilla-Ocho/llamafile.git && \
    curl -L -o ./unzip https://cosmo.zip/pub/cosmos/bin/unzip && \
    chmod 755 unzip && mv unzip /usr/local/bin && \
    cd llamafile && make -j8 LLAMA_DISABLE_LOGS=1 && \
    make install PREFIX=/download/out

# Create container
FROM debian:stable as out

# Create a non-root user
RUN addgroup --gid 1000 user && \
    adduser --uid 1000 --gid 1000 --disabled-password --gecos "" user

# Switch to user
USER user

# Set working directory
WORKDIR /usr/local

# Copy llamafile and man pages
COPY --from=builder /download/out/bin ./bin
COPY --from=builder /download/out/share ./share/man

# Expose 8080 port.
EXPOSE 8080

# Set entrypoint.
ENTRYPOINT ["/bin/sh", "/usr/local/bin/llamafile"]

# Set default command.
CMD ["--server", "--host", "0.0.0.0", "-m", "/model"]

To build the container, run:

docker build -t llamafile .

Running the llamafile container

To run the container, download a model such as Mistral-7b-v0.1. The example below saves the model to the model directory, which is mounted as a volume.

$ docker run -d -v ./model/mistral-7b-v0.1.Q5_K_M.gguf:/model -p 8080:8080 llamafile

Once the container is running, open a browser to http://localhost:8080 to bring up the llama.cpp interface (Figure 1).

Figure 1: Llama.cpp is a C/C++ port of Facebook’s LLaMA model by Georgi Gerganov, optimized for efficient LLM inference across various devices, including Apple silicon, with a straightforward setup and advanced performance tuning features​.

You can also query the model from the command line through the server’s OpenAI-compatible chat completions endpoint:

$ curl -s http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "system",
"content": "You are a poetic assistant, skilled in explaining complex programming concepts with creative flair."
},
{
"role": "user",
"content": "Compose a poem that explains the concept of recursion in programming."
}
]
}' | python3 -c '
import json
import sys
json.dump(json.load(sys.stdin), sys.stdout, indent=2)
print()
'

Llamafile has many parameters to tune the model. You can see the parameters with man llamafile or llamafile --help. Parameters can be set in the Dockerfile CMD directive.

Now that you have a containerized llamafile, you can run the container with the LLM of your choice and begin your testing and development journey. 

What’s next?

To continue your AI development journey, read the Docker GenAI guide, review the additional AI content on the blog, and check out our resources. 

Learn more

Read the Docker AI/ML blog post collection.

Download the Docker GenAI guide.

Read the Llamafile announcement post on Mozilla.org. 

Subscribe to the Docker Newsletter.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Empowering Developers at Microsoft Build: Docker Unveils Integrations and Sessions

We are thrilled to announce Docker’s participation at Microsoft Build, which will be held May 21-23 in Seattle, Washington, and online. We’ll showcase how our deep collaboration with Microsoft is revolutionizing the developer experience. Join us to discover the newest and upcoming solutions that enhance productivity, secure applications, and accelerate the development of AI-driven applications.

Our presence at Microsoft Build is more than just a showcase — it’s a portal to the future of application development. Visit our booth to interact with Docker experts, experience live demos, and explore the powerful capabilities of Docker Desktop and other Docker products. Whether you’re new to Docker or looking to deepen your expertise, our team is ready to help you unlock new opportunities in your development projects.

Sessions featuring Docker

Optimizing the Microsoft Developer Experience with Docker: Dive into our partnership with Microsoft and learn how to leverage Docker in Azure, Windows, and Dev Box environments to streamline your development processes. This session is your key to mastering the inner loop of development with efficiency and innovation.

Shifting Test Left with Docker and Microsoft: Learn how to address app quality challenges before the continuous integration stage using Testcontainers Cloud and Docker Debug. Discover how these tools aid in rapid and effective debugging, enabling you to streamline the debugging process for both active and halted containers and create testing efficiencies at scale.

Securing Dockerized Apps in the Microsoft Ecosystem: Learn about Docker’s integrated tools for securing your software supply chain in Microsoft environments. This session is essential for developers aiming to enhance security and compliance while maintaining agility and innovation.

Innovating the SDLC with Insights from Docker CTO Justin Cormack: In this interview, Docker’s CTO will share insights on advancing the SDLC through Docker’s innovative toolsets and partnerships. Watch Thursday 1:45pm PT from the Microsoft Build stage or our Featured Partner page. 

Introducing the Next Generation of Windows on ARM: Experience a special session featuring Docker CTO Justin Cormack as he discusses Docker’s role in expanding the Windows on ARM64 ecosystem, alongside a Microsoft executive.

Where to find us

You can also visit us at Docker booth #FP29 to get hands-on experience and view demos of some of our newest solutions.

If you cannot attend in person, the Microsoft Build online experience is free. Explore our Microsoft Featured Partner page.

We hope you’ll be able to join us at Microsoft Build — in person or online — to explore how Docker and Microsoft are revolutionizing application development with innovative, secure, and AI-enhanced solutions. Whether you attend in person or watch the sessions on-demand, you’ll gain essential insights and skills to enhance your projects. Don’t miss this chance to be at the forefront of technology. We are eager to help you navigate the exciting future of AI-driven applications and look forward to exploring new horizons of technology together.

Learn more

Explore our Microsoft Featured Partner page.

New to Docker? Create an account. 

Learn how Docker Build Cloud in Docker Desktop can accelerate builds.

Secure your supply chain with Docker Scout in Docker Desktop.

Start testing with Testcontainers Cloud.

Subscribe to the Docker Newsletter.

Source: https://blog.docker.com/feed/

Docker Desktop 4.30: Proxy Support with SOCKS5, NTLM and Kerberos, ECI for Build Commands, Build View Features, and Docker Desktop on RHEL Beta

In this post:

Enhancing connectivity with SOCKS proxy support in Docker Desktop

Seamless integration of Docker Desktop with NTLM and Kerberos proxies

Docker Desktop with Enhanced Container Isolation for build commands

Docker Desktop for WSL 2: A leap towards simplification and speed

Enhance your Docker builds experience with new Docker Desktop Build features

Reimagining dev environments: Streamlining development workflows

Docker Desktop support for RHEL beta

Docker Desktop is elevating its capabilities with crucial updates that streamline development workflows and enhance security for developers and enterprises alike. Key enhancements in Docker Desktop 4.30 include improved SOCKS5 proxy support for seamless network connectivity, advanced integration with NTLM and Kerberos for smoother authentication processes, and extended Enhanced Container Isolation (ECI) to secure build environments. Additionally, administrative ease is boosted by simplifying sign-in enforcement through familiar system settings, and WSL 2 configurations have been optimized to enhance performance.

In this blog post, we’ll describe these enhancements and also provide information on future features and available beta features such as Docker Desktop on Red Hat Enterprise Linux (RHEL). Read on to learn more about how these updates are designed to maximize the efficiency and security of your Docker Desktop experience.

Enhancing connectivity with SOCKS proxy support in Docker Desktop

Docker Desktop now supports SOCKS5 proxies, a significant enhancement that broadens its usability in corporate environments where SOCKS proxy is the primary means for internet access or is used to connect to company intranets. This new feature allows users to configure Docker Desktop to route HTTP/HTTPS traffic through SOCKS proxies, enhancing network flexibility and security.

Users can easily configure Docker Desktop to access the internet using socks5:// proxy URLs. This ensures that all outgoing requests, including Docker pulls and other internet access on ports 80/443, are routed through the chosen SOCKS proxy.

The proxy configuration can be specified manually under Settings > Resources > Proxies > Manual proxy configuration by adding the socks5://host:port URL in the Secure Web Server HTTPS box.

Automatic detection of SOCKS proxies specified in .pac files is also supported.
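As a sketch of what such a .pac file can look like (the hostnames and port below are hypothetical, not values Docker Desktop requires), a rule that sends intranet hosts direct and everything else through a SOCKS5 proxy could read:

```javascript
// Hypothetical proxy auto-config (.pac) file: route internal hosts
// directly and all other traffic through a SOCKS5 proxy.
function FindProxyForURL(url, host) {
  // Internal hosts bypass the proxy (the domain suffix is an assumption;
  // PAC engines also provide helpers like dnsDomainIs for this check).
  if (host.endsWith(".corp.example.com")) {
    return "DIRECT";
  }
  // Everything else goes through the SOCKS5 proxy, with a direct fallback.
  return "SOCKS5 proxy.example.com:1080; DIRECT";
}
```

PAC files are evaluated per request, so the same file can mix DIRECT, PROXY, and SOCKS5 rules for different destinations.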

This advancement not only improves Docker Desktop’s functionality for developers needing robust proxy support but also aligns with business needs for secure and versatile networking solutions. This new feature is available to Docker Business subscribers. 

Visit Docker Docs for detailed information on setting up and utilizing SOCKS proxy support in Docker Desktop.

Seamless integration of Docker Desktop with NTLM and Kerberos proxies

Proxy servers are vital in corporate networks, ensuring security and efficient traffic management. Recognizing their importance, Docker Desktop has evolved to enhance integration with these secured environments, particularly on Windows. Traditional basic authentication often presented challenges, such as repeated login prompts and security concerns. 

Docker Desktop 4.30 introduces major upgrades by supporting advanced authentication protocols such as Kerberos and NTLM, which streamline the user experience by handling the proxy handshake invisibly and reducing interruptions.

These updates simplify workflows and improve security and performance, allowing developers and admins to focus more on their tasks and less on managing access issues. The new version promises a seamless, secure, and more efficient interaction with corporate proxies, making Docker Desktop a more robust tool in today’s security-conscious corporate settings.

For a deeper dive into how Docker Desktop is simplifying proxy navigation and enhancing your development workflow within the Docker Business subscription, be sure to read the full blog post.

Docker Desktop with Enhanced Container Isolation for build commands

Docker Desktop’s latest update marks an important advancement in container security by extending Enhanced Container Isolation (ECI) to docker build and docker buildx commands. This means docker build/buildx commands run in rootless mode when ECI is enabled, thereby protecting the host machine against malicious containers inadvertently used as dependencies while building container images.

This update is significant as it addresses previous limitations where ECI protected containers initiated with docker run but did not extend the same level of security to containers created during the build processes — unless the build was done with the docker-container build driver. 

Prior limitations:

Limited protection: Before this update, while ECI effectively safeguarded containers started with docker run, those spawned by docker build or docker buildx commands, using the default “docker” build driver, did not benefit from this isolation, posing potential security risks.

Security vulnerabilities: Given the nature of build processes, they can be susceptible to various security vulnerabilities, which previously might not have been adequately mitigated. This gap in protection could expose Docker Desktop users to risks during the build phase.

Enhancements in Docker Desktop 4.30:

Rootless build operations: By extending ECI to include build commands, Docker Desktop now ensures that builds run rootless, significantly enhancing security.

Comprehensive protection: This extension of ECI now supports docker builds on all platforms (Mac, Windows with Hyper-V, and Linux), except Windows with WSL 2, ensuring that all phases of container operation, both runtime and build, are securely isolated.

This development not only strengthens security across Docker Desktop’s operations but also aligns with Docker’s commitment to providing comprehensive security solutions. By safeguarding the entire lifecycle of container management, Docker ensures that users are protected against potential vulnerabilities from development to deployment.

To understand the full scope of these changes and how to leverage them within your Docker Business Subscription, visit the Enhanced Container Isolation docs for additional guidance.

Docker Desktop for WSL 2: A leap toward simplification and speed

We’re excited to announce an update to Docker Desktop that enhances its performance on Windows Subsystem for Linux (WSL 2) by reducing the complexity of the setup process. This update simplifies the WSL 2 setup by consolidating the previously required two Docker Desktop WSL distributions into one.

The simplification of Docker Desktop’s WSL 2 setup is designed to make the codebase easier to understand and maintain, improving our ability to handle failures more effectively. Most importantly, this change will also enhance the startup speed of Docker Desktop on WSL 2, allowing you to get to work faster than ever before.

What’s changing?

Phase 1: Starting with Docker Desktop 4.30, we are rolling out this update incrementally on all fresh installations. If you’re setting up Docker Desktop for the first time, you’ll experience a more streamlined installation process with reduced setup complexity right away.

Phase 2: We plan to introduce data migration in a future update, further enhancing the system’s efficiency and user experience. This upcoming phase will ensure that existing users also benefit from these improvements without any hassle.

To take advantage of phase 1, we encourage all new and existing users to upgrade to Docker Desktop 4.30. By doing so, you’ll be prepared to seamlessly transition to the enhanced version as we roll out subsequent phases.

Keep an eye out for more updates as we continue to refine Docker Desktop and enrich your development experience. 

Enhance your Docker Builds experience with new Docker Desktop Build features

Docker Desktop’s latest updates bring significant improvements to the Builds View, enhancing both the management and transparency of your build processes. These updates are designed to make Docker Desktop an indispensable tool for developers seeking efficiency and detailed insights into their builds.

Bulk delete enhancements:

Extended bulk delete capability: The ability to bulk delete builds has been expanded beyond the current page. Now, by defining a search or query, you can effortlessly delete all builds that match your specified criteria across multiple pages.

Simplified user experience: With the new Select all link next to the header, managing old or unnecessary builds becomes more straightforward, allowing you to maintain a clean and organized build environment with minimal effort (Figure 1).

Figure 1: Docker Desktop Build history view showing the new Select all option for acting on multiple builds at once.

Build provenance and OpenTelemetry traces:

Provenance and dependency insights: The updated Builds View now includes an action menu that offers access to the dependencies and provenance of each build (Figure 2). This feature enables access to the origin details and the context of the builds for deeper inspection, enhancing security and compliance.

OpenTelemetry integration: For advanced debugging, Docker Desktop lets you download OpenTelemetry traces to inspect build performance in Jaeger. This integration is crucial for identifying and addressing performance bottlenecks efficiently. Also, depending on your build configuration, you can now download the provenance to inspect the origin details for the build.

Figure 2: Docker Desktop Builds View displaying Dependencies and Build results in more detail.

Overall, these features work together to provide a more streamlined and insightful build management experience, enabling developers to focus more on innovation and less on administrative tasks. 

For more detailed information on how to leverage these new functionalities and optimize your Docker Desktop experience, make sure to visit Builds documentation.

Reimagining Dev Environments: Streamlining development workflows

We are evolving our approach to development environments as part of our continuous effort to refine Docker Desktop and enhance user experience. Since its launch in 2021, Docker Desktop’s Dev Environments feature has been a valuable tool for developers to quickly start projects from GitHub repositories or local directories. However, to better align with our users’ evolving needs and feedback, we will be transitioning from the existing Dev Environments feature to a more robust and integrated solution in the near future. 

What does that mean to those using Dev Environments today? The feature is unchanged. Starting with the Docker Desktop 4.30 release, though, new users trying out Dev Environments will need to explicitly turn it on in Beta features settings. This change is part of our broader initiative to streamline Docker Desktop functionalities and introduce new features in the future (Figure 3).

Figure 3: Docker Desktop Settings page displaying available features in development and beta features.

We understand the importance of a smooth transition and are committed to providing detailed guidance and support to our users when we officially announce the evolution of Dev Environments. Until then, you can continue to leverage Dev Environments and look forward to additional functionality to come.

Docker Desktop support for Red Hat Enterprise Linux beta

As part of Docker’s commitment to broadening its support for enterprise-grade operating systems, we are excited to announce the expansion of Docker Desktop to include compatibility with Red Hat Enterprise Linux (RHEL) distributions, specifically versions 8 and 9. This development is designed to support our users in enterprise environments where RHEL is widely used, providing them with the same seamless Docker experience they expect on other platforms.

To provide feedback on this new beta functionality, engage your Account Executive or join the Docker Desktop Preview Program.

As Docker Desktop continues to evolve, the latest updates are set to significantly enhance the platform’s efficiency and security. From integrating advanced proxy support with SOCKS5, NTLM, and Kerberos to streamlining administrative processes and optimizing WSL 2 setups, these improvements are tailored to meet the needs of modern developers and enterprises. 

With the addition of exciting upcoming features and beta opportunities like Docker Desktop on Red Hat Enterprise Linux, Docker remains committed to providing robust, secure, and user-friendly solutions. Stay connected with us to explore how these continuous advancements can transform your development workflows and enhance your Docker experience.

Learn more

Read Navigating Proxy Servers with Ease: New Advancements in Docker Desktop 4.30.

Authenticate and update to receive the newest Docker Desktop features per your subscription level.

New to Docker? Create an account. 

Learn how Docker Build Cloud in Docker Desktop can accelerate builds.

Secure your supply chain with Docker Scout in Docker Desktop.

Subscribe to the Docker Newsletter.

Have questions? The Docker community is here to help.

Source: https://blog.docker.com/feed/