Build Your Own AI-Driven Code Analysis Chatbot for Developers with the GenAI Stack

The topic of GenAI is everywhere now, but even with so much interest, many developers are still trying to understand what the real-world use cases are. Last year, Docker hosted an AI/ML Hackathon, and genuinely interesting projects were submitted. 

In this AI/ML Hackathon post, we will dive into a winning submission, Code Explorer, in the hope that it sparks project ideas for you. 

For developers, understanding and navigating codebases can be a constant challenge. Even popular AI assistant tools like ChatGPT can fall short because they lack access to your project's code and struggle with complex logic or unique project requirements. Although large language models (LLMs) can be valuable companions during development, they may not always grasp the specific nuances of your codebase. This is where the need for deeper context and additional tooling comes in.

Imagine you’re working on a project that queries datasets for both cats and dogs. You already have functional code in DogQuery.py that retrieves dog data using pagination (a technique for fetching data in parts). Now, you want to update CatQuery.py to achieve the same functionality for cat data. Wouldn’t it be amazing if you could ask your AI assistant to reference the existing code in DogQuery.py and guide you through the modification process? 

This is where Code Explorer, an AI-powered chatbot, comes in. 

What makes Code Explorer unique?

The following demo, which was submitted to the AI/ML Hackathon, provides an overview of Code Explorer (Figure 1).

Figure 1: Demo of the Code Explorer extension as submitted to the AI/ML Hackathon.

Code Explorer helps you find answers about your code by searching for relevant information based on the programming language and folder location. Unlike generic chatbots, Code Explorer goes beyond general coding knowledge. It leverages a powerful AI technique called retrieval-augmented generation (RAG) to understand your code's specific context, allowing it to provide more relevant and accurate answers based on your actual project.

Code Explorer supports a variety of programming languages, such as *.swift, *.py, *.java, *.cs, etc. This tool can be useful for learning or debugging your code projects, such as Xcode projects, Android projects, AI applications, web dev, and more.

Benefits of Code Explorer include:

Effortless learning: Explore and understand your codebase more easily.

Efficient debugging: Troubleshoot issues faster by getting insights from your code itself.

Improved productivity: Spend less time deciphering code and more time building amazing things.

Supports various languages: Works with popular languages like Python, Java, Swift, C#, and more.

Use cases include:

Understanding complex logic: “Explain how the calculate_price function interacts with the get_discount function in billing.py.”

Debugging errors: “Why is my getUserData function in user.py returning an empty list?”

Learning from existing code: “How can I modify search.py to implement pagination similar to search_results.py?”

How does it work?

Code Explorer leverages the power of a RAG-based AI framework, providing context about your code to an existing LLM model. Figure 2 shows the magic behind the scenes.

Figure 2: Diagram of Code Explorer steps.

Step 1. Process documents

The user selects a codebase folder through the Streamlit app. The process_documents function in the file db.py is called. This function performs the following actions:

Parsing code: It reads and parses the code files within the selected folder. This involves using language-specific parsers (e.g., ast module for Python) to understand the code structure and syntax.

Extracting information: It extracts relevant information from the code, such as:

Variable names and their types

Function names, parameters, and return types

Class definitions and properties

Code comments and docstrings

Loading and chunking documents: It creates a RecursiveCharacterTextSplitter object based on the language. This object splits each document into smaller chunks of a specified size (5000 characters) with some overlap (500 characters) for better context. (A sketch of this step follows the list.)

Creating Neo4j vector store: It creates a Neo4j vector store, a type of database that stores and connects code elements using vectors. These vectors represent the relationships and similarities between different parts of the code.

Each code element (e.g., function, variable) is represented as a node in the Neo4j graph database.

Relationships between elements (e.g., function call, variable assignment) are represented as edges connecting the nodes.
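As a rough illustration, the chunking and vector-store steps might look something like the following LangChain sketch. This is not the actual db.py code: the folder path, loader choice, index name, and connection values (based on the .env file shown later) are assumptions.

from langchain.text_splitter import Language, RecursiveCharacterTextSplitter
from langchain_community.document_loaders import DirectoryLoader, TextLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Neo4jVector

# Load every Python file in the selected folder (hypothetical path and loader)
documents = DirectoryLoader("./my_project", glob="**/*.py", loader_cls=TextLoader).load()

# Split each document on language-aware boundaries:
# 5000-character chunks with 500 characters of overlap for context
splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON, chunk_size=5000, chunk_overlap=500
)
chunks = splitter.split_documents(documents)

# Embed each chunk and store it as a node in the Neo4j vector store
vector_store = Neo4jVector.from_documents(
    chunks,
    OllamaEmbeddings(base_url="http://localhost:11434", model="codellama:7b-instruct"),
    url="neo4j://localhost:7687",
    username="neo4j",
    password="password",
)

The chunk overlap helps preserve context that would otherwise be cut off at chunk boundaries.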

Step 2. Create LLM chains

This step is triggered only after the codebase has been processed (Step 1).

Two LLM chains are created:

Documents QnA chain: This chain lets users talk to the chatbot in a question-and-answer style. When answering a coding question, it consults the vector database and refers back to the relevant source code files.

Agent chain: A separate Agent chain is created, which uses the QnA chain as a tool. You can think of it as an additional layer on top of the QnA chain that lets you communicate with the chatbot more casually. Under the hood, the chatbot may consult the QnA chain when it needs help with a coding question; in effect, one AI discusses the user's question with another AI before returning the final answer. In testing, the agent tends to summarize rather than give a detailed technical response, in contrast to the QnA chain.

LangChain is used to orchestrate the chatbot pipeline.
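A hedged sketch of how these two chains could be wired up with LangChain follows. The index name, model, and connection values are assumptions, not the actual chains.py code:

from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chains import RetrievalQA
from langchain_community.chat_models import ChatOllama
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Neo4jVector

llm = ChatOllama(base_url="http://localhost:11434", model="codellama:7b-instruct")

# Reconnect to the vector index built in Step 1 (index name is a guess)
embeddings = OllamaEmbeddings(base_url="http://localhost:11434", model="codellama:7b-instruct")
vector_store = Neo4jVector.from_existing_index(
    embeddings,
    url="neo4j://localhost:7687", username="neo4j",
    password="password", index_name="code_chunks",
)

# QnA chain: answers grounded in the retrieved code chunks
qna_chain = RetrievalQA.from_chain_type(llm=llm, retriever=vector_store.as_retriever())

# Agent chain: wraps the QnA chain as a tool for more casual conversation
agent = initialize_agent(
    tools=[Tool(
        name="codebase_qna",
        func=qna_chain.run,
        description="Answers questions about the user's processed codebase",
    )],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)

Wrapping the QnA chain as a tool is what lets the agent decide for itself when to consult the codebase and when to answer directly.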

Step 3. User asks questions and AI chatbot responds

The Streamlit app provides a chat interface for users to ask questions about their code. User inputs are stored and used to query the LLM chain or the QnA/Agent chains. The app chooses how to answer the user based on the following factors (see the routing sketch after this list):

Codebase processed:

Yes: The QA RAG chain is used if the user has selected Detailed mode in the sidebar. This mode leverages the processed codebase for in-depth answers.

Yes: Custom agent logic (using the get_agent function) is used if the user has selected Agent mode. This mode may provide more concise answers than the QA RAG chain.

Codebase not processed:

The LLM chain is used directly if the user has not processed the codebase yet.
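Put together, the routing logic is roughly the following. This is a simplified sketch, not the actual bot.py: qna_chain, agent, and llm_chain are hypothetical stand-ins for the chains created in Steps 1 and 2.

import streamlit as st

# `qna_chain`, `agent`, and `llm_chain` come from Step 2 (names are assumptions)
question = st.chat_input("Ask a question about your code")
if question:
    if st.session_state.get("codebase_processed"):
        if st.session_state.get("detailed_mode", True):
            answer = qna_chain.run(question)  # QA RAG chain: in-depth answers
        else:
            answer = agent.run(question)      # Agent chain: more concise answers
    else:
        answer = llm_chain.run(question)      # plain LLM, no codebase context
    st.write(answer)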

Getting started

To get started with Code Explorer, check the following:

Ensure that you have installed the latest version of Docker Desktop.

Ensure that you have Ollama running locally.

Then, complete the four steps explained below.

1. Clone the repository

Open a terminal window and run the following command to clone the sample application:

git clone https://github.com/dockersamples/CodeExplorer

You should now have the following files in your CodeExplorer directory:

tree
.
├── LICENSE
├── README.md
├── agent.py
├── bot.Dockerfile
├── bot.py
├── chains.py
├── db.py
├── docker-compose.yml
├── images
│   ├── app.png
│   └── diagram.png
├── pull_model.Dockerfile
├── requirements.txt
└── utils.py

2 directories, 13 files

2. Create environment variables
Before running the GenAI Stack services, open the .env file and modify the following variables according to your needs. This file stores environment variables that influence your application's behavior.

OPENAI_API_KEY=sk-XXXXX
LLM=codellama:7b-instruct
OLLAMA_BASE_URL=http://host.docker.internal:11434
NEO4J_URI=neo4j://database:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=XXXX
EMBEDDING_MODEL=ollama
LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
LANGCHAIN_TRACING_V2=true # false
LANGCHAIN_PROJECT=default
LANGCHAIN_API_KEY=ls__cbaXXXXXXXX06dd

Note:

If using EMBEDDING_MODEL=sentence_transformer, uncomment code in requirements.txt and chains.py. It was commented out to reduce code size.

Make sure to set the OLLAMA_BASE_URL=http://llm:11434 in the .env file when using the Ollama Docker container. If you’re running on Mac, set OLLAMA_BASE_URL=http://host.docker.internal:11434 instead.

3. Build and run Docker GenAI services
Run the following command to build and bring up Docker Compose services:

docker compose --profile linux up --build

You should see output similar to the following:

[+] Running 5/5
✔ Network codeexplorer_net Created 0.0s
✔ Container codeexplorer-database-1 Created 0.1s
✔ Container codeexplorer-llm-1 Created 0.1s
✔ Container codeexplorer-pull-model-1 Created 0.1s
✔ Container codeexplorer-bot-1 Created 0.1s
Attaching to bot-1, database-1, llm-1, pull-model-1
llm-1 | Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
llm-1 | Your new public key is:
llm-1 |
llm-1 | ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGEM2BIxSSje6NFssxK7J1+X+46n+cWTQufEQjMUzLGC
llm-1 |
llm-1 | 2024/05/23 15:05:47 routes.go:1008: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
llm-1 | time=2024-05-23T15:05:47.265Z level=INFO source=images.go:704 msg="total blobs: 0"
llm-1 | time=2024-05-23T15:05:47.265Z level=INFO source=images.go:711 msg="total unused blobs removed: 0"
llm-1 | time=2024-05-23T15:05:47.265Z level=INFO source=routes.go:1054 msg="Listening on [::]:11434 (version 0.1.38)"
llm-1 | time=2024-05-23T15:05:47.266Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2106292006/runners
pull-model-1 | pulling ollama model codellama:7b-instruct using http://host.docker.internal:11434
database-1 | Installing Plugin 'apoc' from /var/lib/neo4j/labs/apoc-*-core.jar to /var/lib/neo4j/plugins/apoc.jar
database-1 | Applying default values for plugin apoc to neo4j.conf
pulling manifest
pull-model-1 | pulling 3a43f93b78ec… 100% ▕████████████████▏ 3.8 GB
pulling manifest
pulling manifest
pull-model-1 | pulling 3a43f93b78ec… 100% ▕████████████████▏ 3.8 GB
pull-model-1 | pulling 8c17c2ebb0ea… 100% ▕████████████████▏ 7.0 KB
pull-model-1 | pulling 590d74a5569b… 100% ▕████████████████▏ 4.8 KB
pull-model-1 | pulling 2e0493f67d0c… 100% ▕████████████████▏ 59 B
pull-model-1 | pulling 7f6a57943a88… 100% ▕████████████████▏ 120 B
pull-model-1 | pulling 316526ac7323… 100% ▕████████████████▏ 529 B
pull-model-1 | verifying sha256 digest
pull-model-1 | writing manifest
pull-model-1 | removing any unused layers
pull-model-1 | success
llm-1 | time=2024-05-23T15:05:52.802Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cuda_v11]"
llm-1 | time=2024-05-23T15:05:52.806Z level=INFO source=types.go:71 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="7.7 GiB" available="2.5 GiB"
pull-model-1 exited with code 0
database-1 | 2024-05-23 15:05:53.411+0000 INFO Starting…
database-1 | 2024-05-23 15:05:53.933+0000 INFO This instance is ServerId{ddce4389} (ddce4389-d9fd-4d98-9116-affa229ad5c5)
database-1 | 2024-05-23 15:05:54.431+0000 INFO ======== Neo4j 5.11.0 ========
database-1 | 2024-05-23 15:05:58.048+0000 INFO Bolt enabled on 0.0.0.0:7687.
database-1 | [main] INFO org.eclipse.jetty.server.Server – jetty-10.0.15; built: 2023-04-11T17:25:14.480Z; git: 68017dbd00236bb7e187330d7585a059610f661d; jvm 17.0.8.1+1
database-1 | [main] INFO org.eclipse.jetty.server.handler.ContextHandler – Started o.e.j.s.h.MovedContextHandler@7c007713{/,null,AVAILABLE}
database-1 | [main] INFO org.eclipse.jetty.server.session.DefaultSessionIdManager – Session workerName=node0
database-1 | [main] INFO org.eclipse.jetty.server.handler.ContextHandler – Started o.e.j.s.ServletContextHandler@5bd5ace9{/db,null,AVAILABLE}
database-1 | [main] INFO org.eclipse.jetty.webapp.StandardDescriptorProcessor – NO JSP Support for /browser, did not find org.eclipse.jetty.jsp.JettyJspServlet
database-1 | [main] INFO org.eclipse.jetty.server.handler.ContextHandler – Started o.e.j.w.WebAppContext@38f183e9{/browser,jar:file:/var/lib/neo4j/lib/neo4j-browser-5.11.0.jar!/browser,AVAILABLE}
database-1 | [main] INFO org.eclipse.jetty.server.handler.ContextHandler – Started o.e.j.s.ServletContextHandler@769580de{/,null,AVAILABLE}
database-1 | [main] INFO org.eclipse.jetty.server.AbstractConnector – Started http@6bd87866{HTTP/1.1, (http/1.1)}{0.0.0.0:7474}
database-1 | [main] INFO org.eclipse.jetty.server.Server – Started Server@60171a27{STARTING}[10.0.15,sto=0] @5997ms
database-1 | 2024-05-23 15:05:58.619+0000 INFO Remote interface available at http://localhost:7474/
database-1 | 2024-05-23 15:05:58.621+0000 INFO id: F2936F8E5116E0229C97F43AD52142685F388BE889D34E000D35E074D612BE37
database-1 | 2024-05-23 15:05:58.621+0000 INFO name: system
database-1 | 2024-05-23 15:05:58.621+0000 INFO creationDate: 2024-05-23T12:47:52.888Z
database-1 | 2024-05-23 15:05:58.622+0000 INFO Started.

The logs indicate that the application has successfully started all its components, including the LLM, Neo4j database, and the main application container. You should now be able to interact with the application through the user interface.

You can view the services via the Docker Desktop dashboard (Figure 3).

Figure 3: The Docker Desktop dashboard showing the running Code Explorer powered with GenAI stack.

The Code Explorer stack consists of the following services:

Bot

The bot service is the core application. 

Built with Streamlit, it provides the user interface through a web browser. The build section uses a Dockerfile named bot.Dockerfile to build a custom image containing the Streamlit application code. 

This service exposes port 8501, which makes the bot UI accessible through a web browser.

Pull model

This service downloads the codellama:7b-instruct model. 

The model is based on Meta's Llama 2, which achieves performance comparable to OpenAI's models; codellama:7b-instruct is additionally trained on code-related contexts and fine-tuned to understand and respond in natural language.

This specialization makes it particularly adept at handling questions about code.

Note: You may notice that the pull-model-1 service exits with code 0, which indicates successful execution. This service is designed only to download the LLM model (codellama:7b-instruct). Once the download is complete, there's no further need for the service to keep running, so exiting with code 0 signals that it finished its task successfully.

Database

This service manages a Neo4j graph database.

It efficiently stores and retrieves vector embeddings, which represent the code files in a mathematical format suitable for analysis by the LLM model.
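If you want to poke at the stored embeddings yourself, a similarity search against the same index might look like the sketch below. The index name and credentials are assumptions; adjust them to match your .env file.

from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Neo4jVector

# Reconnect to the index the app created (index name is a guess)
store = Neo4jVector.from_existing_index(
    OllamaEmbeddings(base_url="http://localhost:11434", model="codellama:7b-instruct"),
    url="neo4j://localhost:7687", username="neo4j",
    password="password", index_name="code_chunks",
)

# Return the code chunks most similar to the question
for doc in store.similarity_search("How does DogQuery.py paginate results?", k=3):
    print(doc.metadata.get("source"), "->", doc.page_content[:100])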

The Neo4j vector database can be explored at http://localhost:7474 (Figure 4).

Figure 4: Neo4j database information.

LLM

This service acts as the LLM host, utilizing the Ollama framework. 

It manages the downloaded LLM model (not the embedding), making it accessible for use by the bot application.

4. Access the application
You can now view your Streamlit app in your browser by accessing http://localhost:8501 (Figure 5).

Figure 5: View the app.

In the sidebar, enter the path to your code folder and select Process files (Figure 6). Then, you can start asking questions about your code in the main chat.

Figure 6: The app is running.

You will find a toggle switch in the sidebar. By default, Detailed mode is enabled. In this mode, the QA RAG chain is used (detailedMode=true), which leverages the processed codebase for in-depth answers. 

When you toggle the switch (detailedMode=false), the Agent chain is selected instead. This works like one AI discussing the question with another AI to produce the final answer. In testing, the agent tends to summarize rather than give a detailed technical response, in contrast to the QA chain.

Here’s a result when detailedMode=true (Figure 7):

Figure 7: Result when detailedMode=true.

Figure 8 shows a result when detailedMode=false:

Figure 8: Result when detailedMode=false.

Start exploring

Code Explorer, powered by the GenAI Stack, offers a compelling solution for developers seeking AI assistance with coding. This chatbot leverages RAG to delve into your codebase, providing insightful answers to your specific questions. Docker containers ensure smooth operation, LangChain orchestrates the workflow, and Neo4j stores code representations for efficient analysis. 

Explore Code Explorer and the GenAI Stack to unlock the potential of AI in your development journey!

Learn more

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.


Docker Announces SOC 2 Type 2 Attestation & ISO 27001 Certification

Docker is pleased to announce that we have received our SOC 2 Type 2 attestation and ISO 27001 certification with no exceptions or major non-conformities. 

Security is a fundamental pillar of Docker's operations, embedded in our overall mission and company strategy. Docker's products are core to our user community, and our SOC 2 Type 2 attestation and ISO 27001 certification demonstrate Docker's ongoing commitment to security for our user base.

What is a SOC 2 Type 2?

Defined by the American Institute of Certified Public Accountants (AICPA), System and Organization Controls (SOC) is a suite of reports produced during an audit. A SOC 2 Type 2 is an audit report, or attestation, that evaluates the design and operating effectiveness of an organization's internal controls over information systems against five principles, known as the Trust Services Criteria: Security (also referred to as the common criteria), Availability, Confidentiality, Processing Integrity, and Privacy.

What is ISO 27001?

The International Organization for Standardization (ISO) is an independent, non-governmental international organization of national standards bodies. ISO was established in 1947 and has a long history of producing standards, requirements, and certifications to demonstrate different control environments.

ISO 27001 is a worldwide recognized standard for the information security management system (ISMS). An ISMS is a framework of policies, procedures, and controls for systematically managing an organization’s sensitive data. 

Continued compliance

Going forward, Docker will provide an annual SOC 2 Type 2 attestation and ISO 27001 certification following the timing of our fiscal year.

Docker is committed to providing our customers with secure products. Our compliance posture demonstrates our commitment to leading the industry in providing developers with tools they can trust. 

To learn more about Docker’s security posture, visit our Docker Trust Center website. If you would like access to our compliance platform to receive the documents, fill out the Security Documentation form, and the Docker Sales team will follow up with you. 

Learn more

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.


Highlights from Microsoft Build: Docker’s Innovations with AI and Windows on Arm

Windows is back! That is my big takeaway from Microsoft Build last week. In recent years, Microsoft has focused on a broader platform that includes Windows and Linux and has adapted to the centrality of the browser in the modern world. But last week’s event was dominated by the launch of the Copilot+ PC, initially launched with Arm-based machines. We announced Docker Desktop support for Windows on Arm (long-awaited by many of you!) to accompany this exciting development.

The buzz around Arm-based machines

Sadly, we did not get to try any of the new hardware in depth, but there was a lot of love and longing for the Snapdragon Dev Kit from those who had tried it, as well as from our team back home. Arm-based Windows machines will ship from major manufacturers soon. Developers are power users of their machines, and AI has pushed up local performance requirements, which means more, faster machines sooner. What's not to like? (Well, the Recall feature preview won that prize.)

Copilots everywhere

It wasn’t all about Windows. Copilots were everywhere, including the opening keynote and announcing our partner collaboration with Docker’s extension for GitHub Copilot. If you missed it and thought Copilot was just the original assistant from GitHub, now there are 365 Copilots for everything from Excel to Power BI to Minecraft. Just emerging is the ability to build your own Copilots and an ecosystem of Copilots. Docker launched in the first wave of Copilot integrations, initially integrating into GitHub Copilot chat — with more to come. Check out our blog post for more on how the extension can help you with Dockerfiles and Compose files and how to use Docker.

Satya Nadella presents GitHub Copilot Extensions, including Docker, at Microsoft Build 2024.

Connecting with the community

The event’s vibe wasn’t just about the launches; it was about connecting with the people. As a hybrid event, Microsoft Build had a lively ongoing broadcast that was great fun and was being produced right across from the Docker booth. 

The Docker booth was constantly busy with a stream of people bringing questions, requests, problems, and ideas, ranging from new Docker users to experienced dockhands. Visitors checked out our new products, like Docker Build Cloud, learned more about securing Dockerized apps in the Microsoft ecosystem, and got hands-on with features like Docker Debug in Docker Desktop. 

Justin Cormack recording in front of the Docker booth at Microsoft Build 2024.

Better together with Docker and Microsoft

I also really enjoyed getting the chance to share a handful of the better-together solutions that we’re collaborating on with Microsoft. You can watch my session from Thursday, Optimizing the Microsoft developer experience with Docker. And in a short session, Innovating the SDLC with insights from Docker, I shared a fresh perspective on how to navigate and streamline workflows through the SDLC. 

Microsoft Build was a fantastic opportunity to showcase our innovations and connect with the Microsoft developer community. We are excited about the solutions we are bringing to the Microsoft ecosystem and look forward to continuing our collaboration to enhance the developer experience with Docker and Microsoft’s better-together solutions.

Watch Docker talks at Microsoft Build

Optimizing the Microsoft developer experience with Docker

Innovating the SDLC with insights from Docker CTO Justin Cormack

Securing Dockerized apps in the Microsoft ecosystem

Shift test left and effectively debug to beat app quality challenges

Also check out

Discover how Docker and Microsoft tools work together to solve complex development challenges.

Read “@docker can you help me…”: An Early Look at the Docker Extension for GitHub Copilot.

Learn about new Docker Desktop support for Windows on Arm.

Read Experimental Windows Containers Support for BuildKit Released in v0.13.0.

Get started with Docker Desktop today!


Experimental Windows Containers Support for BuildKit Released in v0.13.0

We are excited to announce that the latest BuildKit release, v0.13.0, contains experimental Windows Containers support. BuildKit has been around for many years and has been the default build engine on Linux since Docker Engine 23.0.0. 

BuildKit is a toolkit for converting source code to build artifacts (like container images) in an efficient, expressive, and repeatable manner. BuildKit introduced the following benefits as compared with the previous Docker Builder:

Parallelize building independent build stages and skip any unused stages.

Incrementally transfer only the changed files in your build context between builds, and skip the transfer of unused files in your build context entirely.

Use Dockerfile frontend implementations with many new features.

Avoid side effects with the rest of the API (intermediate images and containers).

Prioritize your build cache for automatic pruning.

Since 2018, Windows Container customers have been asking for Windows support for BuildKit, as seen in the BuildKit repo and Windows Containers repo, with hundreds of reactions and comments. We have listened to our users and focused resources in the past year to light up Windows Containers support on BuildKit.

Until now, we only shipped the Buildx client on Windows for building Linux images and some very limited Windows images using cross-compilation. Today, we are introducing experimental support for Windows Containers in BuildKit, with the aim of making this available soon in your standard Docker Build.

What’s next?

In the upcoming months, we will work toward further improvements, including:

General Availability (GA) ready: Improving release materials, including guides and documentation.

Integration with Docker Engine: So you can just run docker build.

OCI worker support: On Linux, there is an option to run BuildKit with only runc using the OCI worker. Currently, only the containerd worker is supported for Windows.

Container driver: Add support for running in the container driver.

Image outputs: Some image outputs supported by Linux may not work on Windows and need to be tested and assessed. These include exporting an image to multiple registries, checking if keys for image output are supported, and testing multi-platform image-building support.

Building other artifacts: BuildKit can be used to build other artifacts beyond container images. Work needs to be done in this area to cross-check whether other artifacts, such as binaries, libraries, and documentation, are also supported on Windows as it is on Linux.

Running buildkitd doesn’t require Admin: Currently, running buildkitd on Windows requires admin privileges. We will be looking into running buildkitd on low privileges, aka “rootless”.

Export cache: Investigations need to be done to confirm whether specific cache exporters (inline, registry, local, gha [GitHub Actions], s3, azblob) are also supported on Windows.

Linux parity: Identifying, accessing, and closing the feature parity gap between Windows and Linux.

Walkthrough — Build a basic “Hello World” image with BuildKit and Windows Containers 

Let’s walk through the process of setting up BuildKit, including the necessary dependencies, and show how to build a basic Windows image. For feedback and issues, file a ticket at Issues · moby/buildkit (github.com) tagged with area/windows. 

The platform requirements are listed below. In our scenario, we will be running a nanoserver:ltsc2022 base image with AMD64. 

Architecture: AMD64, Arm64 (binaries available but not officially tested yet). 

Supported operating systems: Windows Server 2019, Windows Server 2022, Windows 11. 

Base images: servercore:ltsc2019, servercore:ltsc2022, nanoserver:ltsc2022. See the compatibility map. 

The workflow will cover the following steps:

Enable Windows Containers.

Install containerd.

Install BuildKit.

Build a simple “Hello World” image.

1. Enable Windows Containers 

Start a PowerShell terminal in admin privilege mode. Run the following command to ensure the Containers feature is enabled:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V, Containers -All

If you see RestartNeeded as True on your setup, restart your machine and reopen an Administrator PowerShell terminal (Figure 1). Otherwise, continue to the next step.

Figure 1: Enabling Windows Containers in PowerShell.

2. Install containerd

Next, we need to install containerd, which is used as the container runtime for managing containers and images.

Note: We currently only support the containerd worker. In the future, we plan to add support for the OCI worker, which uses runc and will therefore remove this dependency.

Run the following script to install the latest containerd release. If you have containerd already installed, skip the script below and run Start-Service containerd to start the containerd service. 

Note: containerd v1.7.7+ is required.

# If containerd was previously installed, stop the service first:
Stop-Service containerd

# Download and extract desired containerd Windows binaries
$Version="1.7.13" # update to your preferred version
curl.exe -L https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-amd64.tar.gz -o containerd-windows-amd64.tar.gz
tar.exe xvf .\containerd-windows-amd64.tar.gz

# Copy the extracted binaries into Program Files
Copy-Item -Path .\bin\* -Destination (New-Item -Type Directory "$Env:ProgramFiles\containerd" -Force) -Recurse -Force

# Add the binaries (containerd.exe, ctr.exe) to $Env:Path
$Path = [Environment]::GetEnvironmentVariable("PATH", "Machine") + [IO.Path]::PathSeparator + "$Env:ProgramFiles\containerd"
[Environment]::SetEnvironmentVariable("Path", $Path, "Machine")
# Reload the path so you don't have to open a new PowerShell terminal later
$Env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")

# Generate the default configuration, then review it. Depending on your setup
# you may want to adjust:
# - the sandbox_image (Kubernetes pause image)
# - cni bin_dir and conf_dir locations
containerd.exe config default | Out-File "$Env:ProgramFiles\containerd\config.toml" -Encoding ascii
Get-Content "$Env:ProgramFiles\containerd\config.toml"

# Register and start the service
containerd.exe --register-service
Start-Service containerd

3. Install BuildKit

Note: Ensure you have updated to the latest version of Docker Desktop.

Run the following script to download and extract the latest BuildKit release.

$version = "v0.13.0" # specify the release version, v0.13+
$arch = "amd64" # arm64 binary available too
curl.exe -LO https://github.com/moby/buildkit/releases/download/$version/buildkit-$version.windows-$arch.tar.gz
# there could be another `.\bin` directory from the containerd instructions;
# you can move it out of the way first
mv bin bin2
tar.exe xvf .\buildkit-$version.windows-$arch.tar.gz
## x bin/
## x bin/buildctl.exe
## x bin/buildkitd.exe

Next, run the following commands to add the BuildKit binaries to your Program Files directory, then add them to the PATH so they can be called directly.

# after the binaries are extracted in the bin directory,
# move them to an appropriate path in your $Env:PATH directories or:
Copy-Item -Path ".\bin" -Destination "$Env:ProgramFiles\buildkit" -Recurse -Force
# add the `buildkitd.exe` and `buildctl.exe` binaries to $Env:Path
$Path = [Environment]::GetEnvironmentVariable("PATH", "Machine") + `
    [IO.Path]::PathSeparator + "$Env:ProgramFiles\buildkit"
[Environment]::SetEnvironmentVariable("Path", $Path, "Machine")
$Env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + `
    [System.Environment]::GetEnvironmentVariable("Path","User")

Run buildkitd.exe. You should expect to see something as shown in Figure 2:

Figure 2: Successfully starting buildkitd without any errors in the logs.

Now we can set up buildx (the BuildKit client) to use our BuildKit instance. Here we will create a builder that points to the BuildKit instance we just started, by running:

docker buildx create --name buildkit-exp --use --driver=remote npipe:////./pipe/buildkitd

Here we are creating a new instance of a builder and pointing it to our BuildKit instance. BuildKit will listen on npipe:////./pipe/buildkitd.

Notice that we also name the builder; here we call it buildkit-exp, but you can name it whatever you want. Just remember to add --use to set it as the current builder.

Let’s test our connection by running docker buildx inspect (Figure 3):

Figure 3: Docker buildx inspect shows that our new builder is connected.

All good!

You can also list and manage your builders. Run docker buildx ls (Figure 4).

Figure 4: Run docker buildx ls to return a list of all builders and nodes. Here we can see our new builder added to the list.

4. Build “Hello World” image 

We will build a simple "hello world" image using the following Dockerfile.

FROM mcr.microsoft.com/windows/nanoserver:ltsc2022
USER ContainerAdministrator
COPY hello.txt C:/
RUN echo "Goodbye!" >> hello.txt
CMD ["cmd", "/C", "type C:\\hello.txt"]

Run the following commands to create a directory and change directory to sample_dockerfile.

mkdir sample_dockerfile
cd sample_dockerfile

Run the following script to add the Dockerfile shown above and hello.txt to the sample_dockerfile directory.

Set-Content Dockerfile @"
FROM mcr.microsoft.com/windows/nanoserver:ltsc2022
USER ContainerAdministrator
COPY hello.txt C:/
RUN echo "Goodbye!" >> hello.txt
CMD ["cmd", "/C", "type C:hello.txt"]
"@

Set-Content hello.txt @"
Hello from buildkit!
This message shows that your installation appears to be working correctly.
"@

Now we can use buildx to build our image and push it to the registry (see Figure 5):

docker buildx build --builder buildkit-exp --push -t <your_username>/hello-buildkit .

Figure 5: Here we can see our build running to a successful completion.

If you are utilizing Docker Hub as your registry, run docker login before running buildx build (Figure 6).

Figure 6: Successful login to Docker Hub so we can publish our images.

Congratulations! You can now run containers with standard docker run:

docker run <HUB ACCOUNT NAME>/hello-buildkit

Get started with BuildKit

We encourage you to test out the released experimental Windows BuildKit support v0.13.0. To start out, feel free to follow the documentation or blog, which will walk you through building a simple Windows image with BuildKit. File feedback and issues at Issues · moby/buildkit (github.com) tagged with area/windows.

Learn more

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Thank you

A big thanks to @gabriel-samfira, @TBBle, @tonistiigi, @AkihiroSuda, @crazy-max, @jedevc, @thaJeztah, @profnandaa, @iankingori, and many other key community members who have contributed to enabling Windows Containers support on BuildKit. We also thank Windows Container developers who continue to provide valuable feedback and insights.

Announcing Docker Desktop Support for Windows on Arm: New AI Innovation Opportunities

Docker Desktop now supports running on Windows on Arm (WoA) devices. This exciting development was unveiled during Microsoft’s “Introducing the Next Generation of Windows on Arm” session at Microsoft Build. Docker CTO, Justin Cormack, highlighted how this strategic move will empower developers with even more rapid development capabilities, leveraging Docker Desktop on Arm-powered Windows devices.

The Windows on Arm platform is redefining performance and user experience for applications. With this integration, Docker Desktop extends its reach to a new wave of hardware architectures, broadening the horizons for containerized application development.

Justin Cormack announcing Docker Desktop support for Windows on Arm devices with Microsoft Principal TPM Manager Jamshed Damkewala in the Microsoft Build session “Introducing the next generation of Windows on Arm.” 

Docker Desktop support for Windows on Arm

Read on to learn why Docker Desktop support for Windows on Arm is a game changer for developers and organizations.

Broader accessibility

By supporting Arm devices, Docker Desktop becomes accessible to a wider audience, including users of popular Arm-based devices like Microsoft's new Copilot+ PCs. This inclusivity fosters a larger, more diverse Docker community, enabling more developers to harness the power of containerization on their preferred devices.

Enhanced developer experience

Developers can seamlessly work on the newest Windows on Arm devices, streamlining the development process and boosting productivity. Docker Desktop’s consistent, cross-platform experience ensures that development workflows remain smooth and efficient, regardless of the underlying hardware architecture.

Future-proofing development

As the tech industry gradually shifts toward Arm architecture for its efficiency and lower power consumption, Docker Desktop’s support for WoA devices ensures we remain at the forefront of innovation. This move future-proofs Docker Desktop, keeping it relevant and competitive as this transition accelerates.

Innovation and experimentation

With Docker Desktop on a new architecture, developers and organizations have more opportunities to innovate and experiment. Whether designing applications for traditional x64 or the emerging Arm ecosystems, Docker Desktop offers a versatile platform for creative exploration.

Market expansion

Furthering compatibility in the Windows on Arm space opens new markets and opportunities for Docker, including new relationships with device manufacturers and increased adoption in sectors that prioritize energy efficiency and portability, all while supporting Docker's users and customers with the development environments that fit their goals.

Accelerating developer innovation with Microsoft’s investment in WoA dev tooling

Windows on Arm is arguably as successful as it has ever been. Today, multiple Arm-powered Windows laptops and tablets are available, capable of running nearly the entire range of Windows apps thanks to x86-to-Arm code translation. While Windows on Arm still represents a small fraction of the entire Windows ecosystem, the development of native Arm apps provides a wealth of fresh opportunities for AI innovation.

Microsoft’s investments align with Docker’s strategic goals of cross-platform compatibility and user-centric development, ensuring Docker remains at the forefront of containerization technologies in a diversifying hardware landscape.

Expand your development landscape with Docker Desktop on Windows Arm devices. Update to Docker Desktop 4.31 or consider upgrading to Pro or Business subscriptions to unlock the full potential of cross-platform containerization. Embrace the future of development with Docker, where innovation, efficiency, and cross-platform compatibility drive progress.

Learn more

Watch the Docker Breakout Session Optimizing the Microsoft developer experience with Docker to learn more about Docker and Microsoft better together opportunities.

Authenticate and update to receive the newest Docker Desktop features per your subscription level.

New to Docker? Create an account.

Learn about Docker Build Cloud and how you can leverage cloud resources directly from Docker Desktop.

Subscribe to the Docker Newsletter.


The Strategic Imperative of AI in 2024

The winds of change are sweeping across industries, propelled by the transformative power of generative artificial intelligence (GenAI). In 2024, AI has become a strategic imperative for enterprises seeking to stay ahead of the curve. Although some organizations may view AI with hesitation, the reality is that ignoring its potential puts them at risk of falling behind. 

In this article, we examine the incredible growth of AI and explore its potential power to transform industries and help enterprises accelerate innovation.

Download the white paper: Docker, Putting the AI in Containers

The Cambrian explosion of artificial intelligence

You are probably familiar with chatbots for desktop users, such as ChatGPT and Google Gemini. However, the landscape of enterprise applications is teeming with examples of AI driving differentiation and success. Consider healthcare, where AI algorithms can aid in early disease detection and personalized treatment plans, or finance, where AI-powered fraud detection systems and algorithmic trading are reshaping the industry. In manufacturing, AI-driven robots can optimize production lines, and predictive maintenance can help minimize downtime. 

We are seeing an even more significant expansion as new types of AI systems provide solutions to problems previously not attainable with machine learning. New GenAI systems offer capabilities to solve organizations’ most pressing issues faster and more efficiently than ever.

In 2023, IBM reported that 42% of IT professionals at large organizations had actively deployed AI, while an additional 40% were actively exploring the technology. Across the board, businesses are leveraging AI to innovate, gain market share, and secure a competitive edge.

The landscape of AI models has undergone a fascinating shift in a very short time. We witnessed the initial explosion of behemoths like OpenAI's GPT-3, boasting billions of parameters and impressive capabilities. These large language models (LLMs) captivated the world with their ability to generate human-quality text, translate languages, and answer complex questions.

Shift in scale

The sheer scale of these LLMs, however, has presented challenges in terms of computational resources, training costs, and environmental impact. As sustainability concerns have intensified and accessibility has become a priority, a new breed of AI models has emerged: the small and robust models.

These smaller models, exemplified by projects like Mixtral, Microsoft's Phi, Google's Gemini, and others, operate with significantly fewer parameters than the largest LLMs. This reduction in size does not equate to a decrease in capability: these models leverage innovative architectures and training techniques to achieve impressive performance metrics, sometimes rivaling their larger counterparts.

As the number and type of models have increased, there has also been growth of open source ethos in AI. Hugging Face, a repository for open source AI software, datasets, and development tools, has seen its list of models grow to more than 500,000 models of all shapes and sizes suited for various applications (Figure 1). Many of these models are ideally suited for deployment in containers that can be developed locally or in the data center.

Figure 1: Hugging Face provides a repository of open source models and tools to help test and develop large language models.

This shift toward smaller, more efficient models signifies a crucial change in focus. The emphasis is no longer solely on raw power but also on practicality, resourcefulness, and accessibility. These models help democratize AI by lowering the barrier to entry for researchers, enterprise software developers, and even small and medium businesses with limited resources. They pave the way for deployment on edge devices, fostering advancements in areas like AI at the edge and ubiquitous computing.

These models will also provide the foundation for enterprises to adapt and fine-tune these models for their usage. They will do so using existing practices of containerization and will need tools that can provide the ability to move quickly through each phase of the software development lifecycle. As the industry’s de facto development and deployment environment for enterprise applications, Docker containerization offers an ideal approach. 

The arrival of these small yet powerful models also signals a new era in AI development. This change is a testament to the ingenuity of researchers and represents a shift towards responsible and sustainable AI advancement. Although large models will likely continue to play a vital role, the future of AI will increasingly be driven by these smaller, more impactful models.

Operational drivers

Beyond the competitive landscape, AI presents a compelling value proposition through its operational benefits. Imagine automating repetitive tasks, extracting actionable insights from massive datasets, and delivering more personalized experiences. AI facilitates data-driven decision-making, improving efficiency, reducing costs, and optimizing resources as teams push projects to completion.

Alignment with business goals

Organizations must align AI initiatives with specific business goals and objectives, however, rather than deploying AI as a standalone technology. Whether driving revenue growth, expanding market share, or enhancing operational excellence, AI-driven projects can be powerful when directed toward strategic priorities. For instance, AI-powered recommendation engines can help boost sales, while chatbots can improve customer service, ultimately contributing to overall business success.

Digital transformation

Moreover, AI has become a cornerstone of digital transformation initiatives. Businesses are undergoing a fundamental shift toward data-driven, interconnected operations, and AI plays a critical role in unlocking new opportunities and accelerating this transformation. From personalized marketing campaigns to hyper-efficient supply chains, AI empowers organizations to adapt to ever-changing market dynamics and achieve sustainable growth.

The AI imperative

As competitors leverage AI to fuel innovation and gain a competitive edge, businesses that fail to embrace this transformative technology risk being left behind. AI has the potential to revolutionize a variety of industries, from manufacturing to healthcare, and can provide enterprises with a host of benefits, including:

Enhanced decision-making: AI algorithms can analyze vast amounts of data to identify patterns, trends, and insights beyond human analysis capabilities. This capability enables businesses to make informed decisions, optimize operations, and minimize risks.

Streamlined and automated processes: AI-powered automation can handle repetitive and time-consuming tasks precisely and efficiently, freeing up valuable human resources for more strategic and creative endeavors. This approach can increase productivity, cost savings, and improve customer satisfaction.

Enhanced customer experience: AI-driven chatbots and virtual assistants can provide seamless and personalized customer support, resolving queries promptly and efficiently. AI can also analyze customer data to tailor marketing campaigns, product recommendations, and offers, thereby creating a more engaging and satisfying customer experience.

Innovation and product development: AI can accelerate innovation by allowing businesses to explore new ideas, test hypotheses, and rapidly prototype solutions. This approach can lead to the development of innovative products and services that meet changing customer needs.

The adoption of AI also comes with challenges that businesses must carefully navigate. For example, hurdles that enterprises must address include ethical considerations, data privacy concerns, and the need for skilled AI professionals.

Conclusion

In 2024 and beyond, AI is poised to reshape the business landscape. Enterprises that recognize the strategic imperative of AI and embrace it will stay ahead of the curve, while those that lag may struggle to remain competitive. Businesses need to consider how best to invest in AI, develop a clear AI strategy, and adopt this transformative technology. 

To learn more, read the white paper Docker, Putting the AI in Containers, which aims to equip you with the knowledge and tools to unlock the transformative potential of AI, starting with the powerful platform of Docker containerization.

Read the white paper: Docker, Putting the AI in Containers

Learn more

Read Docker, Putting the AI in Containers.

Get started with Artificial Intelligence and Machine Learning With Docker.

Read Effortlessly Build Machine Learning Apps with Hugging Face’s Docker Spaces.

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

This post was contributed by Mark Hinkle, CEO and Founder of Peripety Labs.

Docker Documentation Gets an AI-Powered Assistant

We recently launched a new tool to enhance Docker documentation: an AI-powered documentation assistant incorporating kapa.ai. Docker Docs AI is designed to get you the information you need by providing instant, accurate answers to your Docker-related questions directly within our documentation pages.

Docker Docs AI

Docker documentation caters to a diverse range of users, from beginner users eager to learn the basics to advanced users keen on exploring Docker’s new functionalities and CLI options (Figure 1).

Figure 1: Docker Docs AI in action.

Navigating a large documentation website can be daunting, especially when you’re in a hurry to solve specific issues or implement new features. Context-switching, trying to locate the right information, and piecing together information from different sections are all examples of pain points users face when looking up a complex command or configuration file. 

The AI assistant addresses these pain points by simplifying the search process, interpreting your questions, and guiding you to the precise information you need when you need it (Figure 2).

Figure 2: Docker Docs AI text box for asking questions.

Find what you’re looking for

Docker documentation consists of more than 1,000 pages of content covering various topics, products, and services. The docs get about 13 million views every month, and most of those views originate from search engines. Although search engines are great, it isn’t always easy to conjure the right keywords together to get the result you’re looking for. That’s where we think that an AI-powered search can help:

It’s better at recognizing your intent and personalizing the results.

It lets you search in a more conversational style.

More importantly, kapa.ai is a Retrieval-Augmented Generation (RAG) system that uses the Docker technical documentation as a knowledge source for answering questions. This makes it capable of handling highly specific questions, contextual to Docker, with high accuracy, and with backlinks to the relevant content for additional reading.

Language options

Additionally, the new docs AI search can answer user questions in your preferred language. For example, when a user asks a question about Docker in Simplified Chinese, the AI search detects the language of the query, processes the question to understand the context and intent, and then translates the response into Simplified Chinese (Figure 3). 

This multilingual capability allows users to interact with the AI search seamlessly in their native language, thereby improving accessibility and enhancing the overall user experience.

Figure 3: Docker Docs AI can answer questions in your preferred language.

Using the Docker Docs AI

We’re thrilled to see that our users are highly engaged with the AI search since its launch, and we’re processing around 1,000 queries per day! Users can vote on answers and optionally leave comments, which provides us with great insights into the types of questions asked and allows us to improve responses.

The following section shows interesting ways that people are using Docker Docs AI.

Answers from multiple sources

Sometimes, the answer you need requires digging into multiple pages, extracting information from each page, and piecing it together. In the following example, the user instructs the agent to generate an inline Dockerfile in a Compose file. 

This specific example doesn’t exist in the Docker documentation, but the AI assistant generates a file using different sources (Figure 4):

Figure 4: Docker Docs AI can generate answers containing information from multiple sources.

In this case, the AI derived the answer from the following sources:

Building multi-platform images / cross-compilation

Compose Build Specification / dockerfile_inline

Multi-stage builds

Debugging commands

Often, you need to consult the documentation when you’re faced with a specific problem in building or running your application. Docker docs cannot cover every possible error case for every type of application, so finding the right information to debug your problem can be time-consuming. 

The AI assistant comes in handy here as a debugging tool (Figure 5):

Figure 5: Docker Docs AI can help with debugging.

Here, the question contains a specific error message of a failed build. Given the error message, the AI can deduce the problematic line of code in the Dockerfile that caused this error, and suggest ways to solve it, including links to the relevant documentation for additional reading.

Contextual help

One of the most important capabilities unlocked with AI search is the ability to provide contextual help for your application and source code. The conversational user interface lets you provide additional context to your questions that just isn’t possible with a traditional search tool (Figure 6):

Figure 6: You can provide additional context to help Docker Docs AI generate an answer.

Dive into Docker documentation

The new AI search capability within Docker documentation has emerged as an indispensable resource. The tool streamlines access to essential information to a wide range of users, ensuring a smoother developer experience. 

We invite you to try it out, use it to debug your Dockerfiles, Compose files, and docker run commands, and let us know what you think by leaving a comment using the feedback feature in the AI widget.

Explore new Docker concept guides

What is a container? This guide includes a video, explanation, and hands-on module so you can learn all about the basics of building with Docker. 

Building images: Get started with the guide for understanding the image layers.

Running containers: Learn about publishing and exposing ports.

GenAI video transcription and chat: Our new GenAI guide presents a project on video transcription and analysis using a set of technologies related to the GenAI Stack.

Administration overview: Administrators can manage companies and organizations using Docker Hub or the Docker Admin Console. Check out the administration manual to learn the right setup for your organization.

Data science with JupyterLab: A new use-case guide explains how to use Docker and JupyterLab to create and run reproducible data science environments.

Learn more

Authenticate and update to receive the newest Docker Desktop features per your subscription level.

New to Docker? Create an account. 

Learn how Docker Build Cloud in Docker Desktop can accelerate builds.

Secure your supply chain with Docker Scout in Docker Desktop.

Subscribe to the Docker Newsletter.

Have questions? The Docker community is here to help.


“@docker can you help me…”: An Early Look at the Docker Extension for GitHub Copilot

At this point, every developer has probably heard about GitHub Copilot. Copilot has quickly become an indispensable tool for many developers, helping novice to seasoned developers become more productive by improving overall efficiency and expediting learning. 

Today, we are thrilled to announce that we are joining GitHub’s Partner Program and have shipped an experience as part of their limited public beta. 

At Docker, we want to make it easy for anyone to reap the benefits of containers without all the overhead of getting started. We aim to meet developers wherever they are, whether in their favorite editor, their terminal, Docker Desktop, and now, even on GitHub.

What is the Docker Copilot extension?

In short, the Docker extension for GitHub Copilot (@docker) is an integration that extends GitHub Copilot’s technology to assist developers in working with Docker. 

What can I use @docker for? 

This initial scope for the Docker extension aims to take any developer end-to-end, from learning about containerization to validating and using generated Docker assets for inner loop workflows (Figure 1). Here’s a quick overview of what’s possible today:

Initiate a conversation with the Docker extension: In GitHub Copilot Chat, get in the extension context by using “@docker” at the beginning of your session.

Learn about containerization: Ask the Docker extension for GitHub Copilot to give you an overview of containerization with a question like, "@docker, What does containerizing an application mean?"

Generate the correct Docker assets for your project: Get help containerizing your application and watch it generate the Dockerfiles, docker-compose.yml, and .dockerignore files tailored to your project’s languages and file structure: “@docker How would I use Docker to containerize this project?” 

Open a pull request with the assets to save you time: With your consent, the Docker extension can even ask if you want to open a PR with these generated Docker assets on GitHub, allowing you to review and merge them at your convenience.

Find project vulnerabilities with Docker Scout: The Docker extension also integrates with Docker Scout to surface a high-level summary of detected vulnerabilities and provide the next steps to continue using Scout in your terminal via CLI: “@docker can you help me find vulnerabilities in my project?”
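
That Scout handoff continues in your terminal with the Docker Scout CLI. A small sketch of the follow-up commands (the image name here is hypothetical):

# Summarize known CVEs for an image
$ docker scout cves myorg/myapp:latest

# See base-image update recommendations that could remove some of them
$ docker scout recommendations myorg/myapp:latest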

From there, you can quickly jump into an editor, like Codespaces, VS Code, or JetBrains IDEs, and start building your app using containers. The Docker Copilot extension currently supports Node, Python, and Java-based projects (single-language or multi-root/multi-language projects).

Figure 1: Docker extension for GitHub Copilot in action.

How do I get access to @docker?

The Docker extension for GitHub Copilot is currently in a limited public beta and is accessible by invitation only. The Docker extension was developed through the GitHub Copilot Partner Program, which invites industry leaders to integrate their tools and services into GitHub Copilot to enrich the ecosystem and provide developers with even more powerful, context-aware tools to accelerate their projects. 

Developers invited to the limited public beta can install the Docker extension on the GitHub Marketplace as an application in their organization and invoke @docker from any context where GitHub Copilot is available (for example, on GitHub or in your favorite editor).

What’s coming to @docker?

During the limited public beta, we’ll be working on adding capabilities to help you get the most out of your Docker subscription. Look for deeper integrations that help you debug your running containers with Docker Debug, fix detected CVEs with Docker Scout, speed up your build with Docker Build Cloud, learn about Docker through our documentation, and more coming soon!

Help shape the future of @docker

We’re excited to continue expanding on @docker during the limited public beta. We would love to hear if you’re using the Docker extension in your organization or are interested in using it once it becomes publicly available. 

If you have a feature request or any issues, we invite you to file an issue on the Docker extension for GitHub Copilot tracker. Your feedback will help us shape the future of Docker tooling.

Thank you for your interest and support. We’re excited to see what you build with GitHub and @docker!

Learn more

Authenticate and update to receive the newest Docker Desktop features per your subscription level.

New to Docker? Create an account.

Learn about Docker Build Cloud and how you can leverage cloud resources directly from Docker Desktop.

Subscribe to the Docker Newsletter.

Learn more about GitHub’s Copilot Partner program. 

Source: https://blog.docker.com/feed/

How to Check Your Docker Installation: Docker Desktop vs. Docker Engine

Docker has become a leader in providing software development solutions, offering tooling that simplifies the process of developing, testing, deploying, and running applications using containers. As such, understanding Docker's products, like Docker Desktop, its components, like Docker Engine, and how they work together is essential for developers looking to maximize their productivity and ensure compliance with Docker's licensing terms.

This post will clarify the distinctions and similarities between Docker Desktop and Docker Engine, and provide guidance on verifying which one you are currently using so you can make the most out of your experience.

Read on to explore how to distinguish between Docker Desktop and Docker Engine installations, identify when additional Docker tools are in use, understand Docker contexts, and review your Docker usage to ensure it complies with the Docker Licensing Agreement.

Background

The word “Docker” has become synonymous with containerization to the degree that containerizing an application is increasingly referred to as “dockerizing” an application. Although Docker didn’t create containerization, the company was the first to bring this technology to the developer community in an easily understood set of tooling and artifacts. 

In 2015, Docker took the next step and created the Open Container Initiative (OCI) to define and specify how to build a container image, how to run a container image, and how to share container images. By donating the OCI to the Linux Foundation, Docker provided a level playing field for any application of container technology.

An open source effort headed up by Docker is the Moby Project. Docker created this open framework to assemble specialized container systems without reinventing the wheel. It provides a building block set of dozens of standard components and a framework for assembling them into custom platforms. 

Moby comprises several components, including a container engine, container runtime, networking, storage, and an orchestration system. Both the free, standalone Docker Engine (also known as Docker Community Edition or Docker CE) and the commercial Docker Desktop originated from the Moby Project. However, Docker Desktop has evolved beyond the Moby Project, with a full product team investing in the features and technology to support individual developers, small teams, and the requirements of large development teams.

Docker Engine vs. Docker Desktop

Docker Desktop is a commercial product sold and supported by Docker, Inc. It includes the Docker Engine and other open source components; proprietary components; and features like an intuitive GUI, synchronized file shares, access to cloud resources, debugging features, native host integration, governance, and security features that support Enhanced Container Isolation (ECI), air-gapped containers, and administrative settings management. To provide a consistent user experience across operating systems, Docker Desktop uses the host system's native virtualization to run and manage a VM for the Docker Engine, offering developers a turnkey containerization toolset on any device or operating system. In short, downloading Docker Desktop gives users the Docker Engine at its core on any platform.

Docker Engine is free to download and runs as a standalone component; it is not installed as part of Docker Desktop. It runs on any supported Linux distribution and includes the Docker CLI for running commands. Docker Engine does not run natively on Windows or macOS, and it does not come with a GUI or any of the advanced features provided by Docker Desktop.

How can I tell If I’m running Docker Desktop or just the Docker Engine?

You can determine whether you're running Docker Desktop or just Docker Engine in a number of ways. The following section provides guidance for checking from the filesystem and from within the Docker CLI tooling, which ships as a component of Docker Desktop as well.

1. GUI or icon

If you are using Docker Desktop, you will have either a windowed GUI or a menubar/taskbar icon of a whale (Figure 1).

Figure 1: The macOS menu.

2. Check for an installation

The easiest way to check for Docker Desktop is to look for the installation itself; this check can be automated through scripting or an MDM solution.

Note that the following instructions assume that Docker Desktop is installed in the default location, which may result in false negatives if other installation paths are used.
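
A minimal shell sketch of such a check for macOS and Linux, assuming the default install paths shown in the listings below:

#!/bin/sh
# Check the default Docker Desktop install locations (macOS and Linux).
# Custom install paths will produce false negatives, per the note above.
if [ -d "/Applications/Docker.app" ]; then
  echo "Docker Desktop found at the macOS default path"
elif [ -d "/opt/docker-desktop" ]; then
  echo "Docker Desktop found at the Linux default path"
else
  echo "No Docker Desktop detected at the default paths"
fi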

Docker Desktop on macOS

On macOS, the Docker Desktop application is installed under the /Applications directory and is named Docker (Figure 2).

$ ls -alt /Applications/Docker.app/
total 0
drwxrwxr-x 49 root admin 1568 Oct 13 09:54 ..
drwxr-xr-x@ 9 jschmidt admin 288 Sep 28 15:36 Contents
drwxr-xr-x@ 3 jschmidt admin 96 Sep 8 02:35 .

Figure 2: Docker application installed on macOS.

Docker Desktop on Windows

On Windows, the Docker Desktop application is installed under the C:\Program Files folder and is named Docker (Figure 3).

C:\Users\qdzlug>dir "c:\Program Files\Docker"
Volume in drive C has no label.
Volume Serial Number is DEFE-FC15

Directory of c:\Program Files\Docker

09/28/2023 02:22 PM <DIR> .
09/28/2023 02:22 PM <DIR> ..
09/28/2023 02:22 PM <DIR> cli-plugins
09/28/2023 02:21 PM <DIR> Docker
0 File(s) 0 bytes
4 Dir(s) 52,964,356,096 bytes free

C:\Users\qdzlug>

Figure 3: Docker application installed on Windows.

Docker Desktop on Linux

On Linux, the Docker Desktop application is installed under /opt/docker-desktop.

$ ls -lat /opt/docker-desktop/
total 208528
drwxr-xr-x 7 root root 4096 Sep 29 10:58 .
drwxr-xr-x 2 root root 4096 Sep 29 10:58 locales
drwxr-xr-x 5 root root 4096 Sep 29 10:58 resources
drwxr-xr-x 2 root root 4096 Sep 29 10:58 share
drwxr-xr-x 2 root root 4096 Sep 29 10:58 linuxkit
drwxr-xr-x 2 root root 4096 Sep 29 10:58 bin
drwxr-xr-x 7 root root 4096 Sep 29 10:57 ..
-rw-r--r-- 1 root root 5313018 Sep 27 12:10 resources.pak
-rw-r--r-- 1 root root 273328 Sep 27 12:10 snapshot_blob.bin
-rw-r--r-- 1 root root 588152 Sep 27 12:10 v8_context_snapshot.bin
-rw-r--r-- 1 root root 107 Sep 27 12:10 vk_swiftshader_icd.json
-rw-r--r-- 1 root root 127746 Sep 27 12:10 chrome_100_percent.pak
-rw-r--r-- 1 root root 179160 Sep 27 12:10 chrome_200_percent.pak
-rwxr-xr-x 1 root root 1254728 Sep 27 12:10 chrome_crashpad_handler
-rwxr-xr-x 1 root root 54256 Sep 27 12:10 chrome-sandbox
-rw-r--r-- 1 root root 398 Sep 27 12:10 componentsVersion.json
-rwxr-xr-x 1 root root 166000248 Sep 27 12:10 'Docker Desktop'
-rw-r--r-- 1 root root 10544880 Sep 27 12:10 icudtl.dat
-rwxr-xr-x 1 root root 252920 Sep 27 12:10 libEGL.so
-rwxr-xr-x 1 root root 2877248 Sep 27 12:10 libffmpeg.so
-rwxr-xr-x 1 root root 6633192 Sep 27 12:10 libGLESv2.so
-rwxr-xr-x 1 root root 4623704 Sep 27 12:10 libvk_swiftshader.so
-rwxr-xr-x 1 root root 6402632 Sep 27 12:10 libvulkan.so.1
-rw-r--r-- 1 root root 1096 Sep 27 12:10 LICENSE.electron.txt
-rw-r--r-- 1 root root 8328249 Sep 27 12:10 LICENSES.chromium.html

Note that the launch icon and location for Docker will depend on the Linux distribution being used.

3. Check a running installation

You can also check a running installation to determine which version of Docker is being used. To do this, you need to use the docker version command; the Server line will indicate the version being used.
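
If you'd rather script this check than scan the full output, the Go-template format flag can isolate the server platform name. A sketch (the Platform field is present in recent Docker releases, though format fields can vary between versions):

# Prints, for example, "Docker Desktop 4.24.0 (122432)"
# or "Docker Engine - Community"
$ docker version --format '{{.Server.Platform.Name}}'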

Docker Desktop on macOS Arm64

The Server: Docker Desktop 4.24.0 (122432) line indicates that Docker Desktop is in use.

$ docker version
Client:
Cloud integration: v1.0.35+desktop.5
Version: 24.0.6
API version: 1.43
Go version: go1.20.7
Git commit: ed223bc
Built: Mon Sep 4 12:28:49 2023
OS/Arch: darwin/arm64
Context: default

Server: Docker Desktop 4.24.0 (122432)
Engine:
Version: 24.0.6
API version: 1.43 (minimum version 1.12)
Go version: go1.20.7
Git commit: 1a79695
Built: Mon Sep 4 12:31:36 2023
OS/Arch: linux/arm64
Experimental: true
containerd:
Version: 1.6.22
GitCommit: 8165feabfdfe38c65b599c4993d227328c231fca
runc:
Version: 1.1.8
GitCommit: v1.1.8-0-g82f18fe
docker-init:
Version: 0.19.0
GitCommit: de40ad0

Docker Desktop on Windows

The Server: Docker Desktop 4.24.0 (122432) line indicates that Docker Desktop is in use.

C:\Users\qdzlug>docker version
Client:
Cloud integration: v1.0.35+desktop.5
Version: 24.0.6
API version: 1.43
Go version: go1.20.7
Git commit: ed223bc
Built: Mon Sep 4 12:32:48 2023
OS/Arch: windows/amd64
Context: default

Server: Docker Desktop 4.24.0 (122432)
Engine:
Version: dev
API version: 1.44 (minimum version 1.12)
Go version: go1.20.8
Git commit: HEAD
Built: Tue Sep 26 11:52:32 2023
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.22
GitCommit: 8165feabfdfe38c65b599c4993d227328c231fca
runc:
Version: 1.1.8
GitCommit: v1.1.8-0-g82f18fe
docker-init:
Version: 0.19.0
GitCommit: de40ad0

C:\Users\qdzlug>

Docker Desktop CLI plugins

The client side offers another clue: docker info lists the installed CLI plugins, and with Docker Desktop some plugin versions carry a -desktop suffix (for example, v0.11.2-desktop.5), as in this Windows installation.

C:\Users\qdzlug>docker info
Client:
Version: 24.0.6
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.11.2-desktop.5
Path: C:\Program Files\Docker\cli-plugins\docker-buildx.exe
compose: Docker Compose (Docker Inc.)
Version: v2.2.3
Path: C:\Users\qdzlug\.docker\cli-plugins\docker-compose.exe
dev: Docker Dev Environments (Docker Inc.)
Version: v0.1.0
Path: C:\Program Files\Docker\cli-plugins\docker-dev.exe
extension: Manages Docker extensions (Docker Inc.)

Docker Engine on Linux

The Server: Docker Engine - Community line indicates that the standalone, community-edition Docker Engine is in use.

$ docker version
Client: Docker Engine - Community
Cloud integration: v1.0.35+desktop.5
Version: 24.0.6
API version: 1.43
Go version: go1.20.7
Git commit: ed223bc
Built: Mon Sep 4 12:31:44 2023
OS/Arch: linux/amd64
Context: default

Server: Docker Engine - Community
Engine:
Version: 24.0.6
API version: 1.43 (minimum version 1.12)
Go version: go1.20.7
Git commit: 1a79695
Built: Mon Sep 4 12:31:44 2023
OS/Arch: linux/amd64
Experimental: true
containerd:
Version: 1.6.24
GitCommit: 61f9fd88f79f081d64d6fa3bb1a0dc71ec870523
runc:
Version: 1.1.9
GitCommit: v1.1.9-0-gccaecfc
docker-init:
Version: 0.19.0
GitCommit: de40ad0

Docker contexts

Note that multiple contexts can be present on a system; this is most often seen on Linux, where Docker Desktop and Docker Engine are installed on the same host. To switch between the two, use the docker context use command. Whichever context you are in, you communicate with that context's daemon; in a dual-installation situation, you are therefore switching between the Docker Desktop install and the host install.

To view contexts, you use docker context ls, then switch via docker context use CONTEXTNAME. The following example shows a Linux system with both installed.

$ docker context ls
NAME            TYPE   DESCRIPTION                                DOCKER ENDPOINT                                     KUBERNETES ENDPOINT   ORCHESTRATOR
default *       moby   Current DOCKER_HOST based configuration    unix:///var/run/docker.sock
desktop-linux   moby   Docker Desktop                             unix:///home/jschmidt/.docker/desktop/docker.sock
$ docker version
Client: Docker Engine - Community
Cloud integration: v1.0.35+desktop.5
Version: 24.0.6
API version: 1.43
Go version: go1.20.7
Git commit: ed223bc
Built: Mon Sep 4 12:31:44 2023
OS/Arch: linux/amd64
Context: default

Server: Docker Engine - Community
Engine:
Version: 24.0.6
API version: 1.43 (minimum version 1.12)
Go version: go1.20.7
Git commit: 1a79695
Built: Mon Sep 4 12:31:44 2023
OS/Arch: linux/amd64
Experimental: true
containerd:
Version: 1.6.24
GitCommit: 61f9fd88f79f081d64d6fa3bb1a0dc71ec870523
runc:
Version: 1.1.9
GitCommit: v1.1.9-0-gccaecfc
docker-init:
Version: 0.19.0
GitCommit: de40ad0
$ docker context use desktop-linux
desktop-linux
Current context is now "desktop-linux"
$ docker version
Client: Docker Engine - Community
Cloud integration: v1.0.35+desktop.5
Version: 24.0.6
API version: 1.43
Go version: go1.20.7
Git commit: ed223bc
Built: Mon Sep 4 12:31:44 2023
OS/Arch: linux/amd64
Context: desktop-linux

Server: Docker Desktop 4.24.0 (122432)
Engine:
Version: 24.0.6
API version: 1.43 (minimum version 1.12)
Go version: go1.20.7
Git commit: 1a79695
Built: Mon Sep 4 12:32:16 2023
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.22
GitCommit: 8165feabfdfe38c65b599c4993d227328c231fca
runc:
Version: 1.1.8
GitCommit: v1.1.8-0-g82f18fe
docker-init:
Version: 0.19.0
GitCommit: de40ad0
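
If you only need the name of the active context, for example in a script, docker context show prints it directly:

$ docker context show
desktop-linux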

Other OCI tooling

Because both Docker Engine and Docker Desktop are OCI compliant, a number of solutions are marketed and installed as “direct replacements” for Docker. This usually involves helper aliases, scripts, or batch programs that emulate docker commands.

You can check for aliases by running the command alias docker to see if there is an alias in place. This holds true for Linux and macOS, or a Linux distribution inside WSL2 on Windows.

$ alias docker # Docker aliased to podman
docker='podman'
$ alias docker # No alias present
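
The shell built-in type is a useful cross-check because it reports aliases, shell functions, and the resolved binary in one step (bash output shown):

$ type docker # Docker aliased to podman
docker is aliased to `podman'

$ type docker # No alias; resolves to the binary
docker is /usr/local/bin/docker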

You can also list the docker binary from the CLI to ensure that it is the official Docker binary:

$ ls -l `which docker` # Docker supplied by Homebrew on the Mac
lrwxr-xr-x 1 jschmidt admin 34 Apr 2 12:03 /opt/homebrew/bin/docker -> ../Cellar/docker/26.0.0/bin/docker

$ ls -l `which docker` # Official Docker binary on the Mac
lrwxr-xr-x 1 root wheel 54 Jan 10 16:06 /usr/local/bin/docker -> /Applications/Docker.app/Contents/Resources/bin/docker

Conclusion

To wrap up our exploration, note that several other offerings generically referred to as “Docker” are available for your containerization journey. This post focused on Docker Engine and Docker Desktop.

At this point, you should be comfortable distinguishing between a Docker Desktop installation and a Docker Engine installation and be able to identify when other OCI tooling is being used under the docker command name. You should also have a high-level understanding of Docker contexts as they relate to this topic. Finally, you should be able to review your usage against the Docker Licensing Agreement to ensure compliance, or simply log in with your company credentials to get access to your procured entitlements.

Learn more

Docker Engine

Docker Desktop

Docker Desktop Windows Install

Docker Desktop Linux Install

Docker Desktop Macintosh Install

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Source: https://blog.docker.com/feed/

Streamline the Development of Real-Time AI Applications with MindsDB Docker Extension

This post was contributed by Martyna Slawinska, Software Engineer at MindsDB, in collaboration with Ajeet Singh, Developer Advocate at Docker.

AI technology faces several challenges that hinder its progress. Building an AI-powered application requires significant resources, including qualified professionals, cost, and time. Prominent obstacles include:

Bringing (real-time) data to AI models through data pipelines is complex and requires constant maintenance.

Testing different AI/ML frameworks requires dedicated setups.

Customizing AI with dynamic data and making the AI system improve itself automatically sounds like a major undertaking.

These difficulties put AI systems out of reach for small and large enterprises alike. The MindsDB platform helps solve these challenges, however, and it's now available in the Extensions Marketplace of Docker Desktop.

In this article, we’ll show how MindsDB can streamline the development of AI-powered applications and how easily you can set it up via the Docker Desktop Extension.

How does MindsDB facilitate the development of AI-powered apps?

MindsDB is a platform for customizing AI from dynamic data. With nearly 200 integrations with data sources and AI/ML frameworks, any developer can use their own data to customize AI for their purposes, faster and more securely.

Let's address the challenges defined above one by one:

MindsDB integrates with numerous data sources, including databases, vector stores, and applications. To make your data accessible to many popular AI/ML frameworks, all you have to do is execute a single statement to connect your data to MindsDB.

MindsDB integrates with popular AI/ML frameworks, including LLMs and AutoML. So once you connect your data to MindsDB, you can pass it to different models to pick the best one for your use case and deploy it within MindsDB.

With MindsDB, you can manage models and data seamlessly, implement custom automation flows, and make your AI systems improve themselves with continuous finetuning.

With MindsDB, you can build AI-powered applications easily, even with no AI/ML experience. You can interact with MindsDB through SQL, MongoDB-QL, REST APIs, Python, and JavaScript.
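
As a quick illustration of the REST route, a running MindsDB instance accepts SQL statements over HTTP. This sketch assumes MindsDB's default local port 47334 and its /api/sql/query endpoint:

# Send a SQL statement to a local MindsDB instance over its HTTP API
$ curl -s -X POST http://127.0.0.1:47334/api/sql/query \
  -H "Content-Type: application/json" \
  -d '{"query": "SHOW DATABASES;"}'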

Follow along to learn how to set up MindsDB in Docker Desktop.

How does MindsDB work?

With MindsDB, you can connect your data from a database, a vector store, or an application, to various AI/ML models, including LLMs and AutoML models (Figure 1). By doing so, MindsDB brings data and AI together, enabling the intuitive implementation of customized AI systems.

Figure 1: Architecture diagram of MindsDB.

MindsDB enables you to easily create and automate AI-powered applications. You can deploy, serve, and fine-tune models in real-time, utilizing data from databases, vector stores, or applications, to build AI-powered apps — using universal tools developers already know.

Find out more about MindsDB and its features, as well as use cases, on the MindsDB website.

Why run MindsDB as a Docker Desktop Extension?

MindsDB can be easily installed on your machine via Docker Desktop. MindsDB provides a Docker Desktop Extension, which lets you use MindsDB within the Docker Desktop environment.

As MindsDB integrates with numerous data sources and AI frameworks, each integration requires a specific set of dependencies. With MindsDB running in Docker Desktop, you can easily install only the required dependencies to keep the image lightweight and less prone to issues.

Running MindsDB as a Docker Desktop Extension gives you the flexibility to:

Set up your MindsDB environment easily by installing the extension.

Customize your MindsDB environment by installing only the required dependencies.

Monitor your MindsDB environment via the logs accessible through Docker Desktop.

Next, we’ll walk through setting up MindsDB in Docker Desktop. For more information, refer to the documentation.

Getting started

MindsDB setup in Docker Desktop

To get started, you’ll need to download and set up Docker Desktop on your computer. Then, follow the steps below to install MindsDB in Docker Desktop:

First, go to the Extensions page in Docker Desktop, search for MindsDB, and install the MindsDB extension (Figure 2).

Figure 2: Installing the MindsDB Extension in Docker Desktop.

Then, access MindsDB inside Docker Desktop (Figure 3).

Figure 3: Accessing the MindsDB editor in Docker Desktop.

This setup of MindsDB uses the mindsdb/mindsdb:latest Docker image, a lightweight MindsDB image that ships with a default set of integrations preloaded.

Now that you've installed MindsDB in Docker Desktop, think of a use case you want to run and list the integrations it requires. For example, if you want to analyze data from your PostgreSQL database with one of the models from Anthropic, you need to install the dependencies for Anthropic (dependencies for PostgreSQL are installed by default).

You can find more use cases on the MindsDB website.

Here is how to install dependencies (Figure 4):

In the MindsDB editor, go to Settings and Manage Integrations.

Select the integrations you want to use and choose Install.

Figure 4: Installing dependencies via the MindsDB editor.

We customized the MindsDB image by installing only the required dependencies. Visit the documentation to learn more.

AI Agents deployment with MindsDB

In this section, we’ll showcase the AI Agents feature developed by MindsDB. AI Agents come with an underlying large language model and a set of skills to answer questions about your data stored in databases, files, or websites (Figure 5).

Figure 5: Diagram of AI Agents.

Agents require a model in conversational mode. Currently, MindsDB supports such models via the LangChain handler.

There are two types of skills, as follows:

The Text-to-SQL skill translates questions asked in natural language into SQL code to fetch correct data and answer the question.

The Knowledge Base skill stores and searches data assigned to it utilizing embedding models and vector stores.

Let’s get started.

Step 1. Connect your data source to MindsDB.

Here, we use our sample PostgreSQL database and connect it to MindsDB:

CREATE DATABASE example_db
WITH ENGINE = "postgres",
PARAMETERS = {
    "user": "demo_user",
    "password": "demo_password",
    "host": "samples.mindsdb.com",
    "port": "5432",
    "database": "demo",
    "schema": "demo_data"
};

Let’s preview the table of interest:

SELECT *
FROM example_db.car_sales;

This table stores details of cars sold in recent years. This data will be used to create a skill in the next step.

Step 2. Create a skill.

Here, we create a Text-to-SQL skill using data from the car_sales table:

CREATE SKILL my_skill
USING
    type = 'text_to_sql',
    database = 'example_db',
    tables = ['car_sales'],
    description = 'car sales data of different car types';

The skill description should be accurate because the model uses it to decide which skill to choose to answer a given question. This skill is one of the components of an agent.

Step 3. Create a conversational model.

As noted earlier, AI Agents require a model in conversational mode, which MindsDB currently supports via the LangChain handler.

Note that if you choose one of the OpenAI models, the following configuration of an engine is required:

CREATE ML_ENGINE langchain_engine
FROM langchain
USING
    openai_api_key = 'your-openai-api-key';

Now you can create a model using this engine:

CREATE MODEL my_conv_model
PREDICT answer
USING
    engine = 'langchain_engine',
    input_column = 'question',
    model_name = 'gpt-4',
    mode = 'conversational',
    user_column = 'question',
    assistant_column = 'answer',
    max_tokens = 100,
    temperature = 0,
    verbose = True,
    prompt_template = 'Answer the user input in a helpful way';

You can adjust the parameter values, such as prompt_template, to fit your use case. This model is one of the components of an agent.

Step 4. Create an agent.

Now that we have a skill and a conversational model, let’s create an AI Agent:

CREATE AGENT my_agent
USING
    model = 'my_conv_model',
    skills = ['my_skill'];

You can query this agent directly to get answers about data from the car_sales table, which was assigned to the skill (my_skill), which in turn was assigned to the agent (my_agent).

Let’s ask some questions:

SELECT *
FROM my_agent
WHERE question = 'what is the most commonly sold model?';

Figure 6 shows the output generated by the agent:

Figure 6: Output generated by agent.

Furthermore, you can connect this agent to a chat app, like Slack, using the chatbot object.

Conclusion

MindsDB streamlines data and AI integration for developers, offering seamless connections with various data sources and AI frameworks, enabling users to customize AI workflows and obtain predictions for their data in real time. 

Leveraging Docker Desktop not only simplifies dependency management for MindsDB deployment but also provides broader benefits for developers by ensuring consistent environments across different systems and minimizing setup complexities.

Learn more

Explore the Docker Extension Marketplace.

Install the MindsDB Extension.

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/