We are thrilled to announce that Granite models, IBM’s family of open source and proprietary models built for business, as well as Red Hat InstructLab model alignment tools, are now available on Docker Hub.
Now, developer teams can easily access, deploy, and scale applications using IBM’s AI models specifically designed for developers.
This news will be officially announced during the AI track of the keynote at IBM TechXchange on October 22. Attendees will get an exclusive look at how IBM’s Granite models on Docker Hub accelerate AI-driven application development across multiple programming languages.
Why Granite on Docker Hub?
With a principled approach to data transparency, model alignment, and security, IBM’s open source Granite models represent a significant leap forward in natural language processing. The models are available under an Apache 2.0 license, empowering developer teams to bring generative AI into mission-critical applications and workflows.
Granite models deliver superior performance in coding and targeted language tasks at lower latencies, all while requiring a fraction of the compute resources and reducing the cost of inference. This efficiency allows developers to experiment, build, and scale generative AI applications both on-premises and in the cloud, all within departmental budgetary limits.
Here’s what this means for you:
Simplified deployment: Pull the Granite image from Docker Hub and get up and running in minutes.
Scalability: Docker offers a lightweight and efficient method for scaling artificial intelligence and machine learning (AI/ML) applications. It allows you to run multiple containers on a single machine or distribute them across different machines in a cluster, enabling horizontal scalability (see the quick example after this list).
Flexibility: Customize and extend the model to suit your specific needs without worrying about underlying infrastructure.
Portability: By building a Docker image once and deploying it anywhere, you eliminate compatibility problems and reduce configuration overhead.
Community support: Leverage the vast Docker and IBM communities for support, extensions, and collaborations.
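As a quick sketch of that scalability point, here is how you might run two serving instances of the same Granite image side by side on one machine. The published host ports and the container port (8000) are assumptions for illustration; the basic run command and the -s server flag are covered in the how-to section later in this post.

docker run -d --name granite-1 -p 8000:8000 --ipc=host redhat/granite-7b-lab-gguf -s
docker run -d --name granite-2 -p 8001:8000 --ipc=host redhat/granite-7b-lab-gguf -s

In a cluster, you would hand the same image to an orchestrator such as Docker Swarm or Kubernetes to scale the service horizontally across machines.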
In addition to the IBM Granite models, Red Hat also made the InstructLab model alignment tools available on Docker Hub. Developers using InstructLab can adapt pre-trained LLMs using far less real-world data and computing resources than alternative methodologies. InstructLab is model-agnostic and can be used to fine-tune any LLM of your choice by providing additional skills and knowledge.
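To give a feel for that workflow, here is a rough sketch of an InstructLab alignment loop run from inside the InstructLab container (started the same way as the images in the how-to section below). The subcommands come from the upstream ilab CLI and can differ between InstructLab releases, so treat this as an outline rather than an exact recipe.

ilab config init       # set up the local configuration and taxonomy
ilab model download    # fetch the base model you want to align
ilab data generate     # create synthetic training data from your taxonomy additions
ilab model train       # fine-tune the model on the generated data
ilab model chat        # interactively test the aligned model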
With IBM Granite AI models and InstructLab available on Docker Hub, Docker and IBM enable easy integration into existing environments and workflows.
Getting started with Granite
You can find the following images available on Docker Hub:
InstructLab: Ideal for desktop or Mac users looking to explore InstructLab, this image provides a simple introduction to the platform without requiring specialized hardware. It’s perfect for prototyping and testing before scaling up.
InstructLab with CUDA support: Designed for running full training workflows on GPU-equipped Linux servers, this image accelerates the synthetic data generation and training process by leveraging NVIDIA GPUs.
Granite-7b-lab: This image is optimized for model serving and inference on desktop or Mac environments, using the Granite-7B model. It allows for efficient and scalable inference tasks without needing a GPU, perfect for smaller-scale deployments or local testing.
Granite-7b-lab with CUDA support: For those with GPU-equipped Linux servers, this image supports faster model inference and serving through CUDA acceleration. This is ideal for high-performance AI applications where response times and throughput are critical.
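For the CUDA variants, the container needs access to the host GPUs. With the NVIDIA Container Toolkit installed, a GPU-enabled run looks roughly like the following sketch (the --ipc=host flag mirrors the run commands shown later in this post; check the image documentation for any additional flags it expects):

docker run --gpus all --ipc=host -it redhat/granite-7b-lab-gguf-cuda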
How to pull and run IBM Granite images from Docker Hub
The IBM Granite models are distributed as standard container images, so you can pull and run them like any other image. Follow these steps to pull and run an IBM Granite image using the Docker CLI. You can follow similar steps for the Red Hat InstructLab images.
Authenticate to Docker Hub
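If you are not already logged in, authenticate with the Docker CLI:

docker login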
Enter your Docker username and password when prompted.
Pull the IBM Granite Image
Pull the IBM Granite image from Docker Hub. There are two versions of the image:
redhat/granite-7b-lab-gguf: For Mac/desktop users with no GPU support
redhat/granite-7b-lab-gguf-cuda: For Linux servers with NVIDIA® CUDA® GPU support
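For example, to pull the CPU-only image:

docker pull redhat/granite-7b-lab-gguf

Or, for the CUDA-enabled image:

docker pull redhat/granite-7b-lab-gguf-cuda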
Run the Image in a Container
Start a container with the IBM Granite image. The container can be started in two modes: CLI (default) and server.
To start the container in CLI mode, run the following:
docker run --ipc=host -it redhat/granite-7b-lab-gguf
This command opens an interactive bash session within the container, allowing you to use the tools.
To run the container in server mode, run the following command:
docker run --ipc=host -it redhat/granite-7b-lab-gguf -s
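Once the container is serving, you can send it requests from the host. The sketch below assumes the image exposes an OpenAI-compatible chat endpoint on port 8000 and that you add -p 8000:8000 to the docker run command above to publish that port; the port, API path, and model name here are assumptions, so check the image documentation for the actual values.

curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "granite-7b-lab", "messages": [{"role": "user", "content": "Summarize what the Granite models are designed for."}]}'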
You can check the IBM Granite documentation for details on using the Granite models.
Join us at IBM TechXchange
Granite on Docker Hub will be officially announced at the IBM TechXchange Conference, held October 21-24 in Las Vegas. Our head of technical alliances, Eli Aleyner, will give a live demonstration during the AI track of the keynote. Oleg Šelajev, Docker’s staff developer evangelist, will show how app developers can test their GenAI apps with local models. Additionally, you’ll learn how Docker’s collaboration with Red Hat is improving developer productivity.
The availability of Granite on Docker Hub marks a significant milestone in making advanced AI models accessible to all. We’re excited to see how developer teams will harness the power of Granite to innovate and solve complex challenges.
Stay anchored for more updates, and as always, happy coding!
Learn more
Read the Docker Labs GenAI series.
Subscribe to the Docker Newsletter.
What is InstructLab?
What are Granite Models?
Accelerating AI Development with IBM Granite AI Models and Docker — IBM TechXchange session with Eli Aleyner.
Developer productivity for apps with AI – IBM TechXchange session with Oleg Šelajev.