Custom MCP Catalogs and Profiles: Advancing Enterprise MCP Adoption

We’re excited to announce the general availability of Custom Catalogs and Profiles for managing Model Context Protocol (MCP) servers. These two complementary capabilities fundamentally change how teams package, distribute, and manage AI tooling. 

Custom MCP Catalogs let organizations curate and distribute approved collections of MCP servers. MCP Profiles enable individual developers to easily build, run, and share their MCP tools and configurations across projects and teams.

In this post, we’ll walk through how to create your own custom catalog – building on and improving our previous approach. We’ll also introduce Profiles, a new primitive that lets you define portable, named groupings of MCP servers. Profiles are designed to solve several practical use cases today, while giving us a foundation to expand in the future.

Creating custom catalogs with Docker

As organizations adopt MCP, we consistently hear the same need: teams need a way to curate a trusted list of MCP servers, including internally built servers.

To address these needs, we built Custom Catalogs. Instead of every team member searching for MCP servers across the open internet, organizations can publish and distribute catalogs that define approved servers. This allows developers to centrally discover and use trusted MCP servers within organizational boundaries.

Custom Catalogs can reference servers from Docker’s MCP Catalog, community sources, and custom MCP servers developed internally, bringing flexibility, control, and trust together in a single experience. We will show you how to do that with a Custom Catalog. 

Step-by-step: Building and sharing a custom MCP catalog 

In this example, we will create a Custom Catalog containing servers from the Docker MCP Catalog and an MCP server we created ourselves from the CLI. Then we will show you how to use Docker Desktop to import the catalog.

Everything we show here can be done through the CLI; a subset of the more user-centric features is also available in Docker Desktop.

Here, we will use my personal Docker Hub ID roberthouse224 in the commands; substitute your own information where appropriate (e.g., when pushing an image).

Step 1: Creating my custom MCP server and pushing it to Docker Hub

We built a reference server called roll-dice (GitHub Repository). It is a regular MCP server that communicates over stdio and can be built as a Docker image. The image has already been built and pushed to Docker Hub.
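For completeness, the build-and-push step for such an image looks like the following. These are standard docker commands; the assumption is that the repository root contains a Dockerfile for the server:

docker build -t roberthouse224/mcp-dice:latest .
docker push roberthouse224/mcp-dice:latest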

We can create the metadata that describes the server, including where its image can be found, and save it to a file named mcp-dice.yaml for use when creating our catalog.

name: roll-dice
title: Roll Dice
type: server
image: roberthouse224/mcp-dice:latest
description: An MCP server that can roll dice

Step 2: Creating a catalog that includes servers from the Docker MCP Catalog alongside a server you have built yourself

Now we can create a custom catalog containing servers from the Docker MCP Catalog and the MCP server we created ourselves.

docker mcp catalog create roberthouse224/our-catalog \
  --title "Our Catalog" \
  --server catalog://mcp/docker-mcp-catalog/playwright \
  --server catalog://mcp/docker-mcp-catalog/github-official \
  --server catalog://mcp/docker-mcp-catalog/context7 \
  --server catalog://mcp/docker-mcp-catalog/atlassian \
  --server catalog://mcp/docker-mcp-catalog/notion \
  --server catalog://mcp/docker-mcp-catalog/markitdown \
  --server file://./mcp-dice.yaml

Step 3: Verifying the MCP servers in the custom catalog 

We can now list our catalogs and see the one we just created:

docker mcp catalog list

We can also inspect the contents of the catalog:

docker mcp catalog show roberthouse224/our-catalog --format yaml

Step 4: Share the catalog

At the moment, our custom catalog lives only on our machine. What we have, though, is really powerful: an immutable OCI artifact containing our trusted MCP servers that can be easily shared.

We can push our catalog to a container registry; in this example, we’re using Docker Hub. Anyone with access to your organization’s namespace can then pull the catalog.

docker mcp catalog push roberthouse224/our-catalog

Using a custom MCP catalog

Now that our custom catalog has been shared, colleagues can import it from within Docker Desktop (or pull it from the CLI with docker mcp catalog pull roberthouse224/our-catalog).

Import the catalog from Docker Desktop by selecting “Import catalog,” and then specifying the OCI reference in the dialog.

Figure 1: Importing a custom catalog from OCI reference

The catalog is now browsable. You can double-click into the catalog to see all of the servers it contains. Notice the custom MCP server we added, named “Roll Dice.”

Figure 2: A custom MCP catalog within the Docker Desktop app, including a newly added “Roll Dice” server.

To make this a private catalog, all you need to do is manage access to the repository the way you always have for container images: no new infrastructure to manage or systems to learn.

This is exactly what Jim Clark was describing in his post Private MCP Catalogs and the Path to Composable Enterprise AI.

This simple pattern can be extended to support more complex use cases. For example, you might use a private container registry instead of Docker Hub, or connect to a remote MCP server over streamable HTTP you host yourself rather than running a containerized server as shown in the example.
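As a sketch of the remote case, a catalog entry for a self-hosted server might look like the following. Note that this is an assumption for illustration: the type, url, and transport fields shown here are hypothetical (the containerized example above uses type: server and image), so check the docker mcp documentation for the actual schema.

name: internal-tools
title: Internal Tools
type: remote
url: https://mcp.example.internal/mcp
transport: streamable-http
description: An internally hosted MCP server reached over streamable HTTP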

Now that we have a shareable custom catalog of trusted MCP servers we can shift focus to how individuals can effectively leverage MCP servers from the catalog we built in their workflows.

Using Profiles to create and share MCP Workflows

With MCP Profiles, developers can organize workflows efficiently and maintain separate server collections and configurations for different use cases. Profiles can be shared across teams, enabling collaboration on server setups and ensuring consistent configurations for teams working within the same projects or contexts.

Switch between Profiles

At a basic level, a Profile is a named grouping of MCP servers that can be connected to an agent session. This makes it straightforward to define different Profiles for different ways of working.

Now let’s see an example in action. 

We create a profile named coding and another named planning. We browse our custom catalog, select the MCP servers we want (e.g., Playwright, GitHub, and Context7), then open the “Add to” dropdown and select “New profile.”

Figure 3: Selecting MCP servers to be added to a new profile

Give the profile a name, select the client you want to connect to, and select “Create”.

Figure 4: Creating a new MCP profile named coding in Docker Desktop.

From the Profiles tab, we can see the profile we just created. Our client is connected and our tools are ready to use. 

Figure 5: Example of a profile that is connected to a client.

Next we create a profile named planning with servers relevant to planning (e.g. Atlassian, Markitdown, Notion). 

Navigate back to “our-catalog” (if not already there), select the servers relevant to planning, and select “Add to” -> “New profile.” Give the profile a name (e.g. planning). Then select “Create” to create the planning profile without a client. Specifying the client is optional.

Figure 6: Example of creating multiple profiles, including separate profiles for coding and planning 

Now we have two profiles that mirror two modes of working. When we switch to planning mode we only want the tools from our planning profile to be in context. To do that, we can easily reassign our client to the planning profile.

Figure 7: Reassign Claude Code to the planning profile.

If we go back to coding mode, we just reassign our client back to the coding profile. You can have any number of Profiles that mirror your many ways of working and easily switch between them, keeping only the tools you care about in context.

This will work with any agent, not just Claude Code. Profiles provide a truly portable way to manage your MCP server setups and avoid vendor lock-in.

Persist configuration

Profiles add a persistence layer for MCP server configurations. When an MCP server exposes configurable options, you can define them once in a Profile and reload them whenever you need them, rather than reconfiguring the server each time.

In this example, we are specifying which paths Markitdown can access.

Figure 8: Using an MCP profile to save server configurations for reuse

Context windows can easily fill up when the MCP servers you use export a lot of tools. With Profiles you can specify which tools are enabled, making sure only the tools you need for a specific task are used.

Here we enable the get_me tool from the GitHub MCP server and disable all the others. The disabled tools will not show up in our agent session or contribute to the context window.

Figure 9: Optimize your context window by enabling only the tools you need in the MCP profile

This model of saved configuration becomes far more powerful for MCP servers you build in-house. By exposing richer configuration options, you can reuse the same server across projects, reconfigure its behavior per context, and achieve more predictable outcomes.

Share Profiles

Not every team member needs to repeat the work of identifying MCP servers and configurations that work well for a project. Once you’ve found a setup that works, share it with the rest of the team.

To share a Profile you can push it as an OCI artifact to a container registry just like we did with our custom catalog. Just provide a name for it along with an OCI reference.

docker mcp profile push coding [your-namespace]/coding

For someone to pull it down, all they have to do is issue the corresponding pull command.

docker mcp profile pull [your-namespace]/coding

Although the example above demonstrates sharing Profiles across a team, the concept extends naturally to agents as well. An agent skill could, for instance, reference a Profile and pull in the required MCP servers and their configurations as dependencies.

Conclusion and What’s Next 

As MCP adoption grows, the challenge isn’t access to tools — it’s coordination. Teams need a way to standardize what’s trusted and supported without constraining how individuals actually work. Custom Catalogs and Profiles are designed to solve exactly that problem.

Custom Catalogs: shared foundation

Custom Catalogs allow platform and admin teams to define approved MCP servers, bundle internal and public tooling together, and distribute those choices as a single, portable artifact. This creates clarity and consistency while significantly reducing the cost of discovery and evaluation.

Profiles: supercharge workflow

Profiles give individual developers a lightweight way to assemble, configure, and reuse MCP servers for specific contexts like coding, planning, or research. Profiles persist configuration, limit context to what matters, and make effective setups easy to share across teams.

Together, these primitives separate:

What an organization recommends (via Custom Catalogs)

How people work day to day (via Profiles)

This separation enables a healthy balance. Platform teams can publish “golden paths” that establish standards and guardrails, while developers retain the freedom to adapt, experiment, and compose profiles that fit their needs.

The result is a system that is portable, composable, and scalable — making MCP easier to adopt, safer to manage, and more effective as it grows across an organization.

What’s Next?

Custom Catalogs and Profiles are the foundation for managing MCP at scale, and we’re just getting started. Next, we’re focused on extending these primitives to support stronger governance, better reuse, and more advanced agent workflows:

Governance and policy controls to restrict MCP usage to approved Custom Catalogs and trusted server sources

Improved discoverability and sharing for both Catalogs and Profiles, making proven setups easier to find and reuse across teams

Expanded Profile-scoped secrets and configuration, providing a more secure and flexible alternative to project-level mcp.json files

Clear best practices for Profiles, including saving dynamic MCP server configurations for reuse and pairing Profiles with emerging workflow optimizations like agent skills

Getting started with Custom Catalogs and Profiles

If you have Docker Desktop 4.56 or later, you are already using Catalogs: the Docker MCP Catalog itself is now distributed as an OCI artifact. Profiles are supported starting with Docker Desktop 4.63. Try creating your first Profile by exploring the MCP Toolkit in Docker Desktop.

Learn more

Dive into our documentation on Custom Catalogs and Profiles to get started quickly.

Explore Docker’s MCP Catalog and Toolkit on our website.

Ready to go hands-on? Open Docker Desktop or the CLI and start using MCP to streamline and automate your development workflows.

Source: https://blog.docker.com/feed/

AWS Organizations now supports higher quotas for service control policies (SCPs)

AWS Organizations now supports higher quotas for service control policies (SCPs). The maximum number of SCPs that can be attached to a single node (root, OU, or account) has increased from 5 to 10, and the maximum SCP size has increased from 5,120 to 10,240 characters.
With these higher quotas, you can write SCPs with finer-grained permissions and conditions, and attach more SCPs per node to build more comprehensive security controls across your organization.
These higher quotas are available in all commercial AWS Regions, the AWS GovCloud (US) Regions, and the China Regions, and are available automatically to all organizations with no action required. To learn more, see quotas for AWS Organizations in the AWS Organizations User Guide.
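As a rough sketch of what the larger size quota allows, here is a common SCP pattern (SCPs are JSON policy documents); the Region values below are placeholders, and with 10,240 characters a single policy can now hold several such statements with finer-grained conditions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideApprovedRegions",
      "Effect": "Deny",
      "NotAction": ["iam:*", "organizations:*", "support:*"],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": ["eu-central-1", "eu-west-1"]
        }
      }
    }
  ]
}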
Source: aws.amazon.com

AWS announces AWS Interconnect – multicloud connectivity with Oracle Cloud Infrastructure in preview

AWS announces the public preview of AWS Interconnect — multicloud with Oracle Cloud Infrastructure (OCI).
Customers have been adopting multicloud strategies while migrating more applications to the cloud. They do so for many reasons, including interoperability requirements, the freedom to choose the technology that best suits their needs, and the ability to build and deploy applications in any environment with greater ease and speed. Previously, when interconnecting workloads across multiple cloud service providers (CSPs), customers had to take a ‘do-it-yourself’ multicloud approach, with all the complexity of building and managing global, multi-layered networks at scale. AWS Interconnect – multicloud is the first purpose-built product of its kind and a new way for clouds to connect and talk to each other, allowing customers to quickly provision resilient, scalable private connections to other cloud providers.
OCI is the latest CSP to adopt the open specification that powers AWS Interconnect. This allows AWS to provide a consistent, simple experience to our customers on OCI (preview), Google Cloud (Generally Available), and Microsoft Azure (coming later in 2026).
Interconnect – multicloud is available in preview with OCI in the us-east-1 (N. Virginia) AWS Region. You can create a preview Interconnect using the AWS Management Console, Command Line Interface (CLI), or API. For more information, see the AWS Interconnect – multicloud documentation.
Source: aws.amazon.com

Amazon Managed Grafana now supports in-place upgrade to Grafana version 12.4

Amazon Managed Grafana now supports in-place upgrade from Grafana version 10.4 to 12.4. You can upgrade with just a few clicks from the AWS Console or via AWS SDK or AWS CLI.
Upgrading to version 12.4 brings native Grafana Scenes-powered dashboards for faster rendering and queryless Drilldown apps for point-and-click exploration of Prometheus metrics, Loki logs, Tempo traces, and Pyroscope profiles. Amazon CloudWatch plugin enhancements simplify log analysis with PPL/SQL query support, broaden visibility through cross-account Metrics Insights, and surface issues proactively with log anomaly detection. The rebuilt table visualization delivers smoother performance with CSS cell styling and interactive Actions buttons, while trendline transformations and navigation bookmarks streamline data exploration. 
In-place upgrade to Grafana 12.4 is supported in all AWS Regions where Amazon Managed Grafana is generally available. For a complete list of new features, refer to Differences between Grafana versions in the Amazon Managed Grafana User Guide. For upgrade instructions, see Update your workspace version. To learn more about Amazon Managed Grafana features and pricing, visit the product page and pricing page.
Source: aws.amazon.com

Amazon RDS for PostgreSQL announces Extended Support minor versions 11.22-rds.20260224, 12.22-rds.20260224, and 13.23-rds.20260224

Amazon Relational Database Service (RDS) for PostgreSQL announces Amazon RDS Extended Support minor versions 11.22-rds.20260224, 12.22-rds.20260224, and 13.23-rds.20260224. We recommend that you upgrade to these versions to fix known security vulnerabilities and bugs in prior versions of PostgreSQL. Amazon RDS Extended Support provides up to three additional years of critical security and bug fixes beyond a major version’s end of standard support date, giving you more time to upgrade to a new major version. Learn more about Extended Support in the Amazon RDS User Guide.

You can upgrade your databases during scheduled maintenance windows using automatic minor version upgrades. To simplify operations at scale, enable automatic minor version upgrades and use the AWS Organizations Upgrade Rollout Policy to orchestrate thousands of upgrades in phases, upgrading development environments before production systems. You can also use Amazon RDS Blue/Green deployments with physical replication to minimize downtime for minor version upgrades.

Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console or by using the AWS Command Line Interface (CLI).
Source: aws.amazon.com

Amazon Connect Cases now lets you edit related items and delete cases from the agent workspace

Amazon Connect Cases now supports editing and deleting related items, and deleting cases directly from the agent workspace without administrator help. Agents can update comments, unlink contacts associated with the wrong case, or delete cases opened in error. Agents can also create, edit, and delete custom related items such as orders, returns, and invoices to capture additional case context. Amazon Connect Cases is available in the following AWS regions: US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Africa (Cape Town). To learn more and get started, visit the Amazon Connect Cases webpage and documentation.
Source: aws.amazon.com