Docker AI Governance: Unlock Agent Autonomy, Safely

Introducing Docker AI Governance: centralized control over how agents execute, what they can reach on the network, which credentials they can use, and which MCP tools they can call, so every developer in your company can run AI agents safely, wherever they work.

Your laptop is the new prod

Agents are the biggest productivity unlock the modern workplace has seen in a generation, and engineering is where the shift is most obvious. Developers aren’t using agents to autocomplete a function anymore. They’re using them to read whole codebases, refactor across services, and ship entire products, end to end. Vibe coding is real, it’s shipping to main, and it’s happening on laptops everywhere today.

The same shift is moving through every other function. A new class of agents called Claws is already in production, sending emails, managing calendars, booking travel, pulling CRM data, reconciling reports, and querying production systems. Marketing, finance, sales, and support are adopting them as fast as engineering is, because the productivity gains are too large to ignore and the companies that move first will out-execute the ones that don’t. Org-wide rollouts that used to take quarters are landing in weeks.

What’s more interesting than the speed of adoption is where all of this actually runs. Agents and Claws live outside the systems enterprises spent two decades hardening. They don’t sit behind your CI/CD pipeline, they don’t live inside your VPC, and they don’t follow your IAM model. They run on the developer’s machine, with the developer’s credentials, reaching into private repos, production APIs, customer records, and the open internet, often in the same session. The laptop just became the most powerful node in your enterprise, and it also became the most exposed. Laptop and agent environments are the new prod, and they need to be governed like prod.

What governance actually has to solve

The instinct in most enterprises is to reach for the tools that already exist, but none of them see what an agent is doing. CI/CD doesn’t see it because the agent isn’t a pipeline. The VPC doesn’t see it because the laptop is outside the perimeter. IAM doesn’t see it because the agent is acting as the developer. The result is that CISOs can’t tell what an agent touched, what it ran, or where the data went, and they also can’t tell the business to slow down. This is the bind every security leader is in right now.

Strip the problem to first principles and an agent has two paths to do significant harm. It either executes code itself, touching files and opening network connections, or it calls a tool through an MCP server to act on an external system. Govern both paths and you’ve governed the agent. Miss either one and you haven’t.

That’s the test for any AI governance solution worth taking seriously, and it has two parts. The controls have to live at the runtime layer where the agent actually executes, not as advisory rules layered on top that a clever prompt can route around. And they have to work consistently wherever the agent ends up running, because agents don’t stay on the laptop. They migrate to CI runners, to staging clusters, to production. A policy that only holds in one of those places is a gap waiting to be found.

Why Docker

Docker is the only company that meets both parts of that test, and the reason is structural.

Docker built the sandbox that contains the first path. Every agent session runs inside a microVM-based isolated environment where filesystem and network access are controlled by a hard boundary, which means enforcement happens at the level of the process, not as a suggestion the agent can ignore. Docker built the MCP Gateway that contains the second path. Every tool call routes through a single chokepoint where it can be authenticated, authorized, and logged before it reaches the external system. These primitives, Docker Sandboxes and the Docker MCP Gateway, make enforcement strict instead of advisory. We own the substrate the agent is running on, so the policy isn't a wrapper around someone else's runtime; it's the runtime.

The second part is what makes this durable. The same sandbox primitive runs on the developer’s laptop, inside Kubernetes, and across cloud environments, with the same policy model and the same enforcement guarantees. When an agent moves from a developer’s machine to a CI runner to a production cluster, the policy moves with it, because the runtime underneath is the same in all three places. No other vendor can say that, because no other vendor is the runtime. Endpoint security tools don’t extend to clusters. Cluster security tools don’t reach the laptop. Cloud security tools don’t run on either. Docker covers all three because Docker is what’s actually executing the agent in all three.

Docker AI Governance is the control plane that sits on top of that runtime. It turns the sandbox and the MCP Gateway into centralized policy, defined once in the admin console, enforced at every node the agent touches, and auditable from end to end.

How Docker AI Governance works

From a single admin console, security teams define and enforce policy across four control surfaces: network, filesystem, credentials, and MCP tools. It's one policy layer that needs no per-machine setup and works consistently across thousands of developers.

Sandbox policy for network and filesystem. Admins define allow and deny rules for domains, IPs, and CIDRs, alongside mount rules for filesystem paths with read-only or read-write scope. Every agent session runs inside an isolated sandbox where only approved endpoints are reachable and only approved directories are mountable, with enforcement happening at the proxy and mount level rather than as an advisory layer the agent can ignore.
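To make the shape of this concrete, here is a minimal sketch of such a policy expressed as a Python dictionary. The field names are illustrative assumptions, not the exact schema the admin console uses.

```python
# Illustrative sketch only: field names are assumptions, not the real policy schema.
sandbox_policy = {
    "network": {
        "allow": ["github.com", "*.internal.example.com", "10.0.0.0/8"],
        "deny": ["*"],  # default deny: anything not explicitly allowed is unreachable
    },
    "filesystem": {
        "mounts": [
            {"path": "~/src/service-a", "mode": "rw"},         # the repo the agent may edit
            {"path": "~/.config/shared-certs", "mode": "ro"},  # read-only reference data
        ],
        # Paths not listed are simply never mounted into the microVM.
    },
}
```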

Credential governance. Agents are dangerous in proportion to what they can authenticate as, so Docker AI Governance controls which credentials, tokens, and secrets an agent session can see, scopes them to the duration of that session, and blocks exfiltration to unapproved destinations. Developers stop pasting tokens into prompts, and security stops wondering where those tokens ended up.
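As a rough illustration of what session-scoped credentials mean in practice, here is a sketch of a mint-and-check flow. The names and structure are assumptions for illustration, not the actual implementation.

```python
import time

def issue_session_credential(session_id: str, ttl_s: int = 3600) -> dict:
    """Mint a credential that is only valid for one agent session (illustrative)."""
    return {
        "token": f"scoped-{session_id}",               # placeholder, not a real token scheme
        "expires_at": time.time() + ttl_s,             # expires with the session
        "allowed_hosts": {"api.internal.example.com"}, # the only approved destination
    }

def may_send(credential: dict, destination_host: str) -> bool:
    """Proxy-side check: the token cannot leave for unapproved hosts."""
    return (time.time() < credential["expires_at"]
            and destination_host in credential["allowed_hosts"])
```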

MCP tool governance. Admins control which MCP servers and tools are available through organization-wide managed policies, with unapproved servers blocked by default. Every MCP call flows through the same policy engine as network, filesystem, and credential requests, so there’s no separate surface to configure and no bypass path.
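A minimal sketch of that default-deny check, with an assumed policy shape rather than the gateway's real configuration format:

```python
# Assumed policy shape for illustration, not the actual MCP Gateway config format.
mcp_policy = {
    "allowed_servers": {
        "github": {"tools": ["list_issues", "create_pull_request"]},
        "postgres": {"tools": ["query_readonly"]},
    }
}

def authorize_tool_call(server: str, tool: str) -> bool:
    """Unapproved servers and tools are blocked by default."""
    rule = mcp_policy["allowed_servers"].get(server)
    return rule is not None and tool in rule["tools"]
```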

Role-based policy assignment. Different teams need different levels of access; a security research team will reasonably need broader MCP access than finance. Create policy groups, assign users through your IdP, and layer team-specific rules on top of organization-wide guardrails that can't be overridden. This scales to thousands of developers through existing SAML and SCIM integrations with no per-user setup.
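To illustrate the layering semantics, here is a sketch in which organization-wide denies always win over team-level grants. The structure is an assumption for illustration:

```python
# Illustrative layering: org-wide guardrails cannot be overridden by team rules.
org_guardrails = {"mcp_deny": {"shell-exec"}}
team_policies = {"security-research": {"mcp_allow": {"malware-sandbox", "shell-exec"}}}

def effective_mcp_allow(team: str) -> set:
    """Team grants apply only where no organization-wide deny exists."""
    return team_policies[team]["mcp_allow"] - org_guardrails["mcp_deny"]

print(effective_mcp_allow("security-research"))  # {'malware-sandbox'}; shell-exec stays blocked
```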

Audit and visibility. Every policy evaluation generates a structured event with user identity, timestamp, session context, and the rule that triggered the decision, and logs export cleanly to your existing SIEM and compliance systems. This is the evidence CISOs need to approve AI usage at scale rather than tolerate it under the table.
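For a sense of what one of these events could carry, here is an illustrative record; the exact field names are an assumption:

```python
# Illustrative audit event; field names are assumptions, not the real schema.
audit_event = {
    "timestamp": "2025-06-12T14:03:27Z",
    "user": "jane.doe@example.com",        # identity from the IdP
    "session_id": "sbx-7f3a",
    "surface": "network",                  # network | filesystem | credential | mcp
    "request": {"host": "api.github.com", "port": 443},
    "decision": "allow",
    "rule": "org/base-network-policy#allow-github",
}
# Records like this can be shipped to a SIEM as newline-delimited JSON.
```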

Automatic policy propagation. When a developer authenticates, their machine pulls the latest policy, and updates reach every device automatically. Admins define policy once and Docker enforces it everywhere.

What this unlocks

CISOs get the governance layer they've been missing and the confidence to approve agent usage at scale rather than block it. Platform teams get an easy way to set up governance: define a policy once and it's enforced everywhere with full auditability, which removes the operational burden of scaling AI adoption across the company. Developers get what agents promised in the first place: real speed and autonomy, with governance that stays out of the way. We built Docker AI Governance with these principles in mind: agents should be autonomous, and governance should be invisible.

Available today

Docker AI Governance is available now. If you’re a security leader trying to close the AI governance gap, or a platform team ready to roll out agents without compromising control, it was built for you.

Contact sales to learn more.

Source: https://blog.docker.com/feed/

Amazon Redshift launches RG instances powered by AWS Graviton

Amazon Redshift announces the general availability of RG instances, a new generation of provisioned cluster nodes powered by AWS Graviton processors. RG instances run data warehouse and data lake workloads up to 2.4x as fast as previous-generation RA3 instances, at 30% lower price per vCPU.

RG instances include Redshift's custom-built vectorized data lake query engine, which processes Apache Iceberg and Parquet data on your cluster nodes, enabling you to run SQL analytics across your data warehouse and data lake using a single engine. This eliminates the need for Redshift Spectrum's separate scanning fleet and its associated per-terabyte charges.

Whether you're running structured data warehouse workloads on Redshift Managed Storage or querying open-format data lake tables in Amazon S3, RG instances deliver significant performance improvements: up to 2.2x as fast as RA3 instances for data warehouse workloads, up to 2.4x as fast for Apache Iceberg queries, and up to 1.5x as fast for Parquet workloads.

The natively built data lake engine features a purpose-built I/O subsystem with smart prefetch, NVMe caching, vectorized Parquet scans, and advanced file- and partition-level pruning. Just-in-Time (JIT) Analyze delivers consistently fast queries without manual tuning, automatically collecting and updating table statistics as your data and workload patterns evolve. Intelligent NVMe caching keeps frequently accessed datasets close to compute, reducing round trips to your data lake for faster response times on repeated queries.

RG instances are available at launch in two instance sizes: rg.xlarge and rg.4xlarge. Existing RA3 clusters can migrate using Snapshot & Restore, Elastic Resize, or Classic Resize. RG instances are available with flexible pricing options, including On-Demand and 1-year and 3-year Reserved Instances with No Upfront payment. For pricing details, visit the Amazon Redshift pricing page.
Amazon Redshift RG instances are now available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Canada (Central), South America (São Paulo), Europe (Ireland), Europe (Frankfurt), Europe (London), Europe (Paris), Europe (Stockholm), Europe (Milan), Europe (Spain), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Mumbai), Asia Pacific (Jakarta), Asia Pacific (Hong Kong), Asia Pacific (Osaka), Asia Pacific (Malaysia), Asia Pacific (Hyderabad), Asia Pacific (Taiwan), and Asia Pacific (Melbourne).
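For example, an existing cluster can be moved to RG nodes with an elastic resize through the AWS SDK. A minimal boto3 sketch with a placeholder cluster identifier and node count; confirm the resize paths supported by your configuration in the RA3 to RG upgrade guide first:

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Kick off an elastic resize from an existing RA3 cluster to RG nodes.
# ClusterIdentifier and NumberOfNodes are placeholders for your own values.
response = redshift.resize_cluster(
    ClusterIdentifier="my-ra3-cluster",
    NodeType="rg.xlarge",
    NumberOfNodes=4,
    Classic=False,  # False requests an elastic resize; True would be a classic resize
)
print(response["Cluster"]["ClusterStatus"])  # e.g. "resizing"
```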
To get started, refer to the following resources:

Amazon Redshift RG Instance Documentation
RA3 to RG Upgrade Guide
Amazon Redshift Pricing

Source: aws.amazon.com

Karpenter now supports Amazon Application Recovery Controller zonal shift

Amazon Elastic Kubernetes Service (Amazon EKS) now supports Amazon Application Recovery Controller (ARC) zonal shift and zonal autoshift when using the open source Karpenter project for compute provisioning. ARC helps you manage and coordinate recovery for your applications across AWS Regions and Availability Zones (AZs). With this launch, you can better maintain Kubernetes application availability by automating the process of shifting in-cluster network traffic away from an impaired AZ.

Customers increasingly deploy highly available applications in Amazon EKS across multiple AZs to eliminate a single point of failure. With ARC zonal shift, you can temporarily mitigate an AZ impairment by redirecting in-cluster network traffic away from the impacted AZ. For a fully automated experience, authorize AWS to manage this on your behalf using ARC zonal autoshift, which includes practice runs to verify your cluster functions as expected with one less AZ.

When a zonal shift is activated for your EKS cluster, Karpenter stops provisioning new capacity in the impaired AZ, halts voluntary disruptions such as consolidation and drift for nodes in that AZ, and prevents voluntary disruptions in healthy zones if they depend on scheduling pods to the impaired zone. Pods with strict scheduling requirements such as volume affinities that require the impaired zone will not trigger launch attempts. When the zonal shift expires or is canceled, Karpenter resumes normal operations.

This Karpenter feature works with both manual zonal shifts and zonal autoshifts. No custom ARC resources are required, as Karpenter integrates directly with the existing EKS cluster ARC resource. To enable zonal shift support, set the ENABLE_ZONAL_SHIFT setting in your Karpenter settings. To learn more, visit the Karpenter documentation and the ARC zonal shift documentation.
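For reference, a manual zonal shift against the cluster's ARC resource can be started through the AWS SDK; Karpenter then reacts as described above. A minimal boto3 sketch with placeholder values:

```python
import boto3

arc = boto3.client("arc-zonal-shift", region_name="us-east-1")

# Shift in-cluster traffic away from an impaired AZ for a limited time.
# The cluster ARN and AZ ID are placeholders for your own values.
arc.start_zonal_shift(
    resourceIdentifier="arn:aws:eks:us-east-1:111122223333:cluster/my-cluster",
    awayFrom="use1-az1",                 # Availability Zone ID of the impaired AZ
    expiresIn="2h",                      # zonal shifts are temporary and expire on their own
    comment="Mitigating AZ impairment",
)
```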
Source: aws.amazon.com

Amazon SageMaker Feature Store now supports SageMaker Python SDK V3

Amazon SageMaker Feature Store now supports the SageMaker Python SDK v3, including new capabilities for Lake Formation access controls and Apache Iceberg table properties configuration. Feature Store is a fully managed repository to store, share, and manage features for machine learning models.

Data scientists can now use the modern, modular SDK v3 interfaces to manage feature groups with streamlined workflows, reduced boilerplate, fine-grained access control, and optimized offline storage. With Lake Formation integration, they can enforce column-level and row-level access control on offline store data through an opt-in setting at feature group creation. With Iceberg properties support, they can configure additional table properties, such as compaction and snapshot expiration, directly through the SDK to optimize storage and query performance. Together, these capabilities let data scientists govern access to feature data and tune offline store performance from a single SDK, without managing separate tools.

These capabilities are available in all AWS Regions where Amazon SageMaker Feature Store is available. To get started, install SageMaker Python SDK v3.8.0 or later. For more information, see the Lake Formation access controls and Iceberg metadata management documentation.
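As a rough sketch of the kind of call this enables, consider the following. The import, class, and keyword names are placeholders, not the SDK v3's actual interface, so consult the linked documentation for real signatures; the Iceberg property keys themselves are standard Apache Iceberg table properties.

```python
# Hypothetical sketch: the import, class, and keyword names are placeholders,
# not the actual SageMaker Python SDK v3 interface.
from sagemaker_v3_placeholder import FeatureGroup

fg = FeatureGroup.create(
    name="customer-features",
    record_identifier_name="customer_id",
    event_time_name="event_time",
    # Opt in to Lake Formation so column- and row-level permissions
    # apply to the offline store (illustrative flag name):
    lake_formation_access_control=True,
    # Standard Apache Iceberg table properties, passed through the SDK:
    iceberg_table_properties={
        "commit.manifest.target-size-bytes": "8388608",
        "history.expire.max-snapshot-age-ms": str(7 * 24 * 3600 * 1000),
    },
)
```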
Source: aws.amazon.com

Amazon EventBridge Scheduler adds 619 new SDK API actions, including Lambda Managed Instances

Amazon EventBridge Scheduler expands its AWS SDK integrations with 13 additional services and 619 new API actions across new and existing AWS services, including AWS Lambda Managed Instances. You can now schedule direct invocations of a broader set of AWS services without writing custom integration code.

EventBridge Scheduler is a serverless scheduler that allows you to create, run, and manage billions of scheduled events and tasks across more than 270 AWS services, without provisioning or managing the underlying infrastructure. With this expansion, you can schedule a broader set of AWS API actions directly from Scheduler, including scaling Lambda managed instances up or down on a time-based schedule for precise control over capacity provisioning.

These enhancements are now generally available in all AWS Regions where Amazon EventBridge Scheduler is available. Specific services and API actions are subject to the availability of the target service in the AWS Region. To learn more about Amazon EventBridge Scheduler SDK integrations, visit the Developer Guide.
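As an illustration of the universal-target pattern this builds on, here is a boto3 sketch that schedules a direct SDK API call. The role ARN is a placeholder, and the Lambda action shown is an illustrative stand-in rather than a Managed Instances-specific API:

```python
import json
import boto3

scheduler = boto3.client("scheduler", region_name="us-east-1")

# Universal target: the schedule invokes an AWS SDK API action directly,
# with no custom integration code in between.
scheduler.create_schedule(
    Name="weekday-morning-scale-up",
    ScheduleExpression="cron(0 8 ? * MON-FRI *)",  # weekday mornings at 08:00 UTC
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:scheduler:::aws-sdk:lambda:putProvisionedConcurrencyConfig",
        "RoleArn": "arn:aws:iam::111122223333:role/SchedulerInvokeRole",  # placeholder
        "Input": json.dumps({
            "FunctionName": "my-function",
            "Qualifier": "prod",
            "ProvisionedConcurrentExecutions": 10,
        }),
    },
)
```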
Source: aws.amazon.com