AWS Glue zero-ETL integrations with Amazon DynamoDB as the source support new configurations

AWS Glue zero-ETL now supports configurable change data capture (CDC) refresh intervals and on-demand data ingestion for integrations with Amazon DynamoDB as the source. This enhancement lets you customize how frequently data changes are captured from your Amazon DynamoDB tables, with refresh intervals ranging from 15 minutes to 6 days, and trigger immediate data ingestion when needed. These capabilities bring zero-ETL integrations from Amazon DynamoDB sources to feature parity with zero-ETL integrations from SaaS sources such as Salesforce, SAP, and ServiceNow, ensuring consistent functionality across source types.

With configurable CDC refresh intervals, you can optimize your data pipeline performance by adjusting the frequency of change capture to match your business requirements, whether you need near real-time updates every 15 minutes or can work with longer intervals of up to 6 days to reduce costs. The on-demand ingestion capability allows you to capture critical data changes immediately, without waiting for the next scheduled CDC interval. This functionality is ideal for scenarios that require data to be immediately available for analytics, reporting, or downstream applications, and it helps strike a balance between data freshness requirements and operational efficiency.

These features are available today in all AWS Regions where AWS Glue zero-ETL is supported.
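As a toy illustration of the announced bounds, the following is a minimal sketch of the 15-minute-to-6-day window a CDC refresh interval must fall into. The helper is hypothetical and not part of any AWS SDK or the zero-ETL API; it only encodes the range stated above.

```python
from datetime import timedelta

# Hypothetical validation helper (not an AWS API): encodes the announced
# CDC refresh interval range of 15 minutes to 6 days.
MIN_INTERVAL = timedelta(minutes=15)
MAX_INTERVAL = timedelta(days=6)

def validate_refresh_interval(interval: timedelta) -> timedelta:
    """Return the interval unchanged if it lies within the supported range."""
    if not MIN_INTERVAL <= interval <= MAX_INTERVAL:
        raise ValueError(
            f"CDC refresh interval must be between {MIN_INTERVAL} "
            f"and {MAX_INTERVAL}, got {interval}"
        )
    return interval

# A valid hourly refresh passes through; 5 minutes would raise ValueError.
hourly = validate_refresh_interval(timedelta(hours=1))
```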
To get started with configuring CDC refresh intervals and on-demand ingestion for your Amazon DynamoDB integrations, see the AWS Glue User Guide. To learn more about AWS Glue zero-ETL integrations, visit the AWS Glue documentation. 
Source: aws.amazon.com

AWS CDK Mixins is now generally available

AWS announces the general availability of CDK Mixins, a new feature of the AWS Cloud Development Kit (CDK) that lets you add composable, reusable abstractions to any AWS construct, whether L1, L2, or custom, without rebuilding your existing infrastructure code. CDK Mixins are available through the aws-cdk-lib package and work across all construct types, giving you the flexibility to apply the right abstractions where and when you need them.

Previously, teams had to choose between immediate access to new AWS features using L1 constructs or the convenience of higher-level abstractions with L2 constructs, often requiring significant rework to meet security, compliance, or operational requirements. CDK Mixins simplify the maintenance of custom construct libraries: they let you apply features like auto-delete, bucket encryption, versioning, and block public access directly to constructs using a simple .with() syntax, combine multiple Mixins into custom L2 constructs, and apply compliance policies across an entire scope. Developers can use Mixins.of() for advanced resource-type or path-pattern filtering. Enterprise teams can now enforce reusable security and compliance policies across their infrastructure while maintaining day-one access to new AWS features.

CDK Mixins are available in all AWS Regions where AWS CloudFormation is supported.
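The .with() composition style described above follows the classic mixin pattern: small, reusable units of behavior layered onto an existing object without subclassing it. As a language-level illustration only, here is a self-contained Python sketch of that pattern; every name in it (Construct, Mixin, with_mixins, Versioned, Encrypted) is hypothetical, and the real feature is the Mixins API in aws-cdk-lib.

```python
# Self-contained sketch of the mixin composition pattern behind the
# .with() syntax described above. All names here are hypothetical;
# the actual feature lives in the aws-cdk-lib package.

class Construct:
    def __init__(self, name: str):
        self.name = name
        self.props: dict = {}

    def with_mixins(self, *mixins: "Mixin") -> "Construct":
        # Apply each mixin in order; each one layers extra properties
        # onto the construct without subclassing or rewriting it.
        for mixin in mixins:
            mixin.apply(self)
        return self

class Mixin:
    def apply(self, construct: Construct) -> None:
        raise NotImplementedError

class Versioned(Mixin):
    def apply(self, construct: Construct) -> None:
        construct.props["versioned"] = True

class Encrypted(Mixin):
    def __init__(self, algorithm: str = "aws:kms"):
        self.algorithm = algorithm

    def apply(self, construct: Construct) -> None:
        construct.props["encryption"] = self.algorithm

# Compose behaviors fluently instead of writing a new subclass per combination.
bucket = Construct("logs-bucket").with_mixins(Versioned(), Encrypted())
```

The payoff of the pattern is combinatorial: N independent mixins cover what would otherwise take up to 2^N hand-written construct variants.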
To get started with CDK Mixins, visit the AWS documentation.
Source: aws.amazon.com

AWS Private CA Connector for SCEP now supports AWS PrivateLink

AWS Private CA Connector for SCEP now supports AWS PrivateLink, allowing your clients to request certificates from within your Amazon Virtual Private Cloud (VPC) without traversing the public internet. With this launch, you can create VPC endpoints to connect to your SCEP connector privately, keeping all traffic within the AWS network.

AWS Private CA Connector for SCEP is a managed connector that enables you to use the Simple Certificate Enrollment Protocol (SCEP) to issue certificates from AWS Private Certificate Authority (CA). SCEP is widely used for automated certificate enrollment and renewal for mobile devices, network equipment, and IoT devices. AWS PrivateLink support simplifies network connectivity by eliminating the need for internet gateways, NAT devices, or VPN connections to access your SCEP connector endpoints, while helping you meet compliance requirements that mandate private connectivity for certificate management.

AWS PrivateLink support for AWS Private CA Connector for SCEP is available in all AWS Regions where the connector is available. For more information about Regional availability, see the AWS Region Table. To learn more and get started, visit the AWS Private CA Connector for SCEP documentation. For more information, refer to the AWS PrivateLink documentation.
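As a rough sketch of the setup described above, creating an interface VPC endpoint with the AWS CLI might look like the following. All resource IDs are placeholders, and the --service-name value is an assumption; look up the exact endpoint service name for your Region in the connector documentation.

```shell
# Create an interface VPC endpoint for the SCEP connector.
# All IDs are placeholders; the --service-name value is an assumption --
# confirm the endpoint service name for your Region in the docs.
aws ec2 create-vpc-endpoint \
  --vpc-endpoint-type Interface \
  --vpc-id vpc-0abc123 \
  --service-name com.amazonaws.us-east-1.pca-connector-scep \
  --subnet-ids subnet-0abc123 \
  --security-group-ids sg-0abc123 \
  --private-dns-enabled
```

With the endpoint in place, SCEP clients in the VPC resolve and reach the connector over private IPs, so no internet gateway or NAT path is needed.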
Source: aws.amazon.com

OpenSearch UI supports cross-account data access to OpenSearch domains

Amazon OpenSearch Service now supports cross-account data access, enabling users to access OpenSearch domains hosted in different AWS accounts from within a single OpenSearch UI application. With this feature, you can query or build dashboards with data from OpenSearch domains across different accounts in the same Region, without switching to a new endpoint or replicating data. Cross-account data access is available for OpenSearch domains hosted in both public and Virtual Private Cloud (VPC) configurations.

With cross-account data access, teams no longer need to consolidate data into a single account or maintain costly data pipelines to enable unified analysis across organizational boundaries. This makes it easier to build centralized observability, search, and security analytics workflows that span multiple AWS accounts while keeping data in place and maintaining each account's access controls. Cross-account data access supports both IAM (including SAML via IAM federation) and IAM Identity Center (IdC) for end-user authentication.

Cross-account data access to OpenSearch domains is available in all AWS Regions where OpenSearch UI is available. To learn more, see Cross-account data access to OpenSearch domains in the Amazon OpenSearch Service Developer Guide.
Source: aws.amazon.com

Flexibility Over Lock-In: The Enterprise Shift in Agent Strategy

Building agents is now a strategic priority for 95% of respondents in our latest State of Agentic AI research, which surveyed more than 800 developers and decision-makers worldwide. The shift is happening quickly: agent adoption has moved beyond experiments and demos into early operational maturity. But the road to enterprise-scale adoption is still complex. The foundations are forming, yet they remain far from the fully integrated, production-grade platforms that teams can confidently build on.

Security continues to surface as a top blocker to agent adoption. But it's not the only one. Technical complexity is rising fast as well, and vendor lock-in is a major concern for the vast majority of respondents surveyed.

So how do teams cut through the complexity and prepare for a world of multi-model, multi-tool, and multi-framework agents, while avoiding vendor lock-in in their agent workflows? In this blog, we break down the key findings from our research: what teams are actually using to power their agentic workloads, and what it takes to build a more scalable, future-ready agent architecture.

Multi-model and multi-cloud are the new normal. And complexity is rising

Our recent State of Agentic AI study found that enterprises are embracing multi-model and multi-cloud architectures to gain greater control over performance, customization, privacy, and compliance. Multi-model is now the norm: nearly two-thirds of organizations (61%) combine cloud-hosted and local models. And complexity doesn't stop there: 46% report using between four and six models within their agents, while just 2% rely on a single model.

Deployment environments are just as diverse: 79% of respondents operate agents across two or more environments, with 51% in public clouds, 40% on-premises, and 32% on serverless platforms.

This architectural flexibility delivers control, but it also multiplies orchestration and governance efforts. Coordinating models, tools, frameworks, and environments is consistently cited as one of the hardest parts of building agents. Nearly half of respondents (48%) identify operational complexity in managing multiple components as their biggest challenge, while 43% point to increased security exposure driven by orchestration sprawl.

The strategic shift away from vendor lock-in

As organizations double down on agent investments, concerns about supply chain fragility are rising. Seventy-six percent of global respondents report active worries about vendor lock-in.

Rather than consolidating, teams are responding by diversifying. They're distributing workloads across multiple models, tools, and cloud environments to reduce dependency and maintain leverage. Among the 61% of organizations using both cloud-hosted and locally hosted models, the primary drivers are control (64%), data privacy (60%), and compliance (54%). Cost ranks significantly lower at 41%, underscoring that flexibility and governance, not cost savings, are shaping architectural decisions.

Containers power the next wave of agent adoption

Containerization is already foundational to agent development. Nearly all organizations surveyed (94%) use containers in their agent development or production workflows, and the remainder plan to adopt them.

As agent initiatives scale, teams are extending the same cloud-native practices that power their application pipelines, such as microservices architectures, CI/CD, and container orchestration, to agent workloads. Containers are not an add-on; they are the operational backbone.

At the same time, early signs of orchestration standardization are emerging. Among teams building agents with Docker, 40% are using Docker Compose as their orchestration layer, a signal that familiar, container-based tooling is becoming a practical coordination layer for increasingly complex agent systems.
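As a minimal sketch of that pattern, a Compose file coordinating an agent loop, a locally hosted model, and a tool server might look like the following. Every service and image name here is illustrative, not a recommended stack.

```yaml
# Illustrative only: all service and image names are hypothetical.
services:
  agent:
    image: example/agent-runtime:latest    # the agent loop itself
    depends_on:
      - model
      - tools
    environment:
      MODEL_URL: http://model:8000         # Compose DNS resolves service names
      TOOLS_URL: http://tools:9000
  model:
    image: example/local-llm:latest        # locally hosted model endpoint
    ports:
      - "8000:8000"
  tools:
    image: example/tool-server:latest      # tool server the agent calls
```

The appeal is that the same `docker compose up` workflow teams already use for microservices gives each agent component an isolated container, a shared network, and declared startup ordering.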

The agentic future won’t be monolithic

The agentic future is already multi-cloud, multi-model, and multi-environment. That reality makes open standards and portable infrastructure foundational for sustaining enterprise trust and long-term flexibility.

What's needed next isn't reinvention but standardization around open, interoperable, and portable infrastructure: the flexibility to work across any model, tool, and agent framework; secure-by-default runtimes; consistent orchestration; and integrated policy controls. Teams that invest now in this container-based trust layer will move beyond isolated productivity gains to sustainable, enterprise-wide outcomes while reducing vendor lock-in risk.

Download the full State of Agentic AI report for more insights and recommendations on how to scale agents across the enterprise.

Join us on March 25, 2026, for a webinar where we’ll walk through the key findings and the strategies that can help you prioritize what comes next.

Learn more:

Get your copy of the latest State of Agentic AI report! 

Learn more about Docker’s AI solutions

Read more about why AI agents challenge existing governance approaches and explore a new framework designed for agentic AI.

Source: https://blog.docker.com/feed/