Amazon Neptune now supports reading S3 data using openCypher

Amazon Neptune now supports reading data from Amazon S3 within openCypher queries. Through the new `neptune.read()` procedure, customers can federate queries against external data stored in S3 rather than first loading that data into Neptune. Organizations using Neptune for graph analytics can now dynamically incorporate S3-stored data without the traditional multi-step load workflow.
Key use cases include real-time graph analytics that combine S3 data with existing graph structures, dynamic node and edge creation from external datasets, and complex graph queries requiring external reference data. The procedure supports comprehensive data types including standard and Neptune-specific formats such as geometry and datetime, while maintaining security through the caller’s IAM credentials.
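As an illustration, here is a hedged sketch of invoking the new procedure through Neptune's openCypher HTTPS endpoint. The `neptune.read()` name comes from the announcement, but its argument list (an S3 URI and a format hint) and the `YIELD row` shape are assumptions, not the documented signature; the cluster endpoint is a placeholder, and SigV4 request signing with the caller's IAM credentials is omitted for brevity.

```python
# Hypothetical query using the neptune.read() procedure named in the
# announcement; the arguments (S3 URI, format) are assumptions, not the
# documented signature.
QUERY = """
CALL neptune.read("s3://example-bucket/reference/airports.csv", "csv")
YIELD row
MERGE (a:Airport {code: row.code})
SET a.city = row.city
"""

def build_opencypher_request(endpoint: str, query: str) -> dict:
    """Build a POST request for Neptune's openCypher HTTPS endpoint,
    which accepts the query as a `query` form parameter on port 8182.

    In production the request must be SigV4-signed with the caller's
    IAM credentials (e.g. via requests-aws4auth); that is omitted here.
    """
    return {
        "url": f"https://{endpoint}:8182/openCypher",
        "data": {"query": query},
    }

request = build_opencypher_request(
    "my-cluster.cluster-xyz.us-east-1.neptune.amazonaws.com", QUERY
)
```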
Read from S3 is available in all AWS Regions where Amazon Neptune Database is offered. To learn more, see the Neptune Database documentation.
Source: aws.amazon.com

SageMaker HyperPod now supports idle resource sharing for dynamic cluster utilization

Amazon SageMaker HyperPod task governance now supports dynamic resource sharing, allowing teams to borrow unallocated compute capacity in HyperPod clusters beyond their guaranteed quotas. Administrators can also configure borrow limits for specific resource types, such as accelerators, vCPUs, or memory, to ensure fair distribution across teams.

Administrators running shared compute clusters for generative AI workloads often face underutilization challenges: when data scientists do not fully consume their allocated quotas, expensive compute instances sit idle. Idle resource sharing solves this by automatically identifying unallocated cluster capacity and making it available for teams to borrow on a best-effort basis. HyperPod task governance monitors cluster state and automatically recalculates borrowable resources as instances and compute quota policies change, eliminating manual configuration. Eligible instances in a ready and schedulable state, including instances with partitioned GPU configurations, contribute to the borrowable pool of unallocated compute capacity. Administrators can define absolute borrow limits in addition to percentage-based limits on idle compute. This helps administrators maximize compute utilization and maintain fine-grained control over how idle capacity is distributed across teams, while preserving each team's guaranteed compute quota isolation.

This capability is currently available for Amazon SageMaker HyperPod clusters using the EKS orchestrator in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Jakarta), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), Europe (Spain), and South America (São Paulo). To learn more, visit the SageMaker HyperPod webpage and the HyperPod task governance documentation.
Source: aws.amazon.com

Amazon CloudWatch Logs now supports log ingestion using HTTP-based protocol

Amazon CloudWatch Logs now supports HTTP Log Collector (HLC), ND-JSON, Structured JSON, and OpenTelemetry (OTEL) endpoints for sending logs over an HTTP-based protocol with bearer token authentication. With this launch, customers can ingest logs where AWS SDK integration is not feasible, such as with third-party or packaged software. The new endpoints are:

HTTP Log Collector (HLC) Logs (https://logs.<region>.amazonaws.com/services/collector/event) — for JSON events, ideal for migrating existing log pipelines.

ND-JSON Logs (https://logs.<region>.amazonaws.com/ingest/bulk) — for newline-delimited JSON, where each line is an independent log event. Perfect for high-volume streaming and bulk log ingestion. 

Structured JSON Logs (https://logs.<region>.amazonaws.com/ingest/json) — for a single JSON object or a JSON array of objects.

OpenTelemetry Logs (https://logs.<region>.amazonaws.com/v1/logs) — for OTLP-formatted logs in JSON or Protobuf encoding.
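The ND-JSON flow above can be sketched in Python. The endpoint URL and bearer token scheme come from the announcement; the exact Content-Type value and any headers beyond the token are assumptions, and the request is only constructed here, not sent.

```python
import json

def to_ndjson(events: list[dict]) -> str:
    """Serialize log events as newline-delimited JSON: one independent
    event per line, as the /ingest/bulk endpoint expects."""
    return "\n".join(json.dumps(e, separators=(",", ":")) for e in events)

def build_bulk_request(region: str, token: str, events: list[dict]) -> dict:
    """Build the POST request for the ND-JSON bulk endpoint.

    The Content-Type here is an assumption; the Authorization header
    follows the standard bearer token scheme named in the announcement.
    """
    return {
        "url": f"https://logs.{region}.amazonaws.com/ingest/bulk",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/x-ndjson",  # assumed media type
        },
        "body": to_ndjson(events),
    }

req = build_bulk_request(
    "us-east-1", "example-token",
    [{"message": "app started"}, {"message": "ready"}],
)
```

Sending `req` with any HTTP client (e.g. `requests.post(req["url"], headers=req["headers"], data=req["body"])`) completes the pipeline.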

To enable the HLC endpoint, navigate to CloudWatch Settings in the AWS Console and generate an API key. CloudWatch creates the necessary IAM user with service-specific credentials and permissions. API keys can be configured with expiration periods of 1, 5, 30, 90, or 365 days. Customers must enable bearer token authentication on each log group before it can accept logs, which protects against unintended ingestion. Customers can use service control policies to block the creation of service-specific credentials.
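As a sketch of the service control policy mentioned above, a minimal SCP denying `iam:CreateServiceSpecificCredential` (a real IAM action) would block these credentials organization-wide. Note this broad form blocks service-specific credentials for all services, not only CloudWatch; narrowing the scope would need conditions this sketch does not attempt.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyServiceSpecificCredentials",
      "Effect": "Deny",
      "Action": "iam:CreateServiceSpecificCredential",
      "Resource": "*"
    }
  ]
}
```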
These endpoints are available in the following AWS Regions: US East (N. Virginia), US West (N. California), US West (Oregon), and US East (Ohio). To learn more about the HLC endpoint and security best practices, refer to the CloudWatch Logs Documentation. 
Source: aws.amazon.com

Amazon CloudWatch introduces organization-wide EC2 detailed monitoring enablement

Amazon CloudWatch now allows customers to automatically enable Amazon Elastic Compute Cloud (EC2) detailed monitoring across their AWS Organization. Customers can create enablement rules in CloudWatch Ingestion that automatically enable detailed monitoring for both existing and newly launched EC2 instances matching the rule scope, ensuring consistent metrics collection at 1-minute intervals across their EC2 instances.
EC2 detailed monitoring enablement rules can be scoped to the whole organization, specific accounts, or specific resources based on resource tags to standardize the configuration across EC2 instances. For example, the central DevOps team can create an enablement rule to automatically turn on detailed monitoring for EC2 instances with specific tags, e.g., env:production, and ensure Auto Scaling policies respond quickly to changes in instance utilization.
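What such a tag-scoped rule automates can be approximated manually with the long-standing per-instance EC2 API (`monitor_instances` in boto3). The tag-matching logic below is an illustrative assumption about how rule scoping behaves, and the client is passed in so the sketch stays testable without AWS access.

```python
def matches_rule_scope(instance_tags: dict, rule_tags: dict) -> bool:
    """True if the instance carries every tag the enablement rule
    requires, e.g. {"env": "production"} (matching semantics assumed)."""
    return all(instance_tags.get(k) == v for k, v in rule_tags.items())

def enable_detailed_monitoring(ec2_client, instances: list[dict],
                               rule_tags: dict) -> list[str]:
    """Enable 1-minute detailed monitoring on every instance matching
    the tag scope. `instances` entries look like
    {"InstanceId": "...", "Tags": {...}} (a simplified shape).
    The new enablement rules do this automatically and continuously,
    including for newly launched instances."""
    matched = [i["InstanceId"] for i in instances
               if matches_rule_scope(i["Tags"], rule_tags)]
    if matched:
        # monitor_instances is the existing per-instance EC2 API call.
        ec2_client.monitor_instances(InstanceIds=matched)
    return matched
```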
CloudWatch’s auto-enablement capability is available in all AWS commercial regions. Detailed monitoring metrics will be billed according to CloudWatch Pricing.
To learn more about org-wide EC2 detailed monitoring enablement, visit the Amazon CloudWatch documentation.
Source: aws.amazon.com

Amazon Bedrock AgentCore Runtime now supports the AG-UI protocol

Amazon Bedrock AgentCore Runtime now supports the Agent-User Interaction (AG-UI) protocol, enabling developers to deploy AG-UI servers that deliver responsive, real-time agent experiences to user-facing applications. With AG-UI support, AgentCore Runtime handles authentication, session isolation, and scaling for AG-UI workloads, allowing developers to focus on building interactive frontends for their agents.
AG-UI is an open, event-based protocol that standardizes how AI agents communicate with user interfaces. It complements the existing Model Context Protocol (MCP) and Agent-to-Agent (A2A) protocol support in AgentCore Runtime. Where MCP provides agents with tools and A2A enables agent-to-agent communication, AG-UI brings agents into user-facing applications. Key capabilities include streaming text chunks, reasoning steps, and tool results to frontends as they happen; real-time state synchronization that can update UI elements such as progress bars and dashboards; structured tool call visualization that enables UIs to render agent actions transparently; and support for both Server-Sent Events (SSE) and WebSocket transport for bidirectional communication.
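Since AG-UI streams events to frontends over transports such as SSE, a minimal sketch of parsing a Server-Sent Events stream illustrates the event-based shape. The event names used here (e.g. `TEXT_MESSAGE_CONTENT`) are illustrative assumptions, not AG-UI's exact wire format; the SSE framing (blank-line-separated blocks with `event:` and `data:` fields) follows the standard.

```python
def parse_sse_events(stream: str) -> list[dict]:
    """Parse an SSE stream into {"event": ..., "data": ...} dicts.
    Events are separated by a blank line; each has an optional `event:`
    field and one or more `data:` lines (joined with newlines)."""
    events = []
    for block in stream.strip().split("\n\n"):
        event = {"event": "message", "data": []}  # SSE default event type
        for line in block.splitlines():
            if line.startswith("event:"):
                event["event"] = line[len("event:"):].strip()
            elif line.startswith("data:"):
                event["data"].append(line[len("data:"):].strip())
        event["data"] = "\n".join(event["data"])
        events.append(event)
    return events

# Two streamed text chunks, as an agent might emit them incrementally.
sample = (
    'event: TEXT_MESSAGE_CONTENT\n'
    'data: {"delta": "Hel"}\n'
    '\n'
    'event: TEXT_MESSAGE_CONTENT\n'
    'data: {"delta": "lo"}\n'
)
parsed = parse_sse_events(sample)
```

A frontend would append each `delta` to the visible message as events arrive, which is what makes the streaming experience feel responsive.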
AG-UI servers in AgentCore Runtime are supported across fourteen AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Canada (Central), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and Europe (Stockholm).
To learn more, see Deploy AG-UI servers in AgentCore Runtime.
Source: aws.amazon.com