AWS Backup announces PrivateLink support for SAP HANA on AWS

AWS Backup now supports AWS PrivateLink for SAP HANA systems running on Amazon EC2. This enables customers to route all backup traffic through private network connections without traversing the public internet, helping organizations meet security and compliance requirements for regulated workloads.
Customers in regulated industries such as financial services, healthcare, and government agencies often require that all traffic remain on private networks. Previously, while SAP HANA application workloads could use AWS PrivateLink for secure, private communication with AWS services, backup traffic to AWS Backup had to traverse public endpoints. With this release, you can now use AWS PrivateLink for AWS Backup storage endpoints, ensuring your SAP HANA workloads on EC2 maintain end-to-end private connectivity for both application traffic and backup data. This helps organizations subject to HIPAA, EU/US Privacy Shield, and PCI DSS regulations implement fully private data protection strategies.
This feature is available in all AWS Regions where AWS Backup supports SAP HANA databases on EC2. To get started, update your Backint agent and add the backup-storage VPC endpoint (VPCE) to your VPC.
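The "get started" step above can be sketched with the AWS CLI. Everything below is a placeholder or assumption: the resource IDs are dummies, and the exact backup-storage endpoint service name should be verified in the AWS Backup documentation for your Region.

```shell
# Hedged sketch: create an interface VPC endpoint for AWS Backup.
# Service name and all resource IDs are placeholders/assumptions --
# check the AWS Backup docs for the backup-storage service name
# in your Region before running.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name "com.amazonaws.eu-central-1.backup" \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --private-dns-enabled
```

With the endpoint in place, the updated Backint agent can reach AWS Backup over the private network rather than public endpoints.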
Source: aws.amazon.com

Amazon EC2 M7i instances are now available in the Israel (Tel Aviv) Region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M7i instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in the Israel (Tel Aviv) Region. These custom processors, available only on AWS, offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers. M7i instances deliver up to 15% better price performance compared to M6i instances. M7i instances are a great choice for workloads that need the largest instance sizes or continuous high CPU usage, such as gaming servers, CPU-based machine learning (ML), and video streaming. M7i instances offer larger instance sizes, up to 48xlarge, and two bare-metal sizes (metal-24xl, metal-48xl). These bare-metal sizes support built-in Intel accelerators: Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology, which facilitate efficient offload and acceleration of data operations and optimize performance for these workloads. To learn more, visit the Amazon EC2 M7i Instances page. To get started, see the AWS Management Console.
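As a hedged sketch, launching an M7i instance in the new Region from the AWS CLI might look like the following; the AMI ID, subnet, and key-pair name are placeholders you would replace with your own.

```shell
# Hedged sketch: launch an M7i instance in the Israel (Tel Aviv)
# Region (il-central-1). AMI ID, subnet, and key-pair name are
# placeholders, not real resources.
aws ec2 run-instances \
  --region il-central-1 \
  --instance-type m7i.large \
  --image-id ami-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0 \
  --key-name my-key \
  --count 1
```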
Source: aws.amazon.com

Announcing new high performance computing Amazon EC2 Hpc8a instances

AWS announces Amazon EC2 Hpc8a instances, the next generation of high performance computing (HPC) optimized instances, powered by 5th Gen AMD EPYC processors (formerly code-named Turin). With a maximum frequency of 4.5 GHz, Hpc8a instances deliver up to 40% higher performance and up to 25% better price performance compared to Hpc7a instances, helping customers accelerate compute-intensive workloads while optimizing costs. Built on the latest sixth-generation AWS Nitro Cards, Hpc8a instances are designed for compute-intensive, latency-sensitive HPC workloads. They are ideal for tightly coupled applications such as computational fluid dynamics (CFD), weather forecasting, explicit finite element analysis (FEA), and multiphysics simulations that require fast inter-node communication and consistent high performance. Hpc8a instances feature 192 cores, 768 GiB of memory, and 300 Gbps of Elastic Fabric Adapter (EFA) network bandwidth, enabling fast, low-latency cluster scaling for large-scale HPC workloads. Compared to Hpc7a instances, Hpc8a instances also provide up to 42% higher memory bandwidth, further improving performance for memory-intensive simulations and scientific computing workloads. Hpc8a instances are available today in US East (Ohio) and Europe (Stockholm). Customers can purchase Hpc8a instances via Savings Plans or On-Demand. To get started, sign in to the AWS Management Console. For more information, visit the Amazon EC2 Hpc8a instance page or the AWS News Blog.
Source: aws.amazon.com

AWS HealthImaging launches additional metrics for monitoring data stores

AWS HealthImaging has launched additional metrics through Amazon CloudWatch that enable monitoring of storage at the account and data store levels. These new metrics help customers better understand their medical imaging storage and growth trends over time. HealthImaging now provides customers with granular CloudWatch metrics to monitor their data stores: customers can track storage volume, the number of image sets, and the number of DICOM studies, series, and instances. These metrics provide the insights needed to manage both single-tenant and multi-tenant workloads at petabyte scale. To learn more, visit Using Amazon CloudWatch with HealthImaging. AWS HealthImaging is a HIPAA-eligible service that empowers healthcare providers and their software partners to store, analyze, and share medical images. AWS HealthImaging is generally available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland).
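As a sketch, the new data store metrics can be discovered through the CloudWatch CLI. The namespace and dimension name below are assumptions, and the data store ID is a placeholder; verify both against the "Using Amazon CloudWatch with HealthImaging" documentation.

```shell
# Hedged sketch: list CloudWatch metrics published for a HealthImaging
# data store. Namespace and dimension name are assumptions; the
# data store ID is a placeholder.
aws cloudwatch list-metrics \
  --namespace "AWS/HealthImaging" \
  --dimensions Name=DatastoreId,Value=12345678901234567890123456789012
```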
Source: aws.amazon.com

Running NanoClaw in a Docker Shell Sandbox

Ever wanted to run a personal AI assistant that monitors your WhatsApp messages 24/7, but worried about giving it access to your entire system? Docker Sandboxes’ new shell sandbox type is the perfect solution. In this post, I’ll show you how to run NanoClaw, a lightweight Claude-powered WhatsApp assistant, inside a secure, isolated Docker sandbox.

What is the Shell Sandbox?

Docker Sandboxes provides pre-configured environments for running AI coding agents like Claude Code, Gemini CLI, and others. But what if you want to run a different agent or tool that isn’t built-in? That’s where the shell sandbox comes in. It’s a minimal sandbox that drops you into an interactive bash shell inside an isolated microVM. No pre-installed agent, no opinions — just a clean Ubuntu environment with Node.js, Python, git, and common dev tools. You install whatever you need.

Why Run NanoClaw in a Sandbox?

NanoClaw already runs its agents in containers, so it’s security-conscious by design. But running the entire NanoClaw process inside a Docker sandbox adds another layer:

Filesystem isolation – NanoClaw can only see the workspace directory you mount, not your home directory

Credential management – API keys are injected via Docker’s proxy, never stored inside the sandbox

Clean environment – No conflicts with your host’s Node.js version or global packages

Disposability – Nuke it and start fresh anytime with docker sandbox rm

Prerequisites

Docker Desktop installed and running

Docker Sandboxes CLI (docker sandbox command available; v0.12.0, available in the nightly build as of Feb 13)

An Anthropic API key set in an environment variable

Setting It Up

Create the sandbox

Pick a directory on your host that will be mounted as the workspace inside the sandbox. This is the only part of your filesystem the sandbox can see:

mkdir -p ~/nanoclaw-workspace
docker sandbox create --name nanoclaw shell ~/nanoclaw-workspace

Connect to it

docker sandbox run nanoclaw

You’re now inside the sandbox – an Ubuntu shell running in an isolated VM. Everything from here on happens inside the sandbox.

Install Claude Code

The shell sandbox comes with Node.js 20 pre-installed, so we can install Claude Code directly via npm:

npm install -g @anthropic-ai/claude-code

Configure the API key

This is the one extra step needed in a shell sandbox. The built-in claude sandbox type does this automatically, but since we’re in a plain shell, we need to tell Claude Code to get its API key from Docker’s credential proxy:

mkdir -p ~/.claude && cat > ~/.claude/settings.json << 'EOF'
{
"apiKeyHelper": "echo proxy-managed",
"defaultMode": "bypassPermissions",
"bypassPermissionsModeAccepted": true
}
EOF

What this does: apiKeyHelper tells Claude Code to run echo proxy-managed to get its API key. The sandbox’s network proxy intercepts outgoing API calls and swaps this sentinel value for your real Anthropic key, so the actual key never exists inside the sandbox.
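Before launching Claude Code, you can sanity-check the file you just wrote from the sandbox shell. This assumes the `~/.claude/settings.json` path from the step above and uses Python (which the shell sandbox includes) to validate the JSON:

```shell
# Sanity check (assumed path from the step above): confirm the settings
# file is valid JSON and contains the proxy-managed sentinel helper.
python3 -m json.tool ~/.claude/settings.json > /dev/null && \
grep -q '"apiKeyHelper": "echo proxy-managed"' ~/.claude/settings.json && \
echo "settings.json OK"
```

If either check fails, re-run the heredoc above; a stray quote or comma in settings.json will otherwise make Claude Code silently fall back to prompting for a key.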

Clone NanoClaw and install dependencies

cd ~/workspace
git clone https://github.com/†/nanoclaw.git
cd nanoclaw
npm install

Run Claude and set up NanoClaw

NanoClaw uses Claude Code for its initial setup – configuring WhatsApp authentication, the database, and the container runtime:

claude

Once Claude starts, run /setup and follow the prompts. Claude will walk you through scanning a WhatsApp QR code and configuring everything else.

Start NanoClaw

After setup completes, start the assistant:

npm start

NanoClaw is now running and listening for WhatsApp messages inside the sandbox.

Managing the Sandbox

# List all sandboxes
docker sandbox ls

# Stop the sandbox (stops NanoClaw too)
docker sandbox stop nanoclaw

# Start it again
docker sandbox start nanoclaw

# Remove it entirely
docker sandbox rm nanoclaw

What Else Could You Run?

The shell sandbox isn’t specific to NanoClaw. Anything that runs on Linux and talks to AI APIs is a good fit:

Custom agents built with the Claude Agent SDK or any other AI agent: Claude Code, Codex, GitHub Copilot, OpenCode, Kiro, and more.

AI-powered bots and automation scripts

Experimental tools you don’t want running on your host

The pattern is always the same: create a sandbox, install what you need, configure credentials via the proxy, and run it.

docker sandbox create --name my-shell shell ~/my-workspace
docker sandbox run my-shell

Source: https://blog.docker.com/feed/