Amazon GameLift Servers expands instance support with next-generation EC2 instance families

Amazon GameLift Servers now supports Amazon EC2 5th through 8th generation instances, offering enhanced price-performance, efficiency, and flexibility for game server hosting. This update allows developers to leverage the latest advancements in EC2 compute, memory, and networking across three main instance families:

General Purpose (M-series): Balanced CPU, memory, and networking for a wide range of game workloads.
Compute Optimized (C-series): High-performance compute instances with a 2:1 memory-to-vCPU ratio, ideal for CPU-intensive game servers.
Memory Optimized (R-series): Optimized for high-memory workloads with an 8:1 memory-to-vCPU ratio, supporting complex simulations and large player sessions.

Each new EC2 generation brings significant improvements:

5th Gen: Proven reliability with Intel processors and balanced performance.
6th Gen: Adds AWS Graviton2 ARM-based options alongside Intel and AMD variants, offering improved price-performance efficiency.
7th Gen: Features DDR5 memory, enhanced networking, and significant performance gains over previous generations.
8th Gen: Cutting-edge AWS Graviton4 and Intel Xeon-based instances for the most demanding workloads.

Customers can also choose variants with local storage (d), enhanced networking (n), and different processor architectures (Intel "i", AMD "a", Graviton "g"). This update empowers developers with greater flexibility, scalability, and cost efficiency to optimize game server performance. Customers can now seamlessly transition workloads to newer EC2 generations, leveraging AWS’s continuous innovation for building, scaling, and operating multiplayer games globally. These next-generation instances are available in all AWS Regions supported by Amazon GameLift Servers, except AWS China. For more information on launching fleets with next-generation EC2 instances, visit the Amazon GameLift Servers documentation and the EC2 Instance Types overview.
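The variant suffixes can be read mechanically from the instance type name. A small shell sketch (a hypothetical helper, not an AWS tool) that decodes names like `c7gn.xlarge`:

```shell
# Decode an EC2 instance type name (e.g. "c7gn.xlarge") into its parts:
# first letter = family, digit = generation, remaining letters = variants.
decode_instance_type() {
  local itype="$1"
  local prefix="${itype%%.*}"   # e.g. c7gn
  local size="${itype#*.}"      # e.g. xlarge
  local family="${prefix:0:1}"  # m / c / r
  local gen="${prefix:1:1}"     # 5..8
  local attrs="${prefix:2}"     # variant letters: g, a, i, d, n
  local cpu="Intel"
  [[ "$attrs" == *g* ]] && cpu="Graviton"
  [[ "$attrs" == *a* ]] && cpu="AMD"
  local extras=""
  [[ "$attrs" == *d* ]] && extras+=" local-storage"
  [[ "$attrs" == *n* ]] && extras+=" enhanced-networking"
  echo "family=$family gen=$gen cpu=$cpu size=$size$extras"
}

decode_instance_type "c7gn.xlarge"  # family=c gen=7 cpu=Graviton size=xlarge enhanced-networking
decode_instance_type "m5.large"     # family=m gen=5 cpu=Intel size=large
```

This makes it easy to compare fleets at a glance when mixing generations and architectures.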
Source: aws.amazon.com

Amazon CloudWatch Logs now supports data protection, OpenSearch PPL and OpenSearch SQL for the Infrequent Access ingestion class

Amazon CloudWatch Logs now supports expanded analytics and data protection capabilities for the Infrequent Access (Logs IA) ingestion class, including support for data protection, OpenSearch’s Piped Processing Language (PPL), and OpenSearch SQL. These enhancements make it easier for customers to perform flexible analytics and protect sensitive data while cost-effectively consolidating all their logs natively on AWS, making Logs IA ideal for ad-hoc troubleshooting and forensic analysis on infrequently accessed logs.
Logs IA is a cost-effective ingestion class for consolidating logs that are queried occasionally, such as forensic investigations. Logs IA currently offers log analytics with Logs Insights Query Language, export to S3, and encryption with a lower ingestion price per GB compared to the Standard log class. With today’s launch, customers can now use OpenSearch SQL and OpenSearch PPL queries to perform advanced analytics. In addition, data protection allows customers to automatically detect and mask sensitive information in logs, helping organizations meet security and compliance requirements.
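For example, with PPL enabled on a Logs IA log group, an ad-hoc error breakdown could look like the following (the field names are illustrative, not from the announcement):

```
fields @timestamp, @message
| where statusCode >= 500
| stats count() as errors by span(@timestamp, 1h)
```

The same investigation could also be expressed in OpenSearch SQL, depending on which query language your team prefers.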
Learn more about CloudWatch Logs IA pricing and read the user guide. For Regional availability, visit the AWS Builder Center.
Source: aws.amazon.com

Amazon Timestream for InfluxDB Now Supports Advanced Metrics

Amazon Timestream for InfluxDB now offers Advanced Metrics, providing comprehensive visibility into your database performance and health. This new capability automatically publishes detailed operational metrics from your Timestream for InfluxDB 2 instances directly to Amazon CloudWatch, enabling real-time monitoring and alerting without requiring additional configuration or instrumentation, for both Single-AZ and Multi-AZ Timestream for InfluxDB 2 databases.
With Advanced Metrics, customers can track critical database performance indicators, set up custom dashboards, and configure automated alerts based on predefined thresholds. This enhanced observability helps DevOps teams quickly identify potential issues, optimize database performance, and ensure high availability for time-series applications by providing deeper insights into resource utilization, query performance, and system health across their InfluxDB 2 environments.
Amazon Timestream for InfluxDB Advanced Metrics is available in all Regions where Timestream for InfluxDB is offered. To get started, visit the Amazon Timestream for InfluxDB console. For more information, see the Amazon Timestream for InfluxDB documentation and pricing page.
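Because the metrics land in CloudWatch, alerting uses ordinary CloudWatch tooling. Here is a hedged CloudFormation sketch; the namespace, metric, and dimension names are placeholders, so take the real ones from the Advanced Metrics documentation:

```yaml
Resources:
  InfluxCpuAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      Namespace: AWS/Timestream/InfluxDB    # placeholder namespace
      MetricName: CPUUtilization            # placeholder metric name
      Dimensions:
        - Name: DbInstanceIdentifier        # placeholder dimension
          Value: my-influxdb-instance
      Statistic: Average
      Period: 300
      EvaluationPeriods: 3
      Threshold: 80
      ComparisonOperator: GreaterThanThreshold
```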
Source: aws.amazon.com

Building a News Roundup with Docker Agent, Docker Model Runner, and Skill

Hello, I’m Philippe, and I am a Principal Solutions Architect helping customers with their usage of Docker. I wanted a lightweight way to automate my IT news roundups without burning through AI credits. So I built a Docker Agent skill that uses the Brave Search API to fetch recent articles on a topic, then hands the results to a local model running with Docker Model Runner to analyze the stories and generate a Markdown report.

In this setup, Qwen3.5-4B handles the reasoning and skill invocation, while the skill itself does the retrieval work. The result is a simple local workflow for turning a prompt like “use news roundup skill with tiny language models” into a structured news brief I can save, review, and reuse.

It is a bit slower than doing the same thing with Claude Code, but that tradeoff works for me: I keep the workflow local, I save my Claude credits, and I get a practical example of how skills make Docker Agent more useful for repeatable tasks.

Prerequisites for building the news roundup:

Docker and Docker Compose, obviously.

A Brave Search account with an API key (there’s a free plan).

A local model that supports a large context window and knows how to do function calling.

I chose to use qwen3.5-4b from Qwen (I went with the Unsloth version), a 4-billion-parameter model optimized for text understanding and generation, with native support for a context window of up to 262,144 tokens.

I started my tests with qwen3.5-9b, but on my MacBook Air, it’s a bit slow and qwen3.5-4b does the job just fine.

Let’s get into the setup.

Step-by-step guide to building the news roundup

Step 1: Creating the Dockerfile

I used an ubuntu:22.04 base image and installed curl to make requests to the Brave Search API. I also copied the docker-agent binary from the docker/docker-agent:1.32.5 image to run the agents.

```dockerfile
FROM --platform=$BUILDPLATFORM docker/docker-agent:1.32.5 AS coding-agent

FROM --platform=$BUILDPLATFORM ubuntu:22.04 AS base

LABEL maintainer="@k33g_org"
ARG TARGETOS
ARG TARGETARCH

ARG USER_NAME=docker-agent-user

ARG DEBIAN_FRONTEND=noninteractive

ENV LANG=en_US.UTF-8
ENV LANGUAGE=en_US.UTF-8
ENV LC_COLLATE=C
ENV LC_CTYPE=en_US.UTF-8

# ------------------------------------
# Install Tools
# ------------------------------------
RUN <<EOF
apt-get update
apt-get install -y wget curl
apt-get clean autoclean
apt-get autoremove --yes
rm -rf /var/lib/{apt,dpkg,cache,log}/
EOF

# ------------------------------------
# Install docker-agent
# ------------------------------------
COPY --from=coding-agent /docker-agent /usr/local/bin/docker-agent

# ------------------------------------
# Create a new user
# ------------------------------------
RUN adduser ${USER_NAME}
# Set the working directory
WORKDIR /home/${USER_NAME}
# Set the user as the owner of the working directory
RUN chown -R ${USER_NAME}:${USER_NAME} /home/${USER_NAME}
# Switch to the regular user
USER ${USER_NAME}
```

Let’s move on to the agent configuration.

Step 2: Creating the Docker Agent configuration file

For the Docker Agent configuration, I defined a root agent using the brain model, which is an alias for qwen3.5-4b. I also enabled skills support (skills: true) and provided detailed instructions so the agent behaves like an expert IT journalist, capable of searching, analyzing, and summarizing the latest tech news.

For the toolsets, Docker Agent ships with some ready-to-use ones, but I preferred a script-type toolset with an execute_command that can run any shell command and capture its output. This gives me the flexibility to interact with the Brave Search API directly from shell commands, without having to implement specific tools for it — and most importantly, it keeps the agent’s instructions lightweight. 

```yaml
agents:
  root:
    model: brain
    description: News Roundup Expert
    skills: true
    instruction: |
      You are an expert IT journalist with deep knowledge of software engineering, cloud infrastructure, artificial intelligence, cybersecurity, and the open-source ecosystem.
      Your role is to gather, analyze, and summarize the latest technology news in a clear, accurate, and engaging way.
      You write for a technical audience and always provide context, highlight trends, and explain the impact of each piece of news.
    toolsets:
      - type: script
        shell:
          execute_command:
            description: Execute a shell command and return its stdout and stderr output.
            args:
              command:
                description: The shell command to execute.
            cmd: |
              bash -c "$command" 2>&1

models:
  brain:
    provider: dmr
    model: huggingface.co/unsloth/qwen3.5-4b-gguf:Q4_K_M
    temperature: 0.0
    top_p: 0.95
    presence_penalty: 1.5
    provider_opts:
      # llama.cpp flags
      runtime_flags: ["--context_size=65536"]
```
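To see what the `execute_command` tool actually does with a command string, here is the wrapper's behavior reproduced in plain bash (the `2>&1` is what lets the model read error output too):

```shell
# Reproduce the toolset's cmd: run an arbitrary command string and
# capture stdout and stderr together, exactly as the agent receives it.
command='echo out; echo err >&2'
output=$(bash -c "$command" 2>&1)
echo "$output"
```

Both streams come back as one string, so the model can reason about failures without needing a separate stderr channel.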

Now let’s look at the skill.

Step 3: Building the news roundup skill

I created a news-roundup skill that uses the Brave Search API to search for the latest news on a given topic, enriches each article with additional web searches, and generates a structured Markdown report.

Inside the .agents/skills folder, I created a news-roundup directory with a SKILL.md file that describes the skill in detail, with the steps to follow and the commands to execute at each step.

```
├── .agents
│   └── skills
│       └── news-roundup
│           └── SKILL.md
```

Here’s the content of SKILL.md:


---
name: news-roundup
description: search the news using Brave News Search API with a query as argument. Use this skill when the user asks to search for recent news or current events.
---

# News Roundup

## Purpose

Generate a comprehensive Markdown news report on a given topic (default: "small ai local models").

## Steps to follow

### Step 1 — Search for recent news

#### Command to execute

```bash
curl -s "https://api.search.brave.com/res/v1/news/search?q=$(echo "$ARGUMENTS_REST" | sed 's/ /+/g')&count=3&freshness=pw" \
  -H "X-Subscription-Token: ${BRAVE}" \
  -H "Accept: application/json"
```

### Step 2 — Enrich each article

For each article returned in Step 1, use the below command with the article URL to retrieve additional context and details.

#### Command to execute

```bash
curl -s "https://api.search.brave.com/res/v1/web/search?q=$(echo "$ARTICLE_URL" | sed 's/ /+/g')&count=10" \
  -H "X-Subscription-Token: ${BRAVE}" \
  -H "Accept: application/json"
```

### Step 3 — Generate the Markdown report

Using all the collected information, write a well-structured Markdown report.

The report must follow this structure:

```markdown
# IT News Report — {topic}

> Generated on {date}

## Summary

A short paragraph summarizing the main trends found across all articles.

## Articles

### {Article Title}

– **Source**: {source name}
– **URL**: {url}
– **Published**: {date}

{2-3 sentence summary of the article content and its significance for IT professionals}

(repeat for each article)

## Key Trends

A bullet list of the main technology trends identified across all articles.
```

Save the final report to `/workspace/data/news-report-{YYYYMMDD-HHMMSS}.md` using the `write_file` tool, where `{YYYYMMDD-HHMMSS}` is the current date and time (e.g. `news-report-20260318-143012.md`).
To get the current timestamp, run:

```bash
date +"%Y%m%d-%H%M%S"
```

All that’s left is to create the compose.yml file to launch the agent.

Step 4: Updating the compose.yml file

Here’s the content of compose.yml. 

Note: you’ll need a .env file with your Brave Search API key (e.g. BRAVE=abcdef1234567890).

```yaml
services:
  news-roundup:
    build:
      context: .
      dockerfile: Dockerfile
    stdin_open: true
    tty: true
    command: docker-agent run /workspace/config.yaml
    volumes:
      - ./config.yaml:/workspace/config.yaml:ro
      - ./.agents:/workspace/.agents:ro
      - ./data:/workspace/data
    working_dir: /workspace
    env_file:
      - .env
    models:
      qwen3.5-4b:

models:
  qwen3.5-4b:
    model: huggingface.co/unsloth/qwen3.5-4b-gguf:Q4_K_M
    context_size: 65536
```

And that’s it — everything we need to run our IT news roundup agent.

Step 5: Let’s test it out!

Just run the following command in your terminal:

```bash
docker compose run --rm --build news-roundup
```

And ask the agent:

use news roundup skill with tiny language models

The agent will then execute the news-roundup skill, query the Brave Search API, analyze the articles, and generate a Markdown report in the data folder. 

Note: this can take a little while, so feel free to grab a coffee (or get some work done).

The agent will detect that it needs to run tools (the curl commands from the news-roundup skill) — you can validate each command manually or let the agent run them automatically.

The agent will work for a few minutes, and at the end it will give you the path of the generated report, which you can open to read your personalized IT news roundup.

You can find examples of generated reports in the data folder of the project on this repository: https://codeberg.org/docker-agents/news-roundup/src/branch/main/data.

Final Thoughts 

That’s the full setup: a Docker Agent skill for news retrieval, the Brave Search API for fresh articles, and Docker Model Runner with Qwen3.5-4B for local analysis and report generation.

You now have a fully local IT news roundup agent. I have written a lot of content on use cases for local models, including context packaging and making small LLMs smarter. See you soon for more Docker Agent use cases with local language models!

Source: https://blog.docker.com/feed/

AWS Step Functions adds 28 new service integrations, including Amazon Bedrock AgentCore

AWS Step Functions expands its AWS SDK integrations with 28 additional services and over 1,100 new API actions across new and existing AWS services, including Amazon Bedrock AgentCore and Amazon S3 Vectors. This expansion enables you to orchestrate a broader set of AWS services directly from your workflows without writing integration code. AWS Step Functions is a visual workflow service capable of orchestrating over 220 AWS services to help customers build distributed applications at scale.
With the Amazon Bedrock AgentCore service integration, you can invoke AI agent runtimes with built-in retries, run multiple agents in parallel using Map states, and automate agent provisioning workflows that create, update, and tear down agent infrastructure as workflow steps. This expansion also includes Amazon S3 Vectors for automating document ingestion pipelines that populate knowledge bases for AI applications. It also adds support for AWS Lambda durable execution APIs, allowing you to pass an execution name for idempotent invocations of Lambda durable functions and manage durable executions directly from your workflows.
These enhancements are now generally available in all AWS Regions where AWS Step Functions is available. Specific services and API actions are subject to the availability of the target service in the AWS Region. To learn more about AWS Step Functions SDK integrations, visit the Developer Guide, or see the full list of supported services at AWS SDK service integrations.
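As a sketch of what the new integration can look like in Amazon States Language, here is a Task state invoking an AgentCore runtime. The resource ARN follows the standard `aws-sdk:serviceName:apiAction` pattern for SDK integrations, but the exact action and parameter names below are assumptions to verify against the Developer Guide:

```json
{
  "InvokeAgent": {
    "Type": "Task",
    "Resource": "arn:aws:states:::aws-sdk:bedrockagentcore:invokeAgentRuntime",
    "Parameters": {
      "AgentRuntimeArn": "arn:aws:bedrock-agentcore:us-east-1:111122223333:runtime/my-agent",
      "Payload.$": "$.prompt"
    },
    "Retry": [
      { "ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 3, "IntervalSeconds": 2 }
    ],
    "End": true
  }
}
```

Wrapping a state like this in a Map state is what enables the parallel multi-agent pattern the announcement describes.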
Source: aws.amazon.com

AWS HealthImaging announces study-level fine-grained access control

AWS HealthImaging now supports fine-grained access control, enabling organizations to securely manage access to medical imaging data at the DICOM study and series levels. Medical imaging workflows are typically organized around DICOM studies, which are stored in AWS HealthImaging as one or more image set resources. Now customers can easily grant users access to all image sets for a set of DICOM Studies or Series with easy-to-maintain IAM policies.
Customers can now grant permissions for DICOMweb APIs using DICOM Study Instance UIDs and Series Instance UIDs directly in their IAM policies, eliminating the need to list individual image set ARNs. Customers can now create dynamic, temporary access grants using AWS Security Token Service (STS) session policies with low-latency authentication. This capability provides enhanced protection for Protected Health Information (PHI) by scoping access grants to specific Studies or Series rather than entire data stores. This launch better supports use cases such as pathologist case-level access, radiology study sharing with external partners, and controlled research data distribution. To learn more, see the AWS HealthImaging Developer Guide.
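As an illustration of the pattern, a policy scoping access to a single study might look roughly like this. `medical-imaging` is the service's real IAM prefix, but the action wildcard and condition key name below are assumptions, so take the exact names from the Developer Guide:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "medical-imaging:*",
      "Resource": "arn:aws:medical-imaging:us-east-1:111122223333:datastore/*",
      "Condition": {
        "StringEquals": {
          "medical-imaging:DICOMStudyInstanceUID": "1.2.840.113619.2.312.12345"
        }
      }
    }
  ]
}
```

Attached as an STS session policy, a statement like this yields the temporary, study-scoped grants described above.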
AWS HealthImaging is a HIPAA-eligible service that empowers healthcare providers, life sciences organizations, and their software partners to store, analyze, and share medical images. AWS HealthImaging is generally available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), Europe (Ireland), and Europe (London). 
Source: aws.amazon.com

AWS Management Console now supports settings to control service and Region visibility

Today, AWS announces the general availability of the Visible services and Visible Regions account settings in the AWS Management Console. These settings allow you to customize which services and Regions appear in the Management Console for authorized users in your account, helping your users easily identify what is available to them and simplifying navigation. You can configure these settings in the AWS Management Console under Unified Settings in the Account Settings tab. You can also configure these settings programmatically via User Experience Customization (UXC) in the AWS Command Line Interface (CLI), AWS Software Development Kits (SDKs), AWS Cloud Development Kit (CDK), or AWS CloudFormation. The Visible services and Visible Regions settings are available in AWS Commercial Regions at no additional cost. Visit the AWS User Experience Customization documentation page and API guide to learn more.
Source: aws.amazon.com

AWS Lambda supports up to 32 GB of memory and 16 vCPUs for Lambda Managed Instances

AWS Lambda now supports up to 32 GB of memory and 16 vCPUs for functions running on Lambda Managed Instances, enabling customers to run compute-intensive workloads such as large-scale data processing, media transcoding, and scientific simulations without managing any infrastructure. Customers can also configure the memory-to-vCPU ratio — 2:1, 4:1, or 8:1 — to match the resource profile of their workload.
Lambda Managed Instances lets you run Lambda functions on managed Amazon EC2 instances with built-in routing, load balancing, and auto-scaling, giving you access to specialized compute configurations including the latest-generation processors and high-bandwidth networking, with no operational overhead.
Customers building compute-intensive applications such as data processing pipelines, high-throughput API backends, and batch computation workloads require substantial memory and CPU resources to process large datasets, serve low-latency responses at scale, and run complex computations efficiently. Previously, function execution environments on Lambda were limited to 10 GB of memory and approximately 6 vCPUs, with no option to customize the memory-to-vCPU ratio. Functions on Lambda Managed Instances can now be configured with up to 32 GB of memory and a choice of memory-to-vCPU ratio — 2:1, 4:1, or 8:1 — allowing customers to select the right balance of memory and compute for their workload. For example, at 32 GB of memory, customers can configure 16 vCPUs (2:1), 8 vCPUs (4:1), or 4 vCPUs (8:1) depending on whether their workload is CPU-intensive or memory-intensive.
This feature is available in all AWS Regions where Lambda Managed Instances is generally available. You can configure these settings using the AWS Console, AWS CLI, AWS CloudFormation, AWS CDK, or AWS SAM. To learn more, visit the AWS Lambda Managed Instances product page and documentation.
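The memory-to-vCPU arithmetic from the announcement is easy to sanity-check; a quick shell sketch:

```shell
# vCPU count for a given memory size (GB) and memory-to-vCPU ratio.
vcpus_for() { echo $(( $1 / $2 )); }

vcpus_for 32 2   # 2:1 ratio -> 16 vCPUs
vcpus_for 32 4   # 4:1 ratio -> 8 vCPUs
vcpus_for 32 8   # 8:1 ratio -> 4 vCPUs
```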
Source: aws.amazon.com