Announcing compute-optimized instance bundles for Amazon Lightsail

Amazon Lightsail now offers compute-optimized instance bundles with up to 72 vCPUs. The new instance bundles are available in 7 sizes with both IPv6-only and dual-stack networking types, and they support all Lightsail blueprints, including Linux and Windows operating system (OS) blueprints and application blueprints. You can create instances using the new bundles with pre-configured OS and application blueprints, including WordPress, cPanel & WHM, Plesk, Drupal, Magento, MEAN, LAMP, Node.js, Ruby on Rails, Amazon Linux, Ubuntu, CentOS, Debian, AlmaLinux, and Windows.
The new compute-optimized instances enable you to run compute-intensive workloads that require sustained, high CPU performance. These instances deliver consistent, dedicated CPU performance, ensuring your applications always have the full processing power they need. The new bundles are ideal for workloads such as batch processing, distributed analytics, high-performance web servers, scientific modeling, dedicated gaming servers, ad serving engines, video encoding, and CPU-intensive machine learning inference applications.
Amazon Lightsail is available in 15 AWS Regions including US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (London), Asia Pacific (Tokyo), and Asia Pacific (Jakarta). To get started, visit the Lightsail console. For pricing and other details, visit the Amazon Lightsail pricing page.
Source: aws.amazon.com

AWS Deadline Cloud now supports configurable job scheduling modes for queues

Today, AWS Deadline Cloud announces support for configurable job scheduling modes, giving you control over how workers are distributed across jobs in a queue. AWS Deadline Cloud is a fully managed service that simplifies render management for computer-generated 2D/3D graphics and visual effects for films, TV shows, commercials, games, and industrial design. Previously, all available workers were assigned to the highest-priority, earliest-submitted job first, which could delay feedback on other submitted jobs. You can now choose from three scheduling modes when creating or updating a queue: priority FIFO (the existing default behavior), priority balanced (workers are distributed evenly across all jobs at the highest priority level), and weighted balanced (jobs are weighted based on configurable parameters including priority, error count, submission time, and rendering task count). Priority balanced and weighted balanced scheduling modes enable artists to get immediate feedback on their submissions without waiting for earlier jobs to complete. Configurable job scheduling modes are available in all AWS Regions where AWS Deadline Cloud is supported. To get started, visit the Deadline Cloud developer guide.
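The difference between the three modes can be sketched as a small simulation. The function and field names below are illustrative assumptions, not the Deadline Cloud API, and the weighted mode uses priority alone as the weight, where the real service combines priority, error count, submission time, and rendering task count:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    priority: int   # higher value = higher priority
    submitted: int  # submission order; lower = earlier
    pending_tasks: int

def assign_workers(jobs, workers, mode):
    """Distribute available workers across queued jobs (illustrative only)."""
    grants = {j.name: 0 for j in jobs}
    active = [j for j in jobs if j.pending_tasks > 0]
    if not active:
        return grants
    if mode == "priority_fifo":
        # Default behavior: fill the highest-priority, earliest-submitted
        # job first, then move on to the next job in order.
        for j in sorted(active, key=lambda j: (-j.priority, j.submitted)):
            take = min(workers, j.pending_tasks)
            grants[j.name] = take
            workers -= take
            if workers == 0:
                break
    elif mode == "priority_balanced":
        # Spread workers evenly across all jobs at the top priority level.
        top = max(j.priority for j in active)
        tier = sorted((j for j in active if j.priority == top),
                      key=lambda j: j.submitted)
        for i in range(workers):
            grants[tier[i % len(tier)].name] += 1
    elif mode == "weighted_balanced":
        # Apportion workers proportionally to a per-job weight (largest-
        # remainder rounding). Here the weight is simply the priority.
        total = sum(j.priority for j in active)
        exact = {j.name: workers * j.priority / total for j in active}
        for name, share in exact.items():
            grants[name] = int(share)
        leftover = workers - sum(grants.values())
        for name in sorted(exact, key=lambda n: exact[n] - grants[n],
                           reverse=True)[:leftover]:
            grants[name] += 1
    return grants
```

With two equal-priority jobs and one low-priority job, `priority_fifo` sends every worker to the earliest high-priority job, while `priority_balanced` splits the workers between both high-priority jobs so each gets feedback immediately.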
Source: aws.amazon.com

Amazon CloudWatch launches OTel Container Insights for Amazon EKS (Preview)

Amazon CloudWatch introduces Container Insights with OpenTelemetry metrics for Amazon EKS, available in public preview. Building on the existing Container Insights experience, this capability provides deeper visibility into EKS clusters by collecting more metrics from widely adopted open source and AWS collectors and sending them to CloudWatch using the OpenTelemetry Protocol (OTLP). Each metric is automatically enriched with up to 150 descriptive labels, including Kubernetes metadata and customer-defined labels such as team, application, or business unit. Curated dashboards in the Container Insights console present cluster, node, and pod health with the ability to aggregate and filter metrics by instance type, availability zone, node group, or any custom label. For deeper analysis, customers can write queries using the Prometheus Query Language (PromQL) in CloudWatch Query Studio. The CloudWatch Observability EKS add-on provides one-click installation through the Amazon EKS console, or can be deployed through CloudFormation, CDK, or Terraform. The add-on automatically detects accelerated compute hardware including NVIDIA GPUs, Elastic Fabric Adapters, and AWS Trainium and Inferentia accelerators. For existing customers of the add-on, CloudWatch supports publishing both OpenTelemetry and existing Container Insights metrics at the same time. Container Insights with OpenTelemetry metrics is available in public preview in US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), Asia Pacific (Singapore), and Europe (Ireland). There is no charge for OpenTelemetry metrics from Container Insights during preview. To get started, see Container Insights with OpenTelemetry metrics for Amazon EKS in the Amazon CloudWatch documentation.
Source: aws.amazon.com

Amazon ElastiCache Serverless now supports IPv6 and dual stack connectivity

Amazon ElastiCache Serverless now supports IPv6 and dual stack connectivity, expanding beyond the previously available IPv4-only connectivity. This gives you greater flexibility in how your applications connect to your serverless caches.
When creating an ElastiCache Serverless cache, you can now choose from three network type options: IPv4, IPv6, or dual stack. With dual stack connectivity, your cache accepts connections over both IPv4 and IPv6 simultaneously, making it ideal for migrating to IPv6 gradually while maintaining backward compatibility with applications that connect over IPv4. IPv6 connectivity enables you to use IPv6-only subnets with your serverless caches, eliminating the need for IPv4 addresses and helping you meet compliance requirements for IPv6 adoption.
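From the client side, the network type determines which resolved addresses are usable. The helper below is a hypothetical sketch (not an AWS API) of how an application might filter a cache endpoint's DNS results by address family during a gradual IPv6 migration:

```python
import socket

# Map each ElastiCache network type to the address families a client
# should consider when connecting (illustrative mapping, not an AWS API).
FAMILIES = {
    "ipv4": {socket.AF_INET},
    "ipv6": {socket.AF_INET6},
    "dual_stack": {socket.AF_INET, socket.AF_INET6},
}

def candidate_endpoints(host, port, network_type="dual_stack"):
    """Resolve `host` and keep only addresses the cache's network type accepts."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    return [(family, sockaddr)
            for family, _type, _proto, _canonname, sockaddr in infos
            if family in FAMILIES[network_type]]
```

With a dual-stack cache, IPv6-capable hosts can connect over IPv6 first while existing IPv4 clients keep working; the hostname passed in would be your cache's endpoint (the port 6379 shown in tests is the conventional Redis/Valkey port).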
IPv6 and dual stack connectivity for ElastiCache Serverless is available in all AWS Regions, including the AWS GovCloud (US) Regions and the China Regions, at no additional charge. To learn more, visit the Amazon ElastiCache product page and see Choosing a network type for serverless caches in the Amazon ElastiCache documentation.
Source: aws.amazon.com

Docker Offload now Generally Available: The Full Power of Docker, for Every Developer, Everywhere

Docker Desktop is one of the most widely used developer tools in the world, yet for millions of enterprise developers, running it simply hasn’t been an option. The environments they rely on, such as virtual desktop infrastructure (VDI) platforms and managed desktops, often lack the resources or capabilities needed to run Docker Desktop.

As enterprises scaled to support remote and contractor teams, these environments became the default, effectively blocking many developers from using Docker Desktop altogether. This slowed teams down and cut developers off from faster builds, the latest Docker features, and meaningful productivity gains. As a result, teams were forced into expensive workarounds that are difficult to secure and painful to maintain. 

Today, that changes.

Docker Offload is a fully managed cloud service that moves the container engine into Docker’s secure cloud, allowing developers to run Docker from any environment without changing their existing workflows. As of today, Docker Offload is generally available.

What this means in practice is simple. Developers keep using the same terminal, the same docker run commands, and the same Docker Desktop UI they are already familiar with. The only thing that has changed is where the engine runs, and by moving it to the cloud, Docker Desktop now works in every environment that once blocked it.

How It Works

When you run Docker Offload, the container engine itself runs in Docker’s secure cloud. The developer opens Docker Desktop exactly as they always have: no configuration, no retraining, no reconfiguring applications for new tools. Containers run on Docker’s cloud infrastructure, and everything, including bind mounts, port forwarding, and Docker Compose, works exactly as it does locally.

Every connection runs over an encrypted tunnel on SOC 2 Certified infrastructure, and session activity is logged centrally, giving security teams the audit trail they already require without any changes to existing tooling, firewall rules, or endpoint policies. Every session runs in a temporary, isolated environment without data persistence, and closes cleanly.

What Can You Do With Docker Offload?

Run full Docker in any environment

Every Docker CLI command and every Docker Desktop feature works in VDI, locked-down laptops, remote workstations, and policy-restricted networks. Developers are productive from day one, using the exact CLI commands, workflows, and muscle memory they already have.

Same Infrastructure. New Capabilities. 

Offload deploys alongside your existing VDI infrastructure without touching a single piece of it. Infrastructure and platform teams get a clean drop-in: existing network segmentation, IAM boundaries, and access control policies all stay exactly in place. Centralized admin controls, SSO, and per-user access management are built in from day one. 

Keep security non-negotiable

Dedicated cloud sessions are destroyed at every session end, data stays clean, developer devices stay completely unaffected, and your security perimeter stays intact. Offload operates within your existing security architecture, not around it. SOC 2 Certified, with deployment options that scale from multi-tenant VM-level isolation up to a dedicated single-tenant VPC with private network connectivity for regulated environments.

Unblock developers in minutes

Offload detects constrained environments automatically and activates without developer configuration. Teams go from blocked to building without tickets, setup queues, or IT escalations. When nothing changes for the developer, adoption actually happens.

Current Deployment Options

Docker Offload is currently available in two deployment options.

Multi-Tenant provides VM-level isolation on Docker-managed infrastructure. It’s the fastest path for most enterprise teams: no ops overhead, no infrastructure to maintain, productive from the moment it’s enabled.

Single-Tenant provides a dedicated VPC with private network access, which is important for organizations in finance, healthcare, government, and other regulated industries. Traffic never traverses the public internet, meeting the network isolation requirements most regulated enterprises enforce as a baseline. For security architects evaluating data residency and compliance requirements, this is the deployment model built for you.

Docker Offload is an add-on to Docker Business, available now through Docker’s Sales Team.

Coming Soon

Today’s launch addresses the environment problem. Developers in managed and constrained environments can finally run Docker, without workarounds and without compromise. But we’re not stopping there. Also shipping this year:

Single-Tenant Bring-Your-Own-Cloud (BYOC): Compute runs in your cloud account, your data never leaves your environment, and SOC 2 Certified security stays intact. 

CI/CD Pipeline Integration: Bring Offload to GitHub Actions, GitLab CI, and Jenkins to give every developer the same Docker experience in CI as locally, with cloud-based pipeline compute.

GPU-backed instances: Unlocking AI/ML workloads in managed environments for the first time.

The Road Ahead

Development has outgrown the local machine. Docker Offload closes that gap. Infrastructure teams keep their architecture intact. Security teams get the compliance they require. Developers keep the workflows they know. The full power of Docker, for every developer, everywhere. 

This is just the beginning. Learn more about the power of Docker Offload, explore our Docker Offload Docs, and reach out to the Docker Sales Team to start your journey with Offload.

Source: https://blog.docker.com/feed/

Gemma 4 is Here: Now Available on Docker Hub

Docker Hub is quickly becoming the home for AI models, serving millions of developers and bringing together a curated lineup that spans lightweight edge models to high-performance LLMs, all packaged as OCI artifacts.

Today, we’re excited to welcome Gemma 4, the latest generation of lightweight, state-of-the-art open models. Built on the same technology behind Gemini, Gemma 4 introduces three architectures that scale from low-power efficiency to high-end server performance.

Because models are packaged as OCI artifacts, they behave just like containers: versioned, shareable, and instantly deployable, with no custom toolchains required. You can pull ready-to-run models from Docker Hub, push your own, integrate with any OCI registry, and plug everything directly into your existing CI/CD pipelines using familiar tooling for security, access control, and automation.

And this is just the start. Over the next few weeks, Gemma 4 support is coming to Docker Model Runner, so you will not only discover models on Hub but also run, manage, and deploy them directly from Docker Desktop with the same simplicity you expect from Docker.

Docker Hub’s growing GenAI catalog already includes popular models like IBM Granite, Llama, Mistral, Phi, and SolarLLM, alongside apps like JupyterHub and H2O.ai, plus essential tools for inference, optimization, and orchestration.

What Docker Brings to Gemma 4

Gemma 4 expands what efficient, high-performance models can do. Docker makes them simple to run, share, and scale anywhere.

Run efficiently at the edge: Smaller Gemma 4 variants are optimized for on-device performance. Docker enables consistent deployment across laptops, edge devices, and local environments.

Scale performance with ease: From sparse to dense architectures, you can run any model like a container, making it easy to scale across cloud or on-prem infrastructure. 

One command to get started: Gemma 4 is just one command away:

docker model pull gemma4

No proprietary download tools. No custom authentication flows. Just the same pull, tag, push, and deploy workflow you already use.

By bringing Gemma 4 to Docker Hub, you get powerful models with a familiar, production-ready workflow.

What’s New in Gemma 4?

Gemma 4 redefines what “small” models can do, with architectures optimized across multiple sizes and use cases:

Small & Efficient (E2B, E4B): Built for on-device performance with high throughput and low memory use.

Sparsely Activated (26B A4B): Mixture-of-Experts design delivers large-model quality with smaller-model speed.

Flagship Dense (31B): High-performance model with a 256K context window for long-context reasoning.

Key capabilities include multimodal support (text, image, audio), advanced reasoning with “thinking” tokens, and strong coding plus function-calling abilities.

Technical Specifications

| Model Name | Type | Total Params | Input Modalities | Context Window |
| --- | --- | --- | --- | --- |
| Gemma 4 E2B | Dense (Small) | 5.1B | Text, Vision, Audio | 128K |
| Gemma 4 E4B | Dense (Small) | 8.0B | Text, Vision, Audio | 128K |
| Gemma 4 26B A4B | MoE | 26.8B (3.8B active) | Text, Vision | 256K – 512K |
| Gemma 4 31B | Dense | 31.3B | Text, Vision | 256K – 512K |

Build the Future of AI with Docker Hub

The arrival of Gemma 4 on Docker Hub reinforces our commitment to making Docker Hub the best place to discover, share, and run AI models. Whether you are building a voice-activated mobile assistant or a large-scale document retrieval system, Docker Hub makes it simple to find the right model, pull it instantly, and run it anywhere.

Ready? Head over to Docker Hub to pull the models. Want to join the Docker Model Runner community? Please star, fork, and contribute to our GitHub repo.

Source: https://blog.docker.com/feed/