NVIDIA Nemotron 3 Super now available on Amazon Bedrock

Amazon Bedrock now supports NVIDIA Nemotron 3 Super, an open hybrid Mixture-of-Experts (MoE) model designed for complex multi-agent applications. Built for agentic workloads, Nemotron 3 Super delivers fast, cost-efficient inference, enabling AI agents to maintain focus and accuracy across long, multi-step tasks without losing context. Fully open with weights, datasets, and recipes, the model supports easy customization and secure deployment, making it well-suited for enterprises, startups, and individual developers building multi-agent workflows and advanced reasoning applications.
Amazon Bedrock gives customers access to Nemotron 3 Super through a single, fully managed API, with no infrastructure to provision or models to host. Bedrock’s serverless inference, built-in security controls, and compatibility with the OpenAI API specification make it easy to integrate Nemotron 3 Super into existing workflows and deploy at production scale with confidence.
NVIDIA Nemotron 3 Super is now available in Amazon Bedrock in select AWS Regions. For the full list of supported AWS Regions, refer to the documentation. To learn more and get started, visit the Amazon Bedrock console or the service documentation. To get started with the Amazon Bedrock OpenAI-compatible service endpoints, see the documentation.
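As a sketch of the OpenAI-compatible integration path described above, the snippet below builds a standard chat-completions payload and the Bedrock runtime endpoint URL it would be sent to. The model identifier and region are illustrative assumptions, not confirmed values, and the actual HTTP call is left as a comment so the sketch runs without AWS credentials.

```python
import json

# Assumed values for illustration only; check the Bedrock model catalog
# and your account's Region list for the real identifiers.
MODEL_ID = "nvidia.nemotron-3-super"  # hypothetical model ID
REGION = "us-east-1"                  # assumed Region


def build_chat_request(prompt: str, model: str = MODEL_ID) -> dict:
    """Build an OpenAI-style chat-completions payload for Bedrock's
    OpenAI-compatible endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }


endpoint = f"https://bedrock-runtime.{REGION}.amazonaws.com/openai/v1/chat/completions"
payload = json.dumps(build_chat_request("Summarize this ticket in one sentence."))
# POST `payload` to `endpoint` with an "Authorization: Bearer <API key>" header,
# e.g. by pointing the `openai` client's base_url at the /openai/v1 path above.
```

Because the payload follows the OpenAI chat-completions shape, existing OpenAI-client code can typically be redirected to Bedrock by changing only the base URL, credentials, and model ID.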
Source: aws.amazon.com

MiniMax M2.5 and GLM 5 models now available on Amazon Bedrock

Amazon Bedrock expands model selection for customers by adding support for GLM 5 and MiniMax M2.5.

GLM 5 is a frontier-class, general-purpose large language model optimized for complex systems engineering and long-horizon agentic tasks. It builds on the agent-centric GLM 4.5 lineage and is designed to support multi-step reasoning, math (including AIME-style benchmarks), advanced coding, and tool-augmented workflows, with long-context support suitable for sophisticated agents and enterprise applications.

MiniMax M2.5 is an agent-native frontier model trained explicitly to reason efficiently, decompose tasks optimally, and complete complex workflows under real-world time and cost constraints. It achieves task-completion speeds comparable to or faster than leading proprietary frontier models by combining high inference throughput with reinforcement learning focused on token-efficient reasoning and better decision-making in agentic scaffolds.
MiniMax M2.5 and GLM 5 are now available in Amazon Bedrock across select AWS Regions. For the full list of available AWS Regions, refer to the documentation.
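For readers wondering what invoking one of these models looks like, here is a minimal sketch using Bedrock's Converse API request shape. The model ID is a hypothetical placeholder (look up the exact identifier in the Bedrock model catalog), and the boto3 call is commented out so the sketch runs without credentials.

```python
# Sketch of a Converse API request for one of the newly added models.
# "zai.glm-5" is an assumed placeholder, not a confirmed model ID.
MODEL_ID = "zai.glm-5"


def build_converse_kwargs(model_id: str, user_text: str) -> dict:
    """Keyword arguments for the bedrock-runtime client's converse() call."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": user_text}]}],
        "inferenceConfig": {"maxTokens": 1024, "temperature": 0.2},
    }


kwargs = build_converse_kwargs(MODEL_ID, "Plan the steps to refactor this module.")
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.converse(**kwargs)
# print(response["output"]["message"]["content"][0]["text"])
```

Because Converse uses one uniform request shape across Bedrock models, switching between GLM 5 and MiniMax M2.5 would only require changing `modelId`.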
Source: aws.amazon.com

From the Captain’s Chair: Naga Santhosh Reddy Vootukuri

Docker Captains are leaders from the developer community who are both experts in their field and passionate about sharing their Docker knowledge with others. “From the Captain’s Chair” is a blog series where we take a closer look at one Captain to learn more about them and their experiences.

Today we are interviewing Naga Santhosh Reddy Vootukuri, known by his nickname Sunny. Sunny is a Principal Software Engineering Manager in Microsoft’s Azure SQL organization with 17+ years of experience building scalable, distributed cloud systems. He’s also a Dapr Meteor and an open-source contributor to Dapr and Microcks, both highly recognized CNCF projects.

Sunny is also an IEEE Senior Member who helps run various IEEE conferences in Seattle, presents workshops, and is a regular conference speaker sharing his expertise on cloud computing, microservices, Docker, and AI-related topics. He regularly blogs at DZone as an MVB core member on topics ranging from Docker and GitHub Actions to cloud-native microservices and Dapr, and he has also published three books on topics including Azure Container Apps, Aspire, and GitHub Copilot.

Can you share how you first got involved with Docker?

My Docker journey began back in 2016 during my time in Shanghai, China. I had just moved from Microsoft India to Microsoft Shanghai in 2015 to join the SQL Server Integration Services team, which builds a core ETL product. Being an expat, I was searching for local community events to try out networking. During one of the local meetups, an engineer from Alibaba or Tencent (I don’t remember exactly) presented a talk on Docker, and I remember he said that as a developer you can forget using this sentence as an excuse with your test teams: “It works on my machine.” I was so fascinated by his talk and demos that I wanted to read up and go hands-on with Docker and Docker Desktop (the timing was also perfect, since Docker Desktop for Windows support had recently launched). Since then, Docker has become part of my DNA.

What inspired you to become a Docker Captain?

I think my love of sharing knowledge and building a stronger community is what got me started with writing blogs and speaking at conferences. At a conference where I was presenting on Docker, I met a few friends who were Docker Captains, and they told me about the Docker Captains program and the perks that come with it (from talking to product teams and trying out new features first-hand to traveling to summits). I applied as soon as I came back home. It took more than a month to receive an email for a Captain’s interview, and I hope I impressed Eva Bojorges (Docker community lead) with my passion and my contributions to the Docker community. I’m super happy to (soonish) complete one year as a Docker Captain and am looking forward to many more years.

I was super elated when Docker invited me to their Captains Summit in Istanbul (2025), as I was on their top-20 list of active contributors month over month. The trip was a memorable one, as I met the Docker product team as well as talented Docker Captains from across the world. I also can’t forget my first hot air balloon ride (my friend from Germany took that pic while I was busy with my GoPro).

What are some of your personal goals for the year 2026?

There are a few interesting goals I have set aside to challenge myself:

Writing a couple more technical books. I have finished three books in the last two years, and two more are currently in the proposal stage, with the expected titles “Docker Loves AI” and “Building Enterprise Copilots Using Copilot Studio” (anyone reading, please don’t steal these titles lol). I don’t know which one I will start first, but both are my personal projects for 2026.

I am currently working on proposals to speak at a couple of really big conferences, mainly about Docker and the open source projects I am involved in. I am also the technical committee chair for a couple of IEEE conferences. Hopefully I end 2026 on a big note.

A cross-country road trip to the best beaches on the West Coast.

If you weren’t working in tech, what would you be doing instead?

I would have been a cricketer, maybe? My love of cricket started when I was six years old as an escape from home, and it has lasted till now. I still play in domestic leagues in Seattle. Even when I was working in China, I used to play for local clubs in Shanghai with people from different countries. I don’t know whether I would have excelled at cricket in a parallel universe (I guess we will never know), but my love for it is unconditional.

The pic below was taken right after a game we lost in the semifinals of a local domestic league, but we were still in high spirits for trying till the last minute (easy guess that blue is my fav color :P).

Can you share a memorable story from collaborating with the Docker community?

The Docker community is one of the most active and vibrant communities, where we always encourage each other and cheer each other’s successes. I still remember the day I was warmly welcomed into the Slack group as a new Captain and got immediate help on a Friday evening when I was having issues with Docker Model Runner. My best memory is sitting in the hotel lounge with other Docker Captains at midnight in Türkiye after a boat party, talking about everything from Docker to startups for 3-4 hours.

What’s your favorite Docker product or feature right now, and why?

My favorite is the Docker agent framework, cagent. Around its release, I was playing with it first-hand when it was shared in our Captains’ group. I immediately saw the potential to integrate it with GitHub Models to avoid vendor lock-in when building AI agents. I spoke to the product team and explained what exactly GitHub Models is and how it could be integrated into the product, since it also supports the OpenAI standards. It was a useful chat with the Docker team lead (Djordje Lukic), and within a couple of hours we had a new release with GitHub Models integration.

I also wrote a blog post (https://www.docker.com/blog/configure-cagent-github-models/) on this integration and why everyone should give it a try without worrying about spending money on API developer keys.

Can you walk us through a tricky technical challenge you solved recently?

When I was giving AI-related workshops at some colleges in South India, they mentioned that some of the popular Microsoft open source repositories lacked local-language translations. Many colleges there still teach in the students’ mother tongue, and that hit me hard, so I spent 3-4 weekends implementing it. We now have support for all the South Indian languages (Telugu, Tamil, Kannada, Malayalam) in the Microsoft open source repositories (100K+ GitHub stars). Check out:

https://github.com/microsoft/ML-For-Beginners

https://github.com/microsoft/AI-For-Beginners

What’s one Docker tip you wish every developer knew?

With the current AI world we are living in, it’s super easy to generate Dockerfiles, but the Docker DX VS Code extension (https://marketplace.visualstudio.com/items?itemName=docker.docker) makes it easy to live-debug them and figure out any issues. It’s a must-have tool in your arsenal.

If you could containerize any non-technical object in real life, what would it be and why?

If I had powers, I would containerize work sessions. Imagine a perfectly isolated, containerized work environment that shields you from distractions, whether you are at the office, at home, or on a cruise.

Where can people find you online?

I am always active on LinkedIn and share my knowledge on my blog.

Rapid Fire Questions

Cats or Dogs?

Dogs

Morning person or night owl?

Morning Person (4 am)

Favorite comfort food?

Hyderabadi Spicy Dum Biryani

One word friends would use to describe you?

Energetic

A hobby you picked up recently?

Learning Spanish on Duolingo

Source: https://blog.docker.com/feed/

Amazon Inspector expands agentless EC2 scanning and introduces Windows KB-based findings

Amazon Inspector now offers expanded agentless EC2 scanning with enhanced detection coverage, including new support for Windows operating system vulnerability scanning without requiring an agent. Security teams and IT administrators can now detect vulnerabilities across a broader range of software and applications on their EC2 instances (including WordPress, Apache HTTP Server, Python packages, and Ruby gems), as well as Windows OS vulnerabilities, all through agentless scanning. Customers automatically receive findings for newly supported software and applications with no configuration changes required.
Amazon Inspector is also introducing Windows Knowledge Base (KB)-based findings for Windows OS vulnerabilities. Rather than receiving a separate finding for each CVE addressed by a single Microsoft patch, customers now receive a single consolidated KB finding that groups all related CVEs together. Each KB finding surfaces the highest CVSS score, EPSS score, and exploit availability from its constituent CVEs, and includes a direct link to the relevant Microsoft KB article, making it straightforward to understand exactly which patch to apply and why. All existing CVE-based Windows OS findings will automatically transition to KB-based findings, and customers do not need to take any additional action.
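To make the consolidation rule concrete, here is an illustrative sketch (not Inspector’s actual implementation) of how several CVE findings resolved by one Microsoft patch collapse into a single KB finding that surfaces the worst-case scores. The KB number and CVE entries are made-up sample data.

```python
def consolidate_kb_finding(kb_id: str, cve_findings: list[dict]) -> dict:
    """Group CVE-level findings under one KB, keeping the highest CVSS score,
    highest EPSS score, and whether any constituent CVE has a known exploit."""
    return {
        "kb": kb_id,
        "cves": [f["cve"] for f in cve_findings],
        "cvss": max(f["cvss"] for f in cve_findings),
        "epss": max(f["epss"] for f in cve_findings),
        "exploit_available": any(f["exploit_available"] for f in cve_findings),
    }


# Two sample CVEs addressed by the same (made-up) patch:
finding = consolidate_kb_finding("KB5034441", [
    {"cve": "CVE-2024-0001", "cvss": 7.8, "epss": 0.12, "exploit_available": False},
    {"cve": "CVE-2024-0002", "cvss": 9.1, "epss": 0.03, "exploit_available": True},
])
# finding["cvss"] is 9.1, finding["epss"] is 0.12, and
# finding["exploit_available"] is True: the worst case across both CVEs.
```

The point of taking the maximum per field, rather than the values from any single CVE, is that the consolidated finding never understates the risk of skipping the patch.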
Both capabilities are available in all AWS Regions where Amazon Inspector is available. To learn more, visit the Amazon Inspector product page and the Amazon Inspector documentation. 
Source: aws.amazon.com

Amazon EC2 C8a instances now available in the Asia Pacific (Tokyo) Region

Starting today, the compute-optimized Amazon EC2 C8a instances are available in the Asia Pacific (Tokyo) Region. C8a instances are powered by 5th Gen AMD EPYC processors (formerly code-named Turin) with a maximum frequency of 4.5 GHz, delivering up to 30% higher performance and up to 19% better price-performance compared to C7a instances. C8a instances also deliver 33% more memory bandwidth than C7a instances, making them ideal for latency-sensitive workloads, and they are up to 57% faster for GroovyJVM, allowing better response times for Java-based applications.

C8a instances offer 12 sizes, including 2 bare-metal sizes, allowing customers to precisely match their workload requirements. Built on the AWS Nitro System, they are ideal for high-performance, compute-intensive workloads such as batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.

To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. For more information, visit the Amazon EC2 C8a instance page.
Source: aws.amazon.com