Build AI apps with Azure Cosmos DB: Key trends from Cosmos Conf 2026

Every year, Azure Cosmos DB Conf offers a window into how modern applications are built—not in theory, but in production at global scale.

This year, the key theme from Cosmos Conf was clear: AI is not just another workload. It is fundamentally reshaping how applications—and data platforms—are built.

In the opening keynote, VP of Azure Cosmos DB Kirill Gavrylyuk described three key shifts driving this transformation, and we saw them play out across every customer story at the event.

Discover how Azure Cosmos DB powers AI app development

The three AI shifts reshaping application architecture with Azure Cosmos DB

AI is making flexible, semi-structured data foundational

AI applications don’t operate on rigid schemas. They operate on prompts, memory, and context, all of which are inherently semi-structured and evolving over time.

This fundamentally changes how databases must behave.

Data platforms are no longer just systems of record—they are becoming systems of reasoning, where flexibility is critical to how applications learn, adapt, and generate outcomes.
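That flexibility can be illustrated with a minimal sketch. Plain Python dicts stand in for JSON documents, and the field names are invented for illustration: later documents gain AI-context fields (conversation memory, an embedding) without any schema migration, and readers tolerate their absence.

```python
# Minimal sketch of schema-flexible, semi-structured documents.
# Field names ("memory", "embedding") are illustrative only.

def upsert(store: dict, doc: dict) -> None:
    """Store a document as-is; no schema is enforced."""
    store[doc["id"]] = doc

store: dict = {}

# An early document: just a user profile.
upsert(store, {"id": "u1", "name": "Ada"})

# A later document in the same collection gains AI context fields
# with no migration step.
upsert(store, {
    "id": "u2",
    "name": "Grace",
    "memory": ["asked about shipping times"],
    "embedding": [0.12, -0.4, 0.9],
})

# Readers tolerate missing fields instead of failing on them.
for doc in store.values():
    print(doc["id"], "memory turns:", len(doc.get("memory", [])))
```

The design choice is that evolution happens per document, not per collection, which is what lets applications learn and adapt without migration downtime.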

AI is dramatically accelerating the pace of development

AI, especially coding agents, is changing how software is built.

Developers are:

Iterating faster

Shipping more frequently

Scaling from zero to massive usage instantly

As Kirill highlighted, developers can no longer be constrained by strict schemas. Flexibility isn’t just a convenience—it’s what enables teams to move at AI speed. Databases need to meet this demand with a serverless form factor, instant and limitless scalability, advanced integrated caching, and agent-friendly interfaces.

Semantic search is becoming a first-class query operator

The third shift is just as important:

AI applications require:

Vector search

Full-text search

Hybrid search

Semantic ranking

These are no longer “add-ons.” They are core to how modern applications function.

Across Cosmos DB Conf, we saw a clear pattern: teams are building applications where retrieval, reasoning, and real-time context are tightly integrated.
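One common way retrieval systems fuse vector and full-text results into a single hybrid ranking is Reciprocal Rank Fusion (RRF). The toy sketch below uses made-up document IDs and rankings, not any specific service’s API, to show the idea: documents that rank well in both lists rise to the top.

```python
# Toy hybrid search: fuse a vector ranking and a keyword ranking with
# Reciprocal Rank Fusion. Document IDs and rankings are invented.

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked lists; each item scores sum(1 / (k + rank))."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc3", "doc1", "doc7"]   # nearest neighbors by embedding
keyword_hits = ["doc1", "doc9", "doc3"]  # best full-text matches

fused = rrf([vector_hits, keyword_hits])
print(fused)  # doc1 and doc3 lead: each appears in both lists
```

Because RRF only needs ranks, not comparable scores, it combines retrieval signals with very different scales (cosine distance versus BM25-style text scores) without normalization.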

OpenAI: Flexibility at planet scale

These shifts are most visible in what organizations like OpenAI are building.

Speaking at Cosmos Conf, Jon Lee of OpenAI described how the company operates at massive scale—processing trillions of transactions and petabytes of data—and reinforced that what matters most is not just scale, but the ability to evolve quickly.

Watch how OpenAI approaches database design at scale

As Jon shared, modern systems must be able to:

Scale instantly from zero to massive usage.

Support schema-less design for rapid onboarding.

Enable thousands of developers to iterate simultaneously.

“The most important thing… is being able to scale from zero to millions of QPS, being able to scale from zero bytes to petabytes,” explained Jon, adding that speed and flexibility go together.

“We have thousands of developers that are actively building products… it’s really important to make it easy to onboard to databases really fast.”

This is exactly the world Kirill described: AI systems demand flexible data models that evolve as fast as the applications themselves.

This highlights how Azure Cosmos DB supports dynamically evolving, large-scale AI workloads.

Vercel: The rise of serverless, AI-native applications

If OpenAI shows what’s possible at scale, Vercel shows how the shape of applications is changing.

As Guillermo Rauch, CEO of Vercel, explained, AI is dramatically expanding who can build software—from millions of developers to potentially billions of creators, many of whom are using agents to generate applications on demand. Kirill underscored this point in his keynote when he stated that more than half of Azure Cosmos DB customers are already using coding agents in their development workflows.

Watch how Vercel approaches building AI‑powered applications

According to Guillermo, this is driving a structural shift toward:

Serverless architectures

Ephemeral applications

Instant scaling from zero to viral

Data platforms must keep up. To support this pace, platforms need to provide:

Built-in best practices (data modeling, partitioning, and optimization).

Intelligent guidance (agent skills and automation).

Real-time feedback on performance and cost.

Speaking on why he turned to Azure Cosmos DB, Guillermo said, “I wanted a system that gave me an economical thinking where the developer writes a query and they understand its cost.”

Developers need immediate feedback on the cost of their decisions, making efficiency a built-in design principle, not an afterthought.
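That feedback loop can be sketched in miniature: every query returns its result together with a charge, so cost is visible at the call site. The charge model below (one unit per document scanned) is invented for illustration; real services report an actual per-request charge in their response metadata.

```python
# Sketch of per-query cost feedback. The charge model (one unit per
# document scanned) is a made-up stand-in for request units.

from dataclasses import dataclass

@dataclass
class ChargedResult:
    items: list
    charge: float  # stand-in for a request-unit charge

def run_query(docs: list[dict], predicate) -> ChargedResult:
    """Filter docs, billing one unit per document scanned."""
    hits = [d for d in docs if predicate(d)]
    return ChargedResult(items=hits, charge=float(len(docs)))

docs = [{"id": i, "category": "a" if i % 2 else "b"} for i in range(10)]
res = run_query(docs, lambda d: d["category"] == "a")
print(len(res.items), "hits for", res.charge, "units")
```

Returning the charge alongside the result, rather than hiding it in logs, is what makes efficiency a design-time concern: a full scan is visibly more expensive than an indexed lookup the moment the query is written.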

This reflects a broader shift toward AI-native apps built on globally distributed, serverless data platforms like Azure Cosmos DB.

Walmart: Reliability and performance at scale

While AI is transforming how applications are built, one thing hasn’t changed: performance and reliability remain mission-critical.

As Kirill emphasized, AI does not remove the need for reliability, security, and performance.

In fact, it raises the bar. This was reinforced in sessions like Walmart’s, where Technical Fellow Sid Anand explained that large-scale applications must:

Deliver low-latency experiences globally.

Remain available through regional failures.

Maintain consistent performance at massive scale.

Watch how Walmart approaches global e‑commerce at scale

“We want people to be able to add to their cart and view cart no matter what is happening in a given cloud region…and we need all of these interactions to be low latency because any type of latency friction will cause a drop-off,” said Sid.

From gigabytes to petabytes, from hundreds to trillions of transactions, modern systems must operate seamlessly under unpredictable demand.

These requirements align with how Azure Cosmos DB is designed for global distribution and low latency at scale.

Cost efficiency becomes a core design principle

A final takeaway from Cosmos Conf: as systems grow more complex, cost becomes just as important as scale.

Across the keynote and sessions, we saw a clear shift:

Developers need cost visibility in real time.

Architects need to design for efficiency upfront.

Teams want to consolidate platforms and reduce complexity.

This is where innovations like Azure DocumentDB come into focus.

As highlighted in the keynote, Azure DocumentDB offers more than 40% lower cost than alternatives and enables high performance with simplified architecture. It also supports open-source, multi-cloud portability scenarios. The result is a broader choice for builders:

Azure Cosmos DB → for global scale, serverless, five-nines reliability.

Azure DocumentDB → for cost efficiency, flexibility, open ecosystem.

Design and architecture examples that developers can start building now

Beyond the keynote, there were a number of demo-driven sessions at Cosmos Conf across app architectures, repeatable patterns, and best practices for building and scaling AI-enabled solutions.

For example, Farah Abdou, a lead machine learning engineer at startup SmartServe, shared how her team rebuilt their architecture using Azure Cosmos DB as a unified “agent memory fabric.” By combining vector search for semantic caching, change feed for event-driven coordination, and optimistic concurrency for conflict prevention, they were able to reduce costs, enable sub-100ms agent handoffs, and eliminate state conflicts.

Another topic we get asked about a lot is how users protect and govern their AI applications. Pamela Fox, a Microsoft Principal Cloud Advocate, walked through how to build secure, multi-user AI systems using the Model Context Protocol (MCP). By authenticating users with Entra ID and storing per-user data in Azure Cosmos DB, she enabled role-based access with Microsoft Graph, and practical development workflows using tools like VS Code and GitHub Copilot.
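The per-user isolation in that pattern can be sketched as follows (an illustrative in-memory store, not the actual demo code): documents carry the owner’s ID as their partition key, and every read path requires a caller identity, so one user’s query can never return another user’s documents.

```python
# Sketch of per-user data isolation via partition keys. Class and field
# names are illustrative, not a real SDK.

from collections import defaultdict

class PerUserStore:
    def __init__(self):
        # partition key (user id) -> that user's documents
        self._partitions: dict[str, list[dict]] = defaultdict(list)

    def add(self, user_id: str, doc: dict) -> None:
        self._partitions[user_id].append({**doc, "userId": user_id})

    def query(self, caller_id: str) -> list[dict]:
        """Reads are always scoped to the caller's own partition."""
        return list(self._partitions[caller_id])

store = PerUserStore()
store.add("alice", {"note": "quarterly numbers"})
store.add("bob", {"note": "draft blog post"})

print([d["note"] for d in store.query("alice")])
```

In a real multi-user system the `caller_id` would come from a validated identity token rather than a parameter, but the scoping rule is the same: the partition key doubles as the access boundary.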

From these hands-on patterns to large-scale production systems, the lesson was consistent: teams are designing for scale, efficiency, and real-world usage from day one.

Key takeaways 

AI applications require flexible, schema-agnostic data models. 

Serverless and instant scalability are becoming default expectations. 

Semantic and vector search are now core to application design. 

Cost visibility and efficiency must be designed upfront. 

Building for what’s next

We’re entering a new era of application development. Apps are becoming AI-native, globally distributed, and continuously evolving.

And success will depend on how well organizations align to these shifts.

The most forward-thinking teams we heard from at Cosmos Conf are already doing this by:

Designing for flexibility.

Building for speed, not just scale.

Treating cost and performance as key concerns.

Leveraging AI not just in apps, but in how apps are built.

This isn’t just a technology shift.

It’s a shift in how we think about building software.

Explore Cosmos DB Conf on demand

If you missed Cosmos Conf 2026, you can explore all sessions on demand and hear directly from the teams building these systems in production today.

The patterns shared this year are more than best practices; they’re a blueprint for what comes next.

Start building AI apps with Azure Cosmos DB
Design scalable, AI-native applications with a globally distributed database built for speed, flexibility, and real-time insights.

Explore Azure Cosmos DB

The post Build AI apps with Azure Cosmos DB: Key trends from Cosmos Conf 2026 appeared first on Microsoft Azure Blog.

Red Hat Summit 2026: Platform modernization and AI on Microsoft Azure Red Hat OpenShift

At Red Hat Summit 2026, Microsoft and Red Hat highlight how Microsoft Azure Red Hat OpenShift supports modernization and production AI workloads—helping organizations move from AI pilots to production systems with consistent governance, security, and scale.

Explore Microsoft and Red Hat’s platform modernization

Red Hat Ecosystem Innovation Award for Platform Modernization

Microsoft was recognized as the Platform Modernization Partner of the Year in the 2026 Red Hat Ecosystem Innovation Awards and received an honorable mention for North American Hybrid Cloud Everywhere. These awards highlight partners whose collaboration with Red Hat delivers measurable customer outcomes through open, enterprise-grade platforms.

As AI moves from pilot projects to production systems, the challenge shifts from building models to operating them with consistent identity, governance, and security through integration with Microsoft Azure services.

At the center of this recognition is Banco Bradesco, one of the largest financial institutions in Latin America. Operating at massive scale with strict regulatory and security requirements, Banco Bradesco has moved beyond AI experimentation to production by building on Azure Red Hat OpenShift.

Azure Red Hat OpenShift serves as the security-focused, scalable foundation for Banco Bradesco’s enterprise AI platform, unifying governance across more than 200 AI initiatives through integration with Azure identity, security, and policy capabilities. This is production AI on a jointly supported, enterprise-ready platform, not a proof of concept.


Beyond large-scale financial institutions like Banco Bradesco, Topicus demonstrates how this approach applies across regional lenders and regulated markets. Its Akkuro lending platform runs on Azure Red Hat OpenShift, providing a consistent, enterprise Kubernetes foundation for document-driven credit decisioning in regulated environments.

By deploying in Switzerland North, Topicus keeps financial data in-country to meet sovereignty requirements and maintains a repeatable deployment model across regions. Built on a fully managed service jointly operated by Microsoft and Red Hat, Akkuro allows lenders to scale document intelligence workflows and maintain compliance and operational control.

Customers are increasingly seeking a single platform to run applications and AI, with a consistent application environment integrated with Azure AI and governance services.

Platform advancements across modernization, security, and AI

Azure Red Hat OpenShift provides a consistent, enterprise-ready foundation for running applications and AI at scale. The latest advancements focus on four priority areas:

Modernization

Security

AI innovation

Global expansion

Customers don’t just need features—they need to trust the platform running their most critical workloads. That’s what Microsoft and Red Hat deliver through Azure Red Hat OpenShift: a jointly engineered, enterprise-ready foundation.
—Aaron Isom, Technical Cloud Strategist, Red Hat

Migration and modernization with Red Hat OpenShift Virtualization

For many enterprises, the immediate priority is moving off legacy virtualization platforms without disrupting existing workloads, while maintaining flexibility in how and when to modernize. OpenShift Virtualization on Azure Red Hat OpenShift allows virtual machines and containers to run side-by-side on a single managed platform.

This provides a practical path to migrate existing workloads without rearchitecting and transition those workloads to Kubernetes over time. Integrated RHEL entitlements and Azure Hybrid Benefit eligibility further simplify licensing during modernization.

Securing workloads on Azure Red Hat OpenShift

As sensitive applications and data move to the cloud, enterprises need to trust that their most sensitive workloads can run securely across regions while meeting strict regulatory and sovereignty requirements. Azure Red Hat OpenShift applies a Zero Trust approach with built-in identity and confidential computing capabilities to enable that trust.

Confidential Containers on Azure Red Hat OpenShift protect sensitive data in use through hardware‑backed isolation, enabling secure processing of regulated workloads without exposing plaintext data to the underlying infrastructure.

Managed Identities and Workload Identities on Azure Red Hat OpenShift are generally available, standardizing credential management across both platform operations and application workloads.

At the platform layer, Azure Red Hat OpenShift operators use scoped, user-assigned managed identities aligned with Azure role-based access control (Azure RBAC). At the application layer, workload identity provides secure access to Azure services through OpenID Connect (OIDC) federation, eliminating the need for long-lived secrets embedded in code or configuration.

As environments scale, managing credentials manually introduces operational overhead and security risk. Adopting identity-based access reduces credential sprawl, improves security posture, and aligns with Zero Trust principles across distributed applications.

These capabilities reduce credential risk, strengthen security posture, and support compliance requirements in regulated industries.

AI innovation on Azure Red Hat OpenShift

Azure Red Hat OpenShift provides a consistent platform for running AI applications across hybrid and multi-cloud environments, with Azure delivering the AI, identity, and governance services needed to operationalize them.

Customers can run AI capabilities directly on Azure Red Hat OpenShift, using Red Hat OpenShift AI, or integrate with Azure AI services and Microsoft Foundry to accelerate development and scale.

Expanded NVIDIA GPU support enables large-scale inference and data-intensive workloads to run on a fully managed Red Hat OpenShift platform backed by Azure infrastructure.

Expanded regional availability

Where workloads run is as important as how they run. Data residency, sovereignty, and latency requirements often shape platform decisions, particularly in regulated industries.

Azure Red Hat OpenShift continues to expand globally, including recent availability in Mexico Central, New Zealand North, Malaysia West, Indonesia Central, and Austria East.

Modern applications and AI capabilities can run closer to users and data, meeting local compliance requirements and maintaining consistency across regions.

The bigger picture

Across virtualization, security, identity, AI infrastructure, and global expansion, these advancements show how enterprises are standardizing on a single platform for applications and AI at production scale. As AI becomes part of core business operations, consistency in governance, identity, and operational control is becoming a requirement for enterprise platforms.

For customers like Banco Bradesco, Azure Red Hat OpenShift powers production AI platforms that meet the demands of scale, security, and operational reliability.

We are bringing AI-powered banking to millions of Brazilians, so performance at scale is nonnegotiable. The developed solution gives us the speed and resilience to power AI-powered banking at scale. It is a reliable foundation.
—Rafael Romualdo Wandresen, Senior Bridge Manager, Banco Bradesco

Join Microsoft at Red Hat Summit 2026

See you at Red Hat Summit 2026. Stop by to connect with Microsoft and Red Hat and see how Azure Red Hat OpenShift supports VMware modernization and production AI workloads on a single platform, and how to get started on Azure.

Azure Red Hat OpenShift at Red Hat Summit 2026

Explore how customers modernize with Azure Red Hat OpenShift

Explore how customers adopt AI on Azure Red Hat OpenShift

Watch the Banco Bradesco customer story

See how Topicus has adopted Microsoft Foundry on Azure Red Hat OpenShift

Announcing the winners of the 2026 Red Hat Ecosystem Innovation Awards

Learn more about Azure Red Hat OpenShift

The post Red Hat Summit 2026: Platform modernization and AI on Microsoft Azure Red Hat OpenShift appeared first on Microsoft Azure Blog.

Advancing enterprise AI: New SAP on Azure announcements from SAP Sapphire 2026


AI is reshaping how enterprises operate, make decisions, and innovate at scale

Together, Microsoft and SAP are helping enterprises transform operations, decision-making, and innovation at scale on Azure.

At SAP Sapphire 2026, Microsoft and SAP continue to build on a deep, decades-long partnership—one that is increasingly centered on a shared vision for how enterprises innovate in the age of AI. We’re excited to unveil how Microsoft’s Frontier Transformation helps customers realize SAP’s autonomous enterprise journey.

See how SAP on Azure drives business value for these customers

Frontier innovation for ERP systems

Microsoft Azure is the foundation of Frontier Transformation. To succeed, Frontier Transformation must be built on a global, trusted, AI-first commercial cloud. Modern AI at scale requires a different kind of platform—one designed for agents, intelligence platforms, and continuous learning, with AI embedded where people already work—augmenting decisions and actions in real time. That is how AI moves from experimentation to everyday impact.

What differentiates these experiences is intelligence and context. Agents that understand enterprise data, business processes, and organizational semantics improve relevance, accuracy, and decision quality over time. We call this Microsoft IQ.

Your agentic intelligence

Microsoft IQ is a shared intelligence layer built to power enterprise AI. It brings together three dimensions of intelligence: how people work, how the business operates, and how knowledge is unlocked and activated. From collaboration and workflows, to business data and systems of record, to policies and institutional knowledge—these signals are connected through a common platform, enabling AI to operate with full context across the organization.

This creates a new model for the enterprise:

Employees are supported by AI that understands their work in context.

Business operations are powered by real-time, connected data.

Knowledge is continuously surfaced, reasoned over, and applied through intelligent agents.

The result is an enterprise that moves faster, adapts more quickly, and drives better decisions with AI.

This is where the joint innovation between Microsoft and SAP becomes uniquely powerful. By combining SAP’s deep business process expertise with Microsoft’s intelligence layer, customers can extend this Frontier model into core business operations. SAP’s enterprise applications and data—spanning across SAP Business AI Platform and Joule—connect smoothly with Microsoft Teams, Microsoft Fabric, Microsoft Copilot, and a new generation of AI agents.

Putting enterprise AI into production

A central theme at SAP Sapphire 2026 is making AI real—moving from experimentation to production-ready, business-driven outcomes.

Save the bAIkery AI experience

Microsoft’s AI Immersion Experience showcases this shift in action. Through interactive scenarios like “Save the bAIkery,” attendees step into a dynamic business environment where every decision matters. Powered by Azure OpenAI, SAP Joule, SAP Cloud ERP, Microsoft Copilot Studio, and Power BI, this experience demonstrates how insights are generated to drive business outcomes across SAP systems—bringing together generative AI, analytics, and business processes in a seamless flow.

What makes this experience compelling is not just the technology—it’s the shift it represents. AI is no longer a layer on top of enterprise systems. It is becoming embedded into the core of how businesses operate, enabling faster decisions, greater agility, and entirely new ways of working.

Interactive: Explore how Microsoft and SAP AI capabilities can work together to connect business context, productivity tools and enterprise workflows.

Together, this enables end-to-end AI transformation:

Business data flows into a unified data foundation.

Intelligence is applied through SAP Joule and Copilot and agent-driven experiences.

Actions are executed across systems through tightly integrated workflows, in a governed, secure, and observable way.

The result is not just better integration but a fundamentally new way of operating: intelligent, interconnected systems where data, AI, and business processes continuously learn, adapt, and improve. This is the path to becoming a Frontier enterprise—where AI delivers measurable impact across the business, not as isolated gains, but as a continuous engine of innovation.

Advancing a new era of AI collaboration

One of the most exciting areas of innovation between Microsoft and SAP is the continued evolution of AI innovation across systems.

Over the past year, integration between SAP Joule and Copilot has taken important steps forward, enabling users to access business data and take action across SAP and Microsoft 365 environments.

At Sapphire, that vision continues to evolve and we are excited to announce agent-to-agent (A2A) integration between Microsoft 365 Copilot and Joule—delivering connected AI experiences using agent‑to‑agent capabilities, enabling Joule to work seamlessly across the Microsoft 365 productivity suite. This announcement opens up a world in which AI systems don’t just assist users—they begin to coordinate with each other across workflows.

Business collaboration happens in Microsoft 365 (for example, Teams chats about supply chain, finance, or sales, or external emails with customers and suppliers). A2A enables Joule and Copilot to execute agentic flows with the collective intelligence of Microsoft Work IQ and SAP Knowledge Graph to deliver truly contextually aware enterprise AI.

This emerging model allows SAP and Microsoft AI capabilities to work together more seamlessly, enabling scenarios where business tasks can be orchestrated end-to-end across applications. Internally, this direction focuses on enabling more secure, governed, and scalable workflows across SAP landscapes while using the broader Microsoft ecosystem of applications and partner solutions.

For example, customers can prepare for performance reviews by starting directly in Microsoft 365 Copilot within Word, leveraging SAP-delivered Joule skills for systems like SAP S/4HANA or SAP SuccessFactors, and seamlessly scheduling a one-on-one meeting with their manager—all within the same interface.

It’s a meaningful step toward a future where AI becomes an active participant in how work gets done.

Powering AI with a unified data foundation

AI is only as powerful as the data behind it—and for many organizations, that data is still fragmented across systems, limiting its full potential.

At Ignite 2025, Microsoft and SAP took a major step forward in addressing this challenge. With the announcement of SAP Business Data Cloud Connect for Microsoft Fabric, we committed to a simplified method of accessing semantically rich data products from SAP through bi-directional, zero-copy delta sharing with Microsoft Fabric. Coming in the latter half of 2026, delta sharing will enable enterprises to gain instant access to trusted, business-ready insights for advanced analytics—bringing together SAP and non-SAP data into a single, unified foundation for AI.

Through bi-directional, zero-copy sharing between the SAP Business Data Cloud solution and Microsoft Fabric, customers can realize a fundamentally different data experience: one where data is no longer siloed, insights are no longer delayed, and AI is no longer constrained—enabling organizations to move faster, act with confidence, and turn intelligence into impact across every part of the business.

To lay the foundation for this delta sharing, Microsoft and SAP have already deployed SAP BDC in eight Azure datacenters, with Japan to follow by the end of May and Germany in June. Three additional deployments planned by the end of 2026 will bring the total to 13 Azure regions available to support SAP BDC for customer analytics.

Microsoft and SAP expand partnership to deliver trusted sovereign cloud solutions

Sovereignty has become a decisive factor for organizations operating in regulated industries and the public sector—where data control, compliance, and operational assurance are critical. SAP and Microsoft are expanding their partnership to help customers meet growing sovereign cloud requirements as they modernize SAP landscapes.

Building on sovereign offerings such as SAP NS2, Delos Cloud, and BLEU, SAP and Microsoft continue to deepen their collaboration to deliver trusted sovereign cloud solutions for customers worldwide. Together, SAP and Microsoft are expanding support for RISE with SAP on SAP Sovereign Cloud running on Azure, giving customers greater choice in how and where they run mission‑critical SAP workloads, without compromising data residency, security, or governance.

Learn more about this announcement

Customers benefit from Microsoft’s and SAP’s sovereign cloud and service capabilities, supporting strong data residency, security, and governance.

The offering is available for customers today in these regions, with continuous worldwide expansion:

Australia

New Zealand

Canada

India

Europe

The United Kingdom

Together, SAP and Microsoft are empowering customers to innovate while maintaining the highest standards of sovereignty and trust.

Expanding platform availability and ecosystem innovation

Microsoft and SAP continue to expand global availability and ecosystem integration:

SAP Business Technology Platform (BTP) is available on Azure, the leading hyperscaler for SAP BTP, with 12 Azure regions live, so customers can meet data residency and compliance needs while improving performance and scaling innovation closer to their users.

Azure Marketplace provides a streamlined path to discover and procure SAP BTP, SAP LeanIX solutions, and SAP Business Suite, helping customers standardize purchasing, accelerate deployments, and simplify governance through consolidated billing and procurement workflows (available in the US Azure Marketplace).

Joint innovations are enabling deeper integration with Microsoft 365, Teams, and Azure AI—unlocking productivity gains and new business scenarios.

These advancements reinforce Azure as a leading cloud platform for SAP workloads—supporting both migration and innovation at scale.

Cloud Acceleration Factory expansion: Driving AI innovation for SAP

We are expanding Cloud Acceleration Factory to help SAP customers and partners move beyond migration and unlock immediate AI value. By enabling integration between Microsoft 365 Copilot and SAP Joule within RISE and GROW environments, organizations can seamlessly connect data, workflows, and productivity tools from day one, accelerating real business outcomes with AI.

Through Azure Accelerate, customers can quickly operationalize AI with the first three agent-based use cases, built and deployed using Copilot Studio or Foundry. These prebuilt agents automate key business processes and establish a foundation for scaling intelligent operations across the enterprise.

This expansion is further strengthened by Microsoft Sentinel for SAP, providing integrated security and monitoring across SAP landscapes. Together, these innovations enable customers and partners to securely adopt AI and immediately take advantage of SAP and Microsoft’s joint AI innovation, accelerating time to value and increasing measurable business impact on Azure.

Expanding the Global RISE with SAP Acceleration Program on Microsoft Azure

Microsoft and SAP are excited to announce the expansion of the global RISE with SAP Acceleration program on Microsoft Azure, a joint initiative between Microsoft and SAP designed to deliver technical expertise, support, and innovation for RISE with SAP on Microsoft Azure customers.

In 2026, we will more than double the number of customers admitted to the program, marking an important milestone in our mission to provide RISE with SAP on Azure customers with extraordinary support and expertise throughout their experience. Thousands of enterprise customers are already transforming their businesses with RISE on Azure, including Nestlé, Migros, and Samsung.

First publicly announced in January 2025, this program brings together the best technical teams from SAP and Microsoft to enable a more seamless, high-touch migration and onboarding experience at no additional cost, giving customers an accelerated path to business transformation and faster cloud innovation.

Customer innovation in action

From manufacturing to logistics to energy, organizations are building more intelligent, resilient enterprises on Microsoft Cloud—where insights are embedded into workflows and decisions are driven in real time.

For example, Riddell, a leading designer and manufacturer of protective sports equipment and helmets, modernized its operations on Azure to gain deeper visibility across its business and accelerate decision-making. In the logistics sector, Maersk transformed its global SAP estate on Azure to improve scalability and operational efficiency, creating a stronger foundation for innovation across its supply chain. And in the energy industry, MAIRE is enhancing its security posture with Microsoft Sentinel, gaining greater visibility and protection across its SAP and enterprise environments.

At SAP Sapphire 2026, you will also hear directly from Cargill, showcasing how it has modernized its SAP environment on Azure while preparing for an AI-powered future—bringing together SAP data, Copilot, and a secure, governed cloud platform to enable new business scenarios.

Together, with Microsoft and SAP, customers are moving beyond transformation to continuous, AI-powered innovation, using data and AI intelligence with embedded security as core drivers of growth and competitive advantage.

A global partner ecosystem driving scale

Partners play a critical role in helping customers navigate their SAP journey—from migration to modernization to AI transformation. Together, we are enabling industry-specific solutions, accelerating deployment timelines, and ensuring customers can realize value faster.

This ecosystem approach allows organizations to not only adopt new technologies, but also to operationalize them effectively—turning innovation into tangible business impact.

Supported by Microsoft’s cloud, data, and AI platforms, EY teams built the EY.ai Agentic Acceleration Engine to analyze SAP business processes, identify over 175 high-value agentic use cases, and rapidly translate them into interactive prototypes and solution designs that produce tangible impact. These include agent teams supporting SAP-based financial close activities, as well as intelligent supply chain and inventory management platforms where AI agents transact directly across SAP Finance, Logistics, Inventory, and Sales modules. Together, EY teams and Microsoft are demonstrating how agentic AI can extend SAP from a system of record into a system of autonomous action, delivering measurable operational value today while creating a scalable blueprint for future enterprise transformation.
Unlock SAP value with EY, Microsoft and agentic AI

Modernizing large SAP environments often involves a difficult trade-off between speed, risk, and business continuity. For Microsoft, maintaining uninterrupted operations while transitioning to SAP S/4HANA was essential. Together with SNP, the company used a selective data migration approach to complete the transformation over a single weekend.
Massive scale with zero disruption: How Microsoft modernized to S/4HANA in record time with SNP

Powering the future of SAP with Microsoft Cloud

As enterprises look ahead, the opportunity is no longer just to transform SAP systems—it is to reimagine how business runs end-to-end.

Microsoft Cloud is uniquely positioned to enable this next phase. By combining a globally trusted cloud platform with a unified data foundation, advanced AI capabilities, and a seamless productivity layer, Microsoft provides customers with the full stack needed to move from ERP-centric operations to intelligent, AI-powered enterprises.

Our partnership with SAP is foundational to this vision. Together, we are driving innovation across cloud, data, and AI—creating a platform where insights, decisions, and actions are continuously connected.

This is what sets Microsoft apart in the SAP ecosystem. It’s not just about where customers run their SAP workloads—it’s about how they unlock new value across their entire business, using data and AI to drive outcomes, accelerate innovation, and stay competitive in a rapidly evolving world.

Together with SAP, we are shaping a future where enterprise systems are no longer static—but adaptive, intelligent, and built for continuous innovation.

Discover what SAP and Microsoft Cloud have to offer

Visit us at SAP Sapphire 2026 this week at booth #302 in Orlando, Florida and at #9.203 in Hall 9 in Madrid, Spain. Register for the Microsoft sessions.

Read our product blog for additional details on the announcements.

Read more about how customers are unlocking AI innovation and business transformation with SAP and Microsoft Cloud.

SAP on the Microsoft Cloud
Discover why Microsoft Cloud is the leading cloud platform for SAP workloads.

Learn more

The post Advancing enterprise AI: New SAP on Azure announcements from SAP Sapphire 2026 appeared first on Microsoft Azure Blog.
Source: Azure

NIST Narrows the NVD: What Container Security Programs Should Reassess

On April 15, NIST announced a prioritized enrichment model for the National Vulnerability Database. Most CVEs will still be published, but fewer will receive the CVSS scores, CPE mappings, and CWE classifications that container scanners and compliance programs have historically relied on.

The change formalizes a drift that has been visible to anyone pulling NVD feeds for the past two years. What shifted on April 15 is the expectation: NIST has now said plainly that it does not intend to return to full-coverage enrichment. For programs that built scanning, prioritization, and SLA workflows around the assumption that NVD sits as the authoritative secondary layer on top of CVE, that assumption is worth a structured review.

What changed

Three categories of CVEs will continue to receive full enrichment:

CVEs in CISA’s Known Exploited Vulnerabilities catalog, targeted within one business day

CVEs affecting software used within the federal government

CVEs affecting “critical software” as defined by Executive Order 14028

Everything else moves to a new “Not Scheduled” status. Organizations can request enrichment by emailing nvd@nist.gov, though no service-level timeline applies. NIST has also stopped duplicating CVSS scores when the submitting CNA provides one, and all unenriched CVEs published before March 1, 2026 have been moved into “Not Scheduled.”
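The prioritization rules above can be sketched as a simple triage function. This is purely illustrative: the field names (`kev_listed`, `federal_use`, `eo14028_critical`) are hypothetical, since NIST has not published a machine-readable schema for these categories.

```python
# Illustrative sketch of NIST's prioritized enrichment model.
# Field names are hypothetical, not a real NVD or CVE schema.

def enrichment_status(cve: dict) -> str:
    """Classify a CVE under the prioritized enrichment policy."""
    if cve.get("kev_listed"):        # in CISA's Known Exploited Vulnerabilities catalog
        return "full enrichment (1 business day target)"
    if cve.get("federal_use"):       # software used within the federal government
        return "full enrichment"
    if cve.get("eo14028_critical"):  # "critical software" per Executive Order 14028
        return "full enrichment"
    # Everything else: enrichment only on request, no SLA.
    return "Not Scheduled"

status = enrichment_status({"cve_id": "CVE-2026-0001"})  # "Not Scheduled"
```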

The CVE volumes behind the decision

NIST cited a 263% increase in CVE submissions between 2020 and 2025, with Q1 2026 running roughly a third higher than the same period a year earlier. The rise tracks with a broader expansion in CVE numbering: more CNAs, more open source projects running their own disclosure processes, and more tooling surfacing issues that would not have reached CVE a few years ago.

| Year | Published CVEs | Source |
|---|---|---|
| 2023 | ~29,000 | CVE.org |
| 2024 | ~40,000 | CVE.org |
| 2025 | ~48,000 | NIST |
| 2026 (forecast) | ~59,500 (median) | FIRST |

AI is a compounding factor on both sides of this curve. In January, curl founder Daniel Stenberg shut down the project’s HackerOne bug bounty after six and a half years, citing “death by a thousand slops”: AI-generated reports that read like real research but described vulnerabilities that didn’t exist. Node.js, Django, and others have tightened intake under similar pressure. On the signal side, Anthropic’s April announcement of Claude Mythos Preview described a model that autonomously discovered thousands of zero-day vulnerabilities across every major operating system and web browser, including a 17-year-old unauthenticated RCE in FreeBSD’s NFS server. Earlier Anthropic research documented Claude Opus 4.6 finding and validating more than 500 high-severity vulnerabilities in production open source.

More noise and more real signal are heading toward the same pipeline. NIST enriched roughly 42,000 CVEs in 2025, its highest annual total, and still fell further behind incoming volume.

How it lands in compliance

The operational question is what programs have to document when NVD scoring is not available, and how consistently that documentation holds up across assessments.

| Framework | NVD reference | Likely effect |
|---|---|---|
| FedRAMP | NVD CVSSv3 as original risk rating, with CVSSv2 and native scanner score as documented fallbacks | More variance in how remediation SLAs are applied across CSPs |
| PCI-DSS 4.0 | Req. 11.3.2 external scans reference CVSS; ASV guidance points to NVD | More ambiguity on pass/fail determinations for unscored findings |
| NIST SP 800-53 (RA-5) | Lists NVD as an example source; permissive language | Lower direct impact, though auditors commonly expect CVSS-based severity evidence |
| DORA / SOC 2 | No direct reference | Principles-based; audit expectations around severity rationale still apply |

None of these frameworks break on their own. Mature vulnerability management programs generally have language in their SSPs and risk registers covering fallback scoring and exception handling. Programs that do not will likely need it before their next audit cycle.

The gap that is relevant to the container ecosystem

Two NVD inputs matter most for container scanning:

CPE applicability statements map a CVE to specific software packages. When CPE strings are missing, a scanner that matches primarily on CPE cannot determine which packages in an image are affected. The CVE exists in the database but is operationally invisible to the scan.

CVSS scores drive prioritization and SLA routing. Without a score, a CVE may surface as UNKNOWN severity or fall outside remediation workflows entirely.

Container images create a compounding effect here. Each image inherits packages from a base layer, application dependencies, and often a long transitive dependency chain. When any of those packages carries a CVE that NVD has not enriched, the gap propagates through every downstream image built on top of it. Scanners that draw on multiple advisory sources, and that match on package identifiers other than CPE, are less exposed.
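The difference between CPE-only and PURL-based matching can be sketched in a few lines. All CVE IDs, packages, and advisory entries below are made up for the example; the point is only that an advisory with no CPE string is invisible to a matcher keyed on CPE, while a PURL-keyed matcher still hits it.

```python
# Illustrative sketch: PURL-based matching keeps working when a CVE
# has no CPE applicability statement. All identifiers are fabricated.

advisories = [
    # Unenriched CVE: no NVD CPE string, but the distro advisory
    # identifies the affected package by Package URL (PURL).
    {"id": "CVE-2026-1111", "cpes": [], "purls": ["pkg:deb/debian/zlib1g"]},
    {"id": "CVE-2026-2222",
     "cpes": ["cpe:2.3:a:openssl:openssl:3.0.0:*:*:*:*:*:*:*"],
     "purls": ["pkg:deb/debian/openssl"]},
]

# An SBOM generated at build time lists packages by PURL.
image_sbom = {"pkg:deb/debian/zlib1g", "pkg:deb/debian/openssl"}

def match(sbom: set, advisories: list, key: str) -> list:
    """Return advisory IDs whose identifiers (under `key`) hit the SBOM."""
    return sorted(a["id"] for a in advisories
                  if any(ident in sbom for ident in a[key]))

purl_hits = match(image_sbom, advisories, "purls")  # both CVEs surface
cpe_hits = match(image_sbom, advisories, "cpes")    # nothing: the SBOM
# carries PURLs, not CPEs, and CVE-2026-1111 has no CPE string at all
```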

Questions worth putting to image vendors

What advisory sources does your tooling use beyond NVD?

When a CVE has no NVD CVSS score, what does the tool display, and does it trigger remediation workflows?

How do you define “patched,” and is that definition in your written CVE policy?

Are your remediation SLAs measured from CVE disclosure date or NVD enrichment date?

Can a third-party scanner reproduce your clean-scan result against public advisory data?

Where Docker sits

Docker Hardened Images are designed so that vulnerability management in container workloads does not depend primarily on NVD enrichment. Each image ships with signed attestations for build provenance, SBOMs in both CycloneDX and SPDX formats, OpenVEX exploitability statements, and scan results. SBOMs are generated from the SLSA Build Level 3 pipeline rather than inferred from external databases, so package inventory is accurate regardless of NVD’s enrichment state. Hardened System Packages allow package-level patching independent of upstream distribution timelines, which means remediation is not gated on a distribution maintainer’s release cadence or on an NVD analyst’s queue. When a CVE is not exploitable in a specific image context, that assessment is published as a signed VEX document that third-party scanners including Trivy, Grype, and Wiz consume natively.
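For readers unfamiliar with the format, a VEX exploitability statement of the kind described above looks roughly like the following. This is a minimal illustrative OpenVEX document; the author, IDs, product PURL, and CVE are hypothetical, not taken from an actual Docker attestation.

```json
{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://example.com/vex/example-2026-0001",
  "author": "Example Image Vendor",
  "timestamp": "2026-04-20T00:00:00Z",
  "version": 1,
  "statements": [
    {
      "vulnerability": { "name": "CVE-2026-1111" },
      "products": [ { "@id": "pkg:docker/example/app@1.0" } ],
      "status": "not_affected",
      "justification": "vulnerable_code_not_in_execute_path"
    }
  ]
}
```

A scanner that consumes this statement can suppress or downrank the finding for that product without any NVD input.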

Docker Scout, the scanning layer that reads these attestations, aggregates 22 advisory sources including NVD, CISA KEV, EPSS, GitHub Advisory Database, and 13 Linux distribution security trackers. Scout matches on Package URLs (PURLs) rather than NVD’s CPE scheme, which allows package identification to continue when CPE strings are unavailable. NVD remains a valuable input to this architecture, one of several rather than the spine.

What to reassess

Audit open findings against the March 1, 2026 cutoff. Any CVE published before that date that has not received NVD enrichment has already been moved to “Not Scheduled.” Programs carrying open findings tied to those CVEs may have severity scores and CPE mappings in their trackers that no longer reflect an active NVD record. Verify that the scoring basis for those findings is documented and defensible independent of NVD.
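The audit step above can be sketched as a filter over a findings tracker: flag any open finding whose CVE predates the cutoff and whose documented severity basis is NVD. The finding fields are illustrative, not a real tracker schema.

```python
# Sketch of the cutoff audit described above. Field names are
# hypothetical; adapt to whatever your findings tracker exports.
from datetime import date

CUTOFF = date(2026, 3, 1)  # unenriched CVEs before this are "Not Scheduled"

def needs_rescore(finding: dict) -> bool:
    """True if the finding's NVD-derived scoring basis should be
    re-verified and documented independently of NVD."""
    return (finding["cve_published"] < CUTOFF
            and finding.get("score_source") == "nvd")

open_findings = [
    {"cve": "CVE-2025-0001", "cve_published": date(2025, 6, 1),
     "score_source": "nvd"},
    {"cve": "CVE-2026-0002", "cve_published": date(2026, 4, 2),
     "score_source": "cna"},
]
to_review = [f["cve"] for f in open_findings if needs_rescore(f)]
```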

For programs running DHI, the NVD policy change does not require an operational response. For programs evaluating container security vendors more broadly, the question worth elevating in the next procurement cycle is whether NVD is one source of vulnerability intelligence in their stack, or the primary one.

The NVD will continue to play a role. That role is narrowing, and the signals suggest it will keep narrowing. Programs that use the April announcement as a prompt to audit their data sources now will have a cleaner answer the next time a regulator, an auditor, or a board asks where their vulnerability data actually comes from.

Sources and further reading

NIST, “NIST Updates NVD Operations to Address Record CVE Growth,” April 15, 2026 https://www.nist.gov/news-events/news/2026/04/nist-updates-nvd-operations-address-record-cve-growth

FIRST, “2026 CVE Vulnerability Forecast” https://www.first.org/blog/20260211-vulnerability-forecast-2026

FedRAMP Vulnerability Scanning Requirements v3.0 https://www.fedramp.gov/docs/rev5/playbook/csp/continuous-monitoring/vulnerability-scanning/

Docker Scout advisory database sources https://docs.docker.com/scout/deep-dive/advisory-db-sources/

Docker Hardened Images documentation https://docs.docker.com/dhi/

“Why We Chose the Harder Path: Docker Hardened Images, One Year Later” https://www.docker.com/blog/why-we-chose-the-harder-path-docker-hardened-images-one-year-later/

Source: https://blog.docker.com/feed/

AWS Security Agent now supports full repository code reviews

Today, AWS announces the release of full repository code review, a new capability in AWS Security Agent that performs deep, context-aware security analysis of your entire codebase. Unlike traditional static analysis tools that match code against known vulnerability patterns, full repository code review reasons about your application’s architecture, trust boundaries, and data flows to surface systemic vulnerabilities that pattern-matching tools miss. When vulnerabilities are found, the scanner generates remediation code: specific fixes tied to the exact file and line, so teams can identify and remediate security vulnerabilities faster than ever before. This capability is available at no additional charge for existing AWS Security Agent customers during the preview.
AI-driven cybersecurity capabilities are advancing rapidly. AWS Security Agent can find vulnerabilities and build working exploits at a scale and speed we haven’t seen before. AWS is prioritizing free early access for customers, giving defenders the opportunity to strengthen their codebases and share what they learn so the whole industry can benefit.
Full repository code review is available in all AWS Regions where AWS Security Agent is available.
To get started, visit the AWS Security Agent console to enable full repository code review and run your first review. To learn more, see the AWS Security Agent documentation.
Source: aws.amazon.com