Microsoft Sovereign Private Cloud scales to thousands of nodes with Azure Local

Today, I am pleased to announce that Azure Local now scales to support deployments of up to thousands of servers within a single sovereign environment, allowing organizations to run much larger workloads locally across large-footprint datacenters, industrial environments and edge locations while maintaining control within their sovereign boundary.

Organizations operating national infrastructure, regulated workloads or mission-critical services are navigating a fundamental shift in how cloud infrastructure must be deployed and managed. As digital sovereignty postures evolve and regulatory requirements tighten across regions, infrastructure strategies are increasingly shaped by the need to maintain jurisdictional control over data, operations and dependencies. At the same time, AI and data-intensive applications are moving closer to where data is generated, requiring infrastructure that can scale to support larger deployment footprints while maintaining operational control, compliance and data residency requirements within sovereign environments.

Azure Local is the foundation for Microsoft’s Sovereign Private Cloud, allowing organizations to run cloud-consistent infrastructure on hardware they own and operate within their sovereign boundary. It supports deployments across connected, intermittently connected or fully disconnected environments. With Azure Local disconnected operations, customers retain the ability to apply policy enforcement, role-based access control, auditing and compliance configuration locally, allowing them control over how infrastructure is configured, secured and updated regardless of public cloud connectivity.

Scaling Sovereign Private Cloud

Sovereign Private Cloud deployments must scale to support not only larger workloads, but also the operational requirements of national infrastructure and regulated industries. Azure Local allows organizations to grow deployments from hundreds up to thousands of servers within a single sovereign boundary, allowing infrastructure to expand alongside demand without requiring architectural redesign.

As deployment footprints grow, resiliency becomes essential to maintaining continuous operations for mission-critical services. Expanded fault domains and infrastructure pools help prevent hardware failures from resulting in service outages, ensuring critical workloads remain operational across environments with varying levels of cloud connectivity.

At these larger scale points, organizations can run data-intensive AI inference and analytics workloads entirely within their own environment. With support for high-performance graphics processing unit (GPU) infrastructure, sensitive models and operational data remain within customer-controlled infrastructure, while access management, auditing and compliance controls are maintained within the sovereign deployment.

Built for challenging workloads

Increased deployment scale unlocks new workload placement opportunities, from large sovereign private cloud deployments to distributed AI workloads, allowing organizations to run more data-intensive and latency-sensitive applications entirely within their sovereign boundary.

AT&T, one of the world’s largest telecommunications operators, is deploying Azure Local to run mission-critical infrastructure on hardware they own in their environment. The goal: full operational control while running at the scale the business demands.

“Azure Local provides the infrastructure foundation we need to run critical operations at scale, while ensuring control and governance across our environment. The consistency of the Azure operating model, delivered on our own infrastructure, is key as we continue to modernize while delivering reliable services to our customers.”

— Sherry McCaughan, Vice President – Mobility Core Services, AT&T

Kadaster, the Netherlands’ official land registry and mapping agency, is running Azure Local to keep sovereign control over some of the country’s most sensitive public data.

“As a government agency responsible for some of the Netherlands’ most sensitive data, we need infrastructure that gives us full control over where our data lives and how it’s governed. Azure Local has been a consistent foundation for that — and as our workloads grow in scale and complexity, the platform has grown with us.”

— Maarten van der Tol, General Manager, Kadaster

FiberCop, Italy’s most advanced and extensive digital network operator, is deploying Azure Local across its edge locations to bring sovereign cloud and AI services to organizations throughout the country. Fabio Veronese, Chief Information & Technology Officer, commented:

“FiberCop is better positioned than any other player on the Italian market to drive innovation and deliver cloud as well as AI services at national scale. Azure Local supports our mission to drive Italy’s digital future and brings Microsoft’s cloud capabilities to edge workloads across the country while keeping data sovereignty and compliance where they matter most.”

The infrastructure behind Sovereign Private Cloud

Azure Local is available today with validated compute and enterprise storage platforms from partners including DataON, Dell Technologies, Everpure, Hitachi Vantara, HPE, Lenovo and NetApp, allowing organizations to integrate existing Storage Area Networks (SAN) and preserve prior investments while allowing compute and storage resources to scale independently within their sovereign environment.

At the silicon level, Intel® Xeon® 6 processors provide the compute foundation for the platform. Built for the density and performance demands of modern enterprise workloads, Xeon 6 also brings built-in AI acceleration with Intel® AMX, meaning organizations running inference or generative AI workloads within their sovereign environment do not need to introduce separate, specialized infrastructure to do so.

Together, Azure Local, validated compute and enterprise storage platforms, accelerated computing platforms and underlying silicon can provide a datacenter-scale stack that supports sovereign infrastructure deployments while helping ensure data, models and execution remain within customer-controlled environments.

Sovereign infrastructure built for your requirements

Azure Local was built to meet customers where their requirements are, whether that means strict data residency, disconnected operations, regulated workloads or AI running close to where data is generated. As these requirements evolve across regulated industries and governments worldwide, Sovereign Private Cloud deployments can expand from a single node at the edge to large enterprise-scale datacenter environments, running on hardware organizations own and operate, with consistent lifecycle management through Azure.

Resources:

Learn more about Azure Local
Explore Microsoft’s Sovereign Cloud
Read the Tech Community blog
Visit the Azure Local solution catalog

Douglas Phillips leads global engineering efforts for Microsoft’s specialized, sovereign and private clouds. He is responsible for Microsoft’s global strategy, products and operations that bring Microsoft’s industry-leading solutions, including Azure, our adaptive cloud portfolio and Microsoft 365 collaboration suite, to customers with additional sovereignty, security, edge and compliance requirements.
The post Microsoft Sovereign Private Cloud scales to thousands of nodes with Azure Local appeared first on Microsoft Azure Blog.
Quelle: Azure

Enforcing trust and transparency: Open-sourcing the Azure Integrated HSM

As cloud workloads become more agentic and AI systems increasingly handle mission‑critical data, trust must be engineered into the infrastructure at every layer. At Microsoft, security is designed into the foundation of our cloud infrastructure, from silicon to services. With the Azure Integrated Hardware Security Module (HSM), Microsoft is redefining how cryptographic trust is delivered in the cloud.

Azure Integrated HSM is a tamper‑resistant, Microsoft‑built hardware security module integrated into every new Azure server, extending existing key management services by bringing hardware enforced protection directly to where workloads execute. Rather than relying solely on centralized services, this approach makes hardware-backed security a native property of the compute platform itself.

Azure Integrated HSM is engineered to meet FIPS 140‑3 Level 3, the gold standard for hardware security modules used by governments and regulated industries worldwide. Level 3 requires strong tamper resistance, hardware-enforced isolation, and protection against physical and logical key extraction. By building these assurances directly into the platform, Azure makes the highest levels of compliance a default property of the cloud, rather than a specialized configuration or premium add‑on.

Learn more about Azure Security

Reinforcing transparency through trust with open-sourced designs

Our approach to hardware security is grounded in a simple belief: transparency builds trust, and industry collaboration strengthens security. Openness strengthens trust by allowing customers, partners, and regulators to validate design choices and security boundaries.

This week, at the Open Compute Project (OCP) EMEA Summit, we announced plans to open the Azure Integrated HSM hardware to the broader open hardware ecosystem. Through OCP, we plan to release the Azure Integrated HSM firmware, driver, and software stack as open source, and launch an OCP workgroup to guide ongoing development—spanning architectural design, protocol specifications, firmware, and hardware. The Azure Integrated HSM firmware is now available through the Azure Integrated HSM GitHub repository, alongside independent validation artifacts such as the OCP SAFE audit report.

This openness is particularly important for regulated industries and sovereign cloud scenarios, where independent validation of security controls is required. By making key components available for external review, Azure Integrated HSM enables customers, partners, and regulators to assess implementation details directly rather than relying solely on vendor assertions.

This approach strengthens confidence in the platform and helps establish a more transparent and verifiable foundation for cloud security, while reducing reliance on proprietary vendor specific protocols. At a time when cryptographic trust underpins everything from AI inference to national digital infrastructure, open sourcing the HSM is a practical step toward interoperability, auditability, and customer confidence.

A tiered approach to key management

This design complements services like Azure Key Vault and Azure Managed HSM, which continue to provide centralized key lifecycle management, governance, and policy enforcement. Azure Integrated HSM adds a new layer: one that brings cryptographic protection down to the individual server, so that keys are protected not just when they are stored but while they are actively being used by workloads. Azure Integrated HSM also supports industry standards such as TDISP, enabling secure binding between the HSM and confidential computing environments.

In the coming weeks, Azure Integrated HSM will be available in Azure V7 virtual machines to all customers globally.

Setting a new standard for server-local key protection at scale

With Azure Integrated HSM, encryption keys are generated, stored, and used entirely within hardened hardware. Keys are designed to never appear in host memory, guest memory, or software processes, even during active cryptographic operations. By keeping keys within the hardware boundary at all times, Azure Integrated HSM eliminates entire classes of key and credential exfiltration attacks that target memory or software layers.

The result is true customer control enforced by silicon, not policy. Security is no longer dependent on operational discipline or complex isolation assumptions; it is enforced by hardware.

Traditional cloud security models rely on centralized HSM services accessed over the network. While effective, these models introduce shared blast radius, scalability challenges, and performance constraints as workloads grow.

By anchoring cryptographic protection directly to the server, security scales naturally with compute. There are no shared bottlenecks, no added network hops, and no need to trade performance for protection. As Azure scales, security scales with it.

With hardware roots of trust, measured boot, and attestation, Azure Integrated HSM makes trust verifiable rather than contractual. Customers and regulators can cryptographically validate that approved hardware, firmware, and configurations are in place. This can be further verified by the open-source firmware. Trust is no longer something you accept; it is something you can prove.
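The measured boot and attestation pattern referenced above can be illustrated with a short sketch. This is not Azure's implementation (real systems use TPM or HSM platform configuration registers and signed quotes); it only shows the core hash-extend idea that makes a boot chain verifiable, with all component names hypothetical.

```python
import hashlib

def extend(measurement: bytes, component: bytes) -> bytes:
    """Extend a measurement register: new = H(old || H(component)).

    This mirrors the PCR-extend pattern used in measured boot; the
    register's final value commits to every component, in order.
    """
    return hashlib.sha256(measurement + hashlib.sha256(component).digest()).digest()

# Measure a hypothetical boot chain: each stage extends the register.
register = b"\x00" * 32  # measurement registers start zeroed
for stage in [b"firmware-v1", b"bootloader-v2", b"kernel-v3"]:
    register = extend(register, stage)

# A verifier that knows the approved components can recompute the value
# independently and compare it against an attested quote.
expected = b"\x00" * 32
for stage in [b"firmware-v1", b"bootloader-v2", b"kernel-v3"]:
    expected = extend(expected, stage)
assert register == expected

# Any change (or reordering) of a component yields a different register,
# which is what makes the boot state provable rather than asserted.
tampered = b"\x00" * 32
for stage in [b"firmware-v1", b"bootloader-EVIL", b"kernel-v3"]:
    tampered = extend(tampered, stage)
assert tampered != register
```

Because the extend operation is one-way and order-sensitive, a verifier never needs to trust the host's claims: matching the recomputed value proves exactly the approved firmware and configuration ran.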

Together, these capabilities establish a new baseline for cloud security, one in which hardware-enforced, verifiable trust is the default for modern workloads, from core infrastructure services to the next generation of AI. When combined with confidential computing, open silicon roots of trust, Azure Boost, and datacenter-level secure control modules, the Azure Integrated HSM helps establish a vertically integrated chain of trust, from silicon to software.

We invite customers, partners, and the broader open-source community to contribute to the architecture and help shape future standards. Together, we can build secure, sovereign, and open cloud infrastructure for the challenges ahead.

For additional information, read the announcement blog and learn more about Azure Security.

Azure Security
Get a comprehensive look at the security available with Azure.

Learn more

The post Enforcing trust and transparency: Open-sourcing the Azure Integrated HSM appeared first on Microsoft Azure Blog.
Quelle: Azure

Amazon Quick upgrades the extension for Microsoft Outlook (Preview)

Today, AWS announces the preview of the Amazon Quick extension for Microsoft Outlook, which brings generative AI-powered productivity directly into your email and calendar workflows. With the extension, you can use natural language to summarize unread messages, organize your inbox, schedule meetings, and draft in-line responses, all without leaving Outlook.
The Quick extension for Outlook helps you focus on what matters most by prioritizing emails, searching for specific discussions, and organizing messages into folders or flagging them for follow-up. Using conversational instructions, you can find optimal meeting times with coworkers and schedule meetings. For email threads, you can generate summaries, extract action items, and draft contextual replies that pull in relevant information from your Amazon Quick spaces and knowledge bases. You can also trigger actions in external applications using your configured integrations directly from Outlook.
The Amazon Quick extension for Microsoft Outlook is available in preview in US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), Europe (Ireland), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (London).
To get started with Amazon Quick, visit the Quick website, and sign up for an account in minutes. Read the documentation to learn more, and install the Quick extension for Outlook from the Quick download page.
Quelle: aws.amazon.com

Amazon Quick now supports S3 tables bucket as a data source

Amazon Quick now supports Amazon S3 table buckets as a data source — enabling users to build dashboards, run conversational analytics, and explore Apache Iceberg tables stored in S3 table buckets. With no intermediate data warehouse or OLAP layers required, users can now interoperate with their lakehouse data in Amazon Quick for both agentic AI and BI workloads — all through a simplified data architecture.
Paired with Zero-ETL from sources like Salesforce, SAP, and Amazon Kinesis Data Firehose directly into S3 table buckets, users get near real-time insights with minimal pipeline dependencies. Getting started is straightforward: admins configure S3 table bucket permissions once, and authors can immediately create datasets and start building. S3 table bucket datasets are fully accessible through Amazon Quick’s Dataset Q&A — ask a natural language question and get answers grounded in your data lake as the source of truth.
Amazon S3 table buckets as a data source in Amazon Quick is now available in all AWS Regions where Amazon Quick is available. To get started, see this blog post.
Quelle: aws.amazon.com

Amazon Quick introduces Dataset Q&A for conversational analytics against enterprise data

Amazon Quick now supports Dataset Q&A, a conversational analytics capability that enables users to ask natural language questions directly against their enterprise data. Alongside Dashboard Q&A, Dataset Q&A provides a powerful new way to interact with data in Amazon Quick, letting anyone with dataset access explore their data and get meaningful, actionable insights using natural language, while respecting all governance rules, including Row-Level and Column-Level Security policies set by data owners.
Dataset Q&A is powered by Amazon Quick’s text-to-SQL agent, which interprets user questions, identifies the right data, and generates precise SQL, all in a single conversational step. The agent works across the various data sources users bring into Amazon Quick, generating engine- and dialect-aware optimized SQL against SPICE or AWS data assets such as Amazon Redshift, Amazon Athena, Aurora PostgreSQL, and Apache Iceberg tables stored in Amazon S3 table buckets.

Data owners can enrich their datasets with custom instructions, business definitions, and field descriptions directly in Amazon Quick or through simple file uploads. These curated semantics, together with dataset metadata, are ingested into a knowledge graph that captures the meaning and relationships across data assets, enabling Quick’s orchestrator to identify the most relevant datasets and generate accurate SQL. The Dataset Q&A agent delivers accurate answers across a broad range of question types, from trend analysis and time-series comparisons to ranking, multi-condition analytical queries, and open-ended exploratory questions. Dataset Q&A also includes an Explain capability, allowing users to step through the reasoning behind each answer, inspect the underlying logic, and validate that the generated SQL correctly interprets their question before acting on the result.
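Dataset Q&A is a managed service, but the text-to-SQL flow it describes, question in, generated SQL surfaced for inspection, results out, can be sketched with a toy example. Everything below is hypothetical: the schema, the rule-based `to_sql` stand-in (a real agent uses an LLM plus curated dataset semantics), and the use of SQLite in place of Quick's supported engines.

```python
import sqlite3

# Toy dataset standing in for an enterprise table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, year INTEGER, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("EMEA", 2023, 120.0), ("EMEA", 2024, 150.0),
    ("APAC", 2023, 90.0),  ("APAC", 2024, 130.0),
])

def to_sql(question: str) -> str:
    """Stand-in for the text-to-SQL step: map a recognized question
    shape to dialect-appropriate SQL. A real agent derives this from
    the question plus field descriptions and business definitions."""
    if "revenue by region" in question:
        return ("SELECT region, SUM(revenue) AS total "
                "FROM sales GROUP BY region ORDER BY total DESC")
    raise ValueError("question not understood")

sql = to_sql("show total revenue by region")
rows = conn.execute(sql).fetchall()
# An 'Explain'-style step would surface `sql` itself, so users can
# validate the generated query before acting on the result.
print(rows)  # [('EMEA', 270.0), ('APAC', 220.0)]
```

The key design point mirrored here is that the generated SQL is an inspectable artifact, not a black box: exposing it is what lets users verify the agent understood the question.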
Dataset Q&A is now generally available in all AWS Regions where Amazon Quick is available. To get started, see this blog post.
Quelle: aws.amazon.com

Amazon Quick generates dashboards from natural language prompts

Amazon Quick now generates dashboards from natural language prompts with Generate Analysis. You describe the dashboard you want, select up to three datasets, and review an editable plan before generation. Amazon Quick then produces organized sheets with visuals selected for your data, filter controls for exploring by different dimensions, and calculated fields such as year-over-year growth and month-over-month comparisons. Generate Analysis reduces dashboard creation from hours of manual configuration to minutes.
With Generate Analysis, you can describe goals such as “create a sales performance dashboard with revenue trends, regional comparisons, and month-over-month growth” and receive a dashboard ready for refinement. The output works with existing publishing workflows, embedding, CI/CD pipelines, and point-and-click editing.
At launch, Generate Analysis is available to Enterprise subscription/Author Pro users. Authors also have promotional access to this capability through December 2026 as part of Amazon Quick Enterprise, provided their organization has not restricted access. Generate Analysis is now generally available in all AWS Regions where Amazon Quick is available.
To learn more, see Generating an analysis with natural language prompts in the Amazon Quick User Guide. To get started, open any dataset in Amazon Quick and choose Generate analysis.
Quelle: aws.amazon.com

AWS Entity Resolution launches support for incremental Machine Learning based matching workflows

AWS Entity Resolution launches support for Machine Learning (ML) based Incremental Matching workflows in General Availability, fundamentally transforming how enterprises process entity resolution at scale. Previously, adding even a single new record required customers to reprocess their entire dataset—a process that could take up to 2 days and cost thousands of dollars. This created a critical bottleneck that forced major businesses to seek costly workarounds or alternative solutions. 
With this enhancement, AWS Entity Resolution enables businesses to process only the new records added since their last workflow run. This launch provides dramatic efficiency gains: processing 1M incremental records takes less than 1 hour, a 95% reduction in processing time compared to reprocessing the full dataset, while also significantly reducing infrastructure costs. The feature supports incremental workloads of up to 50M new records over datasets containing up to 1 billion historical base records, making AWS Entity Resolution viable for continuous, large-scale enterprise workloads that were previously economically unfeasible.
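The reason incremental matching is so much cheaper can be sketched in a few lines: instead of re-comparing the entire dataset, each new record is looked up against a persistent index of the historical base records. This is a simplified illustration only; the blocking key and record fields below are hypothetical, and AWS Entity Resolution's ML matching uses learned similarity rather than exact keys.

```python
from collections import defaultdict

def blocking_key(record: dict) -> str:
    """Hypothetical blocking key: name prefix plus ZIP code.
    Real ML matching scores candidates with a learned model."""
    return record["name"][:3].lower() + "|" + record["zip"]

def build_index(base_records):
    """One-time pass over the historical base records."""
    index = defaultdict(list)
    for rec in base_records:
        index[blocking_key(rec)].append(rec)
    return index

def match_incremental(index, new_records):
    """Process only the new records: each is compared against the
    small candidate block it falls into, not the whole dataset."""
    matches = []
    for rec in new_records:
        for candidate in index[blocking_key(rec)]:
            matches.append((rec["id"], candidate["id"]))
        index[blocking_key(rec)].append(rec)  # keep the index current
    return matches

base = [
    {"id": 1, "name": "Alice Smith", "zip": "98101"},
    {"id": 2, "name": "Bob Jones", "zip": "10001"},
]
index = build_index(base)
new = [{"id": 3, "name": "Alicia Smith", "zip": "98101"}]
matches = match_incremental(index, new)
print(matches)  # [(3, 1)]
```

The work per run scales with the number of new records and their candidate blocks rather than with the full billion-record history, which is what turns a multi-day full reprocess into a sub-hour incremental pass.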
You can start using incremental ML workflows in all AWS Regions where AWS Entity Resolution is available. For more information on starting an incremental ML workflow, see our user guide. For more information about AWS Entity Resolution, visit our product page. 
Quelle: aws.amazon.com