Inside Azure for IT: 3 cloud strategies to navigate market uncertainty

The saying, “the only thing constant is change,” is one I can’t seem to get out of my head these days, and also seems to resonate with customers I talk to given the dynamic market changes, macroeconomic headwinds, geopolitical tensions, and labor constraints we're all living in currently. That’s why for this episode of Inside Azure for IT, we’re bringing you three discussions about cloud strategies that can help you not only successfully navigate some of today's uncertainties, but also build agility and increase efficiency while you move ahead.

In part one, we discuss how migrating and modernizing your business with the cloud can help you achieve efficiencies and the scalability you need to meet changing business demands. The pandemic fundamentally shifted the technology landscape and accelerated the pace of digital transformation across all industries. If you’re working with existing IT platforms that limit the agility you need to be successful, moving strategic workloads to the cloud can help you deliver on customer demand and gain a competitive advantage.

In part two, we dive into how you can optimize your IT investments and realize the full power of the cloud quickly by configuring workloads for maximum efficiency and cost savings. As technology evolves, many organizations will need to rethink how their technology strategy aligns with their business objectives. For example, if you're operating your business primarily on premises, you might decide that you can’t afford, or don’t want to invest in, running more of your own servers, hardware, or storage to keep up with continually evolving IT infrastructure. Taking advantage of the economies of scale that a cloud provider offers can help you reduce technical debt and optimize your operations for better efficiency.

Finally, in part three, we look at today’s ever-changing security landscape and why a strong security posture is critical to growing your business. Now more than ever, IT leaders need to adopt the right security strategy to protect against ransomware attacks, supply chain software compromises, and data breaches. Microsoft shares best practices and invests heavily in cybersecurity so you can run your business more securely and efficiently—because when your business is protected, teams can innovate fearlessly and focus on what they do best.

As I reflect on my own personal strategies managing through change, I find it always helps to focus on what you can control. That gives you a way to anchor your thinking and build certainty out of the uncertain. Each of these strategies offers a point of view on what you can control within your environment and a place to start.

Inside Azure for IT, Episode 5: Three cloud strategies to help you navigate market uncertainty

The episode is divided into three separate segments so you can watch them individually, on demand, and at your convenience.

Part one: Navigate market uncertainty by migrating and modernizing with Azure

Guest: Sathish Rajappa, Vice President of Global Platform Technology Sales at Blue Yonder

In the first segment, Sathish Rajappa from software provider Blue Yonder joins me to share some insights from their own modernization journey, and how their strategic partnership with Microsoft enhances their software as a service (SaaS) solutions to help customers accelerate digital transformation. We also talk about how you can optimize spending by consolidating systems to a few solution providers, and how adopting AI and machine learning at scale can help you solve specific use cases and gain a competitive advantage.

Watch now >

Part two: Optimize IT investments to maximize efficiency and reduce cloud spend

Guest: Henry O’Connell, Chief Executive Officer and Co-Founder of Canary Speech

In the second segment, I talk with Henry O’Connell at Canary Speech about how they use Microsoft Azure to power their vocal biomarker screening technology to help healthcare professionals use conversational speech to screen for mood changes and diseases. With this technology, clinicians can recognize conditions such as depression, anxiety, stress, and more in a matter of seconds. Henry shares a few reasons why Canary Speech chose to move from another cloud provider to Azure to grow their business, how they used resources like the Azure Well-Architected Framework to configure workloads for maximum efficiency and cost savings, as well as some exciting innovations Canary Speech is pursuing with Azure. 

Watch now >

Part three: Strengthen your security posture to innovate fearlessly and grow your business

Guest: Vasu Jakkal, Corporate Vice President of Microsoft Security, Compliance, Identity and Privacy

In the third segment, I’m joined by my friend and colleague Vasu Jakkal to talk about how Microsoft is navigating the ever-changing global challenges of today’s security landscape. Vasu explains how Microsoft’s investments in cybersecurity are helping customers run their businesses more securely and efficiently while minimizing disruptions, and we end with an inspiring discussion about how strengthening your security posture can help you innovate fearlessly in challenging times.  

Watch now >

What’s next for Inside Azure for IT

Beyond this latest episode, there are many more fireside chat videos, tutorials, and cloud-skilling resources available through Inside Azure for IT. Learn more about empowering an adaptive IT environment with best practices and resources designed to enable productivity, digital transformation, and innovation. You can also take advantage of technical training videos and learn about implementing some of the scenarios we discuss in this episode.

Watch past episodes of the Inside Azure for IT fireside chats
Register for the free Securely Migrate and Optimize with Azure digital event on April 26, 2023.
Future-proof your business and do more with less on Azure. 

Source: Azure

Announcing the general availability of Azure CNI Overlay in Azure Kubernetes Service

This post was co-authored by Qi Ke, Corporate Vice President, Azure Kubernetes Service.

Today, we are thrilled to announce the general availability of Azure CNI Overlay. This is a big step forward in addressing networking performance and the scaling needs of our customers.

As cloud-native workloads continue to grow, customers are constantly pushing the scale and performance boundaries of our existing networking solutions in Azure Kubernetes Service (AKS). For instance, traditional Azure Container Networking Interface (CNI) approaches require planning IP addresses in advance, which can lead to IP address exhaustion as demand grows. In response to this demand, we have developed a new networking solution called "Azure CNI Overlay".

In this blog post, we will discuss why we needed to create a new solution, the scale it achieves, and how its performance compares to the existing solutions in AKS.

Solving for performance and scale

In AKS, customers have several network plugin options to choose from when creating a cluster. However, each of these options has its own challenges when it comes to large-scale clusters.

The "kubenet" plugin, an existing overlay network solution, is built on Azure route tables and the bridge plugin. Since kubenet (or host IPAM) leverages route tables for cross node communication it was designed for, no more than 400 nodes or 200 nodes in dual stack clusters.

The Azure CNI VNET provides IPs from the virtual network (VNET) address space. This may be difficult to implement, as it requires a large, unique, and contiguous Classless Inter-Domain Routing (CIDR) space, and customers may not have the available IPs to assign to a cluster.

Bring your Own Container Network Interface (BYOCNI) brings a lot of flexibility to AKS. Customers can use encapsulation—like Virtual Extensible Local Area Network (VXLAN)—to create an overlay network as well. However, the additional encapsulation increases latency and instability as the cluster size increases.

To address these challenges, and to support customers who want to run large clusters with many nodes and pods with no limitations on performance, scale, and IP exhaustion, we have introduced a new solution: Azure CNI Overlay.

Azure CNI Overlay

Azure CNI Overlay assigns IP addresses from the user-defined overlay address space instead of using IP addresses from the VNET. It uses the routing of these private address spaces as a native virtual network feature. This means that cluster nodes do not need to perform any extra encapsulation to make the overlay container network work. This also allows this overlay addressing space to be reused for different AKS clusters even when connected via the same VNET.

When a node joins the AKS cluster, we assign a /24 IP address block (256 IPs) from the Pod CIDR to it. Azure CNI assigns IPs to Pods on that node from that block, and under the hood, the VNET maintains a mapping of the Pod CIDR block to the node. This way, when Pod traffic leaves the node, the VNET platform knows where to send the traffic. This allows the Pod overlay network to achieve the same performance as native VNET traffic and paves the way to support millions of pods across thousands of nodes.
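As a concrete illustration, overlay networking can be enabled when creating an AKS cluster through the Azure CLI. The resource group and cluster names below are placeholders, and the flag names reflect the Azure CLI at the time of writing:

```shell
# Create an AKS cluster that uses Azure CNI Overlay.
# The pod CIDR is a private range from which each node
# receives a /24 block (256 pod IPs).
az aks create \
  --resource-group myResourceGroup \
  --name myOverlayCluster \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16
```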

Datapath performance comparison

This section digs into some of the datapath performance comparisons we have been running against Azure CNI Overlay.

Note: We used the Kubernetes benchmarking tools available at kubernetes/perf-tests for this exercise. Comparisons can vary based on multiple factors, such as underlying hardware and node proximity within a datacenter, among others. Actual results might vary.

Azure CNI Overlay vs. VXLAN-based Overlay

As mentioned before, the only options for large clusters with many Nodes and many Pods are Azure CNI Overlay and BYO CNI. Here we compare Azure CNI Overlay with VXLAN-based overlay implementation using BYO CNI.

TCP Throughput – Higher is Better (19% gain in TCP Throughput)

Azure CNI Overlay showed a significant performance improvement over the VXLAN-based overlay implementation. We found that the overhead of encapsulating CNIs was a significant factor in performance degradation, especially as the cluster grows. In contrast, Azure CNI Overlay's native Layer 3 implementation of overlay routing eliminated the overhead of double encapsulation and showed consistent performance across various cluster sizes. In summary, Azure CNI Overlay is a highly viable solution for running production-grade workloads in Kubernetes.

Azure CNI Overlay vs. Host Network

This section covers how pod networking performs compared to node networking, and how the native L3 routing of pod networking benefits the Azure CNI Overlay implementation.

Azure CNI Overlay and Host Network have similar throughput and CPU usage results, and this reinforces that the Azure CNI Overlay implementation for Pod routing across nodes using the native VNET feature is as efficient as native VNET traffic.

TCP Throughput – Higher is Better (Similar to HostNetwork)

Azure CNI Overlay powered by Cilium: eBPF dataplane

Up to this point, we’ve only taken a look at Azure CNI Overlay benefits alone. However, through a partnership with Isovalent, the next generation of Azure CNI is powered by Cilium. Some of the benefits of this approach include better resource utilization via Cilium’s extended Berkeley Packet Filter (eBPF) dataplane, more efficient intra-cluster load balancing, Network Policy enforcement by leveraging eBPF over iptables, and more. To read more about Cilium’s performance gains through eBPF, see Isovalent’s blog post on the subject.

In Azure CNI Overlay Powered by Cilium, Azure CNI Overlay sets up the IP-address management (IPAM) and Pod routing, and Cilium provisions the Service routing and Network Policy programming. In other words, Azure CNI Overlay Powered by Cilium allows us to have the same overlay networking performance gains that we’ve seen thus far in this blog post plus more efficient Service routing and Network Policy implementation.
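For reference, here is a sketch of enabling the Cilium dataplane together with overlay networking via the Azure CLI. The cluster names are placeholders, and the dataplane flag has changed across CLI versions, so consult the current documentation:

```shell
# Create an AKS cluster with Azure CNI Overlay and the Cilium eBPF dataplane.
# On older CLI versions this flag was the preview option
# --enable-cilium-dataplane instead of --network-dataplane.
az aks create \
  --resource-group myResourceGroup \
  --name myCiliumCluster \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16 \
  --network-dataplane cilium
```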

It's great to see that Azure CNI Overlay powered by Cilium is able to provide even better performance than Azure CNI Overlay without Cilium. The higher pod to service throughput achieved with the Cilium eBPF dataplane is a promising improvement. The added benefits of increased observability and more efficient network policy implementation are also important for those looking to optimize their AKS clusters.

TCP Throughput – Higher is better

To wrap up, Azure CNI Overlay is now generally available in Azure Kubernetes Service (AKS) and offers significant improvements over other networking options in AKS, with performance comparable to Host Network configurations and support for linearly scaling the cluster. And pairing Azure CNI Overlay with Cilium brings even more performance benefits to your clusters. We are excited to invite you to try Azure CNI Overlay and experience the benefits in your AKS environment.

To get started today, visit the Azure CNI Overlay documentation.
Source: Azure

5 reasons to join us at Securely Migrate and Optimize with Azure

Did you know you can lower operating costs by 40 percent1 when you migrate Windows Server and SQL Server to Azure versus on-premises? Furthermore, you can improve IT efficiency and operating costs by 53 percent by automating management of your virtual machines in cloud and hybrid environments2. To maximize the value of your existing cloud investments, you can utilize tools like Microsoft Cost Management and Azure Advisor. A recent study showed that our customers achieve up to 34 percent reduction in Azure spend in the first year by using Microsoft Cost Management3. To learn more about how to achieve efficiency and maximize cloud value with Azure, join us at Securely Migrate and Optimize with Azure digital event on Wednesday, April 26, 2023, at 9:00 AM–11:00 AM Pacific Time.

When migrating to the cloud, consider that Windows Server and SQL Server perform best on Azure. Using managed Azure SQL Server in the cloud can help maximize performance and value. Azure SQL meets your mission-critical requirements up to 5 times faster and costs up to 93 percent less than AWS4. Additionally, you can cost-effectively retire legacy workloads that are reaching end-of-support by reducing your technical debt in a secure way with free Extended Security Updates on Azure for Windows Server 2012/R25. Plus, save up to 85 percent over the standard pay-as-you-go rate by bringing your Windows Server and SQL Server on-premises licenses to Azure6.

Maximize the value of your existing cloud investments by controlling cloud spend, improving workload efficiency, and optimizing workload costs. Microsoft Cost Management helps you understand your Azure bill, provides data analysis to costs, sets spending thresholds, and identifies opportunities for workload changes to optimize your costs. Azure Advisor provides personalized best practices for you to optimize your Azure workloads. Use guidance within the Cloud Adoption Framework and Azure Well-Architected Framework to ensure your teams follow Microsoft best practices for cost optimization throughout the cloud journey.
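As one hands-on starting point, Azure Advisor's cost recommendations can also be pulled from the Azure CLI; this sketch assumes you are already logged in to a subscription:

```shell
# List Azure Advisor cost-optimization recommendations
# for the current subscription, in a readable table.
az advisor recommendation list --category Cost --output table
```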

Drive market differentiation and emerge stronger with intelligent apps infused with AI. When you modernize using App Service, you get built-in infrastructure maintenance, security patching, and scaling so you can quickly build apps instead of managing infrastructure. Production-ready cloud AI services enable you to infuse intelligence into your cloud apps, drive new efficiency and market differentiation for common business processes, and unlock new scenarios with Azure AI.

Here are five reasons why you should attend this event:

Get expert guidance to gain efficiency by securely migrating and optimizing your Windows Server and SQL Server workloads.
Maximize cloud value with tools, resources, and expertise from Azure to optimize your existing cloud investments.
See demos with step-by-step guidance on how to stay secure and manage complex hybrid IT environments.
Get a walkthrough of tools for self-guided migration including how to discover, assess, and migrate with Azure Migrate.
Ask the experts by posting your questions during the live chat Q&A. This event features a live forum so you can exchange questions and answers with subject matter experts.

Learn more

Learn more from Azure experts on how to increase efficiency and maximize the value of your Windows Server and SQL Server investments. Discover best practices and tips to migrate, optimize, and modernize your infrastructure, apps, and data in the cloud with Azure. Hear customer success stories and learn how to make a business case for migration. And get hands-on experience with demos on how to discover, assess, and start migrating your Windows Server and SQL Server workloads. Register for Securely Migrate and Optimize with Azure free digital event today and join us on Wednesday, April 26, 2023, 9:00 AM–11:00 AM Pacific Time.

Source:

1 The Business Value of Microsoft Azure for Windows Server and SQL Server Workloads

2 The Business Value of Migrating and Modernizing with Azure

3 The Total Economic Impact™ of Microsoft Cost Management and Billing

4 Microsoft Azure SQL Managed Instance (principledtechnologies.com)

5 Free Extended Security Updates for Windows Server 2012/R2, only on Azure

6 Azure Hybrid Benefit
Source: Azure

Manage your APIs with Azure API Management’s self-hosted gateway v2

Our industry has seen an evolution in how we run software. Traditionally, platforms ran in on-premises datacenters and then began transitioning to the cloud. However, not all workloads can move, and some customers want resiliency across clouds and the edge, which has introduced multi-cloud scenarios.

With our self-hosted gateway capabilities, customers can use our existing tooling to extend to their on-premises and multi-cloud APIs with the same role-based access controls, API policies, observability options, and management plane that they are already using for their Azure-based APIs.

New to the self-hosted gateway, how does it work?

When deploying an Azure API Management instance in Azure, customers get three main building blocks:

A developer portal (also called user plane) for allowing internal and external users to find documentation, test APIs, get access to APIs, and see basic usage data among other features.
An API gateway (also called data plane), which contains the main networking component that exposes API implementations, applies API policies, secures APIs, and captures metrics and logs of usage among other features.
Finally, a management plane, which is used through the Azure portal, Azure Resource Manager (ARM), Azure Software Development Kits (SDKs), Visual Studio and Code extensions, and command-line interfaces (CLIs), and which allows you to manage and enforce permissions to the other components. Examples of this are setting up APIs, configuring the infrastructure, and defining policies.

Figure 1: Architecture diagram depicting the components and features of Azure API Management Gateway.

In the case of the self-hosted gateway, we provide customers with a container image that hosts a version of our API Gateway. Customers can run multiple instances of this API Gateway in non-Azure environments and the only requirement is to allow outbound communications to the Management Plane of an Azure API Management instance to fetch configuration and expose APIs running in those non-Azure environments.
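For illustration, here is a minimal sketch of running the self-hosted gateway container with Docker. The gateway name, image tag, and token are placeholders, and the endpoint format should be verified against your API Management instance:

```shell
# env.conf supplies the configuration endpoint and access token
# copied from the API Management instance (placeholder values):
#   config.service.endpoint=https://<apim-name>.configuration.azure-api.net
#   config.service.auth=GatewayKey <access-token>

docker run -d \
  --name apim-self-hosted-gateway \
  -p 8080:8080 -p 8081:8081 \
  --env-file env.conf \
  mcr.microsoft.com/azure-api-management/gateway:2.0.0
```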

Figure 2: Architecture diagram depicting the components of a distributed API Gateway solution using the self-hosted gateway.

Supported Azure API Management tiers

The self-hosted gateway v2 is now generally available and fully supported. However, the following conditions apply:

You need an active Azure API Management instance; this instance should be on the Developer tier or Premium tier.

In the Developer tier, the feature is free for testing, limited to one active gateway instance.
In the Premium tier, you can run as many instances as you want. Learn more about pricing at our pricing table.

Azure API Management will always provision an API Gateway in Azure, which we typically call our managed API gateway.

Be aware that there are differences in features between our various API gateway offerings. Learn more about the differences in our documentation.

Pricing and gateway deployment

In the case of the self-hosted gateway, we define a self-hosted gateway by assigning a name to our gateway, a location (which is a logical grouping that aligns with your business, not an Azure region), a description, and finally the APIs we want to expose on this gateway. This allows us to physically isolate APIs at the gateway level, which is currently only possible with the self-hosted gateway. This combination of location, APIs, and hostname is what defines a self-hosted gateway deployment; this “self-hosted gateway deployment” should not be confused with a Kubernetes “deployment” object.

For example, using a single deployment, where the same APIs are configured in all locations:

Figure 3: Architecture diagram describing the pricing model for a single deployment of a self-hosted gateway.

However, you can also create multiple self-hosted gateway deployments to have more granular control over the different APIs that are being exposed:

Figure 4: Architecture diagram describing the pricing model for two deployments of a self-hosted gateway.

Supportability and shared responsibilities

Another important aspect is support. In the case of the self-hosted gateway, the infrastructure is not necessarily managed by Azure, so as a customer you have more responsibilities to ensure the proper functioning of the gateway:

Microsoft Azure

Managed service level agreements (SLAs) for the management plane, access to configuration, and the ability to receive telemetry.
Gateway maintenance: bug fixes and patches to the container image.
Gateway updates: performance and functional improvements to the container image.

Shared responsibilities

Securing self-hosted gateway communication with the configuration endpoint: the communication between the self-hosted gateway and the configuration endpoint is secured by an access token. This token expires automatically every 30 days and needs to be updated for the running containers.

Customers

Gateway hosting: deploying and operating the gateway infrastructure, such as virtual machines with a container runtime or Kubernetes clusters.
Network configuration, necessary to maintain management plane connectivity and API access.
Gateway SLA, capacity management, scaling, and uptime.
Keeping the gateway up to date: regularly updating the gateway to the latest version and latest features.
Providing diagnostics data to support: collecting and sharing diagnostics data with support engineers.
Third-party open-source software (OSS) components: additional layers such as Prometheus, Grafana, service meshes, container runtimes, Kubernetes distributions, and proxies are the customer's responsibility.

New features and capabilities of v2 and v1 retirement

When using the latest versions of our v2 container image, tag 2.0.0 or higher, you can use the following features:

OpenTelemetry metrics: the self-hosted gateway can be configured to automatically collect and send metrics to an OpenTelemetry Collector. This allows you to bring your own metrics collection and reporting solution for the self-hosted gateway. Here you can find a list of supported metrics.
New image tagging: we provide four tagging strategies to meet your needs regarding updates, stability, patching, and production environments.
Helm chart: a new deployment option with multiple variables for you to configure at deployment time, such as backups, logs, OpenTelemetry, ingress, probes, and Distributed Application Runtime (Dapr) configurations. This Helm chart, together with our sample YAML files, can be used for automated deployments with continuous integration and continuous delivery (CI/CD) tools or even GitOps tools.
Artifact registry: you can find all the container images provided by Microsoft in our centralized Microsoft Artifact Registry.
New Event Grid events: a new batch of supported Event Grid events related to self-hosted gateway operations and configurations. The full list of events can be found here.
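The Helm-based deployment mentioned above can be sketched as follows. The chart repository URL and value names below come from the public documentation, but you should verify them against the chart's current values file; the endpoint and token are placeholders:

```shell
# Add the self-hosted gateway Helm repository and install the chart,
# pointing the gateway at its API Management configuration endpoint.
helm repo add azure-apim-gateway \
  https://azure.github.io/api-management-self-hosted-gateway/helm-charts/
helm repo update
helm install apim-gateway azure-apim-gateway/azure-api-management-gateway \
  --set gateway.configuration.uri='<apim-name>.configuration.azure-api.net' \
  --set gateway.auth.key='GatewayKey <access-token>'
```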

Please remember that we will be retiring support for the v1 version of our self-hosted gateway, so this is the perfect time to upgrade to v2. We also provide a migration guide and a guide for running the self-hosted gateway in production.
Source: Azure

How 5G and wireless edge infrastructure power digital operations with Microsoft

As enterprises continue to adopt Internet of Things (IoT) solutions and AI to analyze processes and data from their equipment, the need for high-speed, low-latency wireless connections is growing rapidly. Companies are already seeing benefits from deploying private 5G networks to enable their solutions, especially in the manufacturing, healthcare, and retail sectors.

The potential of 5G and multi-access edge computing (MEC) has evolved substantially. As they are fully ready to enable the next generation of digital operations, it is important to highlight some recent successful deployments that provide high speeds and ultra-low latency.

These findings have been included in the latest Digital Operations Signals report. Where our previous industry trends report, IoT Signals, gave audiences insight into IoT, we thought it was important for this latest report to go beyond IoT and into the world of digital operations. The report now encompasses the business outcomes that organizations are pursuing to unlock the next level of improvements in efficiency, agility, and sustainability in their physical operations utilizing AI, machine learning, digital twins, 5G, and more.

As 5G connections and mobile edge computing continue to advance, so does the demand for its adoption. Interestingly, the Digital Operations Signals report found that cloud radio access networks (C-RAN), private Wi-Fi networks, and MEC technologies are not just continuing to develop, but they are also likely to converge. This means we could see more unified on-site network architectures with faster, more powerful computing.

What can 5G infrastructure deliver?

Traditionally, local connectivity in business sites—such as hospitals, clinics, warehouses, and factories—was provided by Ethernet and Wi-Fi. While Wi-Fi is still in common use for enterprise on-premises connections, it doesn’t always offer the bandwidth, latency, security, and reliability needed for demanding IoT solutions, particularly in rugged operational environments. The wider availability of 5G connectivity is spurring growth in new edge solutions and an increasing number of IoT device connections. It is now possible to achieve higher throughput, with latency as low as 100 milliseconds or less for a device to respond to a hosting server’s request.

But the adoption of 5G is more than just a network upgrade. Instead, it’s ushering in a new category of network-intelligent applications that can solve problems that were once out of reach. With 5G, you can deploy edge applications based on cloud-native distributed architecture for solutions that demand low latency and dedicated quality of service. By using 5G and leveraging APIs to interact with networks, these applications can deliver high-performing, optimized experiences.

How is 5G being used by enterprises today?

In factory settings, for example, AI requires low latency to improve control processes and robotic systems, recognize objects through advanced computer vision, and effectively manage warehouse and supply chain operations. In this scenario, 5G and MEC can help power computer vision-assisted product packing and gather near-real-time data on any mistakes. This opens the potential to improve on-site quality assurance for logistics and supply chain companies and reduce processing times.

In healthcare, 5G connections support AI’s use in medical diagnoses, health monitoring, predictive maintenance and monitoring of medical systems, and telemedicine applications. In retail operations, low-latency connections allow AI to help with real-time inventory management, in-store video traffic, and in-store real-time offers.

The 5G architecture consists of three different network tiers—low band, midband, and millimeter wave (mmWave) high band—that offer different advantages and disadvantages in coverage distances and speed. Additionally, key 5G services specialize in providing different features:

Enhanced mobile broadband (eMBB): By defining a minimum level of data transfer rate, eMBB can provide ultra-high wireless bandwidth capabilities, handling virtual reality, computer vision, and large-scale video streaming.
Massive machine-type communications (mMTC): Designed for industrial scenarios and other environments requiring numerous devices to be connected to each other, mMTC could be used with IoT solutions or large spaces with a variety of devices that would need to communicate together.
Ultra-reliable low-latency communications (URLLC): This is designed for use cases that require extremely low latency and high reliability. This would benefit situations where responsiveness is critical, such as public safety and emergency response uses, remote healthcare, industrial automation, smart energy grids, and controlling autonomous vehicles.

Using these services to achieve high speeds and performance, however, requires businesses to upgrade network technology and update their older wireless and edge architectures. To help overcome these challenges, enterprises are turning to the right combination of hardware, software, and cloud services that can optimize 5G at the edge.

How are Microsoft and Intel empowering 5G solutions?

Microsoft and Intel understand the many challenges that enterprises face. By working with telecom hyperscalers, independent solution providers, and other partners, we are providing 5G infrastructure and network services that are easily adaptable for use cases in many sectors. Azure private multi-access edge compute (MEC) helps operators and system integrators simplify the delivery of ultra-low-latency solutions over 4G and 5G networks. By reducing integration complexity, enterprises can innovate new solutions and generate new revenue streams.

Intel has designed a range of hardware to power 5G edge network activities and improve content transmission and processing. By providing foundational technology to run 5G, they are working to help standardize and simplify its use and create more unified edge applications and services. By helping customers securely and efficiently deploy 5G across industries, they can reap the benefits of 5G without complicated or extended timelines.

Learn more about 5G at the edge

For the manufacturing industry, 5G can bring compute power closer to challenges that need to be solved. While 5G adoption is still in its early stages in many industries, Microsoft and Intel are advancing the evolution and growing deployment of 5G and supporting the development of new solutions and use cases with their hardware, software, and services.

For additional insights on the current trends and recent findings, check out the Digital Operations Signals report.

We also have smart factory use cases available, and you can download the business use case and technical use case for more information on the value drivers, total cost of ownership, and technical design. Enterprises interested in any of the solutions listed above can contact our partners via Azure Marketplace, or contact the Azure private MEC team.

Finally, to learn more about how Microsoft is helping organizations adopt 5G with connected applications, sign up for news and updates delivered to your inbox.
Source: Azure

Announcing Project Health Insights Preview: Advancing AI for health data

We live in an era of unprecedented growth in health data. The digitization of medical records, medical imaging, genomic data, clinical notes, and more has all contributed to an exponential increase in the amount of medical data. The potential benefit of leveraging this health data is enormous. However, with this growth come new challenges, including a heightened focus on data privacy and security and the need for data standardization and interoperability. There is a need for effective tools that can extract the information buried in this data and use it to derive valuable insights, inferences, and deep analytics that make sense of the data and support clinicians.

Today, I’m excited to announce Project Health Insights Preview. Project Health Insights is a service that derives insights from patient data and includes pre-built models that aim to power key, high-value scenarios in the health domain. The models receive patient data in different modalities, perform analysis, and enable clinicians to obtain inferences and insights with evidence from the input data. These insights can assist healthcare professionals in understanding clinical data and support scenarios such as patient profiling, clinical trial matching, and more.

Project Health Insights—leveraging patient data to power actionable insights

Project Health Insights supports pre-built models that receive patient data in multiple modalities as their input, and produce insights and inferences that include:

Confidence scores: the higher the confidence score, the more certain the model is about the inference value provided.
Evidence: links between model output and specific evidence within the input provided, such as references to spans of text reflecting the data that led to an insight.
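As a sketch of how a client might consume such output (Python; the payload shape, field names, and values below are invented for illustration and are not the service's documented schema), inferences can be filtered by confidence and paired with their supporting evidence:

```python
# Hypothetical inference payload shaped like the description above:
# each inference carries a value, a confidence score, and evidence
# pointing back to spans of the input clinical text.
response = {
    "inferences": [
        {"type": "tumorSite", "value": "lung", "confidenceScore": 0.94,
         "evidence": [{"text": "mass in the right upper lobe", "offset": 112, "length": 28}]},
        {"type": "histology", "value": "adenocarcinoma", "confidenceScore": 0.61,
         "evidence": [{"text": "cells suggestive of adenocarcinoma", "offset": 240, "length": 34}]},
    ]
}

def confident_inferences(payload, threshold=0.8):
    """Keep only inferences at or above the confidence threshold,
    pairing each inference value with its supporting evidence text."""
    results = []
    for inf in payload["inferences"]:
        if inf["confidenceScore"] >= threshold:
            spans = [ev["text"] for ev in inf.get("evidence", [])]
            results.append((inf["type"], inf["value"], spans))
    return results

for kind, value, spans in confident_inferences(response):
    print(f"{kind}: {value} (evidence: {'; '.join(spans)})")
```

A real application would tune the threshold per scenario and route low-confidence inferences to a clinician for review rather than discarding them.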

Project Health Insights Preview includes two enterprise-grade AI models that can be provisioned and deployed in a matter of minutes: Oncology Phenotype and Clinical Trial Matcher.

Oncology Phenotype is a model that enables healthcare providers to rapidly identify key cancer attributes within their patient populations with an existing cancer diagnosis. The model identifies cancer attributes such as tumor site, histology, clinical stage tumor, nodes, and metastasis (TNM) categories, and pathologic stage TNM categories from unstructured clinical documents.

Key features of the Oncology Phenotype model include:

Cancer case finding.
Clinical text extraction for solid tumors.
Importance ranking of evidence.

Clinical Trial Matcher is a model that matches patients to potentially suitable clinical trials, according to the trial’s eligibility criteria and patient data. The model helps find relevant clinical trials that patients may qualify for, as well as identify a cohort of potentially eligible patients for a list of clinical trials.

Key Features of the Clinical Trial Matcher model include:

Support for scenarios that are:

Patient Centric: Helping patients find potentially suitable clinical trials and assess their eligibility against the trials’ criteria.
Trial Centric: Matching a trial with a database of patients to locate a cohort of potentially suitable patients.

Interactive Matching, where the model provides insights into missing information that is needed to further narrow down the potential clinical trial list via an interactive experience.
Support for various modalities of patient data such as unstructured clinical notes, structured patient data, and Fast Healthcare Interoperability Resources (FHIR®) bundles.
Support for search across built-in knowledge graphs for clinical trials from clinicaltrials.gov as well as against a custom trial protocol with specific eligibility criteria.
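Conceptually, the patient-centric and trial-centric scenarios above are the same eligibility check run in opposite directions. The Python sketch below illustrates only this idea; the criteria model, trial IDs, and function names are invented for the example and do not reflect the service's actual API or how it evaluates real eligibility criteria:

```python
# Toy eligibility model: a trial's criteria are simple predicates over
# structured patient data; both matching scenarios reuse the same check.
trials = {
    "TRIAL-A": {"min_age": 18, "max_age": 65, "condition": "NSCLC"},
    "TRIAL-B": {"min_age": 50, "max_age": 80, "condition": "melanoma"},
}
patients = {
    "p1": {"age": 62, "condition": "NSCLC"},
    "p2": {"age": 55, "condition": "melanoma"},
}

def is_eligible(patient, criteria):
    """Return True when the patient satisfies every criterion."""
    return (criteria["min_age"] <= patient["age"] <= criteria["max_age"]
            and patient["condition"] == criteria["condition"])

def patient_centric(patient_id):
    """Find trials a single patient may potentially qualify for."""
    p = patients[patient_id]
    return [t for t, c in trials.items() if is_eligible(p, c)]

def trial_centric(trial_id):
    """Find a cohort of potentially eligible patients for one trial."""
    c = trials[trial_id]
    return [p for p, data in patients.items() if is_eligible(data, c)]

print(patient_centric("p1"))   # trials matching patient p1
print(trial_centric("TRIAL-B"))  # cohort for TRIAL-B
```

Real trial criteria are far richer (free-text inclusion/exclusion rules, lab values, prior treatments), which is why the service combines language understanding with structured data rather than simple predicates like these.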

Streamlining clinical trial matching and cancer research

According to the World Health Organization, the number of registered clinical trials increased by more than 4,800 percent from 1999 to 2021. Today there are more than 82,000 clinical trials actively recruiting participants worldwide (based on clinicaltrials.gov), with increasingly complicated trial eligibility criteria. However, enrollment in clinical trials relies on manual screening of millions of patients, each with up to hundreds of clinical notes requiring review and analysis by a healthcare professional, making it an unsustainable process. Given this, it is not surprising that up to 80 percent of clinical trials miss their enrollment timelines, and up to 48 percent fail to meet enrollment targets, according to data provided by Tufts University. The Clinical Trial Matcher model aims to solve this exact problem by effectively matching patients with diverse conditions to clinical trials for which they are potentially eligible, through analysis of patients’ data and the complex eligibility criteria of clinical trials.

The Oncology Phenotype model allows physicians to effectively analyze cancer patients’ data based on their tumor site, tumor histology, and cancer staging. These models deliver crucial building blocks to realize the goals set out by the White House Cancer Moonshot initiative: to develop and test new treatments, to share more data and knowledge, to collaborate on tools that can benefit all, and to make progress towards ending cancer as we know it.

Providing value across the health and life sciences industry

Johns Hopkins University Medical Center is an early user of Project Health Insights. Dr. Srinivasan Yegnasubramanian is using the Oncology Phenotype model to leverage unstructured data to accelerate Cancer Registry curation efforts for patients with solid tumors.

Pangaea Data is a Microsoft partner working in health AI. “At Pangaea Data we help companies discover 22 times more undiagnosed, misdiagnosed, and miscoded patients by characterizing them through unlocking and summarization of clinically valid actionable intelligence from patient records in a federated privacy-preserving, scalable, and evolving manner. We are exploring using Project Health Insights to augment our own advanced capabilities for characterizing patients.”—Vibhor Gupta, Director and Founder, Pangaea Data.

Akkure Genomics helps patients utilize their own genomic data or DNA to improve their chances of finding a clinical trial. “At AKKURE GENOMICS we leverage Project Health Insights, which empowers our own AI and digital DNA platform capabilities, to help patients get matched to clinical trials based on their individual medical diagnoses, thus boosting enrollment, improving the chances of finding a precision-matched trial and accelerating discovery of new therapeutics and cures.”—Professor Oran Rigby, Chief Engineering Officer and Founder, Akkure.

Built with the end user in mind

Initial models were validated in a research setting through a strategic partnership between Microsoft and Providence to accelerate digital transformation in health and life sciences. These models can enable oncologists to substantially scale up their precision oncology capabilities and generate intelligence and insights useful to clinicians as well as beneficial to patients.

“Microsoft’s ability to structure complex concepts with their natural language processing tools for cancer has contributed significantly to our ability to build research cohorts and discuss cancer treatment options.”—Dr. Carlo Bifulco, Chief Medical Officer, Providence Genomics.

Microsoft will continue to expand capabilities within Project Health Insights to support additional health workloads and enable insights that will guide key decision-making in healthcare.

Microsoft continues to grow its portfolio of AI services for health

Microsoft continues to invest in AI services for the health and life sciences industry. Along with other new offerings in the Microsoft Cloud for Healthcare, we are pleased to announce new enhancements to Text Analytics for Health (TA4H).

The new enhancements include:

Social Determinants of Health (SDoH) and Ethnicity information extraction. The newly introduced SDoH and Ethnicity features enable extraction of social, environmental, and demographic factors from unstructured text. These factors will empower the development of more inclusive healthcare applications. Read more about it in our blog.
Temporal assertions (past, present, and future): the ability to identify the temporal context of TA4H entities, whether in the past, present, or future.

Customers can now also extend TA4H to support custom entities based on their own data, expanding the set of entities extracted by the service.
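To show why temporal assertions matter downstream, here is a minimal local sketch (Python; the entity structure mimics the shape of a health-entity extraction result but is mocked for illustration, so the field names are assumptions, not the documented TA4H response schema) that buckets extracted entities by their temporal context:

```python
from collections import defaultdict

# Mocked entities shaped like a health-entity extraction result, each
# tagged with a temporal assertion (past, present, or future).
entities = [
    {"text": "hypertension", "category": "Diagnosis", "temporality": "past"},
    {"text": "metformin", "category": "MedicationName", "temporality": "present"},
    {"text": "colonoscopy", "category": "TreatmentName", "temporality": "future"},
]

def group_by_temporality(ents):
    """Bucket extracted entities by temporal context so downstream logic
    can treat patient history, current state, and planned care differently."""
    buckets = defaultdict(list)
    for e in ents:
        buckets[e["temporality"]].append(e["text"])
    return dict(buckets)

print(group_by_temporality(entities))
```

Separating "patient had hypertension" from "patient is scheduled for a colonoscopy" is exactly the distinction the temporal-assertion feature enables; without it, a naive entity list would conflate history with active conditions.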

We are also excited to share that Azure Health Bot now has a new Azure OpenAI template in preview. The Azure Health Bot OpenAI template allows customers to extend their Azure Health Bot instance with Azure OpenAI Service for answering unrecognized utterances in a more intelligent way. This feature will be enabled through the Azure Health Bot template catalogue. Customers can choose to import this template into their bot instance using their Azure OpenAI resource endpoint and key, enabling fallback answers generated by GPT from trusted, medically viable sources that can be provisioned by customers. This feature provides a mechanism for customers to experiment with this capability as preview.1 Read more about this and how to apply responsible AI principles when implementing your own Health Bot instance in this blog.

We look forward to what the coming years will bring for the health and life sciences industry empowered by these new capabilities and the continued innovation we are seeing across AI and machine learning. The potential for improved precision care, quicker and more efficient clinical trials, and thereby drug and therapy availability and medical research is unparalleled. Microsoft looks forward to partnering with you and your organizations on this journey to improve the health of humankind.

Learn more

Project Health Insights Preview
Text analytics for Health
Azure Health Bot
Microsoft Cloud for Healthcare

1 At this time, we are offering the preview for internal testing and evaluation purposes only.

®FHIR is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office, and is used with permission.
Source: Azure

The future of healthcare is data-driven

As analytics tools and machine learning capabilities mature, healthcare innovators are speeding up the development of enhanced treatments supported by Azure’s GPU-accelerated AI infrastructure powered by NVIDIA.

Improving diagnosis and elevating patient care

Humanity’s search for cures and treatments for common ailments has driven millennia of healthcare innovation. From the use of traditional medicine in early history to the rapid medical advances of the past few centuries, healthcare providers are locked in a constant search for effective solutions to old and emerging diseases and conditions.

The pace of healthcare innovation has increased exponentially over the past few decades, with the industry absorbing radical changes as it transitions from a health care to a health cure society. From telemedicine, personalized wellbeing, and precision medicine to genomics and proteomics, all powered by AI and advanced analytics, modern medical researchers can access more supercomputing capabilities than ever before. This quantum leap in computational capability, powered by AI, enables healthcare services dissemination and consumption in ways, and at a pace, that were previously unimaginable.

Today, health and life sciences leaders leverage Microsoft Azure high-performance computing (HPC) and purpose-built AI infrastructure to accelerate insights into genomics, precision medicine, medical imaging, and clinical trials, with virtually no limits to the computing power they have at their disposal. These advanced computing capabilities are allowing healthcare providers to gain deeper insights into medical data by deploying analytics and machine learning tools on top of clinical simulation data, increasing the accuracy of mathematical formulas used for molecular dynamics and enhancing clinical trial simulation.

By utilizing the infrastructure as a service (IaaS) capabilities of Azure HPC and AI, healthcare innovators can overcome the challenges of scale, collaboration, and compliance without adding complexity. And with access to the latest GPU-enabled virtual machines, researchers can fuel innovation through high-end remote visualization, deep learning, and predictive analytics.

Data scalability powers rapid testing capabilities

Take the example of the National Health Service, where the use of Azure HPC and AI led to the development of an app that could analyze COVID-19 tests at scale, with a level of accuracy and speed that is simply unattainable for human readers. This drastically improved the efficiency and scalability of analysis as well as capacity management.

Another advance worth noting is that with Dragon Ambient Experience (DAX), an AI-based clinical solution offered by Nuance, doctor-patient experiences are optimized through the digitization of patient conversations into highly accurate medical notes, helping ensure high-quality care. By freeing up time for doctors to engage with their patients in a more direct and personalized manner, DAX improves the patient experience and reduces patient stress.

“With support from Azure and PyTorch, our solution can fundamentally change how doctors and patients engage and how doctors deliver healthcare.”—Guido Gallopyn, Vice President of Healthcare Research at Nuance.

Another exciting partnership between Nuance and NVIDIA brings medical imaging AI models developed with MONAI, a domain-specific framework for building and deploying imaging AI, directly into clinical settings. By providing healthcare professionals with much-needed AI-based diagnostic tools, across modalities and at scale, medical centers can optimize patient care at a fraction of the cost of traditional healthcare solutions.

“Adoption of medical imaging AI at scale has traditionally been constrained by the complexity of clinical workflows and the lack of standards, applications, and deployment platforms. Our partnership with Nuance clears those barriers, enabling the extraordinary capabilities of AI to be delivered at the point of care, faster than ever.”—David Niewolny, Director of Healthcare Business Development at NVIDIA.

GPU-accelerated virtual machines are a healthcare game changer

In the field of medical imaging, progress relies heavily on the use of the latest tools and technologies to enable rapid iterations. For example, when Microsoft scientists sought to improve on a state-of-the-art algorithm used to screen blinding retinal diseases, they leveraged the power of the latest NVIDIA GPUs running on Azure virtual machines.

Using Microsoft Azure Machine Learning for computer vision, scientists reduced misclassification by more than 90 percent, from 3.9 percent to a mere 0.3 percent. Deep learning model training was completed in 10 minutes over 83,484 images, achieving better performance than a state-of-the-art AI system. These are the types of improvements that can assist doctors in making more robust and objective decisions, leading to improved patient outcomes.

For radiotherapy innovator Elekta, the use of AI could help expand access to life-saving treatments for people around the world. Elekta believes AI technology can help physicians by freeing them up to focus on higher-value activities such as adapting and personalizing treatments. The company accelerates the overall treatment planning process for patients undergoing radiotherapy by automating time-consuming tasks such as advanced analysis services, contouring targets, and optimizing the dose given to patients. In addition, they rely heavily on the agility and power of on-demand infrastructure and services from Microsoft Azure to develop solutions that help empower their clinicians, facilitating the provision of the next generation of personalized cancer treatments.

Elekta uses Azure HPC powered by NVIDIA GPUs to train its machine learning models with the agility to scale storage and compute resources as its research requires. Through Azure’s scalability, Elekta can easily launch experiments in parallel and initiate its entire AI project without any investment in on-premises hardware.

“We rely heavily on Azure cloud infrastructure. With Azure, we can create virtual machines on the fly with specific GPUs, and then scale up as the project demands.”—Silvain Beriault, Lead Research Scientist at Elekta.

With Azure high-performance AI infrastructure, Elekta can dramatically increase the efficiency and effectiveness of its services, helping to reduce the disparity between the many who need radiotherapy treatment and the few who can access it.

Learn more

Leverage Azure HPC and AI infrastructure today or request an Azure HPC demo.

Read more about Azure Machine Learning:

Multimodal 3D Brain Tumor Segmentation with Azure ML and MONAI.
Practical Federated Learning with Azure Machine Learning.

Source: Azure

Azure Space technologies advance digital transformation across government agencies

Since its launch, Microsoft Azure Space has been committed to enabling people to achieve more, both on and off the planet. This mission has transcended various industries, including agriculture, finance, insurance, and healthcare.

The announcements we’ve made thus far have helped showcase how our mission encompasses not only commercial industries but also government missions through recent contract wins. By adopting new commercial technologies such as Microsoft 365, Azure Government Cloud, and Azure Orbital, government agencies are increasing the speed, flexibility, and agility of their missions. Today, we are announcing additional momentum, including:

Viasat RTE integration with Azure Orbital Ground Station, bringing high-rate, low-latency downlink data streaming from spacecraft directly to Azure.
A partnership with Ball Aerospace and Loft Federal on the Space Development Agency’s (SDA) National Defense Space Architecture Experimental Testbed (NExT) program, which will bring 10 satellites with experimental payloads into orbit and provide the associated ground infrastructure.
Advancements on the Hybrid Space Architecture for the Defense Innovation Unit, U.S. Space Force and Air Force Research Lab, with new partners and demonstrations that showcase the power, flexibility, and agility of commercial hybrid systems that work across multi-path, multi-orbit, and multi-vendor cloud enabled resilient capabilities.
Azure powers Space Information Sharing and Analysis Center (ISAC) to deliver Space cybersecurity and threat intelligence operating capabilities. The watch center’s collaborative environment provides visualization of environmental conditions and threat information to rapidly detect, assess and respond to space weather events, vulnerabilities, incidents, and threats to space systems.

Viasat Real-Time Earth general availability on Azure Orbital Ground Station

Microsoft has partnered with Viasat Real-Time Earth (RTE) to offer customers new real-time capabilities to manage spacecraft and missions with Azure Orbital Ground Station as a service. This includes the ability to view, schedule, and modify passes at Viasat RTE sites for downlinking data to Azure, and to bring real-time streaming directly to Azure across the secure Microsoft WAN.

As commercial satellite operators require increasingly higher downlink rates to bring mission data such as hyperspectral or synthetic aperture radar imagery into Azure, this partnership with Viasat increases the opportunity to access an established global network of Ka-band antennas. This unlocks new business opportunities for missions that require fast time to insight whilst also maintaining a high level of security.

“Viasat Real-Time Earth is enabling remote sensing satellite operators who are pushing the envelope of high-rate downlinks. Our strong relationship with Azure Orbital enables those same customers, through increased access to our ground service over the Azure Orbital marketplace and a dependable, high-speed terrestrial network, to reduce the time it takes to downlink and deliver massive amounts of data.”—John Williams, Vice President Viasat Real-Time Earth.

Learn more about the power of Azure Orbital Ground Station and how it can unlock new missions by bringing real-time data from space to Earth.

True Anomaly

True Anomaly delivers a fully integrated technology platform that combines training and simulation tools, advanced spacecraft manufacturing infrastructure and autonomy systems to revolutionize space security and sustainability.

True Anomaly is using the Viasat RTE integration with Azure Orbital Ground Station via Microsoft APIs today to advance their business with the government.

"Together, True Anomaly, Viasat, and Microsoft will employ cutting-edge modeling, simulation, and visualization tools available to train Space Force guardians and other operators. Our partnership will extend to satellite control, leveraging Microsoft Azure Orbital to provide seamless and efficient satellite management solutions for our fleet of Autonomous Orbital Vehicles. By joining forces, we unlock a path to disrupt space operations and training for years to come."— Even Rogers, Co-founder and CEO of True Anomaly.

This partnership combines True Anomaly's innovative Mission Control System with Microsoft’s Azure Orbital and Viasat, offering a seamless satellite management solution for space security operations and training.

Microsoft, Loft Federal, and Ball Aerospace partner on Space Development Agency NExT

The Space Development Agency is charged with creating and sustaining effective and affordable military space capabilities that provide persistent, resilient, global, low-latency surveillance. The National Defense Space Architecture Experimental Testbed (NExT) program will carry 10 satellites with experimental payloads into orbit.

SDA NExT builds upon Microsoft’s Azure Space products and partnerships. Central to Microsoft’s solution for NExT is the combination of Azure Orbital Ground Station and Azure Government air-gapped clouds which will allow SDA to do their mission work in a secure cloud environment. 

Through NExT, the SDA and US Space Force will together securely operate a government-owned satellite constellation with Azure Orbital Ground Station’s global network for the first time. Additionally, Microsoft 365 will provide personnel with productivity tools to share information, helping ensure a coordinated response.

Microsoft Azure Government cloud will enable SDA to extract spaceborne data insights from the cloud to the ultimate edge, scale innovation faster, better meet the critical needs of Guardians, and strengthen national security.

New advancements and partnerships for Hybrid Space Architecture

Last year, we announced our contract supporting the Department of Defense's (DoD) Defense Innovation Unit (in partnership with United States Space Force and Air Force Research Lab) on the Hybrid Space Architecture (HSA). The goal of the program is to bring our advanced, trusted cloud, and innovative Azure Space capabilities, alongside a space partner ecosystem, to serve as a foundation to realize their Hybrid Space Architecture vision.

This year, Microsoft completed the first demonstration for the program focused on resilient communication and data paths which showcased:

Multi-orbit, multi-vendor, resilient, edge-to-cloud connectivity including use of Azure Orbital Cloud Access through satellite communications partner SpaceX and SES.
SpatioTemporal Asset Catalogs (STAC) standards for operating a private Planetary Computer to efficiently manage large geospatial datasets and enable space vehicle tasking across multiple providers.
AI-enabled field user application to allow users to rapidly and easily discover and task satellite collection through an intuitive chat interface.
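To give a flavor of the STAC approach described above (a simplified local sketch; a real deployment would query a STAC API endpoint such as a private Planetary Computer instance, and the items below are invented), geospatial assets are cataloged as items with standard `bbox` and `datetime` fields that clients filter on:

```python
from datetime import datetime

# Simplified STAC-style items: each asset carries a bounding box
# (west, south, east, north) and an acquisition timestamp.
items = [
    {"id": "scene-1", "bbox": [-105.0, 39.5, -104.5, 40.0],
     "datetime": datetime(2023, 3, 1)},
    {"id": "scene-2", "bbox": [10.0, 45.0, 10.5, 45.5],
     "datetime": datetime(2023, 3, 20)},
]

def intersects(bbox, point):
    """Check whether a (lon, lat) point falls inside a bounding box."""
    west, south, east, north = bbox
    lon, lat = point
    return west <= lon <= east and south <= lat <= north

def search(items, point, start, end):
    """Return item ids whose footprint covers the point and whose
    acquisition time falls within the requested window."""
    return [i["id"] for i in items
            if intersects(i["bbox"], point) and start <= i["datetime"] <= end]

print(search(items, (-104.8, 39.7), datetime(2023, 2, 1), datetime(2023, 3, 10)))
```

Because every provider exposes the same item shape, the same spatiotemporal query can fan out across multiple vendors' catalogs, which is what makes STAC useful for multi-provider tasking.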

Microsoft is committed to a strong, and growing, partner ecosystem. As part of this first demonstration, the Hybrid Space Architecture ecosystem included the capabilities from Umbra and BlackSky.

Future demonstrations will incorporate all Azure Space capabilities including Azure Orbital Cloud Access, Azure Orbital Ground Station, Azure Orbital Space SDK, our leading security solutions, and vast threat intelligence, as well as multiple leading space partners.

Azure powers ISAC to deliver Space cybersecurity and threat intelligence operating capabilities

As a society, our increased reliance on space-based systems across the commercial, government, and critical infrastructure sectors underscores the importance of sharing threat intelligence to safeguard space infrastructure, which supports billions of people globally.

The Space Information Sharing and Analysis Center (ISAC) was established several years ago, with Microsoft as a founding member, to facilitate timely collaboration across the global space industry to enhance the ability to prepare for and respond to vulnerabilities, incidents, and cybersecurity threats.

On March 30, 2023, the Space ISAC’s Operational Watch Center reached initial operational capability, hosted in Azure. The watch center’s collaborative environment provides visualization of environmental conditions and threat information to rapidly detect, assess, and respond to space weather events, vulnerabilities, incidents, and threats to space systems. The Watch Center is supported by a dedicated team of 10 in-person analysts, with additional virtual support enabled by Azure cloud architecture.

Operating one of the largest cloud infrastructures in the world, Microsoft has gained an exceptional vantage point and garnered unique experience in securing cloud workloads and containers. Microsoft has a unique view into emerging threats based on analysis of over 65 trillion threat signals daily across more than 200 global consumer and commercial services, and shares this insight with the Space ISAC community.

Working with the Space ISAC Watch Center, we can rapidly share threat intelligence with the space community. In addition, the new Microsoft Security Copilot capability will be available to our Space ISAC partners, to enable cyber defense at machine speed and scale.

What's next

As the world grows more complex regarding global security, climate change, sustainability, and more, the imperative to partner across the public and private sectors has become even more clear. Government agencies have the most demanding missions and need to effectively manage massive and growing datasets, resiliently connect across the globe, respond quickly to changing events, and provide a secure and trusted platform for varied users. With the rapid advancements in space technologies and cloud computing, Azure Space is proud to work with an industry ecosystem team and is committed to supporting these government agencies as they innovate and address their hardest missions.
Source: Azure

Boost your data and AI skills with Microsoft Azure CLX

We’re excited to announce that the Microsoft Azure Connected Learning Experience (CLX) program now has three new Data and AI tracks designed for data professionals.

Personalized, self-paced, and culminating in a certificate of completion, these courses help you boost your data and AI skills your way—allowing you to maximize your learning in minimal time.

New courses

AI-102: Designing and Implementing a Microsoft Azure AI Solution
Who should attend? Azure AI Engineer, AI Developer, AI Specialist
Course content: This course boosts your understanding of building, managing, and deploying AI solutions that leverage Azure Cognitive Services and Azure Applied AI services. It’s designed for learners who are experienced in all phases of AI solutions development.

DP-300: Administering Microsoft Azure SQL Solutions
Who should attend? Database Management Specialists, Database Administrators
Course content: In this course, you’ll learn to build and manage cloud-native and hybrid data platform solutions based on SQL Server and SQL database services. The track is designed for Database Administrators who are familiar with database design and management for on-premises and cloud databases developed using SQL Server and SQL database services.

DP-420: Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB
Who should attend? Azure Cosmos DB Developer, Database Developer, Data Developer Specialist
Course content: This track covers the design and implementation of data models, data distribution, data loading, and integration within Azure Cosmos DB and various Azure services. It’s designed for learners with expertise in designing, implementing, and monitoring cloud-native applications that store and manage data.

What is the CLX program?

CLX is a four-step learning program that helps aspiring learners and IT professionals build skills on the latest topics in cloud services by providing learners with a mix of self-paced, interactive labs and virtual sessions led by Microsoft tech experts. CLX enables learners to minimize their time invested while maximizing their learning through its unique design, which includes four steps:

A knowledge assessment test
A Microsoft Learn study materials review
A virtual cram session
A practice test

At the start of the program, you’ll take a 20-question Knowledge Assessment to test your skills. Based on your results, you’ll receive customized course content that fits your experience—so you can focus only on the information that’s useful for you. If you’re a cloud computing pro, for example, you won’t need to study topics on the fundamentals of cloud computing, even if they’re a part of the course.

You’ll then dive into learning modules and hands-on, interactive online labs that mirror what you’ll experience in the professional world—helping you learn efficiently and effectively. The interactive labs are available on-demand and can be used as many times as needed.

What happens next?

After finishing the interactive labs, you can choose to attend a virtual cram session led by Microsoft Certified Trainers (MCTs) that dives deeply into the course syllabus. You can use these sessions to ask follow-up questions about the course content and get real-time help. To make sure you get the skills you need wherever you’re located, the two-to-four-hour sessions are held regularly in three time zones: Australian Eastern Standard Time (AEST), Greenwich Mean Time (GMT), and Pacific Daylight Time (PDT).

As the final step, you can choose to take an 80-question practice test that takes approximately two hours and mimics the final Microsoft Azure Certification Exam, ensuring you’re well-prepared to pass with ease.

Once you’ve finished your practice test, you’ll receive your certificate of completion and a 50 percent discount off the cost of the Microsoft Azure Certification Exam—and you’ll walk away with the skills to excel in the world of data and AI.

How do I get started with CLX?

To learn more and to register for CLX, visit the Microsoft Cloud Events Portal or check out our Microsoft Azure CLX introductory video.

You can also explore our suite of six additional CLX program courses that are designed to quickly strengthen the Azure skills of IT professionals and beyond.

These courses—which you can read more about in our previous blog here—will help you exceed your Azure learning goals and equip you with the skills to advance your career, now and beyond.
Source: Azure

Important observations from Microsoft at Mobile World Congress 2023

Mobile World Congress (MWC) was back in full swing this year, and so was Microsoft. To start, a quick recap: Microsoft made 12 major product announcements at MWC 2023, along with more than 20 demonstrations highlighting our latest developments. These demonstrations covered an array of topics such as live private and public multi-access edge compute (MEC) use cases, network API efforts with our operator partners, and the use of AI and analytics to improve operator efficiency and resiliency, among others.

Microsoft at Mobile World Congress 2023

As we often do, we have collected observations from hundreds of customer meetings (up nearly 35 percent from last year) and dozens of partner engagements at our demo booth, and I would like to share some of the important feedback that we heard while at MWC 2023:

Operators are deeply interested in the monetization opportunities created by programmable networks and modern connected applications.
Operators need to see a strong total cost of ownership (TCO) case when adopting cloud-native technology.
There is ongoing, keen interest in lessons learned from real deployments at scale versus proof-of-concept deployments.
Operators want to understand the impact that developments in AI can have on their ability to run networks that are more resilient, sustainable, and efficient.
Operators are looking for an ecosystem that includes meaningful partnerships—both technical and commercial—to ensure holistic success.
Operators are accelerating their plans for virtual RAN deployments.

In this blog post, we take a deeper look at each of these core pieces of feedback, providing additional insights, news, and links to further reading.

Creating value with network APIs and mobile edge computing

On top of the valuable opportunity created by the convergence of cloud computing and networking in a distributed fabric that spans 5G to space, Microsoft further believes that operators are uniquely positioned to create value as they modernize their networks to cloud-native technology.

The network itself is a rich source of intelligence and functionality that can be exposed—with the appropriate privacy controls and security—not just to end users but also more broadly to the developer community. This exposure will allow operators to find new sources of innovation in the developer community, and new approaches to support the goal of delivering value above and beyond pure connectivity. Network APIs will power a new generation of network-aware applications that put the flexibility and power of 5G to work, solving mission-critical business needs. Through our discussions of this exciting new development with customers, a few key items emerged:

Consistent implementation across operators is needed, as developers are unlikely to build apps that run on a single network.
However, some localization will be required to ensure specific regulations on data residency, privacy, and controls are supported.
Tying into existing developer communities is essential to accelerate the adoption of these capabilities.
Developers require agility; therefore, we must trial new services in advance to create meaningful input for the standardization process.
There is an outsized need to establish an appropriate business model that supports the investment that operators are making in these capabilities.
Business models are likely to evolve rapidly as technology trials with developers become more concrete.
Not all network APIs are created equal, and the business model should be flexible enough to permit different monetization schemes.

To find out more about Microsoft’s leadership in the development of network APIs, read about Azure Programmable Connectivity.
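To make the idea of a network-aware application more concrete, here is a minimal sketch of how such an application might assemble a quality-on-demand request before sending it to an operator-exposed network API. The endpoint shape and field names below are purely illustrative—loosely modeled on community efforts such as the CAMARA Quality on Demand API—and are not the actual Azure Programmable Connectivity interface.

```python
import json

def build_qos_session_request(device_ip: str, app_server_ip: str,
                              qos_profile: str, duration_s: int) -> str:
    """Assemble a hypothetical quality-on-demand request body.

    All field names here are illustrative only; a real network API
    exposed by an operator defines its own schema and auth flow.
    """
    payload = {
        "device": {"ipv4Address": device_ip},
        "applicationServer": {"ipv4Address": app_server_ip},
        "qosProfile": qos_profile,   # e.g. a low-latency tier
        "duration": duration_s,      # seconds the session should last
    }
    return json.dumps(payload)

# A video-call app asking for a 10-minute low-latency session:
body = build_qos_session_request("192.0.2.10", "198.51.100.7",
                                 "LOW_LATENCY", 600)
print(body)
```

The point of the consistency discussion above is that a developer should be able to send this same request shape to any participating operator, with only localized policy (data residency, privacy controls) varying underneath.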

Microsoft continues to work closely with our operator partners to deploy both private and public MEC solutions. A consistent theme that we hear from our operator partners is the need for a robust ecosystem of application solutions that highlight the value of 5G and edge computing. Several conversations with operators and ISV partners highlighted the progress that is being made in the deployment of solutions across the manufacturing, energy, and transport industries. Further, we understand the value of, and need to move beyond, individual proof-of-concept trials to deeper solution catalogs that cover multiple business processes enabled by a private network deployment.

We invite application ISVs to collaborate with us by joining the private MEC and public MEC programs.

Updating network architecture while reducing the total cost of ownership

As operators have built more sophisticated service implementations on top of traditional networking solutions, they have moved away from physical appliances to adopt disaggregated network functions based on standardized hardware deployments. While this has yielded some CAPEX savings, the operational complexity of these do-it-yourself (DIY) virtualization efforts remains high. This complexity directly drives up cost and risk. As a result, our customers have challenged us to show definitive benefits of adopting infrastructure based on cloud provider technology as they upgrade their network architecture to support increased resiliency needs and new services, and move to a 5G standalone (SA) core. As part of these discussions, operators have asked us to address:

Demonstrating specific savings from the use of automation to accelerate deployment of capacity and to improve release quality.
Improving standardization and consistency of network function deployments on top of a cloud platform.
Eliminating the need to deal with managing hardware procurement and deployment.
Moving from a traditional CAPEX model to OPEX-based consumption.
Enabling the acceleration of and improvements to the quality of new service deployments within the network.

To learn more about the cost-effective deployment of 5G networks, read the Analysys Mason TCO report for Microsoft Azure Operator Nexus, February 2023.

Lessons learned from supporting a production deployment

While many operators recognize the potential of cloud technology to accelerate innovation and reduce costs, they are keen to understand the experiences of those early leaders who have already deployed similar cloud-based solutions at scale. For this reason, Microsoft invited our flagship customer, AT&T, to share their views on stage at MWC 2023. Igal Elbaz, Senior Vice President, Network Chief Technology Officer, and Rob Soni, Vice President of RAN Technology, AT&T Services, joined Microsoft to provide direct insight into their experience as AT&T continues to deploy Azure Operator Nexus to support AT&T’s mobility network.

Watch this video to hear about AT&T’s journey in their own words.

While many of the lessons learned from our deployment at AT&T have already been built into the Azure Operator Nexus platform, these lessons are also reflected in our blueprints for onboarding new network functions to the platform—our best practices for deployment and operations, our operating model, as well as our platform API design. To learn more about the Microsoft carrier-grade, hybrid platform, read the blog “Introducing Azure Operator Nexus.”

To learn more about how other customers are using Azure technology to test 5G technology, please see "MTN deploys one of the first 5G Standalone Core in Microsoft Azure."

Harnessing AI-powered operations with new services

In light of the increasing complexity of disaggregated network architectures, larger numbers of devices connecting to networks, and a desire for rapid innovation, our customers continue to express interest in harnessing the power of cloud-based analytics and AI. To address this interest, Microsoft announced the availability of two new services—Azure Operator Insights and Azure Operator Service Manager—which were developed to simplify network management. A key part of our platform value proposition is that many of the capabilities we are building on with these new offerings are the same capabilities that power the management of Azure itself—so they have truly been tried and tested.

When we spoke to customers about Azure Operator Insights at MWC 2023, the following themes surfaced:

On-premises or cloud-based data lakes have often been tried in one form or another, with mixed results. Data silos still exist, as do the challenges of getting a systematic understanding of network health and customers’ quality of experience.
Customers were excited to see Azure Operator Insights paving the way to remove the silos and enable data democratization for all users.
GPT was of course front of mind for everyone, and customers were also very excited to see the different ways in which data managed by Azure Operator Insights would benefit from GPT—in both the short and longer term.

Many customers also asked us how to automate network actions based on the insights AI can deliver. This is where Azure Operator Service Manager plays a role. Several key themes were raised consistently across many discussions on automation:

Manual operating procedures are still prevalent within operators’ environments. These tend to be error-prone, costly, and typically delay the deployment of software or configuration changes.
Where customers have tried to automate these procedures, the plethora of automation solutions for each underlying platform or network function vendor has resulted in fragmented tooling that fails to address the overarching service as a whole.
Customers were interested to see how Azure Operator Service Manager enables a service-centric automation toolchain, addressing network services composed of multiple network functions and deployed across many sites and heterogeneous infrastructures. Customers were also particularly keen to learn how we have been able to reduce real-life deployments from days to minutes.

We also heard operators talk about their leading-edge experiences when adopting AI for fault management, customer service, and automation. What really excited operators, however, was that Microsoft is making AIOps an integral part of all Microsoft Azure for Operators offerings, along with the consistency in management this guarantees and the GPT integration.

For more information, visit Azure Operator Insights and Azure Operator Service Manager.

Establishing new partnerships and offering meaningful ecosystem support

Microsoft is proud of our unique acquisition of industry-leading, cloud-native network functions that provide us with the telco DNA needed to truly understand the unique requirements of carrier-grade solutions. We are equally committed to supporting the partners that operators know and trust today, with full and equal access to the Azure Operator Nexus platform capabilities. Our customers reinforced the expectation that we continue to work closely with the industry ecosystem in areas such as network function pre-certification, software DevOps, and security best practices to enable the successful delivery of the end-to-end service experience.

Watch our partners describe Microsoft's efforts to jointly service the industry—including Monica Zethzon, Vice President and Head of Solution Area Core Networks, Ericsson, and Fran Heeran, Senior Vice President and General Manager of Core Networks, Nokia.

Register here to join the Azure Operator Nexus Ready program.

Adopting cloud technology to support virtual RAN workloads

As mobile operators begin planning for the next RAN upgrade after their current 5G new radio (NR) deployment, they are looking to better understand key questions such as:

Will hybrid cloud platforms support various proposed acceleration technologies for both private and macro networks if the operators choose to adopt them? This includes the TCO and performance expectations associated with the use of disaggregated platforms to support RAN workloads, with a particular emphasis on energy efficiency and spectral efficiency in dense areas.
Will cloud platforms provide the management of far-edge at scale, and enable fully cloud-native RAN workloads that are implemented on the proposed Open RAN interface specifications, such as the O2 interface?
Can platform services be used to gain end-to-end visibility from applications, packet cores, RAN, and infrastructure? And further, can this be leveraged to gain true visibility into the network and to provide fully optimized and automated CI/CD and AIOps experiences?

To better understand Microsoft’s vision for the adoption of cloud technology in support of RAN workloads, check out the Microsoft programmable RAN platform whitepaper.

All in all, MWC 2023 provided a fantastic and rich opportunity for Microsoft to connect with our customers and partners as we expand the use of our Azure for Operators portfolio to modernize and monetize the network.

Discover more about Azure for Operators

Learn more about the Azure for Operators portfolio.
Source: Azure