Microsoft and Adobe partner to deliver cost savings and business benefits

Delivering quality end-to-end digital experiences can be challenging for multiple reasons, including a lack of resources, legacy technologies, and disorganized customer journeys. Microsoft and Adobe have purpose-built integrations to overcome these challenges, simplifying deployment and reducing overall cost.

Grounded in open software standards and a scalable, secure cloud, Microsoft and Adobe deliver end-to-end technology ecosystems for a modern, secure, and connected enterprise. The integration between our applications transforms data into insights that enable intelligent, targeted, and customized marketing campaigns. With Microsoft setting the data foundation and Adobe providing a comprehensive marketing activation layer, we can take organizations from traditional batch marketing to real-time, precise, and timely event-based marketing.

Marketing needs to be tailored specifically to how, when, and where customers want to shop. To achieve this, Microsoft and Adobe are providing tools to make collaboration between employees easier and smarter, such as digital signatures and document sharing. Meanwhile, automation tools enable organizations to source, adapt, and deliver assets for more personalized customer experiences. These sophisticated integrations between Microsoft and Adobe are enabling businesses to do more with less.

Microsoft and Adobe partner to create a connected enterprise

A 2023 commissioned study conducted by Forrester Consulting—The Total Economic Impact™ Of Adobe SaaS Solutions with Microsoft Cloud—uncovered how our partnership and technology collaboration with Adobe lowers time and resource costs, improves employee productivity and customer engagement, and increases the return on investment of deploying Adobe applications for the enterprise on the Microsoft Cloud. The study specifically focuses on Adobe Experience Cloud and Adobe Document Cloud running on Microsoft Azure, Microsoft 365, and Microsoft Dynamics 365.

The study explores the challenges organizations hope to address by implementing the integrated Microsoft and Adobe solutions (see image below).

The solutions to these challenges demonstrate the strength of the native integration between Microsoft and Adobe SaaS solutions, which enabled organizations to:

Enhance customer experiences by leveraging consolidated customer data, and with cloud support across solutions, gain real-time insights and analysis about these customers and marketing efforts.

Strengthen security and protection of data files across the enterprise with tightly integrated tools. With consolidated tools under an all-in-one vendor, tool deployment and IT team management are simplified.

Streamline data access and management across the organization. With tight integrations, organizations can improve collaboration, decision-making, and performance among their teams.

Integrated data means better customer journeys

As customers move between digital channels—mobile apps, social media, online chat, and so on—they generate digital records. Capturing insights from siloed data streams can be a challenge, impacting the ability to create personalized experiences for customers in a timely manner. By transitioning companies from legacy and siloed technologies to a connected, cloud-enabled tech stack, Microsoft and Adobe address the data issues, improving the quality of the connections companies have with their customers.

According to the Forrester study, those implementing Adobe SaaS solutions on Microsoft Cloud say that managing customer experiences is the top goal of their organization (71 percent). This is also where organizations found key benefits of implementing integrated solutions.

Some survey participants saw a 45 percent gain in customer loyalty that they attributed to integrating Microsoft Cloud with Adobe Experience Cloud.

Data analysis was 15 percent faster, which improved the overall speed of work for marketers by 10 percent.

Customer satisfaction, the number of transactions, and customer retention rates all increased due to the data-driven capabilities the integration presents.

AI and machine learning are pivotal in sifting through large volumes of data, pinpointing areas of importance or anomalies for marketers. The wide range of triggering events—including email, calendar invites, webinars, advertising, data changes, and ERP events—offers a holistic view of the customer. A focus on a self-service model delivers speed and ease of use. In combination, organizations gain operational efficiencies, as well as fast, effective marketing, from Microsoft and Adobe integrations.

Increase data security across the enterprise

Keeping the organization’s data and files secure is a leading concern of enterprise leaders—especially with data spread between on-premises servers and the cloud, and with hybrid work arrangements. The fear is that, without tighter integration with cloud security controls, their data centers are vulnerable to attacks.

Security is inherent to the cloud. Additionally, importing data stored in the Microsoft Cloud directly into Adobe solutions through native connectors reduces data vulnerabilities. Interviewees of the Forrester study reported that data and files specific to Adobe had not experienced any compromises.

Reduce time and resource demands to streamline deployment

Without the tight integration between Microsoft and Adobe, IT teams—or consultants and contractors—need to code or build custom API connectors to implement solutions. Microsoft and Adobe have developed over 60 specific integrations that are purpose-built for common use cases, reducing the workload for IT teams in customer organizations. The Forrester study found that, for the composite organization, the integration of Adobe tools went from a multi-month endeavor down to one month and required far fewer resources. This led to overall IT and security team productivity increasing by 20 percent. One interviewed company reallocated 15 members of its 40-person IT team to other organizational projects. The Forrester study found that a composite organization based on interviewed customers saw the following financial benefits over three years:

251 percent return on investment (ROI)

1.3 million USD in benefits (present value)

925 thousand USD net present value (NPV)
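
As a rough consistency check on these figures (a sketch that assumes the standard Forrester TEI definitions, where NPV is benefits PV minus costs PV and ROI is NPV divided by costs PV; the report rounds its numbers, so the result only approximately matches the published 251 percent):

```python
# Rough consistency check of the reported Forrester TEI figures.
# Assumes the standard TEI definitions: NPV = benefits PV - costs PV,
# and ROI = NPV / costs PV. Figures are rounded in the report, so the
# result only approximately matches the published 251 percent.
benefits_pv = 1_300_000   # USD, present value of benefits over three years
npv = 925_000             # USD, net present value

costs_pv = benefits_pv - npv          # implied present value of costs
roi = npv / costs_pv                  # return on investment

print(f"Implied costs PV: {costs_pv:,.0f} USD")   # ~375,000 USD
print(f"Implied ROI: {roi:.0%}")                  # ~247%, close to the reported 251%
```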

Because Microsoft and Adobe support global content distribution, with Microsoft having cloud storage services worldwide, organizations can increase efficiencies in creating and distributing the millions of content assets needed to create personalized customer journeys.

Read the study to learn more

The Total Economic Impact™ Of Adobe SaaS Solutions with Microsoft Cloud report provides a deep analysis of the key challenges organizations look to address when deploying Adobe SaaS solutions on Microsoft Cloud. Read the full study to understand why.

As global leaders in business solutions, Microsoft and Adobe combine the power of data and expertise in marketing to deliver innovations that are reliable, secure, and optimized to meet consumers wherever they are in the customer journey.

Manage your big data needs with HDInsight on AKS

As companies today look to do more with data, take full advantage of the cloud, and vault into the age of AI, they’re looking for services that process data at scale, reliably, and efficiently. Today, we’re excited to announce the upcoming public preview of HDInsight on Azure Kubernetes Service (AKS), our cloud-native, open-source big data service, completely rearchitected on Azure Kubernetes Service infrastructure with two new workloads and numerous improvements across the stack. The public preview will be available for use on October 10.

HDInsight on AKS amplifying performance

HDInsight on AKS includes Apache Spark, Apache Flink, and Trino workloads on an Azure Kubernetes Service infrastructure, and features deep integration with popular Azure analytics services like Power BI, Azure Data Factory, and Azure Monitor, while leveraging Azure managed services for Prometheus and Grafana for monitoring. HDInsight on AKS is an end-to-end, open-source analytics solution that is easy to deploy and cost-effective to operate. 

HDInsight on AKS helps customers leverage open-source software for their analytics needs by: 

Providing a curated set of open-source analytics workloads like Apache Spark, Apache Flink, and Trino. These workloads are the best-in-class open-source software for data engineering, machine learning, streaming, and querying.

Delivering managed infrastructure, security, and monitoring so that teams can spend their time building innovative applications without needing to worry about the other components of their stack. Teams can be confident that HDInsight helps keep their data safe. 

Offering flexibility that teams need to extend capabilities by tapping into today’s rich, open-source ecosystem for reusable libraries, and customizing applications through script actions.

Customers who are deeply invested in open-source analytics can use HDInsight on AKS to reduce costs by setting up fully functional, end-to-end analytics systems in minutes, leveraging ready-made integrations, built-in security, and reliable infrastructure. Our investments in performance improvements and features like autoscale enable customers to run their analytics workloads at optimal cost. HDInsight on AKS comes with a simple, consistent pricing structure: a per-vCore, per-hour fee that is the same regardless of the size of the resource or the region, plus the cost of the underlying resources provisioned.

Developers love HDInsight for the flexibility it offers to extend the base capabilities of open-source workloads through script actions and library management. HDInsight on AKS has an intuitive portal experience for managing libraries and monitoring resources. Developers have the flexibility to use a Software Development Kit (SDK), Azure Resource Manager (ARM) templates, or the portal experience based on their preference.
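
As a rough illustration of the ARM-template path (a sketch only: the template file, resource group, and parameter names below are placeholders, and the actual HDInsight on AKS resource schema should come from the service documentation), a deployment with the Azure SDK for Python could look like this:

```python
# Minimal sketch: deploying an ARM template with the Azure SDK for Python.
# Assumes the azure-identity and azure-mgmt-resource packages are installed,
# and that "hdinsight-on-aks-cluster.json" is a template you have authored
# (the file name, resource group, and parameters here are placeholders).
import json
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<subscription-id>"
resource_group = "rg-analytics"          # placeholder resource group
deployment_name = "hdinsight-aks-demo"   # placeholder deployment name

with open("hdinsight-on-aks-cluster.json") as f:
    template = json.load(f)

client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Kick off an incremental deployment and wait for it to complete.
poller = client.deployments.begin_create_or_update(
    resource_group,
    deployment_name,
    {
        "properties": {
            "mode": "Incremental",
            "template": template,
            "parameters": {},   # fill in template parameters as needed
        }
    },
)
result = poller.result()
print(result.properties.provisioning_state)
```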

Join us for a deep dive into this launch in our upcoming free webinar. 

Open, managed, and flexible

HDInsight on AKS covers the full gamut of enterprise analytics needs spanning streaming, query processing, batch, and machine learning jobs with unified visualization. 

Curated open-source workloads

HDInsight on AKS includes workloads chosen based on their usage in typical analytics scenarios, community adoption, stability, security, and ecosystem support. This ensures that customers don’t need to grapple with the complexity of choice on account of myriad offerings with overlapping capabilities and inconsistent interoperability.  

Each of the workloads on HDInsight on AKS is the best-in-class for the analytics scenarios it supports: 

Apache Flink is an open-source distributed stream-processing framework that powers stateful stream processing and enables real-time analytics scenarios. 

Trino is a highly performant and scalable federated query engine that addresses ad-hoc querying across a variety of data sources, both structured and unstructured.  

Apache Spark is the trusted choice of millions of developers for their data engineering and machine learning needs. 

HDInsight on AKS offers these popular workloads with a common authentication model, shared meta store support, and prebuilt integrations which make it easy to deploy analytics applications.
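
As a small illustration of what the shared metastore enables (the database and table names here are hypothetical, and the cluster is assumed to already be configured against the shared metastore), a Spark job can read and publish tables that other engines such as Trino can then query:

```python
# Minimal PySpark sketch: querying a table registered in a shared
# Hive-compatible metastore. The database/table names are hypothetical;
# the cluster is assumed to be configured with the shared metastore,
# so the same tables are visible to Spark, Trino, and Flink jobs.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("daily-aggregation")
    .enableHiveSupport()          # use the cluster's configured metastore
    .getOrCreate()
)

# Read a shared table, aggregate, and write the result back to the lake.
orders = spark.table("sales.orders")
daily = orders.groupBy("order_date").sum("amount")
daily.write.mode("overwrite").saveAsTable("sales.daily_totals")
```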

Managed service reduces complexity

HDInsight on AKS is a managed service on Azure Kubernetes Service infrastructure. With a managed service, customers aren’t burdened with the management of infrastructure and other software components, including operating systems, AKS infrastructure, and open-source software. This ensures that enterprises can benefit from ongoing security, functional, and performance enhancements without investing precious development hours.  

Containerization enables seamless deployment, scaling, and management of key architectural components. The inherent resiliency of AKS allows pods to be automatically rescheduled on newly commissioned nodes in case of failures. This means jobs can run with minimal disruptions to Service Level Agreements (SLAs). 

Customers combining multiple workloads in their data lakehouse need to deal with a variety of user experiences, resulting in a steep learning curve. HDInsight on AKS provides a unified experience for managing their lakehouse. Provisioning, managing, and monitoring all workloads can be done in a single pane of glass. Additionally, with managed services for Prometheus and Grafana, administrators can monitor cluster health, resource utilization, and performance metrics.  

Through the autoscale capabilities included in HDInsight on AKS, resources—and thereby cost—can be optimized based on usage needs. For jobs with predictable load patterns, teams can schedule the autoscaling of resources based on a predefined timetable. Graceful decommission enables the definition of wait periods for jobs to be completed before ramping down resources, elegantly balancing costs with experience. Load-based autoscaling can ramp resources up and down based on usage patterns measured by compute and memory usage. 
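
To illustrate the intuition behind load-based autoscaling (a conceptual sketch of our own, not the service’s actual algorithm or API), a simplified decision rule driven by compute and memory utilization might look like this:

```python
# Illustrative sketch of a load-based autoscale decision rule; this is a
# conceptual example, not HDInsight on AKS's actual algorithm or API.
def desired_node_count(current_nodes, cpu_util, mem_util,
                       scale_up_at=0.80, scale_down_at=0.30,
                       min_nodes=3, max_nodes=100):
    """Return a new node count based on observed CPU and memory utilization."""
    load = max(cpu_util, mem_util)          # scale on the more constrained resource
    if load > scale_up_at:
        target = current_nodes + 1          # ramp up one node at a time
    elif load < scale_down_at:
        target = current_nodes - 1          # ramp down gradually (graceful decommission
                                            # would wait for running jobs to finish)
    else:
        target = current_nodes
    return max(min_nodes, min(max_nodes, target))

print(desired_node_count(current_nodes=5, cpu_util=0.92, mem_util=0.60))  # -> 6
```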

HDInsight on AKS marks a shift away from traditional security mechanisms like Kerberos. It embraces OAuth 2.0 as the security framework, providing a modern and robust approach to safeguarding data and resources. In HDInsight on AKS, authorization and access controls are based on managed identities. Customers can also bring their own virtual networks and associate them during cluster setup, increasing security and enabling compliance with their enterprise policies. The clusters are isolated with namespaces to protect data and resources within the tenant. HDInsight on AKS also allows management of cluster access using Azure Resource Manager (ARM) roles. 
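
In practice, a client of this OAuth 2.0, managed-identity-based model might obtain tokens through the Azure Identity library, as in the minimal sketch below; the Azure Resource Manager scope shown is purely illustrative, not HDInsight on AKS’s specific token audience.

```python
# Minimal sketch: acquiring an OAuth 2.0 token with the Azure Identity library.
# DefaultAzureCredential falls back through environment credentials, managed
# identity, Azure CLI login, and so on. The ARM scope below is illustrative;
# a real client would request the scope of the service it calls.
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default")
print("token expires on:", token.expires_on)
```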

Customers who’ve participated in the private preview love HDInsight on AKS. 

Here’s what one user had to say about his experience. 

“With HDInsight on AKS, we’ve seamlessly transitioned from the constraints of our in-house solution to a robust managed platform. This pivotal shift means our engineers are now free to channel their expertise towards core business innovation, rather than being entangled in platform management. The harmonious integration of HDInsight with other Azure products has elevated our efficiency. Enhanced security bolsters our data’s integrity and trustworthiness, while scalability ensures we can grow without hitches. In essence, HDInsight on AKS fortifies our data strategy, enabling more streamlined and effective business operations.” 
Matheus Antunes, Data Architect, XP Inc

Azure HDInsight on AKS resources

Learn more about Azure Kubernetes Service (AKS) 


Driving performance and enhancing services across Three UK’s 5G network

In the ever-evolving landscape of mobile telecommunications, Three UK deploys cutting-edge technologies to drive performance and improve overall service quality. Leveraging their 5G network and the power of AIOps, Three UK is focusing on enhancing the customer experience for data services such as streaming, gaming, and social media. This blog delves deeper into how Three UK uses Azure Operator Insights to expand their network capabilities, providing gamers with the most dynamic and innovative user experience.

Check out this video to learn more about how Three UK is unlocking actionable intelligence with Azure Operator Insights.

Recognizing the transformative potential of 5G, Three UK is committed to harnessing its capabilities to meet the escalating demands of the growing gaming industry. As the fourth largest mobile network operator in the United Kingdom, Three UK handles 29 percent of the country’s mobile data traffic, serving approximately 10 million subscribers. Three UK is the United Kingdom’s Ookla Speedtest Awards Winner for 5G mobile network speed during Q1-Q2 2023 with median download speeds of 265.75 Mbps.

Optimizing the gaming experience 

Three UK’s 5G network delivers impressive peak speeds and low latency to ensure optimal responsiveness for gamers, allowing them to fully immerse themselves in virtual realms. To support this level of performance, Three UK has deployed a total of 18,400 sites, of which 4,600 provide high-speed 5G access to 60 percent1 of the population. 

Three UK recognizes the importance of leveraging data insights to maintain its position in the telecommunications industry and to further enhance the user experience. With many petabytes of data flowing through their network every day, Three UK possesses a wealth of telemetry information and metadata that can drive network management and customer satisfaction to new heights. By analyzing user behaviors, Three UK can identify factors contributing to positive or negative customer experiences, and then make highly targeted improvements. 

Employing Azure Operator Insights 

Managing a network as extensive and complex as Three UK’s certainly comes with challenges. Three UK adopts a best-of-breed approach by incorporating network functions from various suppliers. However, accessing data from these disparate sources poses a formidable challenge. By leveraging the expertise of Microsoft in data ingestion, transformation, and analysis, Three UK aims to unlock the full potential of this data. A key component of this collaboration is the application of Azure Operator Insights, a new service on the Azure cloud platform specifically designed to help telecommunications carriers manage and extract actionable information from their network data. 

Azure Operator Insights enables carriers like Three UK to collect, organize, and process large datasets, providing valuable business insights and improving customer experiences, in part by dramatically shortening time-to-insight. What would previously have taken weeks or months to assess can now be performed in minutes with AI. With this solution, Three UK can easily ingest terabytes of telemetry, event, and log data from various sources and vendors. The service also offers powerful data analysis tools, AI and machine learning processing, and secure data governance and sharing capabilities. 

By leveraging Azure Operator Insights, Three UK gains the ability to efficiently analyze data, using applications like Azure Data Explorer, Azure Synapse Analytics, and Azure Databricks. They can optimize network performance using AI and machine learning models, identify areas for network improvements, and make data-driven decisions to enhance overall service quality. 
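
As an illustration of the kind of analysis this enables, a query over ingested network telemetry in Azure Data Explorer might look like the sketch below; the cluster URL, database, table, and column names are hypothetical placeholders rather than Three UK’s actual schema.

```python
# Illustrative sketch: querying network telemetry in Azure Data Explorer.
# Requires the azure-kusto-data package. The cluster URL, database, table,
# and column names are hypothetical placeholders, not Three UK's schema.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster = "https://<your-cluster>.kusto.windows.net"
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster)
client = KustoClient(kcsb)

query = """
NetworkTelemetry
| where TimeGenerated > ago(1h)
| summarize p95Latency = percentile(LatencyMs, 95) by SiteId
| top 10 by p95Latency desc
"""

response = client.execute("operator_insights_db", query)
for row in response.primary_results[0]:
    print(row["SiteId"], row["p95Latency"])
```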

Enhancing customer experiences

Understanding the needs and preferences of customers is crucial in the competitive telecommunications landscape, where loyalty supports successful long-term performance. Because of this, Three UK leverages data insights to gain a comprehensive view of their customers’ experiences. By capturing and analyzing user gaming data and other important usage information, Three UK can evaluate the quality of the user’s experience for specific activities. This invaluable information allows them to identify factors that contribute to positive experiences, and to address any pain points that may arise. 

With a vast amount of data at their disposal, Three UK can determine the elements that make a gaming session either enjoyable or frustrating for its customers. Armed with this knowledge, they can proactively optimize their network infrastructure and services to ensure seamless gaming experiences. Whether it involves fine-tuning latency levels, increasing network capacity, or strategically deploying additional infrastructure, Three UK can make data-driven decisions that directly and positively impact the gaming experiences of its customers. 

Additionally, Three UK incorporates this single customer view into their network services’ key performance indicators (KPIs), taking a holistic approach to network management. By considering the entire network’s performance, they ensure that improvements reflect not only individual user experiences but also the needs of all users engaged with the network. This broader perspective enables them to allocate resources efficiently, make targeted improvements, and drive proactive maintenance and deployment strategies when it matters most. 

Azure Operator Insights driving future innovation with Three UK

Three UK uses cutting-edge technologies to drive performance and enhance both services and efficiency in their 5G network. By recognizing the importance of data insights and collaborating with Microsoft through Azure Operator Insights, Three UK gains the ability to harness the power of their data effectively. This enables them to make data-driven adjustments, optimize network performance, and provide enhanced experiences for their customers.

Learn More

Lower TCO and increase operational efficiency with Azure Operator Nexus.

Explore more AIOps use case scenarios in this paper. 

Learn more about Azure Operator Insights.

References

Three UK press release.


Cloud Cultures, Part 3: The pursuit of excellence in the United Kingdom

The swift progression of technological innovation is truly captivating. However, for me, what holds an even greater fascination is the intricate interplay of people, narratives, and life encounters that shape how technology is used every day. The outcomes of cloud adoption are shaped dramatically by the people and their culture. These stories show firsthand how technology and tradition combine to form cloud cultures.

In our first two episodes, Poland and Sweden, we explored how the people in these countries have taken important leaps in delivering innovative new technologies, and the underlying culture that helped them get there in uniquely different ways. In Poland, there is a sense of fearlessness when it comes time to act. It is a dynamic country embracing change, reinventing itself, and creating innovative new opportunities. For Sweden, success knows no borders. Despite being one of the largest countries in Europe by landmass, it’s one of the smallest in population, which forces Sweden’s ambitious entrepreneurs to adopt a global-first mindset from day one. Now it’s our turn to explore cloud culture in the United Kingdom. 

United Kingdom: Pursuit of excellence 

Our two data center regions in the United Kingdom have been live for nearly a decade and were built to help support the exploding growth of the cloud in the United Kingdom and Europe. With the evolution of new cloud services, such as generative AI, we are on the precipice of another leap in cloud-powered innovation. My time in the United Kingdom helped me see how the innovations coming from this country and their pursuit of excellence are impacting this growth. 

In my visit to the United Kingdom, I saw how towering aspirations fuel an unwavering commitment to producing premium results and have set a global standard for greatness. Excellence is an ideal we all strive for. It drives our decision-making, fuels innovation, and propels us to exceed not only our own expectations but also the demands of an ever-evolving industry. 

The cloud culture here has shown me that excellence is more than just an outcome—it’s a mindset that permeates every aspect of our work. And while achieving it is always desirable, it’s the pursuit that matters most. 

Our conversations with customers and partners helped me see how the powerful winds of innovation have converged with local customs, values, and ways of living to create something unique. 

How are United Kingdom customers using the cloud? 

These conversations helped uncover how high standards are more than just a commitment—they’re a way of life. Below are just a few of the United Kingdom customers who are transforming their businesses to adapt to the growing needs of their customers in the United Kingdom, and beyond: 

Rolls Royce Engines has delivered excellence through its engines for over 100 years. As a broad-based power and propulsion provider, they operate in some of the most complex, critical systems at the heart of global society. With each engine producing about half a gigabyte of data per flight, doing so for the next 100 years will require a transition to new ways of working. 

London Stock Exchange Group is one of the world’s leading providers of financial markets infrastructure delivering financial data, analytics, news, and index products to more than 40 thousand customers in 190 countries. Clients rely on their expertise and innovative technologies to navigate the unpredictable currents of the financial markets. And in an industry where even the slightest edge can lead to substantial margins, they’ve found theirs using the cloud to deliver insights and cutting-edge solutions at speed. 

VCreate is an innovative business operating in the sphere of healthcare. It develops secure video technology that connects patients, families, and clinical teams for improved diagnostic management and enhanced family-focused care. 

KX is a global provider of vector database technology for time-series, real-time, and embedded data that provides context and insights at the speed of thought. Its software powers generative AI applications in banking, life sciences, semiconductors, telecommunications, and manufacturing. Enabling the processing and analysis of time series and historical data at speed and scale, KX gives developers, data scientists, and engineers the tools to build high-performance data-driven applications that uncover deeper insights and drive transformative business innovation.

Data is the key 

Talking to these customers, I started to pick up on a common thread: data is the key to unlocking this excellence. These companies process vast amounts of data in order to provide quality products or services to their customers. The cloud opens opportunities to analyze this data faster and draw deeper insights from analytics for more effective decision-making.  

“There is a lot of focus on how to improve efficiency. You should focus more on doing the right things. It’s not about doing more for less; it’s doing the right things in the first place. It’s effectiveness, not efficiency.”—Ashok Reddy, Chief Executive Officer of KX. 

There is an important distinction between efficiency and effectiveness. Operating efficiently is undeniably important, but it doesn’t guarantee exceptional results. However, aligning our actions with meaningful outcomes can definitely be a differentiating factor. 

Learn more  

Technology is a powerful tool, not on its own but because of the people and cultures that shape it. As we move into this next era of digital transformation, with AI at the forefront, our mission has never been more important—to empower every person on the planet to achieve more. Everyone has a role to play in creating a better world and at Microsoft we simply want to provide the tools and resources to do so. 

Watch the Cloud Cultures: United Kingdom episode today.  

Unlocking the potential of in-network computing for telecommunication workloads

Azure Operator Nexus is the next-generation hybrid cloud platform created for communications service providers (CSP). Azure Operator Nexus deploys Network Functions (NFs) across various network settings, such as the cloud and the edge. These NFs can carry out a wide array of tasks, ranging from classic ones like layer-4 load balancers, firewalls, Network Address Translations (NATs), and 5G user-plane functions (UPF), to more advanced functions like deep packet inspection and radio access networking and analytics. Given the large volume of traffic and concurrent flows that NFs manage, their performance and scalability are vital to maintaining smooth network operations.

Until recently, network operators were presented with two distinct options when it comes to implementing these critical NFs: one, utilize standalone hardware middlebox appliances; or two, use network function virtualization (NFV) to implement them on a cluster of commodity CPU servers.

The decision between these options hinges on a myriad of factors—including each option’s performance, memory capacity, cost, and energy efficiency—which must all be weighed against the specific workloads and operating conditions, such as traffic rate and the number of concurrent flows that NF instances must be able to handle.

Our analysis shows that the CPU server-based approach typically outshines proprietary middleboxes in terms of cost efficiency, scalability, and flexibility. This is an effective strategy to use when traffic volume is relatively light, as it can comfortably handle loads that are less than hundreds of Gbps. However, as traffic volume swells, the strategy begins to falter, and more CPU cores are required to be dedicated solely to network functions.

In-network computing: A new paradigm

At Microsoft, we have been working on an innovative approach, which has piqued the interest of both industry personnel and the academic world—namely, deploying NFs on programmable switches and network interface cards (NIC). This shift has been made possible by significant advancements in high-performance programmable network devices, as well as the evolution of data plane programming languages such as Programming Protocol-independent Packet Processors (P4) and Network Programming Language (NPL). For example, programmable switching Application-Specific Integrated Circuits (ASIC) offer a degree of data plane programmability while still ensuring robust packet processing rates—up to tens of Tbps, or a few billion packets per second. Similarly, programmable NICs, or “smart NICs,” equipped with Network Processing Units (NPU) or Field Programmable Gate Arrays (FPGA), present a similar opportunity. Essentially, these advancements turn the data planes of these devices into programmable platforms.

This technological progress has ushered in a new computing paradigm called in-network computing. This allows us to run a range of functionalities that were previously the work of CPU servers or proprietary hardware devices, directly on network data plane devices. This includes not only NFs but also components from other distributed systems. With in-network computing, network engineers can implement various NFs on programmable switches or NICs, enabling the handling of large volumes of traffic (e.g., > 10 Tbps) in a cost-efficient manner (e.g., one programmable switch versus tens of servers), without needing to dedicate CPU cores specifically to network functions.
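
To make the paradigm concrete, the core abstraction these programmable data planes expose is the match-action table: packet headers are matched against keys and a small action is applied at line rate. The toy Python sketch below mimics that abstraction for a trivial layer-4 firewall purely as a conceptual illustration; real network functions are written in languages like P4 or NPL and compiled to the switch ASIC or smart NIC.

```python
# Conceptual illustration of the match-action abstraction used by programmable
# data planes. Real network functions are written in P4/NPL and run on the
# switch ASIC or smart NIC; this toy firewall only mimics the programming model.
DROP, FORWARD = "drop", "forward"

# Match-action table keyed on (dst_ip, dst_port); default action is FORWARD.
firewall_table = {
    ("10.0.0.5", 22): DROP,      # block SSH to this host
    ("10.0.0.9", 80): FORWARD,   # explicitly allow HTTP to this host
}

def process_packet(pkt):
    """Apply the table's action to one packet (a dict of header fields)."""
    return firewall_table.get((pkt["dst_ip"], pkt["dst_port"]), FORWARD)

print(process_packet({"dst_ip": "10.0.0.5", "dst_port": 22}))   # -> drop
print(process_packet({"dst_ip": "10.0.0.7", "dst_port": 443}))  # -> forward
```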

Current limitations on in-network computing

Despite the attractive potential of in-network computing, its full realization in practical deployments in the cloud and at the edge remains elusive. The key challenge here has been effectively handling the demanding workloads from stateful applications on a programmable data plane device. The current approach, while adequate for running a single program with fixed, small-sized workloads, significantly restricts the broader potential of in-network computing.

A considerable gap exists between the evolving needs of network operators and application developers and the current, somewhat limited, view of in-network computing, primarily due to a lack of resource elasticity. As the number of potential concurrent in-network applications grows and the volume of traffic that requires processing swells, the model is strained. At present, a single program can operate on a single device under stringent resource constraints, like tens of MB of SRAM on a programmable switch. Expanding these constraints typically necessitates significant hardware modifications, meaning when an application’s workload demands surpass the constrained resource capacity of a single device, the application fails to operate. In turn, this limitation hampers the wider adoption and optimization of in-network computing.

Bringing resource elasticity to in-network computing

In response to the fundamental challenge of resource constraints with in-network computing, we’ve embarked on a journey to enable resource elasticity. Our primary focus lies on in-switch applications—those running on programmable switches—which currently grapple with the strictest resource and capability limitations among today’s programmable data plane devices. Instead of proposing hardware-intensive solutions like enhancing switch ASICs or creating hyper-optimized applications, we’re exploring a more pragmatic alternative: an on-rack resource augmentation architecture.

In this model, we envision a deployment that integrates a programmable switch with other data-plane devices, such as smart NICs and software switches running on CPU servers, all connected on the same rack. The external devices offer an affordable and incremental path to scale the effective capacity of a programmable network in order to meet future workload demands. This approach offers an intriguing and feasible solution to the current limitations of in-network computing.

Figure 1: Example scenario scaling up to handle load across servers. The control plane installs programmable switch rules, which map cell sites to Far Edge servers.

In 2020, we presented a novel system architecture, called the Table Extension Architecture (TEA), at the ACM SIGCOMM conference.1 TEA innovatively provides elastic memory through a high-performance virtual memory abstraction. This allows top-of-rack (ToR) programmable switches to handle NFs with a large state in tables, such as one million per-flow table entries. These can demand several hundreds of megabytes of memory space, an amount typically unavailable on switches. The ingenious innovation behind TEA lies in its ability to allow switches to access unused DRAM on CPU servers within the same rack in a cost-efficient and scalable way. This is achieved through the clever use of Remote Direct Memory Access (RDMA) technology, offering only high-level Application Programming Interfaces (APIs) to application developers while concealing complexities.

Our evaluations with various NFs demonstrate that TEA can deliver low and predictable latency together with scalable throughput for table lookups, all without ever involving the servers’ CPUs. This innovative architecture has drawn considerable attention from members of both academia and industry and has found its application in various use cases that include network telemetry and 5G user-plane functions.
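
The sketch below conveys the intuition behind TEA’s virtual memory abstraction at a purely conceptual level: a small on-switch table acts as a cache for hot flows, and misses are served from DRAM on rack-local servers reached over RDMA. None of this Python corresponds to TEA’s real interfaces, which are implemented in the switch data plane.

```python
# Conceptual sketch of TEA's idea: keep a small table in on-switch SRAM and
# fall back to rack-local server DRAM (reached via RDMA) on a miss. This is
# only an illustration of the abstraction, not TEA's actual implementation.
class ElasticFlowTable:
    def __init__(self, sram_capacity, remote_dram):
        self.sram = {}                    # limited on-switch memory
        self.capacity = sram_capacity
        self.remote = remote_dram         # stands in for RDMA-accessible DRAM

    def lookup(self, flow_key):
        if flow_key in self.sram:         # fast path: on-switch hit
            return self.sram[flow_key]
        state = self.remote.get(flow_key) # slow path: one RDMA read to a server
        if state is not None and len(self.sram) < self.capacity:
            self.sram[flow_key] = state   # cache the hot flow on the switch
        return state

# One million per-flow entries at a few hundred bytes each is hundreds of MB,
# far beyond switch SRAM, which is why the remote DRAM tier is needed.
remote = {("10.0.0.1", "10.0.0.2", 6, 5000, 443): {"nat_port": 61002}}
table = ElasticFlowTable(sram_capacity=100_000, remote_dram=remote)
print(table.lookup(("10.0.0.1", "10.0.0.2", 6, 5000, 443)))
```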

In April, we introduced ExoPlane at the USENIX Symposium on Networked Systems Design and Implementation (NSDI).2 ExoPlane is an operating system specifically designed for on-rack switch resource augmentation to support multiple concurrent applications.

The design of ExoPlane incorporates a practical runtime operating model and state abstraction to tackle the challenge of effectively managing application states across multiple devices with minimal performance and resource overheads. The operating system consists of two main components: the planner, and the runtime environment. The planner accepts multiple programs, written for a switch with minimal or no modifications, and optimally allocates resources to each application based on inputs from network operators and developers. The ExoPlane runtime environment then executes workloads across the switch and external devices, efficiently managing state, balancing loads across devices, and handling device failures. Our evaluation highlights that ExoPlane provides low latency, scalable throughput, and fast failover while maintaining a minimal resource footprint and requiring few or no modifications to applications.
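
As a loose illustration of the planner’s role (a greedy placement of our own invention, for intuition only, not ExoPlane’s actual algorithm), allocating application state across the switch and external devices can be pictured as follows:

```python
# Loose illustration of the planner's job: place each application's state on
# the switch while SRAM lasts, spilling the rest to external devices. This is
# a simplification for intuition only, not ExoPlane's actual algorithm.
def plan(apps, switch_sram_mb, external_devices):
    placement, remaining = {}, switch_sram_mb
    # Favor latency-critical applications for scarce on-switch memory.
    for name, demand_mb, priority in sorted(apps, key=lambda a: -a[2]):
        if demand_mb <= remaining:
            placement[name] = "switch"
            remaining -= demand_mb
        else:
            # Spill to the least-loaded external device (smart NIC or server).
            device = min(external_devices, key=external_devices.get)
            external_devices[device] += demand_mb
            placement[name] = device
    return placement

apps = [("nat", 40, 3), ("firewall", 20, 2), ("telemetry", 150, 1)]
print(plan(apps, switch_sram_mb=64, external_devices={"smartnic-0": 0, "server-0": 0}))
# -> {'nat': 'switch', 'firewall': 'switch', 'telemetry': 'smartnic-0'}
```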

Looking ahead: The future of in-network computing

As we continue to explore the frontiers of in-network computing, we see a future rife with possibilities, exciting research directions, and new deployments in production environments. Our present efforts with TEA and ExoPlane have shown us what’s possible with on-rack resource augmentation and elastic in-network computing. We believe that they can be a practical basis for enabling in-network computing for future applications, telecommunication workloads, and emerging data plane hardware. As always, the ever-evolving landscape of networked systems will continue to present new challenges and opportunities. At Microsoft we are aggressively investigating, inventing, and lighting up such technology advancements through infrastructure enhancements. In-network computing frees up CPU cores resulting in reduced cost, increased scale, and enhanced functionality that telecom operators can benefit from, through our innovative products such as Azure Operator Nexus.

References

TEA: Enabling State-Intensive Network Functions on Programmable Switches, ACM SIGCOMM 2020

ExoPlane: An Operating System for On-Rack Switch Resource Augmentation, USENIX NSDI 2023


Accelerating the pace of innovation with Azure Space and our partners

Azure Space innovating into the future

Today, I’m excited to share some news spanning the full spectrum of space industry use cases, including:

Real-world examples of how Azure Orbital Ground Station is enabling both space agencies and start-ups with new ways to operate satellites in orbit.

A new addition to the Azure Space family, the Planetary Computer (and the petabytes of data within its catalog). Together with new partnerships with Esri and Synthetaic, we are on a journey to empower our customers through rapid new insights from earth observation data. 

When Microsoft announced Azure Space in 2020, we saw a chance to build new solutions that meet modern demands and create opportunities for the future. So, we applied our proven partner-first approach to rapidly reimagine traditional space solutions, introduce new software-based tools, and minimize the cost barriers holding back the space ecosystem.

This partner-first approach has allowed us to rapidly go from vision to solutions. In October 2020, we shared early outcomes that collectively brought Azure together with a global network of ground stations, provided resilient connectivity to the hyperscale cloud, and launched a transformational effort to virtualize satellite communications.

Azure customers

Since then, both established satellite operators and exciting new start-ups have redefined the on-orbit possibilities. Now, start-ups, government agencies, and enterprises are experimenting with new form factor satellites, relying upon Earth observation data to derive actionable insights, and identifying ways to maintain constant connectivity in unpredictable global environments.

Together with our partners, we are rapidly innovating to provide every space operator with the solutions to solve persistent challenges in new ways and capture new opportunities in the rapidly expanding space sector.

Azure Orbital Ground Station supports our customers on and off the planet

Cloud computing is the foundation that underpins one of the biggest revolutions in the space industry, ground stations as a service. Those ground stations, in turn, have drastically lowered one of the most expensive barriers to entering space. One year after Azure Orbital Ground Station became generally available, customers including NASA and Muon Space are using it to support their operations.

Using KSAT and Azure Orbital Ground Station to improve delivery of NASA’s Earth science data products

Teams from NASA (Langley Research Center and Goddard Space Flight Center), the global space company KSAT, and Microsoft have completed a technology demonstration. The demo focused on data acquisition, processing, and distribution of near real-time Earth Science data products in the cloud.

The teams successfully validated space connectivity across KSAT and Microsoft Azure Orbital Ground Station sites with four public satellites owned by both NASA and the National Oceanic and Atmospheric Administration (NOAA): Terra, Aqua, Suomi National Polar-orbiting Partnership, and JPSS-1/NOAA-20. 

The demonstration was a showcase for Azure, which provided real-time cross-region data delivery from Microsoft and KSAT sites to NASA’s virtual network in Azure. With satellite data in the cloud, NASA used Azure compute and storage services to take data from raw form to higher processing levels (see sample final product in Image 1), improving latency from an average of 3 to 6 hours to under 25 minutes for some data products.

Integration of capabilities across Microsoft and partner KSAT allowed NASA to expand its coverage and connectivity, benefiting from ground station access through a single application programming interface (API), direct backhaul into Azure, cross-region delivery, and a unified data format experience.

Muon Space achieves liftoff and successful operations with MuSat-1 and Azure Orbital Ground Station

Muon Space selected Microsoft to support its first-ever launch in June 2023, leveraging Azure Orbital Ground Station as the sole ground station provider for their MuSat-1 mission. After MuSat-1 was deployed from the SpaceX Transporter 10, Muon Space achieved contact via Azure Orbital Ground Station within six minutes.1 From the launch and early operation (LEOP) stage to continuous on-orbit operations, Microsoft ground stations around the world are used to successfully communicate with MuSat-1. Because Azure Orbital Ground Station is a completely cloud-based solution, no hardware deployment was needed, enabling Muon Space to take advantage of an innovative virtual radio frequency (RF) solution with a custom modem.

“Since we’re building constellations of multiband remote sensing spacecraft with unique revisit, resolution, and data latency capabilities, our ground station partner was a critical choice. Muon selected Azure Orbital Ground Station as we launch our constellation due to its current capabilities and product roadmap. Collaborating with Microsoft to handle ground allows us to focus on the core mission of gathering climate intelligence and serving it to our customers.”
Jim Martz, Vice President, Engineering at Muon.

Expanding capabilities to derive rapid insights from massive amounts of space data

Today, we are applying that same approach to revolutionize geospatial data, as we welcome Planetary Computer to the Azure Space family. Planetary Computer is a robust geospatial data platform. It combines a multi-petabyte catalog of global multi-source data with intuitive APIs, a flexible scientific environment that allows users to answer global questions about that data, and applications that put those answers in the hands of many stakeholders. It is used by researchers, scientists, students, and organizations across the globe with millions of API calls every month.
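
For instance, searching the catalog through its public STAC API from Python might look like the following minimal sketch; it assumes the pystac-client and planetary-computer packages, and the collection, bounding box, and date range shown are arbitrary examples.

```python
# Minimal sketch: searching the Planetary Computer STAC catalog from Python.
# Assumes the pystac-client and planetary-computer packages; the collection,
# bounding box, and date range below are arbitrary examples.
import planetary_computer
import pystac_client

catalog = pystac_client.Client.open(
    "https://planetarycomputer.microsoft.com/api/stac/v1",
    modifier=planetary_computer.sign_inplace,  # signs asset URLs for access
)

search = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=[-76.5, 38.5, -75.5, 39.5],        # roughly the upper Chesapeake Bay area
    datetime="2023-06-01/2023-06-30",
    query={"eo:cloud_cover": {"lt": 10}},   # keep mostly cloud-free scenes
)

items = list(search.items())
print(f"Found {len(items)} items")
```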

With Planetary Computer now part of the Azure Space family, we are beginning to work with partners in new ways. Today, we are building upon the existing catalog to drive a path to new capabilities and partnerships that will empower users with one of the largest Earth observation data sets at their fingertips—petabytes worth of possibilities for understanding our planet. In keeping with our partner-first approach, Esri and Synthetaic will provide essential capabilities to our platform and ecosystem. By combining the power of data analysis in Synthetaic’s RAIC, with the data visualization of Esri’s ArcGIS, customers will be able to glean insights from space data at previously unattainable speed and scale.

Planetary Computer is grounded in our commitment to not only better understand our world but also to leverage the insights we gain to achieve our goals of being a carbon-negative, water-positive, and zero-waste company by 2030. A quick look at Chesapeake Conservancy, a nonprofit organization based in Annapolis, Maryland, illustrates the possibilities that Planetary Computer can unlock for any organization to achieve its sustainability goals.

Chesapeake Conservancy is leading a regional effort to protect 30 percent of the Chesapeake Bay watershed by 2030 using a precision conservation approach that optimizes resources and protects land with the greatest value for water quality, outdoor recreation, wildlife habitat, and local economies.

“Collaborating with Microsoft Azure, we developed an AI system that maps ground-mounted solar arrays using up-to-date satellite data enabling us to regularly track one of the most rapid drivers of land use change in the watershed,” says Joel Dunn, CEO, Chesapeake Conservancy. “Going forward, we must contextualize these insights within a complete, up-to-date picture of land use. We’re excited to work with Microsoft and their partners to produce our 1-meter land use data more frequently and accurately keeping an active pulse on the entire Chesapeake Bay watershed.”

Using tools from our partners to harness the power of Microsoft Planetary Computer, customers such as Chesapeake Conservancy can use artificial intelligence and groundbreaking data to accelerate progress in conserving landscapes vital to the Chesapeake Bay’s health and its cultural heritage while equitably connecting people to the Chesapeake, as seen in the video here.

The space industry has made incredible progress since the dawn of the space age in the 1950s. That progress, however, has been slow, hard, and expensive. It has required expensive and sustained investments by governments, followed by the creation of bespoke, rigid, and complex-to-integrate satellite constellations and ground infrastructure.

What’s next for Azure Space

Building on the Microsoft Planetary Computer to develop a new Azure Space Data solution will create an end-to-end space fabric, providing ubiquitous connectivity, resiliency, and global insights at scale, in real-time. We are excited to build this platform and open it to the many companies, large and small, who are shaping the future of space and look forward to collaborating with those writing the next chapter in humanity’s journey beyond Earth.

We invite partners and enterprises interested in learning more about Azure Space to do the following:

Learn more about Azure Space

Sign up for news and updates on how Space data can advance your organization and missions, or complete this form to get in touch with the Azure Space team.

Save the date—join us for the Planetary Computer webinar in December.

References

Muon Space launches first satellite.


Real-world sustainability solutions with Azure IoT

In today’s fast-moving world, organizations are deploying innovative IoT and Digital Operations solutions that drive sustainable business practices, achieve energy conservation goals, and enhance operational efficiencies. I am amazed by their work and want to share a handful of recent stories that showcase how organizations use technology to solve real-world sustainability challenges for their customers.

Sustainability practices reduce energy use, waste, and costs

With technologies like open industrial IoT, advanced analytics, and AI, Microsoft Azure ensures manufacturing organizations are well-equipped to understand, mitigate, and validate their environmental impacts. Celanese and SGS are just two examples of Azure customers using IoT and Digital Operations to reduce energy use, waste, and costs.

Celanese, a specialty materials and chemical manufacturing company, envisions a Digital Plant of the Future powered by Cognite Data Fusion® on Microsoft Azure. The idea is to unify their processes, assets, and 25,000 employees on a common, scalable, and secure platform where AI algorithms actively identify and solve manufacturing problems.

For a global specialty manufacturer like Celanese, its ability to deploy diverse solutions quickly and cost-effectively anywhere across its value chain translates into millions of dollars in savings by optimizing heavy machinery and industrial processes. Azure Kubernetes Service (AKS) is core to Cognite’s infrastructure. Azure Functions orchestrates complex calculations with data stored in Azure Data Lake. AI capabilities in Azure and Azure Machine Learning provide actionable insights with contextualized industrial data. The solutions boost energy efficiency and reduce carbon emissions across Celanese’s 30 industrial facilities around the globe.

Testing, inspection, and certification company SGS partnered with Microsoft Azure to develop an intelligent device for wind turbines called OCM-Online®, which uses Azure IoT Edge, Azure IoT Hub, and three Azure database services. The solution monitors and predicts turbine oil conditions and levels by collecting data from sensors that provide more than 17 different parameters from over 315 wind turbines. The solution is installed across one of the largest wind farms in the world, the Three Gorges Yangjiang Shaba Offshore project, which powers 2.4 million households.

Instead of following a prescribed schedule for oil changes, wind farm operators now change oil only when data shows it is needed. This greatly reduces unnecessary oil changes and recycling challenges. Historically, field teams manually collected samples and delivered them to a lab for analysis. With the global market size for online oil fluid monitoring valued at 689.7 million USD in 2021 and projected to reach 1.4 billion USD by 2031, digital solutions like OCM-Online are paramount to reducing waste and recycling challenges.

Data drives energy conservation efforts

We are seeing a massive build-out of clean energy technologies—wind, solar, hydro, and nuclear energy. However, tackling the supply side of energy use will not get us to global energy reduction goals. We need to reduce the demand. It’s challenging for consumers to make energy use decisions without clear and accessible data. Azure customers like SA Power Networks and Watts have innovative solutions consumers need to make smart, informed decisions.

One consumer-based solution comes from Watts, a Danish energy technology company. Watts uses Microsoft Azure for its smart home energy-tracking applications, which allow households to monitor their own energy consumption patterns to understand how energy is being used and make decisions about when or if to run appliances. Consumers can even see where the energy comes from, so they can choose to power the house with green energy. The company is at the forefront of developing intuitive, accessible, user-friendly tools that use IoT devices to monitor power consumption. Near real-time data monitoring has driven down energy use in almost all homes on the grid.

Watts chose to build on the Microsoft .NET platform, a free, open-source software development framework and ecosystem designed by Microsoft. It created a system of 50 microservices communicating via Azure Service Bus and running on Azure Event Hubs. The system also relies on Azure Table Storage and Azure Blob Storage. The company has seen a huge increase in its customer base, indicating that consumers want to make decisions that have an impact. Watts went from 150,000 users to 550,000 at the end of 2022.

Another consumer-facing solution comes from South Australian utility company SA Power Networks. It developed a solution based on Microsoft Azure IoT which enables customers with rooftop solar panels to export excess solar energy to the power grid. This excess energy provides a significant share of renewables on the grid that services 1.7 million customers spread across 180,000 square kilometers. 

Data from devices provides visibility into network conditions down to the local level, allowing SA Power Networks to respond more quickly to potential issues. It also allows for dynamically managed network capacity to keep energy resources balanced for a stable and more resilient grid. In just 12 months, the average customer doubled their exported energy which makes more low-cost, renewable energy available to all customers on the SA Power Networks grid.

Operational efficiencies support growth while reducing costs

When companies optimize their operations, they experience increased productivity and reduced production costs. They also consume less energy and use fewer resources. Telefónica, a telecommunications provider, uses an Azure-IoT-based platform to efficiently and securely manage 6.5 billion messages each day. Its Home Advanced Connectivity (HAC) platform uses Microsoft Azure IoT and Azure IoT Hub device provisioning service to enable real-time, bidirectional data flows between 4.5 million in-home gateway devices and the Telefónica cloud. Operations teams can diagnose or predict connectivity issues by retrieving information directly from a customer’s router and delivering a fix within a single, continuous data flow. HAC also uses IoT Hub device twins to help ensure precise, remote configuration of routers. It’s an efficient digital solution that streamlines scaling up to 20 million devices in the next few years.
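
As a simplified illustration of the device-twin pattern described here (the connection string and property names are placeholders, not Telefónica’s implementation), a gateway device can pick up desired configuration from its twin and report the applied state back:

```python
# Simplified illustration of the IoT Hub device-twin pattern: a gateway device
# reads desired configuration from its twin and reports state back. The
# connection string and property names are placeholders, not Telefónica's.
from azure.iot.device import IoTHubDeviceClient

client = IoTHubDeviceClient.create_from_connection_string(
    "<device-connection-string>"
)
client.connect()

twin = client.get_twin()
desired_wifi_channel = twin["desired"].get("wifiChannel", 6)

# ... apply the configuration to the router hardware here ...

# Report the applied configuration so operations teams can verify it remotely.
client.patch_twin_reported_properties({"wifiChannel": desired_wifi_channel,
                                        "firmware": "1.4.2"})
client.shutdown()
```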

Let Azure unlock your potential

From startups to Fortune 500 powerhouses, Azure is fueling innovation and driving success across diverse industries worldwide. This is a small sampling of the work our customers are doing to support sustainability goals for the public and private sectors. You can read their success stories and about other companies here.

Microsoft and Accenture partner to tackle methane emissions with AI technology

This post was co-authored by Dan Russ, Associate Director, and Sacha Abinader, Managing Director from Accenture.

The year 2022 was a notable one in the history of our climate—it stood as the fifth warmest year ever recorded1. An increase in extreme weather conditions, from devastating droughts and wildfires to relentless floods and heat waves, made their presence felt more than ever before—and 2023 seems poised to shatter still more records. These unnerving circumstances demonstrate the ever-growing impact of climate change that we’ve come to experience as the planet continues to warm.

Microsoft’s sustainability journey

At Microsoft, our approach to mitigating the climate crisis is rooted both in addressing the sustainability of our own operations and in empowering our customers and partners in their journey to net-zero emissions. In 2020, Microsoft set out with a robust commitment: to be a carbon-negative, water-positive, and zero-waste company, while protecting ecosystems, all by the year 2030. Three years later, Microsoft remains steadfast in its resolve. As part of these efforts, Microsoft has launched Microsoft Cloud for Sustainability, a comprehensive suite of enterprise-grade sustainability management tools aimed at supporting businesses in their transition to net-zero.

Moreover, our contribution to several global sustainability initiatives has the goal of benefiting every individual and organization on this planet. Microsoft has accelerated the availability of innovative climate technologies through our Climate Innovation Fund and is working hard to strengthen our climate policy agenda. Microsoft’s focus on sustainability-related efforts forms the backdrop for the topic tackled in this blog post: our partnership with Accenture on the application of AI technologies toward solving the challenging problem of methane emissions detection, quantification, and remediation in the energy industry.

“We are excited to partner with Accenture to deliver methane emissions management capabilities. This combines Accenture’s deep domain knowledge together with Microsoft’s cloud platform and expertise in building AI solutions for industry problems. The result is a solution that solves real business problems and that also makes a positive climate impact.”—Matt Kerner, CVP Microsoft Cloud for Industry, Microsoft.

Why is methane important?

Methane is approximately 85 times more potent than carbon dioxide (CO2) at trapping heat in the atmosphere over a 20-year period. It is the second most abundant anthropogenic greenhouse gas after CO2, accounting for about 20 percent of global greenhouse gas emissions.

The global oil and gas industry is one of the primary sources of methane emissions. These emissions occur across the entire oil and gas value chain, from production and processing to transmission, storage, and distribution. The International Energy Agency (IEA) estimates that it is technically possible to avoid around 75 percent of today’s methane emissions from global oil and gas operations. These statistics drive home the importance of addressing this critical issue.

Microsoft’s investment in Project Astra

Microsoft has signed on to the Project Astra initiative—together with leading energy companies, public sector organizations, and academic institutions—in a coordinated effort to demonstrate a novel approach to detecting and measuring methane emissions from oil and gas production sites.

Project Astra entails an innovative sensor network that harnesses advances in methane-sensing technologies, data sharing, and data analytics to provide near-continuous emissions monitoring of methane across oil and gas facilities. Once operational, this kind of smart digital network would allow producers and regulators to pinpoint methane releases for timely remediation.

Accenture and Microsoft—The future of methane management

Attaining the goal of net-zero methane emissions is becoming increasingly possible. The technologies needed to mitigate emissions are maturing rapidly, and digital platforms are being developed to integrate complex components, as discussed in Accenture’s recent methane thought leadership piece, “More than hot air with methane emissions”. What is needed now is a shift from a reactive paradigm to a preventative one, where the critical task of leak detection and remediation is transformed into leak prevention by leveraging advanced technologies.

Accenture’s specific capabilities and toolkit

To date, the energy industry’s approach to methane management has been fragmented, built on a host of costly monitoring tools and equipment siloed across various operational entities. These siloed solutions have made it difficult for energy companies to accurately analyze emissions data at scale and remediate problems quickly.

What has been lacking is a single, affordable platform that can integrate these components into an effective methane emissions mitigation tool. These components include enhanced detection and measurement capabilities, machine learning for better decision-making, and modified operating procedures and equipment that make “net-zero methane” happen faster. These platforms are being developed now and can accommodate a wide variety of technology solutions that will form the digital core necessary to achieve a competitive advantage.

Accenture has created a Methane Emissions Monitoring Platform (MEMP) that facilitates the integration of multiple data streams and embeds key methane insights into business operations to drive action (see Figure 1 below).

Figure 1: Accenture’s Methane Emissions Monitoring Platform (MEMP).

The cloud-based platform, which runs on Microsoft Azure, enables energy companies both to measure baseline methane emissions in near real-time and to detect leaks using satellites, fixed-wing aircraft, and ground-level sensing technologies. It is designed to integrate multiple data sources to help operators manage and reduce venting, flaring, and fugitive emissions. Figure 2 below illustrates the aspirational end-to-end process incorporating Microsoft technologies. MEMP also facilitates connectivity with back-end systems responsible for work order creation and management, including the scheduling and dispatching of field crews to remediate specific emission events.

Figure 2: The Methane Emissions Monitoring Platform Workflow (aspirational).

Microsoft’s AI tools powering Accenture’s Methane Emissions Monitoring Platform

Microsoft has provided a number of Azure-based AI tools for tackling methane emissions, including tools that support sensor placement optimization, digital twin for methane Internet of Things (IoT) sensors, anomaly (leak) detection, and emission source attribution and quantification. These tools, when integrated with Accenture’s MEMP, allow users to monitor alerts in near real-time through a user-friendly interface, as shown in Figure 3.

Figure 3: MEMP Landing Page visualizing wells, IoT sensors, and Work Orders.

“Microsoft has developed differentiated AI capabilities for methane leak detection and remediation, and is excited to partner with Accenture in integrating these features onto their Methane Emissions Monitoring Platform, to deliver value to energy companies by empowering them in their path to net-zero emissions”—Merav Davidson, VP, Industry AI, Microsoft.

Methane IoT sensor placement optimization

Placing sensors in strategic locations to ensure maximum potential coverage of the field and timely detection of methane leaks is the first step towards building a reliable end-to-end IoT-based detection and quantification solution. Microsoft’s solution for sensor placement utilizes geospatial, meteorological, and historical leak-rate data together with an atmospheric dispersion model to simulate methane plumes from sources within the area of interest and obtain a consolidated view of emissions. It then selects the best sensor locations, subject to cost constraints, using either a mathematical programming optimization method, a greedy approximation method, or an empirical downwind method that considers the dominant wind direction.
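To make the greedy approximation idea concrete, here is a minimal, illustrative sketch, not Microsoft’s implementation. It assumes a precomputed boolean coverage matrix, derived offline from dispersion-model plume simulations, that records which candidate site would detect which simulated leak scenario, and it repeatedly picks the site with the largest marginal coverage gain until the sensor budget is exhausted.

```python
import numpy as np

def greedy_sensor_placement(coverage: np.ndarray, budget: int) -> list[int]:
    """Pick up to `budget` candidate sites that maximize leak coverage.

    coverage[i, j] is True if a sensor at candidate site i would detect
    simulated leak scenario j (e.g., derived from dispersion-model plumes).
    """
    n_sites, n_leaks = coverage.shape
    covered = np.zeros(n_leaks, dtype=bool)
    chosen: list[int] = []
    for _ in range(budget):
        # Marginal gain: newly covered leak scenarios for each remaining site.
        gains = (coverage & ~covered).sum(axis=1)
        gains[chosen] = -1          # do not reselect a site
        best = int(gains.argmax())
        if gains[best] <= 0:        # no further improvement possible
            break
        chosen.append(best)
        covered |= coverage[best]
    return chosen

# Example: 100 candidate sites, 500 simulated leak scenarios, budget of 15 sensors.
rng = np.random.default_rng(0)
coverage = rng.random((100, 500)) < 0.05
sites = greedy_sensor_placement(coverage, budget=15)
print(sites, coverage[sites].any(axis=0).mean())  # chosen sites, leak detection ratio
```

The final print statement reports the leak detection ratio for the chosen layout, which is the same quantity the validation module tracks in the sensitivity analysis described next.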

In addition, Microsoft provides a validation module to evaluate the performance of any candidate sensor placement strategy. Through sensitivity analysis, operators can evaluate the marginal gain offered by each additional sensor in the network, as shown in Figure 4 below.

Figure 4: Left: Increase in leak coverage with the number of sensors. As the number of sensors available for deployment increases, the leak detection ratio (i.e., the fraction of leaks detected by the deployed sensors) increases. Right: Source coverage for 15 sensors. The arrows map each sensor (red circles) to the sources (black triangles) that it detects.

End-to-end data pipeline for methane IoT sensors

To achieve continuous monitoring of methane emissions from oil and gas assets, Microsoft has implemented an end-to-end solution pipeline in which streaming data from IoT Hub is ingested into a Bronze Delta Lake table using Structured Streaming on Spark. The sensor data is then cleaned, aggregated, and transformed to the algorithm data model, and the result is stored in a Silver Delta Lake table in a format optimized for downstream AI tasks.
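The following PySpark sketch illustrates the Bronze-to-Silver pattern described above. It is a simplified outline rather than the production pipeline: the IoT Hub connection settings (read through the Event Hubs-compatible endpoint via the Azure Event Hubs Spark connector), the message schema, the table and checkpoint names, and the one-minute aggregation window are all placeholders.

```python
from pyspark.sql import SparkSession, functions as F, types as T

spark = SparkSession.builder.getOrCreate()

# Bronze: land raw IoT Hub telemetry as-is. The connection string is a placeholder;
# IoT Hub exposes an Event Hubs-compatible endpoint for the Spark connector.
raw = (spark.readStream
       .format("eventhubs")
       .options(**{"eventhubs.connectionString": "<encrypted-connection-string>"})
       .load())

(raw.writeStream
    .format("delta")
    .option("checkpointLocation", "/chk/bronze_methane")
    .outputMode("append")
    .toTable("bronze_methane_raw"))

# Silver: parse, clean, and aggregate into the shape the AI models expect.
schema = T.StructType([
    T.StructField("sensor_id", T.StringType()),
    T.StructField("timestamp", T.TimestampType()),
    T.StructField("ch4_ppm", T.DoubleType()),
    T.StructField("wind_speed", T.DoubleType()),
    T.StructField("wind_dir_deg", T.DoubleType()),
])

silver = (spark.readStream.table("bronze_methane_raw")
          .select(F.from_json(F.col("body").cast("string"), schema).alias("m"))
          .select("m.*")
          .where(F.col("ch4_ppm").isNotNull())          # drop malformed readings
          .withWatermark("timestamp", "10 minutes")
          .groupBy("sensor_id", F.window("timestamp", "1 minute"))
          .agg(F.avg("ch4_ppm").alias("ch4_ppm_avg"),
               F.avg("wind_speed").alias("wind_speed_avg"),
               F.avg("wind_dir_deg").alias("wind_dir_avg")))

(silver.writeStream
    .format("delta")
    .option("checkpointLocation", "/chk/silver_methane")
    .outputMode("append")
    .toTable("silver_methane_1min"))
```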

Methane leak detection is performed using univariate and multivariate anomaly detection models for improved reliability. Once a leak has been detected, its severity is computed, and the emission source attribution and quantification algorithm then identifies the likely source of the leak and quantifies the leak rate.

This event information is sent to the Accenture Work Order Prioritization module to trigger appropriate alerts based on the severity of the leak to enable timely remediation of fugitive or venting emissions. The quantified leaks can also be recorded and reported using tools such as the Microsoft Sustainability Manager app. The individual components of this end-to-end pipeline are described in the sections below and illustrated in Figure 5.

Figure 5: End-to-end IoT data pipeline that runs on Microsoft Azure demonstrating methane leak detection, quantification, and remediation capabilities.

Digital twin for methane IoT sensors

Data streaming from IoT sensors deployed in the field needs to be orchestrated and reliably passed to the processing and AI execution pipeline. Microsoft’s solution creates a digital twin for every sensor. The digital twin comprises a sensor simulation module that is leveraged in different stages of the methane solution pipeline. The simulator is used to test the end-to-end pipeline before field deployment, to reconstruct and analyze anomalous events through what-if scenarios, and to enable the source attribution and leak quantification module through a simulation-based, inverse-modeling approach.
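As a simplified illustration of the kind of physics a sensor digital twin can wrap, the sketch below computes a steady-state Gaussian plume concentration at a simulated sensor position. The dispersion coefficients (a Briggs-style rural fit for neutral atmospheric stability), the source height, and the leak rate are assumptions for illustration only; the production simulator is more sophisticated.

```python
import math

def plume_concentration(q_kg_s, x_m, y_m, z_m, wind_m_s, source_h_m=2.0):
    """Steady-state Gaussian plume concentration (kg/m^3) at a point downwind.

    x_m: downwind distance, y_m: crosswind offset, z_m: height above ground.
    sigma_y/sigma_z use a rough rural neutral-stability power-law fit; a fuller
    model would pick coefficients by stability class and terrain.
    """
    if x_m <= 0 or wind_m_s <= 0:
        return 0.0
    sigma_y = 0.08 * x_m / math.sqrt(1 + 0.0001 * x_m)
    sigma_z = 0.06 * x_m / math.sqrt(1 + 0.0015 * x_m)
    crosswind = math.exp(-y_m**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z_m - source_h_m) ** 2 / (2 * sigma_z**2))
                + math.exp(-(z_m + source_h_m) ** 2 / (2 * sigma_z**2)))  # ground reflection
    return q_kg_s / (2 * math.pi * wind_m_s * sigma_y * sigma_z) * crosswind * vertical

# Simulated reading at a sensor 150 m downwind, 10 m off the plume centerline.
c = plume_concentration(q_kg_s=0.005, x_m=150, y_m=10, z_m=2, wind_m_s=3)
print(f"simulated CH4 concentration: {c:.2e} kg/m^3")
```

Running many such simulations with perturbed leak rates and weather inputs is one way a digital twin can support the what-if reconstructions and the inverse-modeling attribution described here.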

Anomaly (leak) detection

A methane leak at a source can manifest as an unusual rise in the methane concentration detected at nearby sensor locations, requiring timely mitigation. The first step toward identifying such an event is to trigger an alert through the anomaly detection system; a severity score is then computed for each anomaly to help prioritize alerts. Microsoft provides the following two methods for time series anomaly detection, leveraging its open-source SynapseML library, which is built on the Apache Spark distributed computing framework and simplifies the creation of massively scalable machine learning pipelines (a minimal sketch of the univariate case follows the list below):

Univariate anomaly detection: Based on a single variable, for example, methane concentration.

Multivariate anomaly detection: Used in scenarios where multiple variables, including methane concentration, wind speed, wind direction, temperature, relative humidity, and atmospheric pressure, are used to detect an anomaly.
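For intuition, the sketch below shows a minimal univariate detector based on a rolling z-score. It illustrates the idea rather than the SynapseML API, and the window length, threshold, and severity formula are assumed tuning parameters, not production values.

```python
import pandas as pd

def flag_univariate_anomalies(ch4: pd.Series, window: int = 60, z_thresh: float = 4.0) -> pd.DataFrame:
    """Flag samples whose rolling z-score exceeds a threshold.

    ch4: time-indexed methane concentration readings from a single sensor.
    window and z_thresh are illustrative tuning parameters.
    """
    baseline = ch4.rolling(window, min_periods=window).mean()
    spread = ch4.rolling(window, min_periods=window).std()
    z = (ch4 - baseline) / spread
    return pd.DataFrame({
        "ch4": ch4,
        "z_score": z,
        "is_anomaly": z > z_thresh,
        "severity": (z - z_thresh).clip(lower=0),   # crude severity score for alert ranking
    })

# Example: one day of 1-minute readings with an injected leak-like spike.
idx = pd.date_range("2023-08-01", periods=1440, freq="min")
readings = pd.Series(2.0, index=idx) + pd.Series(range(1440), index=idx).mod(7) * 0.01
readings.iloc[900:930] += 3.5                       # simulated leak
alerts = flag_univariate_anomalies(readings)
print(alerts[alerts["is_anomaly"]].head())
```

A multivariate detector follows the same pattern but scores the joint behavior of concentration, wind, temperature, humidity, and pressure rather than a single series.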

Post-processing steps are implemented to reliably flag true anomalous events so that remedial actions can be taken in a timely manner, while reducing false positives to avoid unnecessary and expensive field trips for personnel. Figure 6 below illustrates this feature in Accenture’s MEMP: the “hover box” over Sensor 6 documents a total of seven alerts resulting in just two work orders being created.

Figure 6: MEMP dashboard visualizing alerts and resulting work orders for Sensor 6.

Emission source attribution and quantification

Once deployed in the field, methane IoT sensors can only measure compound signals in the proximity of their location. For an area of interest that is densely populated with potential emission sources, the challenge is to identify the source(s) of the emission event. Microsoft provides two approaches for identifying the source of a leak:

Area of influence attribution model: Given the sensor measurements and locations, an “area of influence” is computed for the sensor location at which a leak is detected, based on the real-time wind direction and asset geolocation. The asset(s) that lie within the computed “area of influence” are then identified as potential emission sources for that flagged leak (see the sketch after this list).

Bayesian attribution model: With this approach, source attribution is achieved through inversion of the methane dispersion model. The Bayesian approach comprises two main components, a source leak quantification model and a probabilistic ranking model, and can account for uncertainties in the data stemming from measurement noise and from statistical and systematic errors. It provides the most likely sources for a detected leak, along with the associated confidence level and leak rate magnitude.
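The sketch below illustrates the geometric intuition behind the area-of-influence approach: assets that sit upwind of the alerting sensor, within an assumed angular tolerance and search radius, are returned as candidate sources. The local coordinate frame, half-angle, and radius are placeholders for illustration, not values from MEMP.

```python
import math

def upwind_candidates(sensor_xy, assets, wind_dir_deg,
                      half_angle_deg=30.0, max_range_m=500.0):
    """Return assets lying in the upwind sector of a sensor that raised an alert.

    wind_dir_deg uses the meteorological convention: the direction the wind
    blows FROM (0 = north, 90 = east). assets: {asset_id: (x, y)} in the same
    local frame as sensor_xy. half_angle_deg and max_range_m are illustrative.
    """
    sx, sy = sensor_xy
    # Unit vector pointing from the sensor toward where the wind comes from.
    upwind = (math.sin(math.radians(wind_dir_deg)),
              math.cos(math.radians(wind_dir_deg)))
    candidates = []
    for asset_id, (ax, ay) in assets.items():
        dx, dy = ax - sx, ay - sy
        dist = math.hypot(dx, dy)
        if dist == 0 or dist > max_range_m:
            continue
        cos_angle = (dx * upwind[0] + dy * upwind[1]) / dist
        if cos_angle >= math.cos(math.radians(half_angle_deg)):
            candidates.append((asset_id, dist))
    return sorted(candidates, key=lambda c: c[1])   # nearest upwind assets first

wells = {"Well 24": (120.0, 160.0), "Well 7": (-300.0, 40.0), "Well 31": (80.0, -90.0)}
print(upwind_candidates((0.0, 0.0), wells, wind_dir_deg=37.0))
```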

Given the high number of sources, the low number of sensors, and the variability of the weather, this is a complex but highly valuable inverse modeling problem. Figure 7 provides insight into the leaks and work orders for a particular well (Well 24). Specifically, the diagrams provide well-centric and sensor-centric assessments that attribute a leak to this well.

Figure 7: Leak Source Attribution for Well 24.

Further, Accenture’s Work Order Prioritization module, which uses the Microsoft Dynamics 365 Field Service application (Figure 8), enables energy operators to initiate remediation measures under the Leak Detection and Repair (LDAR) paradigm.

Figure 8: Dynamics 365 Work Order with emission source attribution and CH4 concentration trend data embedded.

Looking ahead

In partnership with Microsoft, Accenture plans to continue refining MEMP, which is built on the advanced AI and statistical models presented in this blog post. Future capabilities of MEMP aim to move from “detection and remediation” to “prediction and prevention” of emission events, including enhanced event quantification and source attribution.

Microsoft and Accenture will continue to invest in advanced capabilities with an eye toward both:

Integrating industry-standard platforms such as Azure Data Manager for Energy (ADME) and the Open Footprint Forum to enable both publishing and consumption of emissions data.

Leveraging generative AI to simplify the user experience.

Learn more

Case study

Duke Energy is working with Accenture and Microsoft on the development of a new technology platform designed to measure actual baseline methane emissions from natural gas distribution systems.

Accenture Methane Emissions Monitoring Platform

More information regarding Accenture’s MEMP can be found in “More than hot air with methane emissions”. Additional information regarding Accenture can be found on the Accenture homepage and on their energy page.

Microsoft Azure Data Manager for Energy

Azure Data Manager for Energy is an enterprise-grade, fully managed OSDU Data Platform for the energy industry that is efficient, standardized, easy to deploy, and scalable for data management: ingesting, aggregating, storing, searching, and retrieving data. The platform will provide the scale, security, privacy, and compliance expected by our enterprise customers. It offers out-of-the-box compatibility with major service company applications, allowing geoscientists to use domain-specific applications with ease on data contained in Azure Data Manager for Energy.

Related publications and conference presentations

Source Attribution and Emissions Quantification for Methane Leak Detection: A Non-Linear Bayesian Regression Approach. Mirco Milletari, Sara Malvar, Yagna Oruganti, Leonardo Nunes, Yazeed Alaudah, Anirudh Badam. The 8th International Online & Onsite Conference on Machine Learning, Optimization, and Data Science.

Surrogate Modeling for Methane Dispersion Simulations Using Fourier Neural Operator. Qie Zhang, Mirco Milletari, Yagna Oruganti, Philipp Witte. Presented at the NeurIPS 2022 Workshop on Tackling Climate Change with Machine Learning.

1https://climate.nasa.gov/news/3246/nasa-says-2022-fifth-warmest-year-on-record-warming-trend-continues/

Optimize the cost of .NET and Java application migration to Azure cloud

In today’s economic environment, cost is top of mind for every organization. With uncertain global economic conditions, high inflation rates, and challenging job markets, many businesses are tightening their spending. Yet companies continue to prioritize substantial budget allocations for digital transformation, especially for the agility, performance, and security gained by migrating applications to the cloud. The reason is simple: investments in the cloud translate into positive impacts on business revenue and significant cost savings.

But how do businesses turn this opportunity into reality? In this article, we’ll look at several levers that Azure provides to help organizations maximize the cost benefits of migrating .NET and Java apps to the cloud. One thing to note about cost optimization is that it’s not only about price: there are significant financial benefits to be gained when you leverage the right technical resources, have access to best practices drawn from real-world experience with thousands of customers, and have the flexibility of the right pricing option for any scenario. Together, these factors can result in a compelling total cost of ownership (TCO).

Let’s look at some of these benefits for Azure App Service customers below: 

Azure landing zone accelerators

Enterprise web app patterns

Powerful Azure Migrate automation tooling

Offers to offset the initial cost of migration

Cost-effective range of pricing plans

Faster time to value with expert guidance through landing zone accelerators  

For cloud migration projects, getting it right quickly from the start sets the foundation for business success and savings. Azure landing zone accelerators are prescriptive solution architectures and guidance that aid IT pros in preparing for migration and deployment of on-premises apps to the cloud.  

Provided at no additional cost and capturing expert guidance from migrations completed with thousands of customers, landing zone accelerators are a compelling Azure differentiator that helps organizations focus on delivering value rather than spending cycles doing the heavy lifting of migration on their own. Based on well-architected principles and industry best practices for securing and scaling application and platform resources, these resources create tangible cost savings by reducing the time and effort needed to complete app migration projects.

Learn more about other landing zone accelerator workloads, and watch the Azure App Service landing zone accelerator demo. 

Enhance developer skilling with the reliable enterprise web app pattern

The reliable web app (RWA) pattern is another free resource from Azure, specifically designed to empower developers to confidently plan and execute the migration process. It is targeted both at cloud experts and at developers who may be more familiar with on-premises tools and solutions and are taking their first steps in the cloud. Built on the Azure Well-Architected Framework, this set of best practices helps developers successfully migrate web applications to the cloud and establishes a developer foundation for future innovation on Azure. We are pleased to announce that a reliable web app pattern for Java is now available, in addition to the .NET pattern announced at Build.

The reliable web app pattern provides guidance on the performance, security, operations, and reliability of web applications, with minimal changes required during the migration process. It smooths the learning curve and greatly reduces the length of the migration project, saving organizations the ongoing cost of maintaining on-premises infrastructure. The Azure Architecture Center provides comprehensive guidance, open-source reference implementation code, and CI/CD pipelines on GitHub. Check out the free, on-demand Microsoft Build 2023 session to learn more.

Accelerate the end-to-end migration journey with free automated tooling  

Costs of tooling and automation are often underestimated during migration projects. Azure Migrate is a free Microsoft tool for migrating and modernizing in Azure. It provides discovery, assessment, business case analysis, planning, migration, and modernization capabilities for various on-premises workloads, all while allowing developers to run and monitor the process from a single secure portal. Watch this short demo of the business case feature, and find Azure Migrate in the portal to get started.

Azure Migrate, Azure Advisor, and Azure Cost Management and Billing are components of this migration journey that provide guidance, insights, and the ability to right-size Azure resources for optimal cost-efficiency. 

Offset the initial cost of migration projects with Azure offerings

To alleviate risk and help customers jumpstart migration with confidence, Azure Migrate and Modernize partner offers are available. These offers not only help build a sustainable plan to accelerate the cloud journey with the right mix of best practices, resources, and extensive guidance at every stage, but may also include agile funding to offset the initial costs.

With Azure Migrate and Modernize, moving to the cloud is efficient and cost-optimized with free tools like Azure Migrate and Azure Cost Management. Additionally, it supports environmentally sustainable outcomes and drives operational efficiencies, while reducing migration costs through tailored offers and incentives based on your specific needs and journey. Work with your Microsoft partner to take advantage of these offers in your enterprise app migration.  

Benefit from a wide range of flexible and cost-effective plans

Azure App Service is one of the oldest and most popular destinations for .NET and Java app migrations, with over two and a half million web apps and growing fast. It offers a wide range of flexible pricing options to save on compute costs. Azure Savings Plan for Compute is ideal if the flexibility to run dynamic workloads across a variety of Azure services is crucial. Reserved instances are another popular option, providing substantial cost savings for workloads with predictable resource needs. There are various pricing plans and tiers to suit every budget and need, from a new entry-level Premium v3 plan called P0v3 to large-scale plans that support up to 256 GB of memory. For hobbyists and learners, Azure App Service has one of the most compelling free tiers, which continues to attract new developers every day.

Check out the Azure App Service pricing page and pricing calculator to learn more.  

Learn more

Interested in learning more? Dive deeper into the cost optimization strategies and see how other organizations have optimized their cost of migration with the following papers: 

Save up to 54 percent versus on-premises and up to 35 percent versus Amazon Web Services by migrating to Azure.

Forrester study finds 228 percent ROI when modernizing applications on Azure PaaS.

Plan to manage costs for App Service.

Read our customer stories, including those from the NBA, a leading sports association in the United States, and Nexi, a leading European payment technology company.

Follow Azure App Service on Twitter.

Scale generative AI with new Azure AI infrastructure advancements and availability

Generative AI is a powerful and transformational technology that has the potential to advance a wide range of industries from manufacturing to retail, and financial services to healthcare. Our early investments in hardware and AI infrastructure are helping customers to realize the efficiency and innovation generative AI can deliver. Our Azure AI infrastructure is the backbone of how we scale our offerings, with Azure OpenAI Service at the forefront of this transformation, providing developers with the systems, tools, and resources they need to build next-generation, AI-powered applications on the Azure platform. With generative AI, users can create richer user experiences, fuel innovation, and boost productivity for their businesses.  

As part of our commitment to bringing the transformative power of AI to our customers, today we’re announcing updates to how we’re empowering businesses with Azure AI infrastructure and applications. With the global expansion of Azure OpenAI Service, we are making OpenAI’s most advanced models, GPT-4 and GPT-35-Turbo, available in multiple new regions, providing businesses worldwide with unparalleled generative AI capabilities. Our Azure AI infrastructure powers this scalability, and we continue to invest in and expand it. We’re also announcing the general availability of the ND H100 v5 Virtual Machine series, equipped with NVIDIA H100 Tensor Core graphics processing units (GPUs) and low-latency networking, propelling businesses into a new era of AI applications.

Here’s how these advancements extend Microsoft’s unified approach to AI across the stack.  

General availability of ND H100 v5 Virtual Machine series: Unprecedented AI processing and scale

Today marks the general availability of our Azure ND H100 v5 Virtual Machine (VM) series, featuring the latest NVIDIA H100 Tensor Core GPUs and NVIDIA Quantum-2 InfiniBand networking. This VM series is meticulously engineered with Microsoft’s extensive experience in delivering supercomputing performance and scale to tackle the exponentially increasing complexity of cutting-edge AI workloads. As part of our deep and ongoing investment in generative AI, we are leveraging an AI-optimized 4K GPU cluster and will be ramping to hundreds of thousands of the latest GPUs in the next year.

The ND H100 v5 is now available in the East United States and South Central United States Azure regions. Enterprises can register their interest in access to the new VMs or review technical details on the ND H100 v5 VM series at Microsoft Learn.  

The ND H100 v5 VMs include the following features today: 

AI supercomputing GPUs: Equipped with eight NVIDIA H100 Tensor Core GPUs, these VMs promise significantly faster AI model performance than previous generations, empowering businesses with unmatched computational power.

Next-generation central processing unit (CPU): Understanding the criticality of CPU performance for AI training and inference, we have chosen the 4th Gen Intel Xeon Scalable processors as the foundation of these VMs, ensuring optimal processing speed.

Low-latency networking: The inclusion of NVIDIA Quantum-2 ConnectX-7 InfiniBand, with 400 Gb/s per GPU and 3.2 Tb/s of cross-node bandwidth per VM, ensures seamless performance across the GPUs, matching the capabilities of top-performing supercomputers globally.

Optimized host-to-GPU performance: With PCIe Gen5 providing 64 GB/s of bandwidth per GPU, Azure achieves significant performance advantages between CPU and GPU.

Large-scale memory and memory bandwidth: DDR5 memory is at the core of these VMs, delivering greater data transfer speeds and efficiency and making them ideal for workloads with larger datasets.

These VMs have proven their performance prowess, delivering up to a six-fold speedup in matrix multiplication operations when using the new 8-bit FP8 floating point data type compared with FP16 in previous generations. The ND H100 v5 VMs also achieve up to a two-fold speedup in end-to-end inference on large language models such as BLOOM 175B, demonstrating their potential to further optimize AI applications.

Azure OpenAI Service goes global: Expanding cutting-edge models worldwide

We are thrilled to announce the global expansion of Azure OpenAI Service, bringing OpenAI’s cutting-edge models, including GPT-4 and GPT-35-Turbo, to a wider audience worldwide. Our new live regions in Australia East, Canada East, East United States 2, Japan East, and United Kingdom South extend our reach and support for organizations seeking powerful generative AI capabilities. With the addition of these regions, Azure OpenAI Service is now available in even more locations, complementing our existing availability in East United States, France Central, South Central United States, and West Europe. The response to Azure OpenAI Service has been phenomenal, with our customer base nearly tripling since our last disclosure. We now proudly serve over 11,000 customers, attracting an average of 100 new customers daily this quarter. This remarkable growth is a testament to the value our service brings to businesses eager to harness the potential of AI for their unique needs.

As part of this expansion, we are increasing the availability of GPT-4, Azure OpenAI’s most advanced generative AI model, across the new regions. This enhancement allows more customers to leverage GPT-4’s capabilities for content generation, document intelligence, customer service, and beyond. With Azure OpenAI Service, organizations can propel their operations to new heights, driving innovation and transformation across various industries.
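As a minimal sketch of what this looks like for a developer, the snippet below calls a GPT-4 deployment through the Azure OpenAI Python SDK (openai>=1.0). The endpoint, API version, and deployment name are placeholders for the values from your own Azure OpenAI resource and model deployment.

```python
import os
from openai import AzureOpenAI

# Endpoint, API version, and deployment name are placeholders; substitute the
# values from your own Azure OpenAI resource and GPT-4 deployment.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-gpt4-deployment-name>",   # the deployment name, not the model family
    messages=[
        {"role": "system", "content": "You summarize customer service transcripts."},
        {"role": "user", "content": "Summarize: the customer reported a billing error..."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```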

A responsible approach to developing generative AI

Microsoft’s commitment to responsible AI is at the core of Azure AI and Machine Learning. The AI platform incorporates robust safety systems and leverages human feedback mechanisms to handle harmful inputs responsibly, ensuring the utmost protection for users and end consumers. Businesses can apply for access to Azure OpenAI Service and unlock the full potential of generative AI to propel their operations to new heights.

We invite businesses and developers worldwide to join us in this transformative journey as we lead the way in AI innovation. Azure OpenAI Service stands as a testament to Microsoft’s dedication to making AI accessible, scalable, and impactful for businesses of all sizes. Together, let’s embrace the power of generative AI and Microsoft’s commitment to responsible AI practices to drive positive impact and growth worldwide.

Customer inspiration

Generative AI is revolutionizing various industries, including content creation and design, accelerated automation, personalized marketing, customer service, chatbots, product and service innovation, language translation, autonomous driving, fraud detection, and predictive analytics. We are inspired by the way our customers are innovating with generative AI and look forward to seeing how customers around the world build upon these technologies.

Mercedes-Benz is innovating its in-car experience for drivers, powered by Azure OpenAI Service. The upgraded “Hey Mercedes” feature is more intuitive and conversational than ever before. KPMG, a global professional services firm, leverages our service to improve its service delivery model, achieve intelligent automation, and enhance the coding lifecycle. Wayve trains large-scale foundational neural networks for autonomous driving using Azure Machine Learning and Azure’s AI infrastructure. Microsoft partner SymphonyAI launched Sensa Copilot to empower financial crime investigators to combat the burden of illegal activity on the economy and organizations. By automating the collection, collation, and summarization of financial and third-party information, Sensa Copilot identifies money laundering behaviors and facilitates quick and efficient analysis for investigators. Discover all Azure AI and ML customer stories.

Learn more

Resources and getting started with Azure AI  

Azure AI Portfolio 

Explore Azure AI. 

Azure AI Infrastructure 

Apply now for the ND H100 v5 Virtual Machine series.

Review Azure AI Infrastructure documentation. 

Read more about Microsoft AI at Scale. 

Read more about Azure AI Infrastructure.

Azure OpenAI Service 

Apply now for access to Azure OpenAI Service. 

Apply now for access to GPT-4. 

Review Azure OpenAI Service documentation.

Explore the playground and customization in Azure AI Studio.  
