How Art Catalyzes Change—Join Us for a Livestream Event on September 20

On September 20th, at Automattic’s stunning NoHo space, artist Ana Teresa Fernández and a panel of other leaders—including Cristina Gnecco, Co-Founder of HOPE Hydration, and Whitney McGuire, Esq., Director of Sustainability at the Solomon R. Guggenheim Museum—will gather to discuss and confront our current climate crisis. This unique evening aims to delve into the compelling intersections of art, climate change, and social innovation.

We’d love for all of you to join us virtually for this event via livestream, starting at 6:45pm ET. 

RSVP for the livestream event

Held against the backdrop of Fernández’s incredible Under Pressure series, this event aims to spotlight how art and innovation can catalyze positive change. We hope to inspire attendees to become more than observers—to take their newfound insights to their communities and inspire collective action.

Here’s a statement about Under Pressure from Fernández’s website (which, of course, is powered by WordPress): 

Human beings have a hard time facing ugly truths. But what if an artist makes them beautiful? How then might we respond? Rather than turn away, would we change our ways, could we turn the tide against oncoming disaster? . . . 

Fernández is not only an artist of stunning visual poetry, she is also an astute social activist. It is not guilt and shame that motivate people to be their best selves; it is inspiration, empowerment, and hope that remind us of how intrinsically we are all connected in thought and deed.

We hope to see you there at 6:45pm ET on Wednesday, September 20th!

Source: RedHat Stack

Driving performance and enhancing services across Three UK’s 5G network

In the ever-evolving landscape of mobile telecommunications, Three UK deploys cutting-edge technologies to drive performance and improve overall service quality. Leveraging their 5G network and the power of AIOps, Three UK is focusing on enhancing the customer experience for data services such as streaming, gaming, and social media. This blog delves deeper into how Three UK uses Azure Operator Insights to expand their network capabilities, providing gamers with the most dynamic and innovative user experience.

Check out this video to learn more about how Three UK is unlocking actionable intelligence with Azure Operator Insights.

Recognizing the transformative potential of 5G, Three UK is committed to harnessing its capabilities to meet the escalating demands of the growing gaming industry. As the fourth largest mobile network operator in the United Kingdom, Three UK handles 29 percent of the country’s mobile data traffic, serving approximately 10 million subscribers. Three UK is the United Kingdom’s Ookla Speedtest Awards Winner for 5G mobile network speed during Q1-Q2 2023 with median download speeds of 265.75 Mbps.

Optimizing the gaming experience 

Three UK’s 5G network delivers impressive peak speeds and features high speed, low latency to ensure optimal responsiveness for gamers, allowing them to fully immerse themselves in virtual realms. To support this level of performance, Three UK has deployed a total of 18,400 sites, of which 4,600 provide high-speed 5G access to 60 percent1 of the population. 

Three UK recognizes the importance of leveraging data insights to maintain its position in the telecommunications industry and to further enhance the user experience. With many petabytes of data flowing through their network every day, Three UK possesses a wealth of telemetry information and metadata that can drive network management and customer satisfaction to new heights. By analyzing user behaviors, Three UK can identify factors contributing to positive or negative customer experiences, and then make highly targeted improvements. 

Employing Azure Operator Insights 

Managing a network as extensive and complex as Three UK’s certainly comes with challenges. Three UK adopts a best-of-breed approach by incorporating network functions from various suppliers. However, accessing data from these disparate sources poses a formidable challenge. By leveraging the expertise of Microsoft in data ingestion, transformation, and analysis, Three UK aims to unlock the full potential of this data. A key component of this collaboration is the application of Azure Operator Insights, a new service on the Azure cloud platform specifically designed to help telecommunications carriers manage and extract actionable information from their network data. 

Azure Operator Insights enables carriers like Three UK to collect, organize, and process large datasets, providing valuable business insights and improving customer experiences, in part by dramatically shortening time-to-insight. Assessments that previously took weeks or months can now be performed in minutes with AI. With this solution, Three UK can easily ingest terabytes of telemetry, event, and log data from various sources and vendors. The service also offers powerful data analysis tools, AI and machine learning processing, and secure data governance and sharing capabilities. 
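The blog does not show what such telemetry processing looks like, so here is a minimal, self-contained sketch of the idea: grouping per-flow telemetry records by cell site and computing simple KPIs. The field names and sample values are invented for illustration; they are not the Azure Operator Insights schema.

```python
from statistics import mean

# Hypothetical telemetry records, standing in for the event and log data
# an operator might ingest (field names are illustrative only).
records = [
    {"cell_id": "cell-01", "latency_ms": 18.0, "throughput_mbps": 240.0},
    {"cell_id": "cell-01", "latency_ms": 35.0, "throughput_mbps": 180.0},
    {"cell_id": "cell-02", "latency_ms": 12.0, "throughput_mbps": 310.0},
    {"cell_id": "cell-02", "latency_ms": 14.0, "throughput_mbps": 295.0},
]

def summarize(records):
    """Group records by cell and compute simple per-cell KPIs."""
    by_cell = {}
    for r in records:
        by_cell.setdefault(r["cell_id"], []).append(r)
    return {
        cell: {
            "mean_latency_ms": round(mean(x["latency_ms"] for x in rows), 1),
            "mean_throughput_mbps": round(mean(x["throughput_mbps"] for x in rows), 1),
        }
        for cell, rows in by_cell.items()
    }

kpis = summarize(records)
print(kpis["cell-01"])  # {'mean_latency_ms': 26.5, 'mean_throughput_mbps': 210.0}
```

At production scale this aggregation would run inside the managed service over terabytes of data rather than in a script, but the shape of the computation is the same.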

By leveraging Azure Operator Insights, Three UK gains the ability to efficiently analyze data, using applications like Azure Data Explorer, Azure Synapse Analytics, and Azure Databricks. They can optimize network performance using AI and machine learning models, identify areas for network improvements, and make data-driven decisions to enhance overall service quality. 

Enhancing customer experiences

Understanding the needs and preferences of customers is crucial in the competitive telecommunications landscape, where loyalty supports successful long-term performance. Because of this, Three UK leverages data insights to gain a comprehensive view of their customers’ experiences. By capturing and analyzing user gaming data and other important usage information, Three UK can evaluate the quality of the user’s experience for specific activities. This invaluable information allows them to identify factors that contribute to positive experiences, and to address any pain points that may arise. 

With a vast amount of data at their disposal, Three UK can determine the elements that make a gaming session either enjoyable or frustrating for its customers. Armed with this knowledge, they can proactively optimize their network infrastructure and services to ensure seamless gaming experiences. Whether that means fine-tuning latency, increasing network capacity, or strategically deploying additional infrastructure, Three UK can make data-driven decisions that directly and positively impact the gaming experiences of its customers. 
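Classifying a session as enjoyable or frustrating ultimately comes down to thresholds on measured network metrics. The sketch below shows one way such a classifier could look; the thresholds are assumptions chosen for illustration, not figures published by Three UK.

```python
def session_quality(latency_ms, jitter_ms, packet_loss_pct):
    """Classify a gaming session from network metrics.

    The thresholds are illustrative assumptions, not operator data.
    """
    if latency_ms <= 30 and jitter_ms <= 5 and packet_loss_pct <= 0.1:
        return "enjoyable"
    if latency_ms <= 60 and jitter_ms <= 15 and packet_loss_pct <= 1.0:
        return "acceptable"
    return "frustrating"

print(session_quality(22, 3, 0.05))  # enjoyable
print(session_quality(85, 20, 2.0))  # frustrating
```

Running a rule like this over millions of sessions is what lets an operator see which cells or times of day drive frustration and target investment accordingly.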

Additionally, Three UK incorporates this single customer view into their network services’ key performance indicators (KPIs), taking a holistic approach to network management. By considering the entire network’s performance, they ensure that improvements reflect not only individual user experiences but also the needs of all users engaged with the network. This broader perspective enables them to allocate resources efficiently, make targeted improvements, and drive proactive maintenance and deployment strategies when it matters most. 

Azure Operator Insights driving future innovation with Three UK

Three UK uses cutting-edge technologies to drive performance and to enhance both service quality and efficiency across its 5G network. By recognizing the importance of data insights and collaborating with Microsoft through Azure Operator Insights, Three UK gains the ability to harness the power of their data effectively. This enables them to make data-driven adjustments, to optimize network performance, and to provide enhanced experiences for their customers.

Learn More

Lower TCO and increase operational efficiency with Azure Operator Nexus.

Explore more AIOps use case scenarios in this paper. 

Learn more about Azure Operator Insights.

References

Three UK press release.

The post Driving performance and enhancing services across Three UK’s 5G network appeared first on Azure Blog.
Source: Azure

Cloud Cultures, Part 3: The pursuit of excellence in the United Kingdom

The swift progression of technological innovation is truly captivating. However, for me, what holds an even greater fascination is the intricate interplay of people, narratives, and life encounters that shape how technology is used every day. The outcomes of cloud adoption are shaped dramatically by the people and their culture. These stories show firsthand how technology and tradition combine to form cloud cultures.

In our first two episodes, Poland and Sweden, we explored how the people in these countries have taken important leaps in delivering new innovative technologies, and the underlying culture that helped them get there in uniquely different ways. In Poland, there is a sense of fearlessness when it comes time to act. They are a dynamic country embracing change, reinventing themselves, and creating new innovative opportunities. For Sweden, success knows no borders. Despite being one of the largest countries in Europe by landmass, Sweden is among the most sparsely populated, which pushes its ambitious entrepreneurs to adopt a global-first mindset from day one. Now it’s our turn to explore cloud culture in the United Kingdom. 

United Kingdom: Pursuit of excellence 

Our two data center regions in the United Kingdom have been live for nearly a decade and were built to help support the exploding growth of the cloud in the United Kingdom and Europe. With the evolution of new cloud services, such as generative AI, we are on the precipice of another leap in cloud-powered innovation. My time in the United Kingdom helped me see how the innovations coming from this country and their pursuit of excellence are impacting this growth. 

In my visit to the United Kingdom, I saw how towering aspirations fuel an unwavering commitment to producing premium results and have set a global standard for greatness. Excellence is an ideal we all strive for. It drives our decision-making, fuels innovation, and propels us to exceed not only our own expectations but also the demands of an ever-evolving industry. 

The cloud culture here has shown me that excellence is more than just an outcome—it’s a mindset that permeates every aspect of our work. And while achieving it is always desirable, it’s the pursuit that matters most. 

Our conversations with customers and partners helped me see how powerful winds of innovation, converging with local customs, values, and ways of living, have created something unique. 

How are United Kingdom customers using the cloud? 

These conversations helped uncover how high standards are more than just a commitment—they’re a way of life. Below are just a few of the United Kingdom customers who are transforming their businesses to adapt to the growing needs of their customers in the United Kingdom, and beyond: 

Rolls-Royce Engines has delivered excellence through its engines for over 100 years. As a broad-based power and propulsion provider, they operate in some of the most complex, critical systems at the heart of global society. With each engine producing about half a gigabyte of data per flight, continuing that record for the next 100 years will require a transition to new ways of working. 

London Stock Exchange Group is one of the world’s leading providers of financial markets infrastructure, delivering financial data, analytics, news, and index products to more than 40,000 customers in 190 countries. Clients rely on their expertise and innovative technologies to navigate the unpredictable currents of the financial markets. And in an industry where even the slightest edge can lead to substantial margins, they’ve found theirs using the cloud to deliver insights and cutting-edge solutions at speed. 

VCreate is an innovative business operating in the sphere of healthcare. It develops secure video technology that connects patients, families, and clinical teams for improved diagnostic management and enhanced family-focused care. 

KX is a global provider of vector database technology for time-series, real-time, and embedded data that provides context and insights at the speed of thought. Its software powers generative AI applications in banking, life sciences, semiconductors, telecommunications, and manufacturing. Enabling the processing and analysis of time series and historical data at speed and scale, KX gives developers, data scientists, and engineers the tools to build high-performance data-driven applications that uncover deeper insights and drive transformative business innovation.

Data is the key 

Talking to these customers, I started to pick up on a common thread. Data is the key to unlocking this excellence. These companies process vast amounts of data in order to provide quality products or services to their customers. The cloud opens opportunities to analyze this data faster for more effective decision making, by drawing deeper insights from analytics.  

“There is a lot of focus on how to improve efficiency. You should focus more on doing the right things. It’s not about doing more for less; it’s doing the right things in the first place. It’s effectiveness, not efficiency.”—Ashok Reddy, Chief Executive Officer of KX. 

There is an important distinction between efficiency and effectiveness. Operating efficiently is undeniably important, but it doesn’t guarantee exceptional results. However, aligning our actions with meaningful outcomes can definitely be a differentiating factor. 

Learn more  

Technology is a powerful tool, not on its own but because of the people and cultures that shape it. As we move into this next era of digital transformation, with AI at the forefront, our mission has never been more important—to empower every person on the planet to achieve more. Everyone has a role to play in creating a better world and at Microsoft we simply want to provide the tools and resources to do so. 

Watch the Cloud Cultures: United Kingdom episode today.  
The post Cloud Cultures, Part 3: The pursuit of excellence in the United Kingdom appeared first on Azure Blog.
Source: Azure

Unlocking the potential of in-network computing for telecommunication workloads

Azure Operator Nexus is the next-generation hybrid cloud platform created for communications service providers (CSPs). Azure Operator Nexus deploys Network Functions (NFs) across various network settings, such as the cloud and the edge. These NFs can carry out a wide array of tasks, ranging from classic ones like layer-4 load balancers, firewalls, Network Address Translations (NATs), and 5G user-plane functions (UPF), to more advanced functions like deep packet inspection and radio access networking and analytics. Given the large volume of traffic and concurrent flows that NFs manage, their performance and scalability are vital to maintaining smooth network operations.

Until recently, network operators had two distinct options for implementing these critical NFs: one, utilize standalone hardware middlebox appliances; or two, use network function virtualization (NFV) to implement them on a cluster of commodity CPU servers.

The decision between these options hinges on a myriad of factors—including each option’s performance, memory capacity, cost, and energy efficiency—which must all be weighed against their specific workloads and operating conditions, such as traffic rate and the number of concurrent flows that NF instances must be able to handle.

Our analysis shows that the CPU server-based approach typically outshines proprietary middleboxes in terms of cost efficiency, scalability, and flexibility. This is an effective strategy to use when traffic volume is relatively light, as it can comfortably handle loads that are less than hundreds of Gbps. However, as traffic volume swells, the strategy begins to falter, and more CPU cores are required to be dedicated solely to network functions.

In-network computing: A new paradigm

At Microsoft, we have been working on an innovative approach, which has piqued the interest of both industry personnel and the academic world—namely, deploying NFs on programmable switches and network interface cards (NICs). This shift has been made possible by significant advancements in high-performance programmable network devices, as well as the evolution of data plane programming languages such as Programming Protocol-independent Packet Processors (P4) and Network Programming Language (NPL). For example, programmable switching Application-Specific Integrated Circuits (ASICs) offer a degree of data plane programmability while still ensuring robust packet processing rates—up to tens of Tbps, or a few billion packets per second. Similarly, programmable NICs, or “smart NICs,” equipped with Network Processing Units (NPUs) or Field Programmable Gate Arrays (FPGAs), present a similar opportunity. Essentially, these advancements turn the data planes of these devices into programmable platforms.

This technological progress has ushered in a new computing paradigm called in-network computing. This allows us to run a range of functionalities that were previously the work of CPU servers or proprietary hardware devices, directly on network data plane devices. This includes not only NFs but also components from other distributed systems. With in-network computing, network engineers can implement various NFs on programmable switches or NICs, enabling the handling of large volumes of traffic (e.g., > 10 Tbps) in a cost-efficient manner (e.g., one programmable switch versus tens of servers), without needing to dedicate CPU cores specifically to network functions.
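The core abstraction these programmable data planes expose is the match-action table: a packet's header fields are matched against installed entries, and a hit triggers an action at line rate. A real NF would be compiled P4 running on switch hardware; the Python below is only a conceptual model of that lookup, using an invented NAT-style table.

```python
# Conceptual model of a match-action table, the core abstraction of
# P4-programmable data planes. Entries map a flow key to an action;
# here, a hypothetical NAT rewrite (addresses are illustrative).
nat_table = {
    ("10.0.0.5", 12345): ("203.0.113.7", 40001),  # private -> public mapping
}

def process_packet(src_ip, src_port):
    """Apply the table's action on a hit; a miss is punted to the
    control plane, which would install a new entry."""
    key = (src_ip, src_port)
    if key in nat_table:
        return {"action": "rewrite", "new_src": nat_table[key]}
    return {"action": "punt_to_control_plane"}

print(process_packet("10.0.0.5", 12345)["action"])  # rewrite
print(process_packet("10.0.0.9", 80)["action"])     # punt_to_control_plane
```

The scalability challenge discussed next follows directly from this model: every active flow needs a table entry, and switch table memory is scarce.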

Current limitations on in-network computing

Despite the attractive potential of in-network computing, its full realization in practical deployments in the cloud and at the edge remains elusive. The key challenge here has been effectively handling the demanding workloads from stateful applications on a programmable data plane device. The current approach, while adequate for running a single program with fixed, small-sized workloads, significantly restricts the broader potential of in-network computing.

A considerable gap exists between the evolving needs of network operators and application developers and the current, somewhat limited, view of in-network computing, primarily due to a lack of resource elasticity. As the number of potential concurrent in-network applications grows and the volume of traffic that requires processing swells, the model is strained. At present, a single program can operate on a single device under stringent resource constraints, like tens of MB of SRAM on a programmable switch. Expanding these constraints typically necessitates significant hardware modifications, meaning when an application’s workload demands surpass the constrained resource capacity of a single device, the application fails to operate. In turn, this limitation hampers the wider adoption and optimization of in-network computing.

Bringing resource elasticity to in-network computing

In response to the fundamental challenge of resource constraints with in-network computing, we’ve embarked on a journey to enable resource elasticity. Our primary focus lies on in-switch applications—those running on programmable switches—which currently grapple with the strictest resource and capability limitations among today’s programmable data plane devices. Instead of proposing hardware-intensive solutions like enhancing switch ASICs or creating hyper-optimized applications, we’re exploring a more pragmatic alternative: an on-rack resource augmentation architecture.

In this model, we envision a deployment that integrates a programmable switch with other data-plane devices, such as smart NICs and software switches running on CPU servers, all connected on the same rack. The external devices offer an affordable and incremental path to scale the effective capacity of a programmable network in order to meet future workload demands. This approach offers an intriguing and feasible solution to the current limitations of in-network computing.

Figure 1: Example scenario scaling up to handle load across servers. The control plane installs programmable switch rules, which map cell sites to Far Edge servers.

In 2020, we presented a novel system architecture, called the Table Extension Architecture (TEA), at the ACM SIGCOMM conference.1 TEA innovatively provides elastic memory through a high-performance virtual memory abstraction. This allows top-of-rack (ToR) programmable switches to handle NFs with a large state in tables, such as one million per-flow table entries. These can demand several hundreds of megabytes of memory space, an amount typically unavailable on switches. The ingenious innovation behind TEA lies in its ability to allow switches to access unused DRAM on CPU servers within the same rack in a cost-efficient and scalable way. This is achieved through the clever use of Remote Direct Memory Access (RDMA) technology, offering only high-level Application Programming Interfaces (APIs) to application developers while concealing complexities.
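The essence of TEA's virtual memory abstraction can be pictured as a two-tier table: a small, fast on-switch SRAM tier backed by a much larger tier in server DRAM, which the real system reaches over RDMA. The toy model below captures only that spill-and-lookup structure; both tiers are plain dictionaries, and the capacities are invented.

```python
class TwoTierTable:
    """Toy model of TEA's elastic table: scarce on-switch SRAM backed
    by abundant server DRAM (an RDMA read in the real system, a plain
    dict lookup here)."""

    def __init__(self, sram_capacity):
        self.sram_capacity = sram_capacity
        self.sram = {}      # fast, scarce on-switch memory
        self.remote = {}    # abundant server DRAM

    def insert(self, flow, state):
        if len(self.sram) < self.sram_capacity:
            self.sram[flow] = state
        else:
            self.remote[flow] = state  # spill past the SRAM limit

    def lookup(self, flow):
        if flow in self.sram:
            return self.sram[flow], "sram"
        if flow in self.remote:
            return self.remote[flow], "remote_dram"  # RDMA read in TEA
        return None, "miss"

table = TwoTierTable(sram_capacity=2)
for i in range(4):
    table.insert(f"flow-{i}", {"pkts": 0})

print(table.lookup("flow-0")[1])  # sram
print(table.lookup("flow-3")[1])  # remote_dram
```

The engineering substance of TEA lies in making that remote tier fast and predictable without involving the servers' CPUs, which a model this simple deliberately leaves out.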

Our evaluations with various NFs demonstrate that TEA can deliver low and predictable latency together with scalable throughput for table lookups, all without ever involving the servers’ CPUs. This innovative architecture has drawn considerable attention from members of both academia and industry and has found its application in various use cases that include network telemetry and 5G user-plane functions.

In April, we introduced ExoPlane at the USENIX Symposium on Networked Systems Design and Implementation (NSDI).2 ExoPlane is an operating system specifically designed for on-rack switch resource augmentation to support multiple concurrent applications.

The design of ExoPlane incorporates a practical runtime operating model and state abstraction to tackle the challenge of effectively managing application states across multiple devices with minimal performance and resource overheads. The operating system consists of two main components: the planner, and the runtime environment. The planner accepts multiple programs, written for a switch with minimal or no modifications, and optimally allocates resources to each application based on inputs from network operators and developers. The ExoPlane runtime environment then executes workloads across the switch and external devices, efficiently managing state, balancing loads across devices, and handling device failures. Our evaluation highlights that ExoPlane provides low latency, scalable throughput, and fast failover while maintaining a minimal resource footprint and requiring few or no modifications to applications.
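To make the planner's job concrete, here is a deliberately simple greedy sketch of the kind of decision an ExoPlane-style system faces: place each application's memory demand on the switch while it fits, then overflow to external devices on the same rack. ExoPlane's actual planner optimizes allocation from operator and developer inputs; the device names, capacities, and demands below are all invented.

```python
# Greedy placement sketch for on-rack resource augmentation.
# Capacities (MB of free table memory) are illustrative assumptions.
devices = {"switch": 64, "smartnic": 256, "cpu_server": 4096}

def place(apps):
    """Assign each app to the fastest device that can hold it."""
    placement = {}
    for app, demand_mb in apps:
        for dev in ("switch", "smartnic", "cpu_server"):  # fastest first
            if devices[dev] >= demand_mb:
                devices[dev] -= demand_mb
                placement[app] = dev
                break
        else:
            placement[app] = "unplaced"
    return placement

apps = [("nat", 40), ("telemetry", 50), ("upf_state", 300)]
placement = place(apps)
print(placement)
# {'nat': 'switch', 'telemetry': 'smartnic', 'upf_state': 'cpu_server'}
```

Even this toy version shows why the runtime then matters: once state is spread across three devices, lookups, load balancing, and failover all have to span them transparently.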

Looking ahead: The future of in-network computing

As we continue to explore the frontiers of in-network computing, we see a future rife with possibilities, exciting research directions, and new deployments in production environments. Our present efforts with TEA and ExoPlane have shown us what’s possible with on-rack resource augmentation and elastic in-network computing. We believe that they can be a practical basis for enabling in-network computing for future applications, telecommunication workloads, and emerging data plane hardware. As always, the ever-evolving landscape of networked systems will continue to present new challenges and opportunities. At Microsoft we are aggressively investigating, inventing, and lighting up such technology advancements through infrastructure enhancements. In-network computing frees up CPU cores resulting in reduced cost, increased scale, and enhanced functionality that telecom operators can benefit from, through our innovative products such as Azure Operator Nexus.

References

TEA: Enabling State-Intensive Network Functions on Programmable Switches, ACM SIGCOMM 2020

ExoPlane: An Operating System for On-Rack Switch Resource Augmentation, USENIX NSDI 2023

The post Unlocking the potential of in-network computing for telecommunication workloads appeared first on Azure Blog.
Source: Azure

Accelerating the pace of innovation with Azure Space and our partners

Azure Space innovating into the future

Today, I’m excited to share some news spanning the full spectrum of space industry use cases, including:

Real-world examples of how Azure Orbital Ground Station is enabling both space agencies and start-ups with new ways to operate satellites in orbit.

A new addition to the Azure Space family, the Planetary Computer (and the petabytes of data within its catalog). Together, with new partnerships with Esri and Synthetaic, we are on a journey to empower our customers through rapid new insights from earth observation data. 

When Microsoft announced Azure Space in 2020, we saw a chance to innovate new solutions that meet modern demands and create opportunities for the future. So, we applied our proven partner-first approach to rapidly reimagine traditional space solutions, introduce new software-based tools, and minimize cost barriers holding back the space ecosystem.

This partner-first approach has allowed us to rapidly go from vision to solutions. In October 2020, we shared early outcomes that brought Azure together with a global network of ground stations, provided resilient connectivity to the hyperscale cloud, and launched a transformational effort to virtualize satellite communications.

Azure customers

Since then, both established satellite operators and exciting new start-ups have redefined the on-orbit possibilities. Now, start-ups, government agencies, and enterprises are experimenting with new form factor satellites, relying upon Earth observation data to derive actionable insights, and identifying ways to maintain constant connectivity in unpredictable global environments.

Together with our partners, we are rapidly innovating to provide every space operator with the solutions to solve persistent challenges in new ways and capture new opportunities in the rapidly expanding space sector.

Azure Orbital Ground Station supports our customers on and off the planet

Cloud computing is the foundation that underpins one of the biggest revolutions in the space industry, ground stations as a service. Those ground stations, in turn, have drastically lowered one of the most expensive barriers to entering space. One year after Azure Orbital Ground Station became generally available, customers including NASA and Muon Space are using it to support their operations.

Use KSAT and Azure Orbital Ground Station to improve delivery of NASA’s earth science data products

Teams from NASA (Langley Research Center and Goddard Space Flight Center), the global space company KSAT, and Microsoft have completed a technology demonstration. The demo focused on data acquisition, processing, and distribution of near real-time Earth Science data products in the cloud.

The teams successfully validated space connectivity across KSAT and Microsoft Azure Orbital Ground Station sites with four public satellites owned by both NASA and the National Oceanic and Atmospheric Administration (NOAA): Terra, Aqua, Suomi National Polar-orbiting Partnership, and JPSS-1/NOAA-20. 

The demonstration was a showcase for Azure, which provided real-time cross-region data delivery from Microsoft and KSAT sites to NASA’s virtual network in Azure. With satellite data in the cloud, NASA used Azure compute and storage services to take data from raw form to higher processing levels (see sample final product in Image 1), improving latency from an average of 3 to 6 hours to under 25 minutes for some data products.

Integration of capabilities across Microsoft and partner KSAT allowed NASA to expand its coverage and connectivity, benefiting from ground station access through a single application programming interface (API), direct backhaul into Azure, cross-region delivery, and a unified data format experience.

Muon Space achieves liftoff and successful operations with MuSat-1 and Azure Orbital Ground Station

Muon Space selected Microsoft to support its first-ever launch in June 2023, leveraging Azure Orbital Ground Station as the sole ground station provider for their MuSat-1 mission. After MuSat-1 was deployed from the SpaceX Transporter 10, Muon Space achieved contact via Azure Orbital Ground Station within six minutes.1 From the launch and early operation (LEOP) stage to continuous on-orbit operations, Microsoft ground stations around the world are used to successfully communicate with MuSat-1. Because Azure Orbital Ground Station is a completely cloud-based solution, no hardware deployment was needed, enabling Muon Space to take advantage of an innovative virtual radio frequency (RF) solution with a custom modem.

“Since we’re building constellations of multiband remote sensing spacecraft with unique revisit, resolution, and data latency capabilities, our ground station partner was a critical choice. Muon selected Azure Orbital Ground Station as we launch our constellation due to its current capabilities and product roadmap. Collaborating with Microsoft to handle ground allows us to focus on the core mission of gathering climate intelligence and serving it to our customers.”
Jim Martz, Vice President, Engineering at Muon.

Expanding capabilities to derive rapid insights from massive amounts of space data

Today, we are applying that same approach to revolutionize geospatial data, as we welcome Planetary Computer to the Azure Space family. Planetary Computer is a robust geospatial data platform. It combines a multi-petabyte catalog of global multi-source data with intuitive APIs, a flexible scientific environment that allows users to answer global questions about that data, and applications that put those answers in the hands of many stakeholders. It is used by researchers, scientists, students, and organizations across the globe with millions of API calls every month.
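Catalogs like this are typically exposed through STAC (SpatioTemporal Asset Catalog) APIs, where a client searches items by bounding box, time, and properties such as cloud cover. Real code would query the live service with a STAC client; the offline sketch below only mimics that filtering over a tiny in-memory list of STAC-like items, all of which are invented for illustration.

```python
# Offline sketch of a STAC-style item search. Field names follow STAC
# conventions ("bbox", "eo:cloud_cover"); the items are made up.
items = [
    {"id": "scene-a", "bbox": [-76.8, 38.6, -75.9, 39.4],
     "properties": {"eo:cloud_cover": 4.0}},
    {"id": "scene-b", "bbox": [-77.2, 38.8, -76.1, 39.6],
     "properties": {"eo:cloud_cover": 62.0}},
    {"id": "scene-c", "bbox": [-3.1, 55.8, -2.9, 56.0],
     "properties": {"eo:cloud_cover": 1.5}},
]

def intersects(b1, b2):
    """True if two [west, south, east, north] boxes overlap."""
    return not (b1[2] < b2[0] or b2[2] < b1[0] or
                b1[3] < b2[1] or b2[3] < b1[1])

def search(items, bbox, max_cloud_cover):
    return [
        it["id"]
        for it in items
        if intersects(it["bbox"], bbox)
        and it["properties"]["eo:cloud_cover"] <= max_cloud_cover
    ]

# Roughly the Chesapeake Bay area, low-cloud scenes only.
print(search(items, bbox=[-77.5, 38.0, -75.5, 40.0], max_cloud_cover=10.0))
# ['scene-a']
```

At the platform's real scale, the same query shape runs against petabytes of imagery, which is what makes analyses like the Chesapeake land-use mapping described below tractable.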

With Planetary Computer now part of the Azure Space family, we are beginning to work with partners in new ways. Today, we are building upon the existing catalog to drive a path to new capabilities and partnerships that will empower users with one of the largest Earth observation data sets at their fingertips—petabytes worth of possibilities for understanding our planet. In keeping with our partner-first approach, Esri and Synthetaic will provide essential capabilities to our platform and ecosystem. By combining the power of data analysis in Synthetaic’s RAIC, with the data visualization of Esri’s ArcGIS, customers will be able to glean insights from space data at previously unattainable speed and scale.

Planetary Computer is grounded in our commitment to not only better understand our world but to leverage the insights we gain to achieve our commitments to being a carbon-negative, water-positive, and zero-waste company by 2030. A quick look at Chesapeake Conservancy, a nonprofit organization based in Annapolis, Maryland, illustrates the possibilities that Planetary Computer can unlock for any organization to achieve its sustainability goals.

Chesapeake Conservancy is leading a regional effort to protect 30 percent of the Chesapeake Bay watershed by 2030 using a precision conservation approach that optimizes resources and protects land with the greatest value for water quality, outdoor recreation, wildlife habitat, and local economies.

“Collaborating with Microsoft Azure, we developed an AI system that maps ground-mounted solar arrays using up-to-date satellite data enabling us to regularly track one of the most rapid drivers of land use change in the watershed,” says Joel Dunn, CEO, Chesapeake Conservancy. “Going forward, we must contextualize these insights within a complete, up-to-date picture of land use. We’re excited to work with Microsoft and their partners to produce our 1-meter land use data more frequently and accurately keeping an active pulse on the entire Chesapeake Bay watershed.”

Using tools from our partners to harness the power of Microsoft Planetary Computer, customers such as Chesapeake Conservancy can use artificial intelligence and groundbreaking data to accelerate progress in conserving landscapes vital to the Chesapeake Bay’s health and its cultural heritage while equitably connecting people to the Chesapeake, as seen in the video here.

The space industry has made incredible progress since the dawn of the space age in the 1950s. That progress, however, has been slow, hard, and expensive. It has required large, sustained investments by governments, followed by the creation of bespoke, rigid, and complex-to-integrate satellite constellations and ground infrastructure.

What’s next for Azure Space

Building on the Microsoft Planetary Computer to develop a new Azure Space Data solution will create an end-to-end space fabric, providing ubiquitous connectivity, resiliency, and global insights at scale, in real time. We are excited to build this platform and open it to the many companies, large and small, who are shaping the future of space. We look forward to collaborating with those writing the next chapter in humanity’s journey beyond Earth.

We invite partners and enterprises interested in learning more about Azure Space to do the following:

Learn more about Azure Space

Sign up for news and updates on how space data can advance your organization and missions, or complete this form to get in touch with the Azure Space team.

Save the date—join us for the Planetary Computer webinar in December.

References

Muon Space launches first satellite.

The post Accelerating the pace of innovation with Azure Space and our partners appeared first on Azure Blog.
Source: Azure

Real-world sustainability solutions with Azure IoT

In today’s fast-moving world, organizations are deploying innovative IoT and Digital Operations solutions that drive sustainable business practices, achieve energy conservation goals, and enhance operational efficiencies. I am amazed by their work and want to share a handful of recent stories that showcase how organizations use technology to solve real-world sustainability challenges for their customers.

Sustainability practices reduce energy use, waste, and costs

With technologies like open industrial IoT, advanced analytics, and AI, Microsoft Azure helps manufacturing organizations understand, mitigate, and validate their environmental impacts. Celanese and SGS are just two examples of Azure customers using IoT and Digital Operations to reduce energy use, waste, and costs.

Celanese, a specialty materials and chemical manufacturing company, envisions a Digital Plant of the Future powered by Cognite Data Fusion® on Microsoft Azure. The idea is to unify their processes, assets, and 25,000 employees on a common, scalable, and secure platform where AI algorithms actively identify and solve manufacturing problems.

For a global specialty manufacturer like Celanese, the ability to deploy diverse solutions quickly and cost-effectively anywhere across its value chain translates into millions of dollars in savings by optimizing heavy machinery and industrial processes. Azure Kubernetes Service (AKS) is core to Cognite’s infrastructure. Azure Functions orchestrates complex calculations with data stored in Azure Data Lake. AI capabilities in Azure and Azure Machine Learning provide actionable insights with contextualized industrial data. The solutions boost energy efficiency and reduce carbon emissions across Celanese’s 30 industrial facilities worldwide.

Testing, inspection, and certification company SGS partnered with Microsoft Azure to develop an intelligent device for wind turbines called OCM-Online®, which uses Azure IoT Edge, Azure IoT Hub, and three Azure database services. The solution monitors and predicts turbine oil conditions and levels by collecting data from sensors that provide more than 17 different parameters from over 315 wind turbines. The solution is installed across one of the largest wind farms in the world, the Three Gorges Yangjiang Shapa Offshore project, which powers 2.4 million households.

Instead of following a prescribed schedule for oil changes, wind farm operators now change oil only when data shows it is needed, greatly reducing unnecessary oil changes and recycling challenges. Historically, field teams manually collected samples and delivered them to a lab for analysis. With the global market size for online oil fluid monitoring valued at 689.7 million USD in 2021 and projected to reach 1.4 billion USD by 2031, digital solutions like OCM-Online are paramount to reducing waste.
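Condition-based maintenance of this kind boils down to comparing sensor readings against acceptable operating ranges. The sketch below illustrates the idea; the parameter names and thresholds are invented for illustration and are not SGS's actual OCM-Online logic.

```python
# Hypothetical condition-based oil maintenance check. Limits are illustrative.
OIL_LIMITS = {
    "viscosity_cst": (28.0, 36.0),      # acceptable range (min, max)
    "water_ppm": (0.0, 500.0),
    "iron_particles_ppm": (0.0, 150.0),
}

def oil_change_needed(sample: dict) -> bool:
    """Return True if any monitored parameter falls outside its range."""
    for name, (lo, hi) in OIL_LIMITS.items():
        value = sample.get(name)
        if value is None:
            continue  # sensor dropout: skip rather than force a change
        if not (lo <= value <= hi):
            return True
    return False

healthy = {"viscosity_cst": 32.0, "water_ppm": 120.0, "iron_particles_ppm": 40.0}
degraded = {"viscosity_cst": 32.0, "water_ppm": 900.0, "iron_particles_ppm": 40.0}
print(oil_change_needed(healthy))   # False
print(oil_change_needed(degraded))  # True
```

In a production system the thresholds would come from oil and turbine specifications, and a prediction model would flag degradation trends before a limit is crossed.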

Data drives energy conservation efforts

We are seeing a massive build-out of clean energy technologies—wind, solar, hydro, and nuclear. However, tackling the supply side of energy use alone will not get us to global energy reduction goals; we also need to reduce demand. It’s challenging for consumers to make energy use decisions without clear and accessible data. Azure customers like SA Power Networks and Watts offer the innovative solutions consumers need to make smart, informed decisions.

One consumer-based solution comes from Watts, a Danish energy technology company. Watts uses Microsoft Azure for its smart home energy-tracking applications, which allow households to monitor their own energy consumption patterns, understand how energy is being used, and decide when or whether to run appliances. Consumers can even see where the energy comes from, so they can choose to power the house with green energy. The company is at the forefront of developing intuitive, accessible, user-friendly tools that use IoT devices to monitor power consumption. Near real-time data monitoring has driven down energy use in almost all homes on the grid.

Watts chose to build on Microsoft .NET, a free, open-source software development framework and ecosystem designed by Microsoft. It created a system of 50 microservices communicating via Azure Service Bus and Azure Event Hubs. The system also relies on Azure Table Storage and Azure Blob Storage. The company has seen a huge increase in its customer base, indicating consumers want to make decisions that have an impact: Watts grew from 150,000 users to 550,000 by the end of 2022.
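The core consumer decision such an app supports—when to run an appliance—can be sketched as picking the lowest-impact window from a forecast. The function and data below are hypothetical, not Watts' actual implementation.

```python
# Illustrative sketch: choose the greenest window to run an appliance,
# given an hourly forecast of grid carbon intensity (values are made up).
def best_start_hour(intensity_by_hour: list[float], duration_hours: int) -> int:
    """Return the start hour minimizing average intensity over the run."""
    best_hour, best_avg = 0, float("inf")
    for start in range(len(intensity_by_hour) - duration_hours + 1):
        window = intensity_by_hour[start:start + duration_hours]
        avg = sum(window) / duration_hours
        if avg < best_avg:
            best_hour, best_avg = start, avg
    return best_hour

# 24 hourly values (gCO2/kWh), dipping overnight and at midday (solar peak).
forecast = [120, 110, 100, 95, 100, 130, 180, 220, 240, 200, 150, 110,
            90, 85, 95, 140, 200, 260, 280, 250, 210, 180, 150, 130]
print(best_start_hour(forecast, 2))  # 12 -> run a 2-hour cycle at noon
```

A real service would combine such forecasts with live per-home metering, but the decision logic stays the same: surface the data so the consumer can shift load.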

Another consumer-facing solution comes from South Australian utility company SA Power Networks. It developed a solution based on Microsoft Azure IoT that enables customers with rooftop solar panels to export excess solar energy to the power grid. This excess energy provides a significant share of renewables on the grid, which serves 1.7 million customers spread across 180,000 square kilometers.

Data from devices provides visibility into network conditions down to the local level, allowing SA Power Networks to respond more quickly to potential issues. It also allows network capacity to be managed dynamically, keeping energy resources balanced for a stable and more resilient grid. In just 12 months, the average customer doubled their exported energy, which makes more low-cost, renewable energy available to all customers on the SA Power Networks grid.
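Dynamic capacity management of this kind can be pictured as sharing the network's spare capacity among exporting customers instead of imposing one fixed limit. The numbers and function below are a toy illustration, not SA Power Networks' actual algorithm.

```python
# Toy sketch of a dynamic export limit: split available network headroom
# among solar exporters, with a static floor and a per-connection cap.
def dynamic_export_limit_kw(headroom_kw: float, exporters: int,
                            static_limit_kw: float = 1.5,
                            max_limit_kw: float = 10.0) -> float:
    """Per-customer export limit in kW for the current network conditions."""
    if exporters == 0:
        return max_limit_kw          # no contention: allow the maximum
    share = headroom_kw / exporters  # even split of spare capacity
    return min(max_limit_kw, max(static_limit_kw, share))

print(dynamic_export_limit_kw(headroom_kw=400.0, exporters=80))   # 5.0
print(dynamic_export_limit_kw(headroom_kw=40.0, exporters=80))    # 1.5
print(dynamic_export_limit_kw(headroom_kw=2000.0, exporters=80))  # 10.0
```

The payoff is visible in the middle case: under congestion the limit falls back to a conservative floor, while at sunny midday with ample headroom customers can export far more than a fixed limit would allow.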

Operational efficiencies support growth while reducing costs

When companies optimize their operations, they experience increased productivity and reduced production costs. They also consume less energy and use fewer resources. Telefónica, a telecommunications provider, uses an Azure-IoT-based platform to efficiently and securely manage 6.5 billion messages each day. Its Home Advanced Connectivity (HAC) platform uses Microsoft Azure IoT and the Azure IoT Hub Device Provisioning Service to enable real-time, bidirectional data flows between 4.5 million in-home gateway devices and the Telefónica cloud. Operations teams can diagnose or predict connectivity issues by retrieving information directly from a customer’s router and delivering a fix within a single, continuous data flow. HAC also uses IoT Hub device twins to help ensure precise, remote configuration of routers. It’s an efficient digital solution that streamlines scaling up to 20 million devices in the next few years.
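The device-twin pattern behind that remote configuration is worth a minimal sketch: the cloud records "desired" properties, the device applies them and echoes back "reported" properties, and the difference is the work still to do. This SDK-free sketch uses invented field names, not Telefónica's actual twin schema.

```python
# Minimal device-twin reconciliation: desired state vs. reported state.
def reconcile(desired: dict, reported: dict) -> dict:
    """Return the patch of settings the device still needs to apply."""
    return {k: v for k, v in desired.items() if reported.get(k) != v}

desired = {"wifi_channel": 6, "firmware": "2.1.4", "telemetry_interval_s": 60}
reported = {"wifi_channel": 11, "firmware": "2.1.4"}

patch = reconcile(desired, reported)
print(patch)  # {'wifi_channel': 6, 'telemetry_interval_s': 60}

# Once the device applies the patch and reports the new state,
# reconciliation converges to an empty patch.
reported.update(patch)
assert reconcile(desired, reported) == {}
```

In Azure IoT Hub the same loop runs over the twin's desired and reported property documents, which is what lets operators reconfigure millions of routers without addressing each one directly.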

Let Azure unlock your potential

From startups to Fortune 500 powerhouses, Azure is fueling innovation and driving success across diverse industries worldwide. This is a small sampling of the work our customers are doing to support sustainability goals for the public and private sectors. You can read their success stories and about other companies here.
The post Real-world sustainability solutions with Azure IoT appeared first on Azure Blog.
Source: Azure