Future proof—Navigating risk management with Azure OpenAI Service

Risk management is a systematic—and necessary—process designed to identify, assess, prioritize, and minimize the impact of uncertain events on an organization. According to a new forecast from Gartner, Inc., worldwide end-user spending on security and risk management is projected to total USD 215 billion in 2024, an increase of 14.3 percent from 2023; Gartner also estimates that global security and risk management end-user spending reached USD 188.1 billion in 2023.

Companies around the world are using AI to understand potential threats, make informed decisions, and take actions to avoid or reduce risk.

For example:

Identification: Recognize both internal and external factors that could affect your objectives.

Assessment: Determine which risks are most critical to allow your business to better prioritize.

Mitigation: Implement safety procedures and security measures, and develop contingency plans.

Compliance and regulation: Comply with regulations to avoid legal penalties and reputational damage.

Business continuity: Withstand unexpected disruptions and recover more quickly when they occur.

Financial stability: Protect investments, reduce the likelihood of financial crises, and maintain stakeholder confidence.

Strategic decision-making: Make informed choices and navigate uncertainties in rapidly changing business environments.

By analyzing historical data and using machine learning algorithms, businesses can anticipate future risks and their potential impact. This allows for the development of risk mitigation strategies that are both data-driven and forward-looking. Microsoft uses sophisticated data analytics and AI algorithms to better understand and protect against digital threats and cybercriminal activity. In 2021, Microsoft blocked more than 70 billion email and identity threat attacks.

Azure OpenAI Service also aids in operational risk management. It can be employed to monitor and analyze data from IoT devices and sensors, helping companies identify potential operational disruptions or equipment failures before they occur. This proactive approach can prevent costly downtime and production losses, enhancing overall business continuity.
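The kind of early-warning analysis described above can be sketched with a simple rolling z-score detector over sensor readings. This is an illustrative stand-in, not Azure OpenAI Service itself; the sensor values and threshold are hypothetical.

```python
from statistics import mean, stdev

def detect_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate sharply from the recent rolling window."""
    anomalies = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Illustrative vibration readings from a pump sensor; the spike at index 8
# is the kind of signal that could precede an equipment failure.
vibration = [0.51, 0.49, 0.52, 0.50, 0.48, 0.51, 0.50, 0.49, 2.75, 0.50]
print(detect_anomalies(vibration))  # → [8]
```

In a production system, a detector like this would feed an alerting pipeline so maintenance can be scheduled before the fault causes downtime.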

Read on to learn how Azure OpenAI Service is helping mitigate risk for diverse businesses around the globe.

Azure OpenAI Service contributes to improved risk management

Azure OpenAI’s natural language processing (NLP) algorithms can analyze vast amounts of unstructured data from various sources, including news articles, social media, and financial reports, to identify emerging risks and trends. This real-time analysis enables businesses to stay proactive in identifying potential threats, such as market fluctuations, regulatory changes, or emerging competitive challenges. By staying ahead of these risks, companies can develop proactive strategies to mitigate or exploit them, thereby enhancing their resilience.
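As a toy illustration of turning unstructured text into risk signals, the sketch below uses keyword matching; in practice, an LLM would classify each document far more robustly. The category names and keywords are illustrative, not part of any Azure API.

```python
# Toy stand-in for the NLP risk scanning described above. All category
# names and keywords are illustrative.
RISK_KEYWORDS = {
    "market": ["downturn", "volatility", "inflation"],
    "regulatory": ["fine", "sanction", "new regulation"],
    "competitive": ["rival launch", "price war"],
}

def scan_for_risks(documents):
    """Return the risk categories triggered by each document."""
    signals = []
    for doc in documents:
        text = doc.lower()
        hits = [cat for cat, words in RISK_KEYWORDS.items()
                if any(w in text for w in words)]
        signals.append(hits)
    return signals

news = [
    "Analysts warn of market volatility amid rising inflation.",
    "Regulator imposes record fine on payments provider.",
]
print(scan_for_risks(news))  # → [['market'], ['regulatory']]
```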

Orca Security

A front-runner in agentless cloud security, Orca Security delivers comprehensive risk management to global enterprises. Orca Security was impressed with Azure’s stronger privacy and compliance protocols and believed that Microsoft could provide better support; Azure also guaranteed 99.9 percent uptime. By integrating OpenAI’s GPT API, Orca empowered clients to swiftly respond to security alerts with AI-guided solutions. With Azure OpenAI, customers can choose where to store their data depending on the regulations they want to adhere to. Azure also secures data at rest (data stored on physical or virtual disk drives or other media) and in transit (when it’s actively being transferred over a network).

NTT

Airports need a reliable network connection for various processes and applications, especially at the bridging point between Wi-Fi networks and public networks. NTT partnered with Microsoft to deliver a smart airport solution, helping one airport digitally transform its operations, including baggage handling, passenger screening, and data transfer. The airport is building a completely private 5G network with NTT across 1,000 hectares, allowing it to transform critical business processes like luggage handling and border control. The private network enables digital solutions to optimize the movement of people, baggage, and equipment safely and in real time across the airport, without the risk of congestion on public networks.

Intapp

“Knowledge-based industries have special needs and require out-of-the-box AI capabilities that deliver specific use cases,” says Lavinia Calvert, Vice President and Legal Industry Principal at Intapp. General software solutions don’t adequately address the needs of financial and professional services firms. With their complex relationships, partner-led operations, and compliance and regulatory mandates, they require purpose-built cloud solutions. Azure underpins all of Intapp’s solutions and AI initiatives, including managing end-to-end risk, compliance, and confidentiality; business development; driving profitability; and effective collaboration. Robust compliance capabilities ensure conformity with leading risk and compliance management practices by using technology designed to meet industry-specific mandates and regulatory requirements. These benefits help Intapp deliver an out-of-the-box industry cloud experience designed for the evolving needs and demanding use cases of financial and professional services firms.

A fundamental aspect of any successful business strategy

Azure OpenAI plays a pivotal role in helping businesses achieve better risk management. Its NLP and machine learning capabilities enable companies to analyze vast amounts of data, identify emerging risks, and make data-driven decisions. By leveraging Azure OpenAI, businesses can enhance their risk resilience, seize opportunities, and navigate the ever-changing business landscape with confidence.

Our commitment to responsible AI

With Responsible AI tools in Azure, Microsoft is empowering organizations to build the next generation of AI apps safely and responsibly. Microsoft has announced the general availability of Azure AI Content Safety, a state-of-the-art AI system that helps organizations keep AI-generated content safe and create better online experiences for everyone. Customers—from startups to enterprises—are applying the capabilities of Azure AI Content Safety to social media, education, and employee engagement scenarios to help construct AI systems that operationalize fairness, privacy, security, and other responsible AI principles.

Get started with Azure OpenAI Service 

Apply for access to Azure OpenAI Service by completing this form. 

Learn about Azure OpenAI Service and the latest enhancements. 

Get started with GPT-4 in Azure OpenAI Service in Microsoft Learn. 

Read our partner announcement blog, empowering partners to develop AI-powered apps and experiences with ChatGPT in Azure OpenAI Service. 

Learn how to use the new Chat Completions API (in preview) and model versions for ChatGPT and GPT-4 models in Azure OpenAI Service.

Learn more about Azure AI Content Safety.
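As a starting point, the Chat Completions API mentioned above can be called from Python. The sketch below assumes the `openai` package (v1+) and an Azure OpenAI deployment named "gpt-4"; the endpoint, key variables, and prompts are placeholders, and the network call only runs when credentials are configured.

```python
import os

def build_messages(system_prompt, user_question):
    """Assemble the messages payload the Chat Completions API expects."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

messages = build_messages(
    "You are a risk-management assistant.",
    "Summarize the top operational risks in this quarter's incident log.",
)

# Only call the service when Azure OpenAI credentials are available.
if os.environ.get("AZURE_OPENAI_API_KEY"):
    from openai import AzureOpenAI
    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",  # an example API version
    )
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    print(response.choices[0].message.content)
```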

The post Future proof—Navigating risk management with Azure OpenAI Service appeared first on Azure Blog.
Source: Azure

Microsoft named a Leader in 2023 Gartner® Magic Quadrant™ for Strategic Cloud Platform Services (SCPS)

We are honored to be recognized by Gartner® as a Leader in the recently published 2023 Gartner® Magic Quadrant™ for Strategic Cloud Platform Services (SCPS). In the report, Gartner placed Microsoft furthest in Completeness of Vision.

For years, we’ve understood that the industry trusts Gartner Magic Quadrant reports to provide a holistic review of cloud providers’ capabilities. We’re pleased by this placement in the Gartner report as we continue to prioritize investments to make Azure the global cloud computing platform powering transformation and growth—enabling new possibilities for organizations to embrace the latest technologies and to advance at a rapid pace. With highly secure, state-of-the-art Azure datacenters designed with data residency in mind, Azure hosts one of the most advanced supercomputers in the world.

The Gartner report validates our commitment to empowering customers to reach new heights. We are proud of our purpose-built cloud infrastructure for the era of AI, one that is adaptive across on-prem, multicloud and edge environments, for our complete development platform with advanced tools to accelerate developer productivity, for our AI services and tools to empower innovation, and for our extensive partnerships across a wide range of industry leaders to give customers the choices they desire.

We are honored for this recognition and will continue to build the future together with our customers, no matter where they are in the cloud journey.

Purpose-built cloud infrastructure for the era of AI

We continue to build our AI infrastructure in close collaboration with silicon providers and industry leaders, incorporating the latest innovations in software, power, models, and silicon. Azure works closely with NVIDIA to provide NVIDIA H100 Tensor Core graphics processing unit (GPU)-based virtual machines (VMs) for mid- to large-scale AI workloads. We’ve also expanded our partnership with AMD, enabling our customers with choices to meet their unique business needs. These investments have allowed Azure to pioneer performance for AI supercomputing in the cloud and have consistently ranked us as the number one cloud in the TOP500 list of the world’s supercomputers.

With these additions to the Azure infrastructure hardware portfolio, our platform enables us to deliver the best performance and efficiency across all workloads.

An adaptive cloud across on-prem, multicloud and edge environments

The cloud is evolving to support customer workloads wherever they’re needed. We realize cloud migration is not a one-size-fits-all approach, and that’s why we’re committed to meeting customers where they are in their cloud journey. With Azure you have an adaptive cloud that enables you to thrive in dynamic environments by unifying siloed teams, distributed sites, and sprawling systems into a single operations, application, and data model in Azure.

Azure Arc helps customers implement their adaptive cloud strategies, providing a bridge that extends the Azure platform and enables them to build applications and services across datacenters, at the edge, and in multicloud environments. Through a portfolio of services, tools, and infrastructure, organizations can take advantage of Azure services within a single control plane. And with the recent general availability of VMware vSphere enabled by Azure Arc that brings together Azure and the VMware vSphere infrastructure, VMware administrators can empower their developers to use Azure technologies with their existing server-based workloads and new Kubernetes workloads all from Azure.

Every day, cloud administrators and IT professionals are being asked to do more. We consistently hear from customers they’re tasked with a wider range of operations; they are required to collaborate with more users, and support more complex needs to deliver on increasing customer demand—all while integrating more workloads into their cloud environment. To support our customers, we recently launched the public preview of Microsoft Copilot for Azure, a new solution built into Azure that will help simplify how they design, operate, or troubleshoot apps and infrastructure from cloud to edge.

One location to build, test, and deploy AI innovations securely

We’re only just starting to understand the potential of generative AI and how it will transform the way we live and work. Developers are at the heart of this new wave of innovation, pushing the boundaries of what’s possible. With a cloud-first approach developers spend less time on maintaining apps and infrastructure and more time on innovating and ideating, reducing the time to market. What’s more, developers can build with confidence, knowing that Azure has built-in tools and technologies to help ensure a secure and responsible approach from development to deployment.

The public preview of Azure AI Studio gives developers everything they need to build, test, and deploy AI innovations in one convenient location: cutting-edge models, data integration for retrieval augmented generation (RAG), intelligent search capabilities, full-lifecycle model management, and content safety. Azure AI Content Safety is available in Azure AI Studio, so developers can evaluate model responses in one unified development platform and quickly detect offensive or inappropriate content in text and images. Customers like Heineken, Thread, Moveworks, Manulife, and many more are putting Azure AI technologies to work for their businesses and their own customers and employees.

Integrated, AI-based tools to help developers innovate efficiently

The integration of AI-based tools in the development cycle is not just accelerating innovation, but also enabling developers to spend more time on strategic, meaningful work, and less time on tasks like debugging and infrastructure management. With Microsoft Dev Box, developers can streamline development with self-service access to secure, high-performance, cloud-based workstations preconfigured and ready to code for specific projects. GitHub Copilot uses AI technology to suggest code in the editor, maximizing time spent on business logic over boilerplate, with developers reporting they can complete tasks up to 55% faster and feel up to 88% more productive.

With tools that are designed to work seamlessly together, Microsoft’s complete development platform stack empowers developers with flexible solutions, so they can build next-gen apps productively and securely, where they want. GitHub integrates with Azure to provide a continuous integration and deployment (CI/CD) pipeline for developers, and with GitHub Enterprise they can be more efficient, with up to 75% improvement in time spent managing tools and code infrastructure. Customers like GM are collaborating with Microsoft to help speed up innovation within their organization, test and learn, and create agile environments using Microsoft development platforms such as GitHub, Visual Studio, and Microsoft Dev Box.

Create differentiated AI experiences with cloud-native apps

Azure’s cloud-native platform is the best place to run and scale applications while seamlessly embedding Azure’s native AI services. Azure gives developers the choice between control and flexibility, with complete focus on productivity regardless of what option is chosen. Azure App Service allows developers to host .NET, Java, Node.js, and Python web apps and APIs in a fully managed Azure service. Azure takes care of infrastructure management like high availability, load balancing, and autoscaling, enabling developers to accelerate app development to production by up to 50 percent with fully managed Azure App Service. Developers can further streamline the development process for faster time to market with cloud-based tools and services including Azure Kubernetes Service (AKS), GitHub Enterprise and Advanced Security, Azure Cosmos DB, and Azure Cognitive Services. And with Microsoft Copilot for Azure, developers have an AI companion to help them design, operate, optimize, and troubleshoot everyday tasks with AKS and Kubernetes. Customers such as Sapiens have leveraged the synergy between cloud-native technologies and AI to accelerate their digital transformation and deliver more value to their end users with intelligent apps.

Building the future together

We are dedicated to empowering our customers with technology that unlocks limitless innovation, helping them wherever they are on their technology journey. We use the decades of experience we have in migrating Microsoft’s on-premises workloads to the cloud to inform how we make it easier for customers and partners to use the cloud—from how we build products, to the real-world migration guidance we provide. It’s why 95 percent of Fortune 500 companies trust Azure with their business. Customers like AT&T rely on Azure AI for enterprise ChatGPT experiences and better knowledge mining, and the World Bank uses an Azure cloud-based solution to centralize monitoring, performance, resource consumption, and security management across clouds, all in a single package.

And we are not doing this alone. We have a vast global partner network and a growing number of technology partnerships across a wide range of industry leaders such as Databricks, NetApp, NVIDIA, Oracle, OpenAI, SAP, Snowflake, VMware, and others. We recently announced a partnership to bring Oracle Database Services into Azure to help maximize efficiency and resiliency for our mutual customers’ businesses. Our investments with SAP continue to grow, enhancing performance and resilience for mission-critical workloads with powerful new infrastructure options for our SAP customers such as the Azure M-series Mv3 family, the next generation of memory-optimized virtual machines (VMs). As we expand partnerships with OpenAI, Meta, and Hugging Face, we create more opportunities for organizations and developers to build generative AI experiences, offering the most comprehensive selection of frontier, open, and commercial models.

As Microsoft continues to innovate at the speed of AI, Azure is at the foundation of all our innovation, powering all aspects of the Microsoft Cloud and our copilots. Azure makes it possible for organizations to securely embrace the latest technologies and leverage them to create new ones. We endlessly optimize our infrastructure to bring faster, secure, reliable, and more sustainable computing power, so that our customers and partners can build with confidence. Linked by one of the largest interconnected networks on the planet, we’re providing unprecedented scalability, low latency, data residency, and high availability to our customers around the world. Azure provides the cloud platform wherever you are with highly secure, state-of-the-art Azure datacenters, offering 60+ regions—more than any other cloud provider.

With Azure, customers can trust they are on a secure and well-managed foundation to utilize the latest advancements in AI and cloud-native services, safely and responsibly, to create today’s solutions and tomorrow’s breakthroughs. We are dedicated to the success of our customers and partners, and continue to invest in ways that ensure Azure is the leading choice for customers, big and small around the globe.

Discover resources for your cloud journey

Learn more about Azure Migrate and Modernize and Azure Innovate and how they can help you from migration to AI innovation.

Check out the new and free Azure Migrate application and code assessment feature to save on application migrations.

Find out how to take your AI ambitions from ideation to reality with Azure. 

For the latest Azure innovations watch our Ignite 2023 sessions on demand.

Disclaimer: 

Gartner, Magic Quadrant for Strategic Cloud Platform Services, David Wright, Dennis Smith, and 4 more, 4 December 2023.

The report was previously known as Magic Quadrant for Cloud Infrastructure and Platform Services (2020-2022) and Magic Quadrant for Cloud Infrastructure as a Service until 2019.

Gartner is a registered trademark and service mark and Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved. 

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request here.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. 

The post Microsoft named a Leader in 2023 Gartner® Magic Quadrant™ for Strategic Cloud Platform Services (SCPS) appeared first on Azure Blog.
Source: Azure

Infuse responsible AI tools and practices in your LLMOps

This is the third blog in our series on LLMOps for business leaders. Read the first and second articles to learn more about LLMOps on Azure AI.

As we embrace advancements in generative AI, it’s crucial to acknowledge the challenges and potential harms associated with these technologies. Common concerns include data security and privacy, low quality or ungrounded outputs, misuse of and overreliance on AI, generation of harmful content, and AI systems that are susceptible to adversarial attacks, such as jailbreaks. These risks are critical to identify, measure, mitigate, and monitor when building a generative AI application.

Note that some of the challenges around building generative AI applications are not unique to AI applications; they are essentially traditional software challenges that might apply to any number of applications. Common best practices to address these concerns include role-based access control (RBAC), network isolation and monitoring, data encryption, and application monitoring and logging for security. Microsoft provides numerous tools and controls to help IT and development teams address these challenges, which you can think of as being deterministic in nature. In this blog, I’ll focus on the challenges unique to building generative AI applications—challenges that address the probabilistic nature of AI.

First, let’s acknowledge that putting responsible AI principles like transparency and safety into practice in a production application is a major effort. Few companies have the research, policy, and engineering resources to operationalize responsible AI without pre-built tools and controls. That’s why Microsoft takes the best cutting-edge ideas from research, combines them with thinking about policy and customer feedback, and then builds and integrates practical responsible AI tools and methodologies directly into our AI portfolio. In this post, we’ll focus on capabilities in Azure AI Studio, including the model catalog, prompt flow, and Azure AI Content Safety. We’re dedicated to documenting and sharing our learnings and best practices with the developer community so they can make responsible AI implementation practical for their organizations.

Mapping mitigations and evaluations to the LLMOps lifecycle

We find that mitigating potential harms presented by generative AI models requires an iterative, layered approach that includes experimentation and measurement. In most production applications, that includes four layers of technical mitigations: (1) the model, (2) safety system, (3) metaprompt and grounding, and (4) user experience layers. The model and safety system layers are typically platform layers, where built-in mitigations would be common across many applications. The next two layers depend on the application’s purpose and design, meaning the implementation of mitigations can vary a lot from one application to the next. Below, we’ll see how these mitigation layers map to the large language model operations (LLMOps) lifecycle we explored in a previous article.

Fig 1. Enterprise LLMOps development lifecycle.

Ideating and exploring loop: Add model layer and safety system mitigations

The first iterative loop in LLMOps typically involves a single developer exploring and evaluating models in a model catalog to find one that is a good fit for their use case. From a responsible AI perspective, it’s crucial to understand each model’s capabilities and limitations when it comes to potential harms. To investigate this, developers can read model cards provided by the model developer and work with data and prompts to stress-test the model.

Model

The Azure AI model catalog offers a wide selection of models from providers like OpenAI, Meta, Hugging Face, Cohere, NVIDIA, and Azure OpenAI Service, all categorized by collection and task. Model cards provide detailed descriptions and offer the option for sample inferences or testing with custom data. Some model providers build safety mitigations directly into their model through fine-tuning, and you can learn about these mitigations in the model cards. At Microsoft Ignite 2023, we also announced the model benchmark feature in Azure AI Studio, which provides helpful metrics to evaluate and compare the performance of various models in the catalog.

Safety system

For most applications, it’s not enough to rely on the safety fine-tuning built into the model itself. Large language models can make mistakes and are susceptible to attacks like jailbreaks. In many applications at Microsoft, we use another AI-based safety system, Azure AI Content Safety, to provide an independent layer of protection to block the output of harmful content. Customers like South Australia’s Department of Education and Shell are demonstrating how Azure AI Content Safety helps protect users from the classroom to the chatroom.

This safety system runs both the prompt and the completion for your model through classification models aimed at detecting and preventing the output of harmful content across a range of categories (hate, sexual, violence, and self-harm) and configurable severity levels (safe, low, medium, and high). At Ignite, we also announced the public preview of jailbreak risk detection and protected material detection in Azure AI Content Safety. When you deploy your model through the Azure AI Studio model catalog or deploy your large language model applications to an endpoint, you can use Azure AI Content Safety.
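The configurable-severity filtering described above can be sketched as simple thresholding logic. The real service returns per-category severity scores; the sample classification result, severity scale, and policy below are illustrative, not actual API output.

```python
# Illustrative severity scale; the real service exposes its own scoring.
SEVERITY = {"safe": 0, "low": 2, "medium": 4, "high": 6}

def should_block(category_severities, thresholds):
    """Block content when any category meets or exceeds its configured threshold."""
    return any(
        severity >= SEVERITY[thresholds[category]]
        for category, severity in category_severities.items()
        if category in thresholds
    )

# Hypothetical classification result for one completion.
result = {"hate": 0, "sexual": 0, "violence": 4, "self_harm": 0}
# Policy: block anything at or above these levels per category.
policy = {"hate": "low", "sexual": "low", "violence": "medium", "self_harm": "low"}
print(should_block(result, policy))  # → True
```

An application would run this check on both the prompt and the completion, returning a refusal instead of the blocked content.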

Building and augmenting loop: Add metaprompt and grounding mitigations

Once a developer identifies and evaluates the core capabilities of their preferred large language model, they advance to the next loop, which focuses on guiding and enhancing the large language model to better meet their specific needs. This is where organizations can differentiate their applications.

Metaprompt and grounding

Proper grounding and metaprompt design are crucial for every generative AI application. Retrieval augmented generation (RAG), or the process of grounding your model on relevant context, can significantly improve overall accuracy and relevance of model outputs. With Azure AI Studio, you can quickly and securely ground models on your structured, unstructured, and real-time data, including data within Microsoft Fabric.

Once you have the right data flowing into your application, the next step is building a metaprompt. A metaprompt, or system message, is a set of natural language instructions used to guide an AI system’s behavior (do this, not that). Ideally, a metaprompt will enable a model to use the grounding data effectively and enforce rules that mitigate harmful content generation or user manipulations like jailbreaks or prompt injections. We continually update our prompt engineering guidance and metaprompt templates with the latest best practices from the industry and Microsoft research to help you get started. Customers like Siemens, Gunnebo, and PwC are building custom experiences using generative AI and their own data on Azure.
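Putting grounding and a metaprompt together can be sketched as below. A real application would retrieve from a search index rather than the naive word-overlap ranking used here, and the document contents and system message are illustrative.

```python
def retrieve(query, documents, top_k=1):
    """Rank documents by how many query words they share, highest first."""
    words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_grounded_messages(query, documents):
    """Assemble a metaprompt that constrains the model to the retrieved context."""
    context = "\n".join(retrieve(query, documents))
    metaprompt = (
        "You are a helpful assistant. Answer ONLY from the context below. "
        "If the answer is not in the context, say you do not know.\n\n"
        f"Context:\n{context}"
    )
    return [
        {"role": "system", "content": metaprompt},
        {"role": "user", "content": query},
    ]

docs = [
    "Refund requests must be filed within 30 days of purchase.",
    "Our headquarters are located in Redmond, Washington.",
]
msgs = build_grounded_messages("How many days do I have to request a refund?", docs)
print("30 days" in msgs[0]["content"])  # → True
```

The instruction to admit ignorance when the context lacks an answer is one of the rules that helps mitigate ungrounded outputs.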

Fig 2. Summary of responsible AI best practices for a metaprompt.

Evaluate your mitigations

It’s not enough to adopt the best practice mitigations. To know that they are working effectively for your application, you will need to test them before deploying an application in production. Prompt flow offers a comprehensive evaluation experience, where developers can use pre-built or custom evaluation flows to assess their applications using performance metrics like accuracy as well as safety metrics like groundedness. A developer can even build and compare different variations of their metaprompts to assess which results in higher-quality outputs aligned with their business goals and responsible AI principles.

Fig 3. Summary of evaluation results for a prompt flow built in Azure AI Studio.

Fig 4. Details for evaluation results for a prompt flow built in Azure AI Studio.

Operationalizing loop: Add monitoring and UX design mitigations

The third loop captures the transition from development to production. This loop primarily involves deployment, monitoring, and integrating with continuous integration and continuous deployment (CI/CD) processes. It also requires collaboration with the user experience (UX) design team to help ensure human-AI interactions are safe and responsible.

User experience

In this layer, the focus shifts to how end users interact with large language model applications. You’ll want to create an interface that helps users understand and effectively use AI technology while avoiding common pitfalls. We document and share best practices in the HAX Toolkit and Azure AI documentation, including examples of how to reinforce user responsibility, highlight the limitations of AI to mitigate overreliance, and make users aware that they are interacting with AI as appropriate.

Monitor your application

Continuous model monitoring is a pivotal step of LLMOps to prevent AI systems from becoming outdated due to changes in societal behaviors and data over time. Azure AI offers robust tools to monitor the safety and quality of your application in production. You can quickly set up monitoring for pre-built metrics like groundedness, relevance, coherence, fluency, and similarity, or build your own metrics.
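As an illustrative stand-in for the groundedness metric mentioned above, a crude word-overlap proxy shows the shape of a monitoring check; the Azure AI metrics themselves are model-based, and the texts and values below are made up.

```python
def groundedness_proxy(answer, source):
    """Fraction of answer words that also appear in the source text."""
    answer_words = answer.lower().split()
    source_words = set(source.lower().split())
    if not answer_words:
        return 0.0
    return sum(w in source_words for w in answer_words) / len(answer_words)

source = "the warranty covers parts and labor for two years"
grounded = "the warranty covers parts and labor for two years"
drifted = "the warranty covers accidental damage for five years"

print(groundedness_proxy(grounded, source))  # → 1.0
print(groundedness_proxy(drifted, source))   # → 0.625
```

A monitoring pipeline would compute such a score for sampled production responses and alert when it falls below a configured threshold, signaling drift.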

Looking ahead with Azure AI

Microsoft’s infusion of responsible AI tools and practices into LLMOps is a testament to our belief that technological innovation and governance are not just compatible, but mutually reinforcing. Azure AI integrates years of AI policy, research, and engineering expertise from Microsoft so your teams can build safe, secure, and reliable AI solutions from the start, and leverage enterprise controls for data privacy, compliance, and security on infrastructure that is built for AI at scale. We look forward to innovating on behalf of our customers, to help every organization realize the short- and long-term benefits of building applications on a foundation of trust.

Learn more

Explore Azure AI Studio.

Watch the 45-minute breakout sessions “Evaluating and designing Responsible AI Systems for the Real World” and “End-to-End AI App Development: Prompt Engineering to LLMOps” from Microsoft Ignite 2023.

Take the 45-minute Introduction to Azure AI Studio course on Microsoft Learn.

The post Infuse responsible AI tools and practices in your LLMOps appeared first on Azure Blog.
Source: Azure

Create new ways to serve your mission with Microsoft Azure Space

Since launching Microsoft Azure Space, we’ve been focused on three main goals:

Connect anyone, anywhere, at any security level, back to the full power and potential of the Microsoft Cloud. This includes working with exciting space start-ups like Muon Space and True Anomaly as well as government agencies like the United States Space Force.

Enable real-time analysis across petabytes of data gathered on orbit, so that our customers can take immediate action that delivers on their mission.

Empower developers to develop, deploy, and run their applications on orbit.

As customers and partners have adopted and experimented with the Azure Space portfolio, new and interesting use cases are emerging that illustrate what’s possible. Today, we are excited to share some of those customer stories, along with updates for Azure Orbital Ground Station, Azure Orbital’s software development kit, and Microsoft Planetary Computer. While it is still early days, these stories offer a glimpse at understanding how an accessible space layer can transform the way organizations across the public and private sectors serve their missions.

Satellite operators are using Azure Orbital Ground Station for spacecraft communications

Delivering space data to Earth requires a secure, robust ground network with low latency and high throughput—presenting various challenges for the operator. Opportunities for satellite contacts are limited by ground station coverage, and it can be difficult and expensive to achieve sufficient capacity.

Azure Space is enabling partner-powered, space-to-cloud transmissions with end-to-end support for space data downlink, processing, storage, analytics, and dissemination. Azure Orbital Ground Station provides easy, secure access to communication products and services required to support all phases of satellite missions—from launch to operations and decommissioning. Mission operations are seamless with self-service scheduling of contacts in Microsoft Azure with a managed data path.
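The scheduling constraint described above, where a contact is only possible when a satellite pass coincides with ground-station availability, reduces to an interval-intersection problem. The sketch below is purely illustrative (the pass and availability times are made up) and is not part of any Azure Orbital API:

```python
from datetime import datetime, timedelta

def contact_windows(passes, availability):
    """Intersect satellite pass windows with ground-station availability.

    Each window is a (start, end) datetime tuple; returns the overlapping
    intervals during which a contact can actually be scheduled.
    """
    windows = []
    for p_start, p_end in passes:
        for a_start, a_end in availability:
            start, end = max(p_start, a_start), min(p_end, a_end)
            if start < end:  # non-empty overlap
                windows.append((start, end))
    return sorted(windows)

t0 = datetime(2024, 1, 15, 12, 0)
passes = [(t0, t0 + timedelta(minutes=10)),
          (t0 + timedelta(hours=2), t0 + timedelta(hours=2, minutes=8))]
availability = [(t0 + timedelta(minutes=4), t0 + timedelta(hours=3))]

for start, end in contact_windows(passes, availability):
    print(start.time(), "->", end.time())
```

A managed ground-station network widens the `availability` list, which is why pooled coverage yields more and longer contact opportunities than any single site.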

Muon Space collaborates with Microsoft for its first two launches

Learn More

Accelerating the pace of innovation with Azure Space and our partners

As previously announced, Muon Space selected Microsoft to support its first-ever launch, using Azure Orbital Ground Station as the sole ground provider for their MuSat-1 mission. Muon Space is ramping up for the launch of its second satellite, MuSat-2, in early 2024—again leveraging Azure Orbital Ground Station to bring down data gathered by a prototype microwave sensor. Muon Space will provide space weather and ionospheric data to the United States Space Force.

“Launch and early operations is always a very stressful period for satellite operators. With Azure Orbital, we achieved contact with MuSat-1 within six minutes of separation from the launch vehicle. This early success, along with our continuous on-orbit operations, gives us confidence to use Azure Orbital for future missions.”
Paige Holland, Operations Automation Lead, Muon Space

Azure Orbital Ground Station for government customers

Learn More

Azure Space technologies advance digital transformation across government agencies

We’ve seen increasing momentum with commercial customers adopting Azure Orbital Ground Station. Azure Orbital Ground Station is now available in preview within the Microsoft Azure Government region. Introducing Azure Orbital Ground Station into Azure Government enables government customers to fully leverage a global partner ecosystem of ground stations, cloud modems, self-service scheduling, and a managed data path.

True Anomaly and Viasat are leveraging Azure Orbital Ground Station in Azure Government for space domain awareness

True Anomaly selected Microsoft and Viasat to provide ground support for its upcoming launch of two Jackal spacecraft—autonomous orbital vehicles for rendezvous and proximity operations. True Anomaly will schedule satellite contacts at Viasat Real Time Earth (RTE) sites using Azure Orbital Ground Station in Azure Government.

“Azure Orbital Ground Station’s managed data path makes it easy to connect to a global ground network. With one click of a button on Azure, we gain access to all Viasat Real Time Earth sites and simply indicate where the data from our spacecraft should land, while Microsoft handles the orchestration and connectivity. Working within Azure Government lets us meet our customers where they are.”
Jared Kirkpatrick, Jackal Block 1 Project Manager, True Anomaly

Provisioning fiber to Viasat sites

To provide customers with their data as quickly and securely as possible, Microsoft is provisioning high-speed, real-time cloud connectivity to select Viasat RTE sites, allowing customers to stream multi-gigabit per second downlinks.

“Viasat is collaborating with Microsoft to enable low-touch access to space communication solutions for our customers like True Anomaly. Azure Orbital Ground Station offers a common data plane and API to access our global antenna network that includes very high throughput data downlinks over Ka-band.”
Aaron Hawkins, Real Time Earth Director for Strategic Partnerships, Viasat

Watch this video to learn more about how our customers are using Azure Orbital Ground Station in support of their missions.

Gaining insights from space data

As the volume and value of space data continue to grow, easy and affordable access to ground infrastructure will play a central role in serving customers’ mission-critical operations. So too will the ability of customers to access and analyze near real-time data gathered from space.

The future of the cloud will incorporate space solutions such as satellite connectivity and Earth observational data. Space-based sensors observing Earth and satellite data will increasingly be used to improve our data insights on the ground.

The latest episode in the Microsoft Future of the Cloud Webinar series explores the role of space data in creating “a planetary computer for a sustainable future.”

Watch the series to learn about:

Leveraging the potential of the cloud and space to enable data-driven decision making for your organization and missions.

How Microsoft Planetary Computer supports global efforts of environmental sustainability and Earth science by enabling developers to build tools for measuring, monitoring, modeling, and managing healthy ecosystems.

The opportunities that a new Azure Space data solution, built on the Microsoft Planetary Computer, will create for Microsoft’s customers to unlock the full potential of their Earth observation data.

Empowering developers to build, deploy, and operate on-orbit

Empowering any developer to build and deploy applications into space will be critical to lowering the barrier to entry for the space industry. Azure Orbital’s software development kit provides satellite operators with the tools and capabilities to unlock new business models and meet mission requirements.

Loft Orbital customer onboarding for virtual missions on YAM-6 is now open

Over the past two years, Microsoft and Loft Orbital have been collaborating to lower the barriers to entry for space. A key pillar in this collaboration has been the enablement of “virtual missions,” making it easier for developers to access space capabilities without having to develop or launch their own hardware in space, and instead by simply writing software applications.

YAM-6 is the first satellite fully dedicated to offering this capability. Last week, we announced that YAM-6 is now publicly accepting customers for virtual missions for 2024. General availability is planned for April 2024.

“Our joint product offering leverages Loft’s space infrastructure and Microsoft’s cloud and ground infrastructure to make it simple for anyone to deploy AI applications in space at scale. YAM-6 is supported by the Azure Orbital product portfolio, including Azure Orbital Ground Station and the Azure Orbital space edge on-orbit application framework.”
Pierre-Damien Vaujour, Cofounder and Chief Executive Officer, Loft Orbital

Space Compass leveraging virtual missions to prove out concepts quickly

Space Compass—a joint venture between NTT, a Japanese information and communications technology (ICT) leader, and SKY Perfect JSAT Corporation, Asia’s largest satellite operator—is on a multi-year mission to deploy space-edge computing capabilities together with an ultra-high-speed optical data relay network. This will allow space data users to work with real-time data far more efficiently in the cloud environment (see Figure 1).

Over the past three months, Space Compass has been working with Microsoft to explore use cases in an effort to better understand and demonstrate the value of on-orbit processing, and how to shape their future space infrastructure to support it.

“We are very excited to closely collaborate with the Microsoft team to develop a cutting-edge space computing solution. This is one of our key initiatives to realize the Space Integrated Computing Network.”
Shigehiro Hori, Co-Chief Executive Officer, Space Compass

Space Compass will run a virtual mission on YAM-6 to demonstrate AI-based ship detection. This demonstration paves the way for, and de-risks, future missions that will fly on Space Compass’s own satellites. Learn more about the Space Compass mission.

Figure 1: Ship detection program.

Both Microsoft and Space Compass believe in the power of on-orbit processing, bringing AI to the edge in space with high-speed connectivity to the cloud.
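A toy example of why on-orbit processing pays off: instead of downlinking full imagery, the satellite detects objects on board and sends only the detections. Real missions use trained models; simple thresholding on a made-up pixel grid stands in here:

```python
# Hypothetical 3x5 image; values above a threshold stand in for "ship" pixels.
image = [
    [0, 0, 0, 9, 0],
    [0, 8, 0, 0, 0],
    [0, 0, 0, 0, 7],
]

def detect(image, threshold=5):
    """Return (row, col) coordinates of pixels at or above the threshold."""
    return [(r, c) for r, row in enumerate(image)
            for c, v in enumerate(row) if v >= threshold]

detections = detect(image)
full_size = sum(len(row) for row in image)  # 15 pixel values to downlink raw
summary_size = 2 * len(detections)          # 6 coordinate values instead
print(detections, f"{summary_size}/{full_size} values downlinked")
```

The ratio only improves at realistic image sizes, which is the economic argument for running inference at the space edge and reserving the downlink for results.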

Learn More

These exciting use cases are just the beginning. Learn more about the ways that Microsoft Azure Space can transform how you deliver on your mission:

Sign up for news and updates on how Azure Space data can advance your organization and missions, or complete this form to get in touch with the Azure Space team.

Visit the Azure Orbital Ground Station website and documentation page.

The post Create new ways to serve your mission with Microsoft Azure Space appeared first on Azure Blog.
Source: Azure

Azure OpenAI Service powers the Microsoft Copilot ecosystem

Many AI systems are designed for collaboration: Copilot is one of them. Copilot—powered by Microsoft Azure OpenAI Service—allows you to simplify how you design, operate, optimize, and troubleshoot apps and infrastructure from cloud to edge. It utilizes language models, the Azure control plane, and insights about your Azure and Arc-enabled assets. All of this is carried out within the framework of Azure’s steadfast commitment to safeguarding data security and privacy.

A brief history of AI collaboration with copilots

In aviation terms, a copilot is responsible for assisting the pilot in command, sharing control of the airplane, and handling various navigational and operational tasks. Having a copilot ensures that there is a second trained professional who can take over the controls if the main pilot is unable to perform their duties, thereby enhancing safety.

Microsoft originally introduced the concept of a copilot two years ago as an AI pair programmer in GitHub to assist developers in generating code, catching errors, and suggesting improvements. Today, Azure OpenAI Service powers more than just GitHub Copilot. Microsoft 365 Copilot serves as a digital companion for your whole life, creating a single Copilot user experience across Bing, Edge, Microsoft 365, and Windows.

AI at the service of others

Microsoft Copilot represents a profound shift in how AI-powered software is built: in the user experience it supports, the architecture and services it uses, and how we think about safety and security.

“We now have machines that are so fluent in human language. Every place that you interact with a machine ought to be much more fluent in human natural language and I think we’ll start to see that change coming in a lot of different places as well and it will really redefine the interfaces that we’re used to.”—Eric Boyd, head of AI at Microsoft.

Copilots powered by Azure OpenAI Service can be trained on a specific set of data to adapt the model to a specific domain. We’re seeing developments across a variety of sectors. For example:

Language translation: Language translation models can help bridge communication gaps between people who speak different languages. This can be particularly useful in situations such as emergency response, disaster relief, and international diplomacy.

Educational support: Educational chatbots can help students with homework, provide personalized tutoring, and answer questions across different subjects.

Crime investigation: Financial crimes such as money laundering and fraud are linked to human trafficking, child exploitation, terrorism, theft, and wildlife trafficking. SymphonyAI’s new Sensa Copilot acts as a sophisticated AI assistant to a financial crime investigator by automatically collecting, collating, and summarizing financial and third-party information.

Learn More

Microsoft Azure AI Fundamentals: Generative AI

Medical reporting: Generative AI has the potential to increase the power and accessibility of self-service reporting, making it easier for healthcare organizations and their providers to identify operational improvements, including ways to reduce costs and to find answers to questions both locally and within a broader context. 

Climate change: Azure OpenAI Service can be used to generate educational materials or assist in research on topics related to climate change, including natural disasters, global warming, and environmental conservation.

Inclusive and diverse avatars: DeepBrain AI includes a library of photo-realistic and virtual avatars that businesses can use for training videos, news broadcasts, marketing videos, and more. An integral part of the digital world, avatars foster a sense of inclusivity and diversity by allowing people to choose representations that reflect their individuality, regardless of physical appearance or other limitations.

Industrial advances: ABB is partnering with Microsoft to integrate Azure OpenAI Service into its ABB Ability™ Genix Industrial Analytics and AI suite with the goal of boosting real-time insights and asset longevity by 20% and reducing unplanned downtimes by 60%. Additionally, it will aid in monitoring and optimizing industrial emissions and energy usage, contributing to sustainability goals.

Prioritizing human agency

The Copilot System powered by Azure OpenAI Service builds on our existing commitments to data security and privacy in the enterprise. Copilot automatically inherits your organization’s security, compliance, and privacy policies for Microsoft 365. Data is managed in line with our current commitments. Copilot prioritizes human agency and puts the user in control. This includes noting limitations, providing links to sources, and prompting users to review, fact-check, and fine-tune content based on their own knowledge and judgment.

AI systems can analyze and learn from copious amounts of data and help employees make decisions based on that data. They can be programmed for specific tasks such as image recognition and natural language processing.

While technology has the potential to generate both favorable and adverse consequences, technological developments such as Copilot are proving far more likely to help society steer a straight and humane course toward a future that benefits us all.

Learn, connect, and explore with the latest technologies announced at Microsoft Ignite.

Our commitment to responsible AI

Explore

Empowering responsible AI practices

With Responsible AI tools in Azure, Microsoft is empowering organizations to build the next generation of AI apps safely and responsibly. Microsoft has announced the general availability of Azure AI Content Safety, a state-of-the-art AI system that helps organizations keep AI-generated content safe and create better online experiences for everyone. Customers—from startup to enterprise—are applying the capabilities of Azure AI Content Safety to social media, education, and employee engagement scenarios to help construct AI systems that operationalize fairness, privacy, security, and other responsible AI principles.

Get started with Azure OpenAI Service 

Apply for access to Azure OpenAI Service by completing this form. 

Learn about Azure OpenAI Service and the latest enhancements. 

Get started with GPT-4 in Azure OpenAI Service in Microsoft Learn. 

Read our partner announcement blog, empowering partners to develop AI-powered apps and experiences with ChatGPT in Azure OpenAI Service. 

Learn how to use the new Chat Completions API (preview) and model versions for ChatGPT and GPT-4 models in Azure OpenAI Service.

Learn more about Azure AI Content Safety.

The post Azure OpenAI Service powers the Microsoft Copilot ecosystem appeared first on Azure Blog.
Source: Azure

Democratizing FinOps: Transform your practice with FOCUS and Microsoft Fabric

Cloud computing has revolutionized the way you build, deploy, and scale applications and services. While you have unprecedented flexibility, agility, and scalability, you also face greater challenges in managing cost, security, and compliance. While IT security and compliance are often managed by central teams, cost is a shared responsibility across executive, finance, product, and engineering teams, which is what makes managing cloud cost such a challenge. Having the right tools to enable cross-group collaboration and make data-driven decisions is critical.

Fortunately, you have everything you need in the Microsoft Cloud to implement a streamlined FinOps practice that brings people together and connects them to the data they need to make business decisions. And with new developments like Copilot in Microsoft Cost Management and Microsoft Fabric, there couldn’t be a better time to take a fresh look at how you manage cost within your organization and how you can leverage the FinOps Framework and the FinOps Open Cost and Usage Specification (FOCUS) to accelerate your FinOps efforts.

There’s a lot to cover in this space, so I’ll split this across a series of blog posts. In this first blog post, I’ll introduce the core elements of Cost Management and Fabric that you’ll need to lay the foundation for the rest of the series, including how to export data, how FOCUS can help, and a few quick options that anyone can use to set up reports and alerts in Fabric with just a few clicks.

No-code extensibility with Cost Management exports

As your FinOps team grows to cover new services, endpoints, and datasets, you may find they spend more time integrating disparate APIs and schemas than driving business goals. This complexity also keeps simple reports and alerts just out of reach from executive, finance, and product teams. And when your stakeholders can’t get the answers they need, they push more work on to engineering teams to fill those gaps, which again, takes away from driving business goals.

We envision a future where FinOps teams can empower all stakeholders to stay informed and get the answers they need through turn-key integration and AI-assisted tooling on top of structured guidance and open specifications. And this all starts with Cost Management exports—a no-code extensibility feature that brings data to you.

As of today, you can sign up for a limited preview of Cost Management exports, where you can export five new datasets directly into your storage account without writing a single line of code. In addition to the actual and amortized cost and usage details you get today, you’ll also see:

Cost and usage details aligned to FOCUS

Price sheets

Reservation details

Reservation recommendations

Reservation transactions

Of note, the FOCUS dataset includes both actual and amortized costs in a single dataset, which can drive additional efficiencies in your data ingestion process. You’ll benefit from reduced data processing times and more timely reporting on top of reduced storage and compute costs due to fewer rows and less duplication of data.
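To illustrate the single-dataset point: BilledCost and EffectiveCost are FOCUS column names, so one pass over the same rows yields both the actual and the amortized view. The rows below are hypothetical; a minimal sketch in plain Python:

```python
from collections import defaultdict

# Hypothetical FOCUS-shaped rows: BilledCost reflects invoiced (actual) cost,
# while EffectiveCost amortizes commitment purchases across the usage they cover.
rows = [
    {"ServiceCategory": "Compute", "BilledCost": 120.0, "EffectiveCost": 95.0},
    {"ServiceCategory": "Compute", "BilledCost": 0.0,   "EffectiveCost": 20.0},
    {"ServiceCategory": "Storage", "BilledCost": 40.0,  "EffectiveCost": 40.0},
]

def totals_by(rows, key):
    """Sum both cost views per group in a single pass over one dataset."""
    billed, effective = defaultdict(float), defaultdict(float)
    for r in rows:
        billed[r[key]] += r["BilledCost"]
        effective[r[key]] += r["EffectiveCost"]
    return dict(billed), dict(effective)

billed, effective = totals_by(rows, "ServiceCategory")
print(billed)     # actual-cost view
print(effective)  # amortized view
```

Before FOCUS, the same result required ingesting and reconciling two separate exports, which is where the duplicate rows and extra processing time came from.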

Beyond the new datasets, you’ll also discover optimizations that deliver large datasets more efficiently, reduced storage costs by updating rather than creating new files each day, and more. All exports are scheduled at the same time, to ensure scheduled refreshes of your reports will stay in sync with the latest data. Coupled with file partitioning, which is already available and recommended today, and data compression, which you’ll see in the coming months, the exports preview removes the need to write complex code to extract, transfer, and load large datasets reliably via APIs. This better enables all FinOps stakeholders to build custom reports to get the answers they need without having to learn a single API or write a single line of code.

To learn about all the benefits of the exports preview—yes, there’s more—read the full synopsis in Cost Management updates. And to start exporting your FOCUS cost and usage, price sheet, and reservation data, sign up for the exports preview today.

FOCUS democratizes cloud cost analytics

In case you’re not familiar, FOCUS is a groundbreaking initiative to establish a common provider and service-agnostic format for billing data that empowers organizations to better understand cost and usage patterns and optimize spending and performance across multiple cloud, software as a service (SaaS), and even on-premises service offerings. FOCUS provides a consistent, clear, and accessible view of cost data, explicitly designed for FinOps needs. As the new “language” of FinOps, FOCUS enables practitioners to collaborate more efficiently and effectively with peers throughout the organization and even maximize transferability and onboarding for new team members, getting people up and running quicker.

FOCUS 0.5 was originally announced in June 2023, and we’re excited to be leading the industry with our announcement of native support for the FOCUS 1.0 preview as part of Cost Management exports on November 13, 2023. We believe FOCUS is an important step forward for our industry, and we look forward to our industry partners joining us and collaboratively evolving the specification alongside FinOps practitioners from our collective customers and partners.

FOCUS 1.0 preview adds new columns for pricing, discounts, resources, and usage, along with prescribed behaviors around how discounts are applied. Soon, you’ll also have a powerful new use case library, which offers a rich set of problems and prebuilt queries to help you get the answers you need without the guesswork. Armed with FOCUS and the FinOps Framework, you have a playbook for understanding your data and extracting answers from it effortlessly, empowering FinOps stakeholders, regardless of their knowledge or experience, to maximize business value with the Microsoft Cloud.

For more details about FOCUS or why we believe it’s important, see FOCUS: A new specification for cloud cost transparency. And stay tuned for more updates as we dig into different scenarios where FOCUS can help you.

Microsoft Fabric and Copilot enable self-service analytics

So far, I’ve talked about how you can leverage Cost Management exports as a turn-key solution to extract critical details about your costs, prices, and reservations using FOCUS as a consistent, open billing data format with its use case library that is a veritable treasure map for finding answers to your FinOps questions. While these are all amazing tools that will accelerate your FinOps efforts, the true power of democratizing FinOps lies at the intersection of Cost Management and FOCUS with a platform that enables you to provide your stakeholders with self-serve analytics and alerts. And this is exactly what Microsoft Fabric brings to the picture.

Microsoft Fabric is an all-in-one analytics solution that encompasses data ingestion, normalization, cleansing, analysis, reporting, alerting, and more. I could write a separate blog post about how to implement each FinOps capability in Microsoft Fabric, but to get you acclimated, let me introduce the basics.

Your first step to leveraging Microsoft Fabric starts in Cost Management, which has done much of the work for you by exporting details about your prices, reservations, and cost and usage data aligned to FOCUS.

Once exported, you’ll ingest your data into a Fabric lakehouse, SQL, or KQL database table and create a semantic model to bring data together for any reports and alerts you’ll want to create. The database option you use will depend on how much data you have and your reporting needs. Below is an example using a KQL database, which uses Azure Data Explorer under the covers, to take advantage of the performance and scale benefits as well as the powerful query language.

Fabric offers several ways to quickly explore data from a semantic model. You can explore data by simply selecting the columns you want to see, but I recommend trying the auto-create a report option which takes that one step further by generating a quick summary based on the columns you select. As an example, here’s an auto-generated summary of the FOCUS EffectiveCost broken down by ChargePeriodStart, ServiceCategory, SubAccountName, Region, PricingCategory, and CommitmentDiscountType. You can apply quick tweaks to any visual or switch to the full edit experience to take it even further.

Those with a keen eye may notice the Copilot button at the top right. If we switch to edit mode, we can take full advantage of Copilot and even ask it to create the same summary:

Copilot starts to get a little fancier with the visuals and offers summarized numbers and a helpful filter. I can also go further with more specific questions about commitment-based discounts:

Of course, this is barely scratching the surface. With a richer semantic model including relationships and additional details, Copilot can go even further and save you time by giving you the answers you need and building reports with less time and hassle.

In addition to having unparalleled flexibility in reporting on the data in the way you want, you can also create fine-grained alerts in a more flexible way than ever before with very little effort. Simply select the visual you want to measure and specify when and how you want to be alerted:
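Conceptually, an alert like this is a comparison between the latest value of a measure and a condition you define. A minimal stand-alone sketch (the cost figures and the 20 percent threshold are made-up examples, not Fabric defaults):

```python
def should_alert(history, latest, pct_threshold=20.0):
    """Fire when the latest value exceeds the trailing average by more than
    pct_threshold percent. `history` is a list of prior period totals."""
    baseline = sum(history) / len(history)
    increase = (latest - baseline) / baseline * 100
    return increase > pct_threshold, round(increase, 1)

# Hypothetical daily EffectiveCost totals, followed by today's total.
daily_cost = [980.0, 1010.0, 995.0, 1015.0]
fired, pct = should_alert(daily_cost, latest=1320.0)
print(fired, pct)  # True 32.0
```

In Fabric you express the same condition declaratively against a visual, and the platform handles evaluation and notification for you.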

This gets even more powerful when you add custom visuals, measures, and materialized views that offer deeper insights.

This is just a glimpse of what you can do with Cost Management and Microsoft Fabric together. I haven’t even touched on the data flows, machine learning capabilities, and the potential of ingesting data from multiple cloud providers or SaaS vendors also using FOCUS to give you a full, single pane of glass for your FinOps efforts. You can imagine the possibilities of how Copilot and Fabric can impact every FinOps capability, especially when paired with rich collaboration and automation tools like Microsoft Teams, Power Automate, and Power Apps that can help every stakeholder accomplish more together. I’ll share more about these in a future blog post or tutorial.

Next steps to accomplish your FinOps goals

I hope you’re as excited as I am about the potential of low- or even no-code solutions that empower every FinOps stakeholder with self-serve analytics. Whether you’re in finance seeking answers to complex questions that require transforming, cleansing, and joining multiple datasets, in engineering looking for a solution for near-real-time alerts and analytics that can react quickly to unexpected changes, or a FinOps team that now has more time to pursue something like unit cost economics to measure the true value of the cloud, the possibilities are endless. As someone who uses Copilot often, I can say that the potential of AI is real. Copilot saves me time in small ways throughout the day, enabling me to accomplish more with less effort. And perhaps the most exciting part is knowing that the more we leverage Copilot, the better it will get at automating tasks that free us up to solve bigger problems. I look forward to Copilot familiarizing itself with FOCUS and the use case library to see how far we’re able to go with a natural language description of FinOps questions and tasks.

And of course, this is just the beginning. We’re on the cusp of a revolutionary change to how organizations manage and optimize costs in the cloud. Stay tuned for more updates in the coming months as we share tutorials and samples that will help you streamline and accomplish FinOps tasks in less time. In the meantime, familiarize yourself with Microsoft Fabric and Copilot and learn more about how you can accomplish your FinOps goals with an end-to-end analytics platform.
The post Democratizing FinOps: Transform your practice with FOCUS and Microsoft Fabric appeared first on Azure Blog.
Source: Azure

How Azure is ensuring the future of GPUs is confidential

In Microsoft Azure, we are continually innovating to enhance security. One such pioneering effort is our collaboration with our hardware partners to create a new silicon-based foundation that enables new levels of protection for data in memory through confidential computing.

Data exists in three stages in its lifecycle: in use (when it is created and computed upon), at rest (when stored), and in transit (when moved). Customers today already take measures to protect their data at rest and in transit with existing encryption technologies. However, they have not had the means to protect their data in use at scale. Confidential computing is the missing third stage in protecting data when in use via hardware-based trusted execution environments (TEEs) that can now provide assurance that the data is protected during its entire lifecycle.

The Confidential Computing Consortium (CCC), which Microsoft co-founded in September 2019, defines confidential computing as the protection of data in use via hardware-based TEEs. These TEEs prevent unauthorized access or modification of applications and data during computation, thereby always protecting data. The TEEs are a trusted environment providing assurance of data integrity, data confidentiality, and code integrity. Attestation and a hardware-based root of trust are key components of this technology, providing evidence of the system’s integrity and protecting against unauthorized access, including from administrators, operators, and hackers.
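To make the attestation idea concrete, the toy below mimics the flow: a TEE emits signed evidence containing a measurement of the loaded code, and a verifier checks both the signature and the expected measurement. Real attestation uses asymmetric signatures chained to a hardware root of trust (for example, SEV-SNP or TDX reports); the shared HMAC key here is a simulated stand-in, not how the hardware actually works:

```python
import hashlib
import hmac
import json

# Simulated stand-ins: a real root key lives in silicon and never leaves it.
HARDWARE_ROOT_KEY = b"simulated-silicon-root-of-trust"
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-enclave-code").hexdigest()

def issue_evidence(code):
    """The 'TEE' side: measure the loaded code and sign the claims."""
    claims = {"measurement": hashlib.sha256(code).hexdigest()}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(HARDWARE_ROOT_KEY, payload, hashlib.sha256).hexdigest()
    return payload, sig

def verify(payload, sig):
    """The relying-party side: check integrity, then check the claims."""
    expected = hmac.new(HARDWARE_ROOT_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # evidence was forged or tampered with
    return json.loads(payload)["measurement"] == EXPECTED_MEASUREMENT

payload, sig = issue_evidence(b"trusted-enclave-code")
print(verify(payload, sig))  # genuine evidence verifies
forged = sig[:-1] + ("0" if sig[-1] != "0" else "1")
print(verify(payload, forged))  # tampered signature is rejected
```

The important property is the second check: even a correctly signed report from an enclave running unexpected code fails verification, which is what lets a client refuse to release secrets to it.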

Confidential computing can be seen as a foundational defense-in-depth capability for workloads that need an extra level of assurance for their cloud deployments. Confidential computing can also aid in enabling new scenarios such as verifiable cloud computing, secure multi-party computation, or running data analytics on sensitive data sets.

While confidential computing has recently been available for central processing units (CPUs), it has also been needed for graphics processing unit (GPU)-based scenarios that require high-performance computing and parallel processing, such as 3D graphics and visualization, scientific simulation and modeling, and AI and machine learning. Confidential computing can be applied to the GPU scenarios above for use cases that involve processing sensitive data and code on the cloud, such as healthcare, finance, government, and education. Azure has been working closely with NVIDIA® for several years to bring confidential computing to GPUs. And this is why, at Microsoft Ignite 2023, we announced Azure confidential VMs with NVIDIA H100-PCIe Tensor Core GPUs in preview. These Virtual Machines, along with the increasing number of Azure confidential computing (ACC) services, will allow more innovations that use sensitive and restricted data in the public cloud.

Potential use cases

Confidential computing on GPUs can unlock use cases that deal with highly restricted datasets and where there is a need to protect the model. An example use case can be seen with scientific simulation and modeling where confidential computing can enable researchers to run simulations and models on sensitive data, such as genomic data, climate data, or nuclear data, without exposing the data or the code (including model weights) to unauthorized parties. This can facilitate scientific collaboration and innovation while preserving data privacy and security.

Another possible use case for confidential computing applied to image generation is medical image analysis. Confidential computing can enable healthcare professionals to use advanced image processing techniques, such as deep learning, to analyze medical images, such as X-rays, CT scans, or MRI scans, without exposing the sensitive patient data or the proprietary algorithms to unauthorized parties. This can improve the accuracy and efficiency of diagnosis and treatment, while preserving data privacy and security. For example, confidential computing can help detect tumors, fractures, or anomalies in medical images.

Given the massive potential of AI, confidential AI is the term we use to represent a set of hardware-based technologies that provide cryptographically verifiable protection of data and models throughout their lifecycle, including when data and models are in use. Confidential AI addresses several scenarios spanning the AI lifecycle.

Confidential inferencing. Enables verifiable protection of model IP while simultaneously protecting inferencing requests and responses from the model developer, service operators, and the cloud provider.

Confidential multi-party computation. Organizations can collaborate to train and run inferences on models without ever exposing their models or data to each other, while enforcing policies on how the outcomes are shared between the participants.

Confidential training. With confidential training, model builders can ensure that model weights and intermediate data, such as checkpoints and gradient updates exchanged between nodes during training, aren’t visible outside of trusted execution environments (TEEs). Confidential AI can enhance the security and privacy of AI inferencing by allowing data and models to be processed in an encrypted state, preventing unauthorized access or leakage of sensitive information.

Confidential computing building blocks

In response to growing global demands for data security and privacy, a robust platform with confidential computing capabilities is essential. Such a platform starts with innovative hardware at its core foundation and builds core infrastructure service layers on top of it with virtual machines and containers. This is a crucial step towards allowing services to transition to confidential AI. Over the next few years, these building blocks will enable a confidential GPU ecosystem of applications and AI models.

Confidential Virtual Machines

Confidential Virtual Machines are a type of virtual machine that provides robust security by encrypting data in use, ensuring that your sensitive data remains private and secure even while being processed. Azure was the first major cloud to offer confidential Virtual Machines powered by AMD SEV-SNP based CPUs with memory encryption that protects data while processing and meets the Confidential Computing Consortium (CCC) standard for data protection at the Virtual Machine level.

Confidential Virtual Machines powered by Intel® TDX offer foundational virtual machine-level protection of data in use and are now broadly available through the DCe and ECe virtual machines. These virtual machines enable seamless onboarding of applications with no code changes required and come with the added benefit of increased performance due to the 4th Gen Intel® Xeon® Scalable processors they run on.

Confidential GPUs are an extension of confidential virtual machines, which are already available in Azure. Azure is the first and only cloud provider offering confidential virtual machines with 4th Gen AMD EPYC™ processors with SEV-SNP technology and NVIDIA H100 Tensor Core GPUs in our NCC H100 v5 series virtual machines. Data is protected throughout its processing by the encrypted and verifiable connection between the CPU and the GPU, coupled with memory protection mechanisms for both, so from outside CPU and GPU memory the data is visible only as ciphertext.

Confidential containers

Container support for confidential AI scenarios is crucial as containers provide modularity, accelerate the development/deployment cycle, and offer a lightweight and portable solution that minimizes virtualization overhead, making it easier to deploy and manage AI/machine learning workloads.

Azure has made innovations to bring confidential containers for CPU-based workloads:

To reduce the infrastructure management burden on organizations, Azure offers serverless confidential containers in Azure Container Instances (ACI). By managing the infrastructure on behalf of organizations, serverless containers provide a low barrier to entry for burstable CPU-based AI workloads, combined with strong data privacy-protective assurances, including container group-level isolation and the same encrypted memory powered by AMD SEV-SNP technology.

To meet various customer needs, Azure now also has confidential containers in Azure Kubernetes Service (AKS), where organizations can leverage pod-level isolation and security policies to protect their container workloads, while also benefiting from the cloud-native standards built within the Kubernetes community. Specifically, this solution leverages investment in the open source Kata Confidential Containers project, a growing community with investments from all of our hardware partners including AMD, Intel, and now NVIDIA, too.

These innovations will need to be extended to confidential AI scenarios on GPUs over time.

The road ahead

Innovation in hardware takes time to mature and replace existing infrastructure. We’re dedicated to integrating confidential computing capabilities across Azure, including all virtual machine stock keeping units (SKUs) and container services, aiming for a seamless experience. This includes data-in-use protection for confidential GPU workloads extending to more of our data and AI services.

Eventually confidential computing will become the norm, with pervasive memory encryption across Azure’s infrastructure, enabling organizations to verify data protection in the cloud throughout the entire data lifecycle.

Learn about all of the Azure confidential computing updates from Microsoft Ignite 2023.
The post How Azure is ensuring the future of GPUs is confidential appeared first on Azure Blog.
Source: Azure

Building resilience to your business requirements with Azure

At Microsoft, we understand the trust customers put in us by running their most critical workloads on Microsoft Azure. Whether they are retailers with their online stores, healthcare providers running vital services, financial institutions processing essential transactions, or technology partners offering their solutions to other enterprise customers—any downtime or impact could lead to business loss, social services interruptions, and events that could damage their reputation and affect end-user confidence. In this blog post, we will discuss some of the design principles and characteristics that we see among the customer leaders we work with closely to enhance their critical workload availability according to their specific business needs.

A commitment to reliability with Azure

As we continue making investments that drive platform reliability and quality, there remains a need for customers to evaluate their technical and business requirements against the options Azure provides to meet availability goals through architecture and configuration. These processes, along with support from Microsoft technical teams, ensure you are prepared and ready in the event of an incident. As part of the shared responsibility model, Azure offers customers various options to enhance reliability. These options involve choices and tradeoffs, such as possible higher operational and consumption costs. You can use the flexibility of cloud services to enable or disable some of these features if your needs change. In addition to technical configuration, it is essential to regularly check your team’s technical and process readiness.

“We serve customers of all sizes in an effort to maximize their return on investment, while offering support on their migration and innovation journey. After a major incident, we participated in executive discussions with customers to provide clear contextual explanations as to the cause and reassurances on actions to prevent similar issues. As product quality, stability, and support experience are important focus areas, a common outcome of these conversations is an enhancement of cooperation between customer and cloud provider for the possibility of future incidents. I’ve asked Director of Executive Customer Engagement, Bryan Tang, from the Customer Support and Service team to share more about the types of support you should seek from your technical Microsoft team & partners.”—Mark Russinovich, CTO, Azure.

Design principles

Key elements to building a reliable workload begin with establishing an agreed availability target with your business stakeholders, as that will influence your design and configuration choices. As you continue to measure uptime against that baseline, it is critical to be ready to adopt any new services or features that can benefit your workload availability, given the pace of cloud innovation. Finally, adopt a continuous validation approach to verify that your system behaves as designed when incidents occur and to identify weak points early, and ensure your team is ready to partner with Microsoft during major incidents to minimize business disruption. We will go into more detail on these design principles:

Know and measure against your targets

Continuously assess and optimize

Test, simulate, and be ready

Know and measure against your targets

Azure customers may have outdated availability targets, or workloads that don’t have targets defined with business stakeholders. To cover the targets mentioned more extensively, you can refer to the business metrics to design resilient Azure applications guide. Application owners should revisit their availability targets with respective business stakeholders to confirm those targets, then assess if their current Azure architecture is designed to support such metrics, including SLA, Recovery Time Objective (RTO), and Recovery Point Objective (RPO). Different Azure services, along with different configurations or SKU levels, carry different SLAs. You need to ensure that your design does, at a minimum, reflect: 

Defined SLA versus Composite SLA: Your workload architecture is a collection of Azure services. You can run your entire workload on infrastructure as a service (IaaS) virtual machines (VMs) with Storage and Networking across all tiers and microservices, or you can mix in PaaS offerings such as Azure App Service and Azure Database for PostgreSQL; each provides a different SLA depending on the SKUs and configurations you select. When we asked customers about their SLAs while assessing their workload architectures, we found that some had no SLA, some had an outdated SLA, and some had unrealistic SLAs. The key is to get a confirmed SLA from your business owners and calculate the composite SLA based on your workload resources. This shows you how well you can meet your business availability objectives.
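The composite SLA arithmetic can be sketched in a few lines. This is a simplified model that assumes independent services, and the per-service SLA figures below are illustrative rather than actual Azure SLA numbers:

```python
# Sketch: estimating a composite SLA for a workload made of several services.
# The SLA values are made-up examples, not quoted Azure figures.
from math import prod

def composite_sla(slas):
    """Services in series: the workload is up only when all of them are up."""
    return prod(slas)

def multi_region_sla(single_region_sla, regions=2):
    """Independent regions in parallel: down only if every region is down."""
    return 1 - (1 - single_region_sla) ** regions

# Hypothetical tiers: a web app, a database, and a storage account.
workload = composite_sla([0.9995, 0.9999, 0.999])
print(f"Composite SLA: {workload:.4%}")          # lower than any single service
print(f"Two regions:   {multi_region_sla(workload):.4%}")
```

Note that the composite figure is always lower than the weakest individual SLA, which is why adding services to a critical path quietly erodes your availability target.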

Continuously assess options and be ready to optimize

One of the most significant drivers for cloud migration is the financial benefit, such as shifting from capital expenditure to operating expenditure and taking advantage of the economies of scale at which cloud providers operate. However, one often-overlooked benefit is our continued investment and innovation in the newest hardware, services, and features.

Many customers have moved their workloads from on-premises to Azure in a quick and simple way, replicating their on-premises workload architecture in Azure without using the extra options and features Azure offers to improve availability and performance. We also see customers treating their cloud resources as pets rather than cattle, instead of seeing them as interchangeable resources that work together and can be replaced with better options when those become available. We fully understand customer preference, habit, and perhaps the worry about black-box services as opposed to managing your own VMs, where you do your own maintenance and security scans. However, our ongoing innovation and commitment to providing platform as a service (PaaS) and software as a service (SaaS) give you opportunities to focus your limited resources and effort on the functions that make your business stand out.

Architecture reliability recommendations and adoption:

We make every effort to ensure you have the most specific and latest recommendations through various channels. Our flagship channel is Azure Advisor, which now also supports the Reliability Workbook. We also partner closely with engineering to make additional recommendations, which may take time to reach the workbook and Azure Advisor, available for your consideration through the Azure Proactive Resiliency Library (APRL). Together, these provide a comprehensive list of documented recommendations for the Azure services you use.

Security and data resilience:

While the previous point focuses on configurations and options to leverage for the Azure components that make up your application architecture, it is just as critical to ensure your most critical asset, your data, is protected and replicated. Architecture gives you a solid foundation to withstand cloud service-level failures, but it is equally important to protect your data and resources from any accidental or malicious deletion. Azure offers options such as Resource Locks and soft delete on your storage accounts. Finally, your architecture is only as solid as the security and identity access management applied to it.

Assess your options and adopt:

While there are many recommendations that can be made, ultimately, implementation remains your decision. It is understandable that changing your architecture might not be just a matter of modifying your deployment template: you want to ensure your test cases are comprehensive, and changes may involve time, effort, and cost while your workloads continue to run. Our field teams are prepared to help you explore options and tradeoffs, but the decision to enhance availability to meet the business requirements of your stakeholders is ultimately yours. This openness to change is not limited to reliability; it applies to other aspects of the Well-Architected Framework, such as Cost Optimization.

Test, simulate, and be ready

Testing is a continuous process at both a technical and a process level, with automation being a key part. In addition to the paper-based exercise of selecting the right SKUs and configurations of cloud resources to achieve the right composite SLA, applying chaos engineering to your testing helps find weaknesses and verify readiness. It is also critical to monitor your application so you can detect disruptions and react quickly to recover. Finally, knowing how to engage Microsoft support effectively, when needed, helps set the proper expectations with your stakeholders and end users in the event of an incident.

Continuous validation-Chaos Engineering: When operating a distributed application with microservices and dependencies between centralized services and workloads, a chaos mindset helps inspire confidence in your resilient architecture design by proactively finding weak points and validating your mitigation strategies. For customers striving for DevOps success through automation, continuous validation (CV) has become a critical component of reliability, alongside continuous integration (CI) and continuous delivery (CD). Simulating failure also helps you understand how your application behaves under partial failure, how your design responds to infrastructure issues, and the overall level of impact on end users. Azure Chaos Studio is now generally available to assist you further with this ongoing validation.

Detect and react: Ensure your workload is monitored at the application and component level for a comprehensive health view. For instance, Azure Monitor helps you collect, analyze, and respond to monitoring data from your cloud and on-premises environments. Azure also offers a suite of experiences to keep you informed about the health of your cloud resources: Azure Status informs you of Azure service outages, Service Health provides service-impacting communications such as planned maintenance, and Resource Health reports on the health of individual resources such as a VM.

Incident response plan: Partner closely with our technical support teams to jointly develop an incident response plan. The action plan is essential to developing shared accountability between yourself and Microsoft as we work towards resolution of your incident. It covers the basics of who does what, and when, so that both sides can partner on a quick resolution. Our teams are also ready to run test drills with you to validate this response plan for our joint success.
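As a toy illustration of the continuous validation idea described above, the sketch below injects faults into a dependency call and checks that a simple retry policy recovers. The function names and failure model are invented for the example; this is not an Azure Chaos Studio API:

```python
# Minimal fault-injection sketch: wrap a dependency call so it fails with a
# configurable probability, then verify the caller's retry policy recovers.
import random

def flaky(func, failure_rate, rng):
    """Return a wrapper that raises ConnectionError at the given rate."""
    def wrapper(*args, **kwargs):
        if rng.random() < failure_rate:
            raise ConnectionError("injected fault")
        return func(*args, **kwargs)
    return wrapper

def call_with_retries(func, attempts=5):
    for _ in range(attempts):
        try:
            return func()
        except ConnectionError:
            continue  # in production: back off, log, and alert on exhaustion
    raise RuntimeError("dependency unavailable after retries")

rng = random.Random(42)  # seeded so the experiment is repeatable
dependency = flaky(lambda: "ok", failure_rate=0.3, rng=rng)
print(call_with_retries(dependency))
```

The same pattern scales up: raise the failure rate, inject latency instead of errors, or target a specific dependency, and observe whether end users would have noticed.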

Ultimately, your desired reliability is an outcome you can only achieve by taking all these approaches into account, along with a mentality of continuous optimization. Building application resilience is not a single feature or phase, but a muscle that your teams build, learn, and strengthen over time. For more details, please check out our Well-Architected Framework guidance and consult with your Microsoft team; their objective is for you to realize full business value on Azure.
The post Building resilience to your business requirements with Azure appeared first on Azure Blog.

The seven pillars of modern AI development: Leaning into the era of custom copilots

In an era where technology is rapidly advancing and information consumption is growing exponentially, there are many new opportunities for businesses to manage, retrieve, and utilize knowledge. The integration of generative AI (content creation by AI) and knowledge retrieval mechanisms is revolutionizing knowledge management, making it more dynamic and readily available. Generative AI offers businesses more efficient ways to capture and retrieve institutional knowledge, improving user productivity by reducing time spent looking for information.

This business transformation was enabled by copilots. Azure AI Studio is the place for AI Developers to build custom copilot experiences.

Copilots infuse large language models (LLMs) with data to improve the response generation process. The process can be described as follows: the system receives a query (for example, a question); before responding, it fetches pertinent information related to the query from a designated data source; and it uses the combined content and query to guide the language model in formulating an appropriate response.
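The retrieve-then-generate loop can be sketched as follows. The keyword-overlap retriever is a toy stand-in for a real search index, the documents and helper names are made up, and the resulting prompt is what would be sent to an LLM:

```python
# Sketch of the retrieve-then-generate loop: fetch relevant content first,
# then combine it with the query to guide the model's response.
def retrieve(query, documents, top_k=1):
    """Rank documents by word overlap with the query (toy stand-in for search)."""
    words = set(query.lower().rstrip("?").split())
    scored = sorted(documents,
                    key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query, context):
    return ("Answer using only the sources below.\n"
            f"Sources:\n{context}\n"
            f"Question: {query}\nAnswer:")

docs = [
    "Contoso offers 24/7 support via chat and phone.",
    "The refund window for Contoso orders is 30 days.",
]
query = "What is the refund window?"
context = "\n".join(retrieve(query, docs))
prompt = build_prompt(query, context)  # this prompt would go to the LLM
print(prompt)
```

Because the model is steered by the retrieved sources rather than its training data alone, answers stay grounded in the designated data source.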

The power of copilots is in their adaptability, particularly their unparalleled ability to seamlessly and securely tap into both internal and external data sources. This dynamic, always-updated integration doesn’t just increase the accessibility and usability of enterprise knowledge, it improves the efficiency and responsiveness of businesses to ever-evolving demands.

Although there is much excitement for copilot pattern-based solutions, it’s important for businesses to carefully consider the design elements needed for a durable, adaptable, and effective approach. How can AI developers ensure their solutions do not just capture attention, but also enhance customer engagement? Here are seven pillars to think through when building your custom copilot.

Retrieval: Data ingestion at scale

Data connectors are vital for businesses aiming to harness the depth and breadth of their data across multiple expert systems using a copilot. These connectors serve as gateways between disparate data silos, connecting valuable information and making it accessible and actionable in a unified search experience. Developers can ground models on their enterprise data and seamlessly integrate structured, unstructured, and real-time data using Microsoft Fabric.

For copilots, data connectors are no longer just tools. They are indispensable assets that make real-time, holistic knowledge management a tangible reality for enterprises.

Enrichment: Metadata and role-based authentication

Enrichment is the process of enhancing, refining, and adding value to raw data. In the context of LLMs, enrichment often revolves around adding layers of context, refining data for more precise AI interactions, and ensuring data integrity. This helps transform raw data into a valuable resource.

When building custom copilots, enrichment helps data become more discoverable and precise across applications. By enriching the data, generative AI applications can deliver context-aware interactions. 

LLM-driven features often rely on specific, proprietary data. Simplifying data ingestion from multiple sources is critical to creating a smooth and effective model. To make enrichment even more dynamic, introducing templating can be beneficial. Templating means crafting a foundational prompt structure that can be filled in real time with the necessary data, which helps safeguard and tailor AI interactions.
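A minimal sketch of such templating follows; the product, context, and guardrail wording are invented for the example:

```python
# Sketch: a foundational prompt structure filled at request time with
# retrieved data and user input. All values below are illustrative.
from string import Template

FOUNDATION = Template(
    "You are a support assistant for $product.\n"
    "Only answer using the context below; if it is not covered, say so.\n"
    "Context: $context\n"
    "User question: $question"
)

prompt = FOUNDATION.substitute(
    product="Contoso Copier X100",
    context="The X100 supports duplex printing up to 40 pages per minute.",
    question="Does it print double-sided?",
)
print(prompt)
```

Keeping the guardrail language in the fixed part of the template, and only the data in the substituted slots, is what makes the interaction both tailored and safeguarded.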

The combined strength of data enrichment and chunking leads to AI quality improvements, especially when handling large datasets. Using enriched data, retrieval mechanisms can grasp cultural, linguistic, and domain-specific nuances. This results in more accurate, diverse, and adaptable responses, bridging the gap between machine understanding and human-like interactions.
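One common chunking scheme splits a document into fixed-size windows with overlap, so context spanning a chunk boundary is not lost. A sketch, using word counts as a rough stand-in for model tokens:

```python
# Sketch of fixed-size chunking with overlap, a common way to prepare large
# documents for embedding. Sizes are in words, a rough proxy for tokens.
def chunk(text, size=200, overlap=40):
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

chunks = chunk("word " * 500, size=200, overlap=40)
print(len(chunks), "chunks")  # overlapping windows preserve boundary context
```

Tuning the chunk size trades retrieval precision (smaller chunks) against context completeness (larger chunks); the overlap is what keeps a sentence that straddles two chunks retrievable from either.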

Search: Navigating the data maze 

Advanced embedding models are changing the way we understand search. By transforming words or documents into vectors, these models capture the intrinsic meaning and relationships between them. Azure AI Search, enhanced with vector search capabilities, is a leader in this transformation. Using Azure AI Search with the power of semantic reranking gives users contextually pertinent results, regardless of their exact search keywords.
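The idea can be illustrated with a toy vector index ranked by cosine similarity. Real systems use high-dimensional learned embeddings; the three-dimensional vectors and document names here are made up:

```python
# Toy illustration of vector search: documents and the query are represented
# as vectors, and results are ranked by cosine similarity, not keywords.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

index = {
    "pricing page":  [0.9, 0.1, 0.0],
    "refund policy": [0.1, 0.9, 0.2],
    "system status": [0.0, 0.2, 0.9],
}
query_vector = [0.2, 0.8, 0.1]  # would come from the same embedding model
best = max(index, key=lambda doc: cosine(index[doc], query_vector))
print(best)  # the document whose meaning is closest to the query
```

Because ranking happens in the embedding space, a query about "getting my money back" can still surface the refund document even though the exact keywords never match.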

With copilots, search processes can leverage both internal and external resources, absorbing new information without extensive model training. By continuously incorporating the latest available knowledge, responses are not just accurate but also deeply contextual, setting the stage for a competitive edge in search solutions.

The basis of search is expansive data ingestion: source document retrieval, data segmentation, embedding generation, vectorization, and index loading. When a user inputs a query, it undergoes vectorization before heading to Azure AI Search to retrieve the most relevant results, ensuring those results align closely with the user’s intent.

Continuous innovation to refine search capabilities has led to a new concept of hybrid search. This innovative approach melds the familiarity of keyword-based search with the precision of vector search techniques. The blend of keyword, vector, and semantic ranking further improves the search experience, delivering more insightful and accurate results for end users.
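One common way such a blend is implemented is Reciprocal Rank Fusion (RRF), which merges the ranked lists produced by keyword and vector retrieval. The document IDs below are illustrative, and this is a generic sketch rather than Azure AI Search's exact implementation:

```python
# Sketch of hybrid-search fusion with Reciprocal Rank Fusion (RRF):
# a document's fused score is the sum of 1/(k + rank) over each result list
# it appears in; k=60 is the commonly cited constant.
def rrf(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_results = ["doc_a", "doc_b", "doc_c"]  # from keyword search
vector_results  = ["doc_b", "doc_d", "doc_a"]  # from vector search
print(rrf([keyword_results, vector_results]))
```

Documents that rank well in both lists rise to the top, which is exactly the behavior that makes hybrid search more robust than either technique alone.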

Prompts: Crafting efficient and responsible interactions

In the world of AI, prompt engineering provides specific instructions to guide the LLM’s behavior and generate desired outputs. Crafting the right prompt is crucial to get not just accurate, but safe and relevant responses that meet user expectations. 

Prompt efficiency requires clarity and context. To maximize the relevance of AI responses, it is important to be explicit with instructions. For instance, if concise data is needed, specify that you want a short answer. Context also plays a central role. Instead of just asking about market trends, specify current digital marketing trends in e-commerce. It can even be helpful to provide the model with examples that demonstrate the intended behavior.

Azure AI prompt flow enables users to add content safety filters that detect and mitigate harmful content, like jailbreaks or violent language, in inputs and outputs when using open source models. Or, users can opt to use models offered through Azure OpenAI Service, which have content filters built-in. By combining these safety systems with prompt engineering and data retrieval, customers can improve the accuracy, relevance, and safety of their application. 

Learn More

Get started with prompt flow

Achieving quality AI responses often involves a mix of tools and tactics. Regularly evaluating and updating prompts helps align responses with business trends. Intentionally crafting prompts for critical decisions, generating multiple AI responses to a single prompt, and then selecting the best response for the use case is a prudent strategy. Using a multi-faceted approach helps AI to become a reliable and efficient tool for users, driving informed decisions and strategies.
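The generate-several-responses-and-select tactic can be sketched with a simple scoring heuristic. The candidate answers stand in for multiple LLM samples, and the preference for cited answers is an assumption made for the example:

```python
# Sketch: generate multiple candidate responses to one prompt (hard-coded
# here in place of real LLM samples) and keep the best-scoring one.
def score(answer, required_terms):
    cited = 1 if "[source]" in answer else 0            # assumed citation marker
    coverage = sum(term in answer.lower() for term in required_terms)
    return cited * 10 + coverage                        # citations weigh heavily

candidates = [
    "Revenue grew last quarter.",
    "Revenue grew 12% quarter over quarter, driven by cloud. [source]",
    "I think things went well overall.",
]
best = max(candidates, key=lambda a: score(a, ["revenue", "quarter"]))
print(best)
```

In practice the scoring function might be another model call or a human reviewer, but the shape of the strategy, sample several and select, stays the same.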

User Interface (UI): The bridge between AI and users 

An effective UI offers meaningful interactions to guide users through their experience. In the ever-evolving landscape of copilots, providing accurate and relevant results is always the goal. However, there can be instances when the AI system might generate responses that are irrelevant, inaccurate, or ungrounded. A UX team should implement human-computer interaction best practices to mitigate these potential harms, for example by providing output citations, putting guardrails on the structure of inputs and outputs, and by providing ample documentation on an application’s capabilities and limitations. 

To mitigate potential issues like harmful content generation, various tools should be considered. For example, classifiers can be employed to detect and flag possibly harmful content, guiding the system’s subsequent actions, whether that’s changing the topic or reverting to a conventional search. Azure AI Content Safety is a great tool for this.

A core principle for Retrieval Augmented Generation (RAG)-based search experiences is user-centric design, emphasizing an intuitive and responsible user experience. The journey for first-time users should be structured to ensure they comprehend the system’s capabilities, understand its AI-driven nature, and are aware of any limitations. Features like chat suggestions, clear explanations of constraints, feedback mechanisms, and easily accessible references enhance the user experience, fostering trust and minimizing over-reliance on the AI system.

Continuous improvement: The heartbeat of AI evolution 

The true potential of an AI model is realized through continuous evaluation and improvement. It is not enough to deploy a model; it needs ongoing feedback, regular iterations, and consistent monitoring to ensure it meets evolving needs. AI developers need powerful tools to support the complete lifecycle of LLMs, including continuously reviewing and improving AI quality. This not only brings the idea of continuous improvement to life, but also ensures that it is a practical, efficient process for developers. 

Identifying and addressing areas of improvement is a fundamental step to continuously refine AI solutions. It involves analyzing the system’s outputs, such as ensuring the right documents are retrieved, and going through all the details of prompts and model parameters. This level of analysis helps identify potential gaps, and areas for refinement to optimize the solution.

Prompt flow in Azure AI Studio is tailored for LLMs and is transforming the LLM development lifecycle. Features like visualizing LLM workflows and the ability to test and compare the performance of various prompt versions empower developers with agility and clarity. As a result, the journey from conceptualizing an AI application to deploying it becomes more coherent and efficient, ensuring robust, enterprise-ready solutions.

Unified development

The future of AI is not just about algorithms and data. It’s about how we retrieve and enrich data, create robust search mechanisms, articulate prompts, infuse responsible AI best practices, interact with our systems, and continuously refine them.

AI developers need to integrate pre-built services and models, prompt orchestration and evaluation, content safety, and responsible AI tools for privacy, security, and compliance. Azure AI Studio offers a comprehensive model catalog, including the latest multimodal models like GPT-4 Turbo with Vision coming soon to Azure OpenAI Service and open models like Falcon, Stable Diffusion, and the Llama 2 managed APIs. Azure AI Studio is a unified platform for AI developers. It ushers in a new era of generative AI development, empowering developers to explore, build, test, and deploy their AI innovations at scale. VS Code, GitHub Codespaces, Semantic Kernel, and LangChain integrations support a code-centric experience.

Whether creating custom copilots, enhancing search, delivering call center solutions, developing bots and bespoke applications, or a combination of these, Azure AI Studio provides the necessary support.

Learn more about the power of Azure AI Studio

As AI continues to evolve, it is essential to keep these seven pillars in mind to help build systems that are efficient, responsible, and always at the cutting-edge of innovation.

Are you eager to tap into the immense capabilities of AI for your enterprise? Start your journey today with Azure AI Studio! 

We’ve pulled together two GitHub repos to help you get building quickly. The Prompt Flow Sample showcases prompt orchestration for LLMOps—using Azure AI Search and Cosmos DB for grounding. Prompt flow streamlines prototyping, experimenting, iterating, and deploying AI applications. The Contoso Website repository houses the eye-catching website featured at Microsoft Ignite, featuring content and image generation capabilities, along with vector search. These two repos can be used together to help build end-to-end custom copilot experiences.

Learn more

Build with Azure AI Studio

Join our SMEs during the upcoming Azure AI Studio AMA session – December 14th, 9-10am PT

Azure AI SDK

Azure AI Studio documentation

Introduction to Azure AI Studio (learn module) 

The post The seven pillars of modern AI development: Leaning into the era of custom copilots appeared first on Azure Blog.

Optimize your Azure cloud journey with skilling tools from Microsoft

Optimization is a crucial strategy for businesses seeking to extract maximum value from their Azure cloud investment, minimize unnecessary expenses, and ultimately drive better return on investment (ROI). At Microsoft, we’re dedicated to helping you optimize your Azure environments, and to teaching you how to approach optimization with resources, tools, and guidance that promote continuous development of your cloud architectures and workloads, in both new and existing projects. That’s why we’re proud to offer a wide array of optimization skilling opportunities to help you confidently achieve your cloud goals, resulting in increased efficiency and productivity through a deeper understanding of successful cloud operations.

With Azure optimization skilling, we aim to be your guide in achieving these business goals. By engaging with our curated learning paths, modules, and gamified cloud skills challenges, you’ll quickly begin the process of planning, deploying, and managing your cloud investments. Training topics include Cloud Adoption Framework (CAF), Well-Architected Framework (WAF), FinOps, security, and much more to help you drive continuous improvement and business innovation.

Level up on optimization with our 30 Days to Learn It challenge

Microsoft “30 Days to Learn It” challenges are dynamic and immersive learning experiences designed to empower individuals with the skills and knowledge needed to excel in their chosen tech career path. These gamified, interactive challenges offer a blend of hands-on exercises, tutorials, and assessments to ensure a well-rounded learning experience.

Within the accelerated timeframe of 30 days, the structured framework engages participants in friendly competitions to see who can top the leaderboard on their way to mastering any number of Microsoft tools or concepts.

The challenge is open to IT professionals and developers of all skill levels and is designed to provide a flexible and accessible way to learn new skills and advance their careers. To participate, individuals simply need to sign up for the challenge on the Microsoft Learn platform and begin completing the available learning modules.

This month, we’ll be launching a new Azure Optimization 30 Days to Learn It challenge loaded with resources, tools, and guidance to help you optimize your Azure workloads. Learn to optimize your cloud architecture and workloads effectively so that you can invest in projects that drive ongoing growth and innovation. In about 16 hours, you’ll master how to drive continuous improvement of your architecture and workloads while managing and optimizing cloud costs.

Tailor your skilling experience with the Azure Optimization Collection


Whether you’re in the process of migrating to the cloud or have already established Azure workloads, we have assembled a handpicked collection of training and resources to help you on your journey. The collection is tailored to support the ongoing enhancement of your architecture and workloads, all while effectively managing and optimizing your cloud expenses.

The collection includes the following modules:

Purchase Azure savings plan for compute
By the end of this module, you’ll be able to describe the characteristics and benefits of Azure savings plan for compute and identify scenarios most suitable for its usage.

Save money with Azure Reserved Instances
Learn how to analyze and buy reserved instances, optimize against underused resources, and understand the benefits provided through compute purchases.

Get started with Azure Advisor
With Azure Advisor, you can analyze your cloud environment to determine whether your workloads are following documented best practices for cost, security, reliability, performance, and operational excellence.

Getting started with the Microsoft Cloud Adoption Framework for Azure
Discover how a range of getting-started resources in the Cloud Adoption Framework can accelerate results across your cloud-adoption efforts.

Address tangible risks with the Govern methodology of the Cloud Adoption Framework for Azure
Without proper governance, it can be difficult and laborious to maintain consistent control across a portfolio of workloads. Fortunately, cloud-native tools like Azure Policy and Azure Blueprints provide convenient means to establish those controls.

Ensure stable operations and optimization across all supported workloads deployed to the cloud
As workloads are deployed to the cloud, operations are critical to success. In this module, you learn how to deploy an operations baseline to manage workloads in your environment.

Choose the best Azure landing zone to support your requirements for cloud operations
Azure landing zones can accelerate configuration of your cloud environment. This module will help you choose and get started with the best landing zone option for your needs.

Introduction to the Microsoft Azure Well-Architected Framework
You want to build great things on Azure, but you’re not sure exactly what that means. Using key principles throughout your architecture, regardless of technology choice, can help you design, build, and continuously improve your architecture.

Microsoft Azure Well-Architected Framework: Operational excellence
In this module, you learn about the operational excellence pillar of the Azure Well-Architected Framework and how to improve the operations of your Azure cloud deployments.

Microsoft Azure Well-Architected Framework: Cost optimization
Learn about the cost optimization pillar of the Azure Well-Architected Framework to identify cost optimization opportunities that maximize cloud efficiency and visibility.

Microsoft Azure Well-Architected Framework: Performance efficiency
Scaling your system to handle load, identifying network bottlenecks, and optimizing your storage performance are important to ensure your users have the best experience. Learn how to make your application perform at its best.

Microsoft Azure Well-Architected Framework: Security
Learn how to incorporate security into your architecture design and discover the tools that Azure provides to help you create a secure environment through all the layers of your architecture.

Microsoft Azure Well-Architected Framework: Reliability
Your business relies on access to its systems and data. Each moment that a customer or internal team can’t access what they need can result in a loss of revenue. It’s your job to prevent that by designing and implementing reliable systems.

Describe cost management in Azure
In this module, you’ll be introduced to factors that impact costs in Azure and tools to help you both predict potential costs and monitor and control costs.
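As a concrete taste of the cost-management arithmetic these modules cover, the sketch below compares pay-as-you-go billing with a compute savings plan. The rates, discount, and commitment figures are hypothetical illustrations, not Azure pricing, and the billing model is deliberately simplified.

```python
# Back-of-the-envelope comparison of pay-as-you-go billing vs. a compute
# savings plan. All rates below are hypothetical placeholders, and the
# model is a simplification of how Azure actually meters usage.

HOURS_PER_MONTH = 730  # conventional hours-in-a-month figure


def payg_cost(hours: float, payg_rate: float) -> float:
    """Pay-as-you-go: every hour is billed at the full rate."""
    return hours * payg_rate


def savings_plan_cost(hours: float, payg_rate: float,
                      commitment_per_hour: float, discount: float) -> float:
    """Savings plan (simplified): you pay a fixed hourly commitment whether
    or not you use it; usage draws down the commitment at a discounted
    rate, and usage beyond it is billed at the pay-as-you-go rate."""
    discounted_rate = payg_rate * (1 - discount)
    commitment = commitment_per_hour * HOURS_PER_MONTH
    covered_hours = min(hours, commitment / discounted_rate)
    overage_hours = hours - covered_hours
    return commitment + overage_hours * payg_rate


# A VM running all month: $0.10/hour pay-as-you-go, a hypothetical 30%
# savings-plan discount, and a commitment sized to cover it ($0.07/hour).
payg = payg_cost(HOURS_PER_MONTH, 0.10)
plan = savings_plan_cost(HOURS_PER_MONTH, 0.10, 0.07, 0.30)
print(f"pay-as-you-go: ${payg:.2f}  savings plan: ${plan:.2f}")
```

The point of the exercise, which the savings-plan and cost-management modules develop properly, is that a commitment only pays off when utilization is high enough to draw it down: the commitment is billed even when usage drops to zero.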

Discover more in the Azure Optimization Collection, including e-books and further reading, at the Microsoft Learn site.

Watch optimization tips and tricks from Azure experts

In our Azure Enablement Show video series, hear about the latest resources on how to accelerate your cloud journey and optimize your solutions in Azure. These expert-led videos share technical advice, tips, and best practices to help you do all that and more.

Our newest video on Azure optimization skilling walks you through the latest training resources, guidance, and tools you need to foster continuous development of your cloud architectures and workloads. Get an in-depth understanding of how successful cloud operations increase efficiency and productivity to help you confidently achieve your cloud goals.

In addition, go deeper into optimization with these two video series on cloud frameworks that provide a comprehensive approach to cloud adoption and continuous improvement:

Cloud Adoption Framework (CAF) series: Address common blockers in your cloud adoption journey using best practices, tools, and templates featured in CAF and shared by Microsoft experts. This series covers scenarios such as enabling your landing zones, assessing your cloud environments, and applying an Azure savings plan.

Well-Architected Framework (WAF) series: Engage with technical guidance for your cloud adoption journey at the workload level across the five pillars of WAF: cost optimization, security, reliability, performance efficiency, and operational excellence.
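To make one of those pillars concrete: a staple reliability technique is retrying transient failures with exponential backoff and jitter. The sketch below is a generic illustration of that pattern, not code taken from the WAF guidance; the attempt count and delays are placeholder values you would tune per workload.

```python
import random
import time


def retry_with_backoff(operation, max_attempts=5, base_delay=0.5):
    """Run `operation`, retrying transient failures with exponential
    backoff plus jitter so that many clients don't retry in lockstep.
    max_attempts and base_delay are placeholder values; tune per workload."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            # wait base_delay * 2^attempt, scaled by jitter in [0.5, 1.0)
            time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random() / 2))
```

In production you would typically rely on the retry policies built into the Azure SDKs rather than hand-rolling this, but the arithmetic above is the core idea behind them.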

Get started today with Azure optimization skilling

Cloud optimization is not a destination but an ongoing pursuit that can transform your organization’s digital landscape. Engaging with learning paths on Microsoft Learn isn’t just about gaining knowledge—it’s about investing in your organization’s future success. Our comprehensive skilling resources provide you with the tools, insights, and skills you need to unlock the full potential of Azure’s cloud optimization capabilities.

Take the first step today toward a more efficient, cost-effective, and competitive cloud environment by exploring Microsoft Learn’s cloud optimization learning paths in this Collection. Whether you’re an IT professional, a developer, or a decision-maker, there’s a tailored learning path waiting for you. Start your journey now and empower your organization to thrive in the cloud-first world.

Attendees at Microsoft Ignite 2023 were given the chance to learn more about leveling up their Azure skills through live keynotes, breakout sessions, and expert workshops. View recorded sessions, including the “Optimize your Azure investment through FinOps” discussion session, to learn how you can foster a culture of continuous improvement in your organization.

Lastly, game on! Be sure to register for our Azure Optimization 30 Days to Learn It Challenge to compete against your peers from around the globe as you master optimizing your cloud architecture and workloads.
The post Optimize your Azure cloud journey with skilling tools from Microsoft appeared first on Azure Blog.