Mercedes-Benz enhances drivers’ experience with Azure OpenAI Service 

With ChatGPT, MBUX Voice Assistant “Hey Mercedes” will become even more intuitive – the U.S. beta program is expected to last three months.

When I started driving in the 1990s, I thought I was living in the future. My first car had everything I thought I could ever need: a built-in radio, lighting when you opened the door, windows you could roll down with a crank, a clock, and even air conditioning for those really hot days growing up on the East Coast.

That car is long gone, but my passion for driving things forward lives on, which is why I’m excited to share how Mercedes-Benz is using Microsoft AI capabilities to enhance experiences for some drivers today.

As the last six months have shown us, the power of generative AI goes beyond cutting-edge language models—it’s what you build with it that matters most. Our Azure OpenAI Service lets companies tap into the power of the most advanced AI models (OpenAI’s GPT-4, GPT-3.5, and more) combined with Azure’s enterprise capabilities and AI-optimized infrastructure to do extraordinary things.

Mercedes-Benz takes in-car voice control to a new level with Azure OpenAI Service

Today, Mercedes-Benz announced it is integrating ChatGPT via Azure OpenAI Service to transform the in-car experience for drivers. Starting June 16, drivers in the United States can opt into a beta program that makes the MBUX Voice Assistant’s “Hey Mercedes” feature even more intuitive and conversational. Enhanced capabilities include:

Elevated voice command and interaction: ChatGPT enables more dynamic conversations, allowing customers to experience a voice assistant that not only understands voice commands but also engages in interactive conversations.

Expanded task capability: Whether users need information about their destination, a recipe, or answers to complex questions, the enhanced voice assistant will provide comprehensive responses, allowing drivers to keep their hands on the wheel and eyes on the road.

Contextual follow-up questions: Unlike standard voice assistants that often require specific commands, ChatGPT excels at handling follow-up questions and maintaining contextual understanding. Drivers can ask complex queries or engage in multi-turn conversations, receiving detailed and relevant responses from the voice assistant (a minimal code sketch follows this list).

Integration with third-party services: Mercedes-Benz is exploring the ChatGPT plugin ecosystem, which would open up possibilities for integration with various third-party services. This could enable drivers to accomplish tasks like restaurant reservations, movie ticket bookings, and more, using natural speech commands, further enhancing convenience and productivity on the road.
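
To make the multi-turn behavior concrete, here is a minimal sketch of a contextual, two-turn exchange against Azure OpenAI Service using the openai Python package (the 0.27-era API). The endpoint, key, and deployment name are placeholder assumptions, and this is purely illustrative; it is not Mercedes-Benz’s production integration.

import openai

# Azure OpenAI connection details (placeholder values)
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR-API-KEY"

# The conversation history is resent on every call; this is what lets
# the model handle contextual follow-up questions.
messages = [
    {"role": "system", "content": "You are a helpful in-car voice assistant."},
    {"role": "user", "content": "Find a sushi restaurant near my destination."},
]

first = openai.ChatCompletion.create(
    engine="gpt-35-turbo",  # the name of your model deployment
    messages=messages,
)
messages.append({"role": "assistant",
                 "content": first["choices"][0]["message"]["content"]})

# The follow-up only makes sense given the context accumulated above.
messages.append({"role": "user", "content": "Is the first one open after 9 pm?"})
second = openai.ChatCompletion.create(engine="gpt-35-turbo", messages=messages)
print(second["choices"][0]["message"]["content"])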

With the three-month beta program, Mercedes-Benz customers can become early adopters of this groundbreaking technology. Based on the findings of the beta program and customer feedback, Mercedes-Benz will consider further integration of this technology into future iterations of their MBUX Voice Assistant while maintaining the highest standards of customer privacy on and off the road.

With Microsoft, Mercedes-Benz is paving the way for a more connected, intelligent, and personalized driving experience, and accelerating the automotive industry through AI.

In case you missed it, at Microsoft Build we recently announced updates to Azure OpenAI Service to help you more easily and responsibly deploy generative AI capabilities powered by Azure. You can now:

Use your own data (coming to public preview later this month), allowing you to create more customized, tailored experiences based on organizational data.

Add plugins to simplify integrating external data sources with APIs.

Reserve provisioned throughput (generally available with limited access later this month) to gain control over the configuration and performance of OpenAI’s large language models at scale.

Create safer online environments and communities with Azure AI Content Safety, a new Azure AI service integrated into Azure OpenAI Service and Azure Machine Learning prompt flow that helps detect and remove prompt inputs and generated content that don’t meet content management standards.
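
For illustration, here is a minimal sketch of screening a prompt with the Azure AI Content Safety Python SDK (azure-ai-contentsafety, in preview at the time of writing). The endpoint, key, and severity threshold are placeholder assumptions, and the response shape may differ between preview versions.

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for an Azure AI Content Safety resource
client = ContentSafetyClient(
    "https://YOUR-RESOURCE.cognitiveservices.azure.com/",
    AzureKeyCredential("YOUR-API-KEY"),
)

# Screen a prompt before passing it to a generative model
result = client.analyze_text(AnalyzeTextOptions(text="Example user prompt to screen"))

# Each harm category returns a severity score; gate on a threshold
# (4 is an arbitrary example here; tune it to your content policy)
for category in (result.hate_result, result.self_harm_result,
                 result.sexual_result, result.violence_result):
    if category is not None and category.severity >= 4:
        raise ValueError(f"Content blocked: severity {category.severity}")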

A responsible approach

Microsoft has a layered approach for generative models, guided by Microsoft’s responsible AI principles. In Azure OpenAI Service, an integrated safety system provides protection from undesirable inputs and outputs and monitors for misuse. In addition, Microsoft provides guidance and best practices for customers to responsibly build applications using these models and expects customers to comply with the Azure OpenAI Code of Conduct. With OpenAI’s GPT-4, new research advances from OpenAI have enabled an additional layer of protection. Guided by human feedback, safety is built directly into the GPT-4 model, which makes the model more effective at handling harmful inputs and reduces the likelihood that it will generate a harmful response.

Get started with Azure OpenAI Service

Apply now for access to Azure OpenAI Service.

Bookmark the What’s New page.

Review Azure OpenAI Service documentation.

Explore the playground and customization in Azure AI Studio. No programming is required.

Dive right in with QuickStarts.

Watch the new explainer video about Azure OpenAI Service.


Build next-generation, AI-powered applications on Microsoft Azure

The potential of generative AI is much bigger than any of us can imagine today. From healthcare to manufacturing to retail to education, AI is transforming entire industries and fundamentally changing the way we live and work. At the heart of all that innovation are developers, pushing the boundaries of possibility and creating new business and societal value even faster than many thought possible. Trusted by organizations around the world with mission-critical application workloads, Azure is the place where developers can build with generative AI securely, responsibly, and with confidence.

Welcome to Microsoft Build 2023—the event where we celebrate the developer community. This year, we’ll dive deep into the latest technologies across application development and AI that are enabling the next wave of innovation. First, it’s about bringing you state-of-the-art, comprehensive AI capabilities and empowering you with the tools and resources to build with AI securely and responsibly. Second, it’s about giving you the best cloud-native app platform to harness the power of AI in your own business-critical apps. Third, it’s about the AI-assisted developer tooling to help you securely ship the code only you can build.

We’ve made announcements in all key areas to empower you and help your organizations lead in this new era of AI.

Bring your data to life with generative AI

Generative AI has quickly become the generation-defining technology shaping how we search and consume information every day, and it’s been wonderful to see customers across industries embrace Microsoft Azure OpenAI Service. In March, we announced the preview of OpenAI’s GPT-4 in Azure OpenAI Service, making it possible for developers to integrate custom AI-powered experiences directly into their own applications. Today, OpenAI’s GPT-4 is generally available in Azure OpenAI Service, and we’re building on that announcement with several new capabilities you can use to apply generative AI to your data and to orchestrate AI with your own systems.  

We’re excited to share our new Azure AI Studio. With just a few clicks, developers can now ground powerful conversational AI models, such as OpenAI’s ChatGPT and GPT-4, on their own data. With Azure OpenAI Service on your data, coming to public preview, and Azure Cognitive Search, employees, customers, and partners can discover information buried in the volumes of data, text, and images using natural language-based app interfaces. Create richer experiences and help users find organization-specific insights, such as inventory levels or healthcare benefits, and more.

To further extend the capabilities of large language models, we are excited to announce that Azure Cognitive Search will power vectors in Azure (in private preview), with the ability to store, index, and deliver search applications over vector embeddings of organizational data including text, images, audio, video, and graphs. Furthermore, support for plugins with Azure OpenAI Service, in private preview, will simplify integrating external data sources and streamline the process of building and consuming APIs. Available plugins include Azure Cognitive Search, Azure SQL, Azure Cosmos DB, Microsoft Translator, and Bing Search. We are also enabling a Provisioned Throughput Model, which will soon be generally available in limited access to offer dedicated capacity.
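
As a rough sketch of the vector pattern, the example below generates embeddings with Azure OpenAI and ranks documents by cosine similarity in memory; in a real deployment the vectors would live in a vector index such as Azure Cognitive Search. The endpoint and deployment names are placeholder assumptions.

import numpy as np
import openai

# Placeholder Azure OpenAI connection details
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR-API-KEY"

def embed(text: str) -> np.ndarray:
    # 'engine' is the name of your text-embedding-ada-002 deployment
    resp = openai.Embedding.create(engine="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

docs = ["Healthcare benefits overview", "Current warehouse inventory levels"]
doc_vectors = [embed(d) for d in docs]

query_vec = embed("How many units do we have in stock?")
ranked = sorted(zip(docs, (cosine(query_vec, v) for v in doc_vectors)),
                key=lambda pair: pair[1], reverse=True)
print(ranked[0])  # the best-matching document grounds the model's answer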

Customers are already benefiting from Azure OpenAI Service today, including DocuSign, Volvo, Ikea, Crayon, and 4,500 others. Learn more about what’s new with Azure OpenAI Service.

We continue to innovate across our AI portfolio, including new capabilities in Azure Machine Learning, so developers and data scientists can use the power of generative AI with their data. Foundation models in Azure Machine Learning, now in preview, empower data scientists to fine-tune, evaluate, and deploy open-source models curated by Azure Machine Learning, models from Hugging Face Hub, as well as models from Azure OpenAI Service, all in a unified model catalog. This will provide data scientists with a comprehensive repository of popular models directly within the Azure Machine Learning registry.

We are also excited to announce the upcoming preview of Azure Machine Learning prompt flow that will provide a streamlined experience for prompting, evaluating, tuning, and operationalizing large language models. With prompt flow, you can quickly create prompt workflows that connect to various language models and data sources. This allows for building intelligent applications and assessing the quality of your workflows to choose the best prompt for your case. See all the announcements for Azure Machine Learning.

It’s great to see momentum for machine learning with customers like Swift, a member-owned cooperative that provides a secure global financial messaging network, which is using Azure Machine Learning to develop an anomaly detection model with federated learning techniques, enhancing global financial security without compromising data privacy. We cannot wait to see what our customers build next.

Run and scale AI-powered, intelligent apps on Azure

Azure’s cloud-native platform is the best place to run and scale applications while seamlessly embedding Azure’s native AI services. Azure gives you the choice between control and flexibility, with complete focus on productivity regardless of what option you choose.

Azure Kubernetes Service (AKS) offers you complete control and the quickest way to start developing and deploying intelligent, cloud-native apps in Azure, datacenters, or at the edge with built-in code-to-cloud pipelines and guardrails. We’re excited to share some of the most highly anticipated innovations for AKS that support the scale and criticality of applications running on it.

To give enterprises more control over their environment, we are announcing long-term support for Kubernetes that will enable customers to stay on the same release for two years—twice as long as what’s possible today. We are also excited to share that starting today, Azure Linux is available as a container host operating system platform optimized for AKS. Additionally, we are now enabling Azure customers to access a vibrant ecosystem of first-party and third-party solutions with easy click-through deployments from Azure Marketplace. Lastly, confidential containers are coming soon to AKS, as a first-party supported offering. Aligned with Kata Confidential Containers, this feature enables teams to run their applications in a way that supports zero-trust operator deployments on AKS.

Azure lets you choose from a range of serverless execution environments to build, deploy, and scale dynamically on Azure without the need to manage infrastructure. Azure Container Apps is a fully managed service that enables microservices and containerized applications to run on a serverless platform. We announced, in preview, several new capabilities for teams to simplify serverless application development. Developers can now run Azure Container Apps jobs on demand, on a schedule, or in response to events, executing ad hoc tasks asynchronously to completion. This new capability enables smaller executables within complex jobs to run in parallel, making it easier to run unattended batch jobs right alongside your core business logic. With these advancements to our container and serverless products, we are making it seamless and natural to build intelligent cloud-native apps on Azure.

Integrated, AI-based tools to help developers thrive

Making it easier to build intelligent, AI-embedded apps on Azure is just one part of the innovation equation. The other, equally important part is about empowering developers to focus more time on strategic, meaningful work, which means less toiling on tasks like debugging and infrastructure management. We’re making investments in GitHub Copilot, Microsoft Dev Box, and Azure Deployment Environments to simplify processes and increase developer velocity and scale.

GitHub Copilot is the world’s first at-scale AI developer tool, helping millions of developers code up to 55 percent faster. Today, we announced new Copilot experiences built into Visual Studio, eliminating wasted time when getting started with a new project. We’re also announcing several new capabilities for Microsoft Dev Box, including new starter developer images and deeper Visual Studio integration, that accelerate setup time and improve performance. Lastly, we’re announcing the general availability of Azure Deployment Environments and support for HashiCorp Terraform in addition to Azure Resource Manager.

Enable secure and trusted experiences in the era of AI

When it comes to building, deploying, and running intelligent applications, security cannot be an afterthought—developer-first tooling and workflow integration are critical. We’re investing in new features and capabilities to enable you to implement security earlier in your software development lifecycle, find and fix security issues before code is deployed, and pair with tools to deploy trusted containers to Azure.

We’re pleased to announce GitHub Advanced Security for Azure DevOps, in preview soon. This new solution brings the three core features of GitHub Advanced Security to the Azure DevOps platform, so you can integrate automated security checks into your workflow. It includes code scanning powered by CodeQL to detect vulnerabilities, secret scanning to prevent the inclusion of sensitive information in code repositories, and dependency scanning to identify vulnerabilities in open-source dependencies and provide update alerts.

While security is at the top of the list for any developer, using AI responsibly is no less important. For almost seven years, we have invested in a cross-company program to ensure our AI systems are responsible by design. Our work on privacy and the General Data Protection Regulation (GDPR) has taught us that policies aren’t enough; we need tools and engineering systems that help make it easy to build with AI responsibly. We’re pleased to announce new products and features to help organizations improve accuracy, safety, fairness, and explainability across the AI development lifecycle.

Azure AI Content Safety, now in preview, enables developers to build safer online environments by detecting and assigning severity scores to unsafe images and text across languages, helping businesses prioritize what content moderators review. It can also be customized to address an organization’s regulations and policies. As part of Microsoft’s commitment to responsible AI, we’re integrating Azure AI Content Safety across our products, including Azure OpenAI Service and Azure Machine Learning, to help users evaluate and moderate content in prompts and generated content.

Additionally, the responsible AI dashboard in Azure Machine Learning now supports text and image data in preview. This means users can more easily identify model errors, understand performance and fairness issues, and provide explanations for a wider range of machine learning model types, including text and image classification and object detection scenarios. In production, users can continue to monitor their model and production data for model and data drift, perform data integrity tests, and make interventions with the help of model monitoring, now in preview.

We are committed to helping developers and machine learning engineers apply AI responsibly, through shared learning, resources, and purpose-built tools and systems. To learn more, join us at the Building and using AI models responsibly breakout session and download our Responsible AI Standard.

Let’s write this history, together

AI is a massive shift in computing. Whether it’s part of your workflow or part of the cloud development powering your next-generation, intelligent apps, this community of developers is leading the shift.

We are excited to bring Microsoft Build to you, especially this year as we go deep into the latest AI technologies, connect you with experts from within and outside of Microsoft, and showcase real-world solutions powered by AI.

Learn more about Azure at Microsoft Build

Join us at Microsoft Build 2023.

Request access to Azure OpenAI Service.

Start building skills with Microsoft Learn Collections.

Learn more about Microsoft Dev Box.


Unlock new insights with Azure OpenAI Service for government

Microsoft continues to develop and advance cloud services to meet the full spectrum of government needs while complying with United States regulatory standards for classification and security. The latest of these tools, generative AI capabilities through Microsoft Azure OpenAI Service, can help government agencies improve efficiency, enhance productivity, and unlock new insights from their data.

Many agencies require a higher level of security given the sensitivity of government data. Microsoft Azure Government provides the stringent security and compliance standards they need to meet government requirements for sensitive data. 

Currently, large language models that power generative AI tools live in the commercial cloud. For government customers, Microsoft has developed a new architecture that enables government agencies to securely access the large language models in the commercial environment from Azure Government, allowing those users to maintain the stringent security requirements necessary for government cloud operations.

If you’re an Azure Government customer (United States federal, state, and local government or their partners), you now have the opportunity to use the Microsoft Azure OpenAI Service through purpose-built, AI-optimized infrastructure providing access to OpenAI’s advanced generative models.  

Azure OpenAI Service 

Azure OpenAI Service REST APIs provide access to OpenAI’s powerful language models, including GPT-4, GPT-3, and Embeddings. You can adapt these models to your specific task, including but not limited to content generation, summarization, semantic search, and natural language-to-code translation.

You can access the service through REST APIs, the Python SDK, or our web-based interface in Azure AI Studio. As an Azure Government customer or partner, you can access and operationalize advanced AI models and algorithms at scale. Developers can use Azure OpenAI Service to access pre-trained GPT models to build and deploy AI-enabled applications more quickly and with minimal effort.

Capability enhancements with Azure OpenAI Service

Azure OpenAI Service can help government customers accelerate their operations and unlock new insights to meet their mission needs. This service will enable key new functions to help customers:

Accelerate content generation: Automatically generate responses based on mission or project inquiries to help reduce the time and effort required for research and analysis, enabling teams to focus on higher-level decision-making and strategic tasks.   

Streamline content summarization: Generate summaries of logs and rapid analysis of articles, analyst reports, and field reports.

Optimize semantic search: Enable enhanced information discovery and knowledge mining.  

Simplify code generation: Build custom applications using natural language to query proprietary data models and rapidly generate code documentation.

One of the most effective ways to generate reliable answers is to prompt the model to draw its responses from grounding data. If your use case relies on up-to-date, reliable information and is not purely a creative scenario, we strongly recommend providing grounding data based on trusted internal data sources. In general, the closer you can get your source material to the final form of the answer you want, the less work the model needs to do, which means there is less opportunity for error.
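
Here is a minimal sketch of that grounding pattern with the openai Python package; the resource name, deployment name, and policy text are placeholder assumptions, and this is one simple grounding approach rather than a prescribed government architecture.

import openai

# Placeholder connection details for an approved Azure OpenAI resource
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR-API-KEY"

# Grounding text retrieved from a trusted internal source (placeholder)
grounding = "Policy 12-4: Field reports must be filed within 48 hours of an incident."

messages = [
    {"role": "system",
     "content": "Answer using ONLY the source material below. "
                "If the answer is not in the source, say you do not know.\n\n"
                "SOURCE:\n" + grounding},
    {"role": "user", "content": "What is the deadline for filing field reports?"},
]

resp = openai.ChatCompletion.create(
    engine="gpt-35-turbo",  # placeholder deployment name
    messages=messages,
)
print(resp["choices"][0]["message"]["content"])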

Azure Government to Azure commercial networking

Azure Government peers directly to the commercial Microsoft Azure network, including routing and transport capabilities to the internet and the Microsoft corporate network. Azure Government limits its exposed surface area by applying extra protections on top of the communications capabilities of the commercial Azure network. Additional information highlighting Azure Government environment isolation can be found on our Azure Government security website.

Microsoft encrypts all Azure traffic within a region or between regions using MACsec, which relies on AES-128 block cipher for encryption. This traffic stays entirely within the Microsoft global network backbone and never enters the public internet. The backbone is one of the largest in the world with more than 250,000 km of lit fiber optic and undersea cable systems.

Access and reference architecture

Access to the Azure OpenAI Service is available through the Azure Government environment. Azure Government peers directly with the commercial Azure network and doesn’t peer directly with the public internet or the Microsoft corporate network. As shown in the reference architecture in Figure 1, connection to Azure OpenAI is over the Microsoft backbone network to access and operationalize advanced AI models and algorithms securely and at scale.

Figure 1: Azure Government OpenAI access reference architecture.

Protecting your data, privacy, and security​

Microsoft Azure Government provides the stringent security and compliance standards necessary to meet government requirements for sensitive data. Through this architecture, government applications and data environments remain in Azure Government. Only the queries submitted to the Azure OpenAI Service transit into the Azure OpenAI model in the commercial environment over an encrypted network, and they do not remain in the commercial environment. Government data is not used to learn about your data or to train the OpenAI model.

Microsoft allows customers who meet additional Limited access eligibility criteria and attest to specific use cases to apply to modify the Azure OpenAI content management features. If Microsoft approves a customer’s request to modify data logging, then Microsoft does not store any prompts and completions associated with the approved Azure subscription for which data logging is configured off in Azure commercial.

As part of our reference architecture, we recommend completing the approval process to modify content filters and data logging via this online form to ensure no logging data exists in Azure commercial. An example of how to modify your data logging settings is available on our Data, privacy, and security for Azure OpenAI Service website.

Microsoft responsible AI principles

When you create technologies that can change the world, we believe you must also ensure that the technology is used responsibly. That’s why we are committed to creating responsible AI by design. Our work is guided by decades of research on AI, grounding, and privacy-preserving machine learning, as well as by our Responsible AI Standard and a core set of AI principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. We put these principles into practice across the company to develop and deploy AI that will have a positive impact on society. We take a cross-company approach through cutting-edge research, best-of-breed engineering systems, and excellence in policy and governance. Additional information on the Microsoft responsible AI principles is available on the Our approach to responsible AI at Microsoft website.

Azure OpenAI Service Frequently Asked Questions

How does Microsoft recommend implementing this reference architecture? 

Have an account and subscription in Azure Government and Azure Commercial. 

Recommended steps per environment: 

Azure Commercial:

Request access to Azure OpenAI.

Request to modify content filters and data logging.

Azure Government:

Deploy your application utilizing your access to the Azure OpenAI API.

Complete the required authorizations (IATT and ATO) for customer-specific workloads.

Only utilize prompts for inferencing—do not leverage fine-tuning with Controlled Unclassified Information (CUI) data.

When will access to Azure OpenAI be available for Azure Government customers? 

Access to the Azure OpenAI Service is available to approved enterprise customers and partners through the Microsoft Azure Government environment. Customers can access the Azure OpenAI Service REST APIs on Azure Commercial from Azure Government as highlighted in the reference architecture above.

How do the capabilities of the Azure OpenAI Service compare to OpenAI? 

Azure OpenAI Service gives customers advanced language AI with OpenAI GPT-4, GPT-3, and Embeddings. The Azure OpenAI API is compatible with the OpenAI API, providing efficiencies for developers and users. With Azure OpenAI Service, customers get the benefit of the security capabilities of Microsoft Azure Government powered by OpenAI’s models.

How do you enable secure access to Azure OpenAI Service? 

Access to Azure OpenAI Service is enabled through transport-layer security (TLS). Azure Government peers directly with the commercial Microsoft Azure network and doesn’t peer directly with the public internet or the Microsoft corporate network. Your data is never used to train the OpenAI model (your data is your data).  

Getting started with Azure OpenAI Service

Government enterprise workloads can be complex and mission-critical with requirements such as high throughput, low latency, compliance, availability, and data sovereignty. Azure OpenAI Service requires registration and is only available to approved enterprise customers and partners.

Sign up here to learn how AI can accelerate your mission and stay up to date on Microsoft’s AI for government advancements.

We published an Azure Government OpenAI Access QuickStart that uses Azure CLI to deploy an isolated Docker container to Azure Container Instances in Azure Government using code from the Azure OpenAI QuickStart.

Microsoft is a Leader in the 2023 Gartner® Magic Quadrant™ for Cloud AI Developer Services

We are excited to announce that Microsoft has been recognized as a Leader for the fourth year in a row in the Gartner Magic Quadrant for Cloud AI Developer Services, and we are especially proud to be placed furthest for Completeness of Vision.

We continue to innovate with Azure AI and are committed to making Azure AI the trusted AI platform for building intelligent applications. Azure AI gives you the ability to build AI capabilities responsibly, with the flexibility to use pre-built models, customize models, or build, train, deploy, and manage your own models with Azure Machine Learning.

Our Azure AI services can help modernize your business processes with ready-made tools for specific scenarios like document processing automation, language translation, video analysis, anomaly detection, and intelligent search. We also have pre-trained foundation models, through Azure OpenAI Service (which became generally available shortly after January 1, the cutoff for the report), including ChatGPT, GPT-4, and DALL-E. With Azure AI, pre-trained models can be customized and embedded in your apps to address industry- and organization-specific needs. Using pre-trained models, you can summarize documents, classify medical imagery, build conversational interfaces into applications, analyze customer sentiment in reviews, process medical text, and build many other tailored solutions.

KPMG is a company on which banks and institutions rely to identify fraudulent transactions or other misconduct by financial traders. To help banks detect undesirable activity, KPMG created Magna, a risk analytics solution built using AI capabilities from Azure AI for Speech, Language, and Translator. It consolidates growing volumes of unstructured data from email, phone calls, and chats to identify potential risks quickly, reducing the time to issue an alert from 30 days down to 2 days. Separately, the KPMG global tax group is using Azure OpenAI Service as a foundational layer for building use cases on top of generative AI. They are incorporating it into KPMG’s Digital Gateway, initially focused on helping companies more efficiently identify and classify tax data that can be applied to ESG Taxes. Azure OpenAI Service is helping KPMG assess data relationships to pull and predict the right tax data and type, reducing risk factors and increasing confidence in making tax contributions public.  

Reddit, a popular online platform where users can create and join communities based on their interests and share various types of content, is using Azure AI pre-built models to improve the user experience on their platform. Millions of images get shared across Reddit’s communities, making it harder for users who use screen readers or have low-bandwidth internet connections to fully engage. To make its content more accessible and discoverable, Reddit decided all images needed captions as alternative text. Reddit chose Azure Cognitive Services for Vision to automatically generate the captions for images on its platform. Using our pre-trained models, Reddit was able to build this project quickly and without machine-learning engineering support. 

For those who want to build their own models, we’re making the machine learning model development process more accessible with automated machine learning (AutoML) capabilities in our Azure Machine Learning platform. AutoML features can help train and tune a model based on provided target metrics, iterating hundreds of times to produce a model with the highest training score that fits a dataset. Our partnership with DataRobot further increases the accessibility of our machine learning platform. With DataRobot, users can interact with and interpret model results and predictions directly using conversational AI. We believe that all developers and organizations, no matter their data science expertise, can build, deploy, and manage AI models with confidence. That’s why we’ve also built in the responsible AI dashboard, which monitors model performance for errors, fairness, and bias.
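
For a flavor of what AutoML looks like in code, here is a minimal sketch using the Azure Machine Learning Python SDK v2 (azure-ai-ml); the workspace coordinates, compute name, data asset, and column names are placeholder assumptions.

from azure.ai.ml import MLClient, Input, automl
from azure.identity import DefaultAzureCredential

# Placeholder workspace coordinates
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Define an AutoML classification job; AutoML sweeps models and
# hyperparameters to maximize the primary metric
classification_job = automl.classification(
    compute="cpu-cluster",                       # existing compute target
    experiment_name="churn-automl",
    training_data=Input(type="mltable", path="azureml:churn-data:1"),
    target_column_name="churned",
    primary_metric="accuracy",
    n_cross_validations=5,
)
classification_job.set_limits(timeout_minutes=60, max_trials=20)

returned_job = ml_client.jobs.create_or_update(classification_job)
print(returned_job.studio_url)  # follow progress in Azure ML studio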

Another customer, Broward College, is using Azure Machine Learning and has embedded responsible AI features to better support its diverse student body. Broward is harnessing its data to understand student pathways with the goal of increasing its student retention rate. Using Azure Machine Learning and responsible AI, the Broward team identified five key predictors of student attrition, leading to data-driven, actionable strategies to help more students transform their lives and reach their goals. 

Another way we are supporting our customers and developers is through collaboration and partnerships. On the Azure AI platform, you can easily access sophisticated AI models from companies like Databricks, Hugging Face, and OpenAI, all backed by Azure AI’s infrastructure and enterprise-grade safety, security, and privacy. We’re continuing to expand Azure OpenAI Service through our collaboration with OpenAI. Inside our Azure AI Studio you can run powerful AI models on your own data, and easily use the models in your own applications with plugins. We’re also embedding OpenAI models into our other Azure AI services, like Cognitive Search, Vision, Speech, and Language.

Azure AI is more than just a cloud platform for building and deploying AI solutions. It is a comprehensive AI ecosystem that empowers developers, organizations, and customers to create transformative intelligent applications. Whether you need ready-to-integrate AI services, customizable models, or machine learning platform capabilities, Azure AI has you covered. With our extensive AI portfolio, aligned to Microsoft’s Responsible AI Standard, you can ensure that your AI solutions are accessible, inclusive, and fair. Azure AI is a platform that helps you tackle real world challenges today and build with the future in mind.

Get the latest news on Azure AI products, features, and updates from Microsoft Build 2023. 

Get your copy of the report to learn more about why Microsoft was named a Leader in the 2023 Gartner Magic Quadrant for Cloud AI Developer Services.

Gartner, Magic Quadrant for Cloud AI Developer Services, Jim Scheibmeir, Svetlana Sicular, Arun Batchu, Mike Fang, Van Baker, Frank O’Connor, 22 May 2023.  

Gartner is a registered trademark and service mark and Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved. 

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from this link. 

Gartner does not endorse any vendor, product, or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. 

Introducing Microsoft Fabric: Data analytics for the era of AI

Today’s world is awash with data—ever-streaming from the devices we use, the applications we build, and the interactions we have. Organizations across every industry have harnessed this data to digitally transform and gain competitive advantages. And now, as we enter a new era defined by AI, this data is becoming even more important.  

Generative AI and language model services, such as Azure OpenAI Service, are enabling customers to use and create everyday AI experiences that are reinventing how employees spend their time. Powering organization-specific AI experiences requires a constant supply of clean data from a well-managed and highly integrated analytics system. But most organizations’ analytics systems are a labyrinth of specialized and disconnected services.  

And it’s no wonder given the massively fragmented data and AI technology market with hundreds of vendors and thousands of services. Customers must stitch together a complex set of disconnected services from multiple vendors themselves and incur the costs and burdens of making these services function together. 

Introducing Microsoft Fabric 

Today we are unveiling Microsoft Fabric—an end-to-end, unified analytics platform that brings together all the data and analytics tools that organizations need. Fabric integrates technologies like Azure Data Factory, Azure Synapse Analytics, and Power BI into a single unified product, empowering data and business professionals alike to unlock the potential of their data and lay the foundation for the era of AI. 


What sets Microsoft Fabric apart? 

Fabric is an end-to-end analytics product that addresses every aspect of an organization’s analytics needs. But there are five areas that really set Fabric apart from the rest of the market:

1. Fabric is a complete analytics platform 

Every analytics project has multiple subsystems. Every subsystem needs a different array of capabilities, often requiring products from multiple vendors. Integrating these products can be a complex, fragile, and expensive endeavor.  

With Fabric, customers can use a single product with a unified experience and architecture that provides all the capabilities required for a developer to extract insights from data and present it to the business user. And by delivering the experience as software as a service (SaaS), everything is automatically integrated and optimized, and users can sign up within seconds and get real business value within minutes.  

Fabric empowers every team in the analytics process with the role-specific experiences they need, so data engineers, data warehousing professionals, data scientists, data analysts, and business users feel right at home.  

Fabric comes with seven core workloads: 

Data Factory (preview) provides more than 150 connectors to cloud and on-premises data sources, drag-and-drop experiences for data transformation, and the ability to orchestrate data pipelines.

Synapse Data Engineering (preview) enables great authoring experiences for Spark, instant start with live pools, and the ability to collaborate.

Synapse Data Science (preview) provides an end-to-end workflow for data scientists to build sophisticated AI models, collaborate easily, and train, deploy, and manage machine learning models. 

Synapse Data Warehousing (preview) provides a converged lake house and data warehouse experience with industry-leading SQL performance on open data formats.

Synapse Real-Time Analytics (preview) enables developers to work with data streaming in from the Internet of Things (IoT) devices, telemetry, logs, and more, and analyze massive volumes of semi-structured data with high performance and low latency.

Power BI in Fabric provides industry-leading visualization and AI-driven analytics that enable business analysts and business users to gain insights from data. The Power BI experience is also deeply integrated into Microsoft 365, providing relevant insights where business users already work.  

Data Activator (coming soon) provides real-time detection and monitoring of data and can trigger notifications and actions when it finds specified patterns in data—all in a no-code experience. 

You can try these experiences today by signing up for the Microsoft Fabric free trial. 

2. Fabric is lake-centric and open 

Today’s data lakes can be messy and complicated, making it hard for customers to create, integrate, manage, and operate data lakes. And once they are operational, multiple data products using different proprietary data formats on the same data lake can cause significant data duplication and concerns about vendor lock-in.  

OneLake—The OneDrive for data 

Fabric comes with a SaaS, multi-cloud data lake called OneLake that is built-in and automatically available to every Fabric tenant. All Fabric workloads are automatically wired into OneLake, just like all Microsoft 365 applications are wired into OneDrive. Data is organized in an intuitive data hub, and automatically indexed for discovery, sharing, governance, and compliance.  

OneLake serves developers, business analysts, and business users alike, helping eliminate pervasive and chaotic data silos created by different developers provisioning and configuring their own isolated storage accounts. Instead, OneLake provides a single, unified storage system for all developers, where discovery and sharing of data are easy with policy and security settings enforced centrally. At the API layer, OneLake is built on and fully compatible with Azure Data Lake Storage Gen2 (ADLSg2), instantly tapping into ADLSg2’s vast ecosystem of applications, tools, and developers.  

A key capability of OneLake is “Shortcuts.” OneLake allows easy sharing of data between users and applications without having to move and duplicate information unnecessarily. Shortcuts allow OneLake to virtualize data lake storage in ADLSg2, Amazon Simple Storage Service (Amazon S3), and Google Storage (coming soon), enabling developers to compose and analyze data across clouds. 

Open data formats across analytics offerings 

Fabric is deeply committed to open data formats across all its workloads and tiers. Fabric treats Delta on top of Parquet files as a native data format that is the default for all workloads. This deep commitment to a common open data format means that customers need to load the data into the lake only once and all the workloads can operate on the same data, without having to separately ingest it. It also means that OneLake supports structured data of any format and unstructured data, giving customers total flexibility.  

By adopting OneLake as our store and Delta and Parquet as the common format for all workloads, we offer customers a data stack that’s unified at the most fundamental level. Customers do not need to maintain different copies of data for databases, data lakes, data warehousing, business intelligence, or real-time analytics. Instead, a single copy of the data in OneLake can directly power all the workloads.  
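
As a small illustration of that single-copy idea, the sketch below reads a Delta table from OneLake with Spark (for example, in a Fabric notebook); the OneLake path, lakehouse, and table names are placeholder assumptions.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Placeholder OneLake path; OneLake exposes an ADLS Gen2-compatible endpoint
path = ("abfss://<workspace>@onelake.dfs.fabric.microsoft.com/"
        "<lakehouse>.Lakehouse/Tables/sales")

# Every Fabric workload reads and writes Delta over Parquet, so the same
# physical table can serve Spark, SQL, and Power BI without extra copies
sales = spark.read.format("delta").load(path)
sales.groupBy("region").sum("amount").show()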

Managing data security (table, column, and row levels) across different data engines can be a persistent nightmare for customers. Fabric will provide a universal security model that is managed in OneLake, and all engines enforce it uniformly as they process queries and jobs. This model is coming soon.  

3. Fabric is powered by AI  

We are infusing Fabric with Azure OpenAI Service at every layer to help customers unlock the full potential of their data, enabling developers to leverage the power of generative AI against their data and assisting business users to find insights in their data. With Copilot in Microsoft Fabric in every data experience, users can use conversational language to create dataflows and data pipelines, generate code and entire functions, build machine learning models, or visualize results. Customers can even create their own conversational language experiences that combine Azure OpenAI Service models and their data and publish them as plug-ins.   

Copilot in Microsoft Fabric builds on our existing commitments to data security and privacy in the enterprise. Copilot inherits an organization’s security, compliance, and privacy policies. Microsoft does not use organizations’ tenant data to train the base language models that power Copilot. 

Copilot in Microsoft Fabric will be coming soon. Stay tuned to the Microsoft Fabric blog for the latest updates and public release date for Copilot in Microsoft Fabric.  

4. Fabric empowers every business user 

Customers aspire to drive a data culture where everyone in their organization is making better decisions based on data. To help our customers foster this culture, Fabric deeply integrates with the Microsoft 365 applications people use every day.  

Power BI is a core part of Fabric and is already infused across Microsoft 365. Through Power BI’s deep integrations with popular applications such as Excel, Microsoft Teams, PowerPoint, and SharePoint, relevant data from OneLake is easily discoverable and accessible to users right from Microsoft 365—helping customers drive more value from their data.

With Fabric, you can turn your Microsoft 365 apps into hubs for uncovering and applying insights. For example, users in Microsoft Excel can directly discover and analyze data in OneLake and generate a Power BI report with a click of a button. In Teams, users can infuse data into their everyday work with embedded channels, chat, and meeting experiences. Business users can bring data into their presentations by embedding live Power BI reports directly in Microsoft PowerPoint. Power BI is also natively integrated with SharePoint, enabling easy sharing and dissemination of insights. And with Microsoft Graph Data Connect (preview), Microsoft 365 data is natively integrated into OneLake so customers can unlock insights on their customer relationships, business processes, security and compliance, and people productivity.  

5. Fabric reduces costs through unified capacities 

Today’s analytics systems typically combine products from multiple vendors in a single project. This results in computing capacity provisioned in multiple systems like data integration, data engineering, data warehousing, and business intelligence. When one of the systems is idle, its capacity cannot be used by another system, causing significant waste.

Purchasing and managing resources is massively simplified with Fabric. Customers can purchase a single pool of computing that powers all Fabric workloads. With this all-inclusive approach, customers can create solutions that leverage all workloads freely without any friction in their experience or commerce. The universal compute capacities significantly reduce costs, as any unused compute capacity in one workload can be utilized by any of the workloads. 

Explore how our customers are already using Microsoft Fabric  

Ferguson 

Ferguson is a leading distributor of plumbing, HVAC, and waterworks supplies, operating across North America. By using Fabric to consolidate its analytics stack into a unified solution, Ferguson hopes to reduce delivery time and improve efficiency.

“Microsoft Fabric reduces the delivery time by removing the overhead of using multiple disparate services. By consolidating the necessary data provisioning, transformation, modeling, and analysis services into one UI, the time from raw data to business intelligence is significantly reduced. Fabric meaningfully impacts Ferguson’s data storage, engineering, and analytics groups since all these workloads can now be done in the same UI for faster delivery of insights.”
—George Rasco, Principal Database Architect, Ferguson


T-Mobile 

T-Mobile, one of the largest providers of wireless communications services in the United States, is focused on driving disruption that creates innovation and better customer experiences in wireless and beyond. With Fabric, T-Mobile hopes they can take their platform and data-driven decision-making to the next level. 

“T-Mobile loves our customers and providing them with new Un-Carrier benefits! We think that Fabric’s upcoming abilities will help us eliminate data silos, making it easier for us to unlock new insights into how we show our customers even more love. Querying across the lakehouse and warehouse from a single engine—that’s a game changer. Spark compute on-demand, rather than waiting for clusters to spin up, is a huge improvement for both standard data engineering and advanced analytics. It saves three minutes on every job, and when you’re running thousands of jobs an hour, that really adds up. And being able to easily share datasets across the company is going to eliminate so much data duplication. We’re really looking forward to these new features.”
—Geoffrey Freeman, MTS, Data Solutions and Analytics, T-Mobile

Aon  

Aon provides professional services and management consulting services to a vast global network of customers. With the help of Fabric, Aon hopes that they can consolidate more of their current technology stack and focus on adding more value to their clients. 

“What’s most exciting to me about Fabric is simplifying our existing analytics stack. Currently, there are so many different PaaS services across the board that when it comes to modernization efforts for many developers, Fabric helps simplify that. We can now spend less time building infrastructure and more time adding value to our business.”   
—Boby Azarbod, Data Services Lead, Aon

What happens to current Microsoft analytics solutions? 

Existing Microsoft products such as Azure Synapse Analytics, Azure Data Factory, and Azure Data Explorer will continue to provide a robust, enterprise-grade platform as a service (PaaS) solution for data analytics. Fabric represents an evolution of those offerings in the form of a simplified SaaS solution that can connect to existing PaaS offerings. Customers will be able to upgrade from their current products into Fabric at their own pace.  

Get started with Microsoft Fabric

Microsoft Fabric is currently in preview. Try out everything Fabric has to offer by signing up for the free trial—no credit card information is required. Everyone who signs up gets a fixed Fabric trial capacity, which may be used for any feature or capability from integrating data to creating machine learning models. Existing Power BI Premium customers can simply turn on Fabric through the Power BI admin portal. After July 1, 2023, Fabric will be enabled for all Power BI tenants. 

Microsoft Fabric resources 

If you want to learn more about Microsoft Fabric, consider:  

Signing up for the Microsoft Fabric free trial.

Visiting the Microsoft Fabric website.

Reading the more in-depth Fabric experience announcement blogs: 

Data Factory experience in Fabric blog

Synapse Data Engineering experience in Fabric blog

Synapse Data Science experience in Fabric blog

Synapse Data Warehousing experience in Fabric blog

Synapse Real-Time Analytics experience in Fabric blog

Power BI announcement blog

Data Activator experience in Fabric blog

Administration and governance in Fabric blog

OneLake in Fabric blog

Fabric event streams blog

Microsoft 365 data integration in Fabric blog

Dataverse and Microsoft Fabric integration blog

Exploring the Fabric technical documentation.

Reading the free e-book on getting started with Fabric. 

Exploring Fabric through the Guided Tour.

Joining the Fabric community to post your questions, share your feedback, and learn from others. 


Microsoft Cost Management updates—May 2023

Whether you’re a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you’re spending, where it’s being spent, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Cost Management comes in.

We’re always looking for ways to learn more about your challenges and how Microsoft Cost Management can help you better understand where you’re accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Customize the lookback period for virtual machine right-sizing recommendations.

Updates for Azure.com pricing experiences.

Automate cost savings with Azure Resource Graph in Azure Government and Azure China.

Four cost optimization strategies with Microsoft Azure.

Help shape the future of Cost Management.

What’s new in Cost Management Labs.

New ways to save money with Microsoft Cloud.

New videos and learning opportunities.

Documentation updates.

Let’s dig into the details.

Customize the lookback period for virtual machine right-sizing recommendations

Optimization isn’t purely about cutting costs—it’s about maximizing efficiency and value with the cloud. The biggest way to drive efficiency continues to be right-sizing existing investments. Now you can customize the lookback period for virtual machine right-sizing recommendations in Azure Advisor to tune recommendations even further.

You can now customize your virtual machine instance and virtual machine scale set (VMSS) recommendations based on utilization from the previous 7, 14, 21, 30, 60, or 90 days, giving you more flexibility to drive efficiency based on recent changes or longer historical patterns. To learn more, visit Optimize virtual machine (VM) or virtual machine scale set (VMSS) spend by resizing or shutting down underutilized instances.

Updates for Azure.com pricing experiences

We’ve been working hard to make some changes to our Azure pricing experiences, and we’re excited to share them with you. These changes will help make it easier for you to estimate the costs of your solutions.

We’ve added a new feature to our Virtual Machines Selector tool—”Add to Portal”. Now, with the click of a button, you can switch from discovery to deployment when exploring Azure Virtual Machines.

We have brought some notable changes to the Azure pricing experience this month. Azure pricing now supports Poland Central, and you can estimate your costs using the Azure Savings Plan in the Azure Kubernetes Service and Azure Virtual Desktop pricing calculators. Additionally, thanks to your feedback, we’ve added a new FAQ about exchange rates to our pricing FAQs.

We’ve released several new updates at Microsoft Build 2023 that we’re excited about! We’ve introduced a new Hyperscale service tier in Elastic Pools for Azure SQL Database, Azure Deployment Environments is now generally available, and Azure Spring Apps now includes new Dedicated plan pricing.

On top of all that, we have also introduced new pricing offers for various services, including: Virtual Machines (new HX and HBv4 series), Computer Vision (pricing for Project Florence), Block Blob Storage (a new optimized Cold access tier), Azure NetApp Files (a new capacity pools capability), Azure Firewall (a new Basic tier for Secured Virtual Hub), API Management (estimation for workspaces added to the pricing calculator), Container Apps (a new dedicated plan), and a new service, Azure Container Storage.

We hope that these changes will streamline your workflow and help you accurately estimate the cost of your solutions in Azure. Please feel free to leave us feedback or make suggestions for future pricing improvements—we’re always eager to hear your thoughts!

Automate cost savings with Azure Resource Graph in Azure Government and Azure China

You already know Azure Advisor helps you reduce and optimize costs without sacrificing quality. And you may already be familiar with the Azure Advisor APIs that enable you to integrate recommendations into your own reporting or automation. Now you can also get recommendations via Azure Resource Graph in Azure Government and Azure China.

Azure Resource Graph enables you to explore your Azure resources across subscriptions. You can use advanced filtering, grouping, and sorting based on resource properties and relationships to target specific workloads and even take that further to automate resource management and governance at scale. Now, with the addition of Azure Advisor recommendations, you can also query your cost saving recommendations.

Querying for recommendations is easy. Just open Azure Resource Graph in the Azure portal and explore the advisorresources table. Let’s say you want a summary of your potential cost savings opportunities:

advisorresources
// First, we trim down the list to only cost recommendations
| where type == 'microsoft.advisor/recommendations'
| where properties.category == 'Cost'
//
// Then we group rows...
| summarize
    // ...count the resources and add up the total savings
    resources = dcount(tostring(properties.resourceMetadata.resourceId)),
    savings = sum(todouble(properties.extendedProperties.savingsAmount))
    by
    // ...for each recommendation type (solution)
    solution = tostring(properties.shortDescription.solution),
    currency = tostring(properties.extendedProperties.savingsCurrency)
//
// And lastly, format and sort the list
| project solution, resources, savings = bin(savings, 0.01), currency
| order by savings desc

Take this one step further using Logic Apps or Azure Functions and send out weekly emails to subscription and resource group owners. Or pivot this on resource ID and set up an approval workflow to automatically delete unused resources or downsize underutilized virtual machines. The sky’s the limit! To learn more, visit Query for Advisor data in Resource Graph Explorer. 
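
To sketch that automation, the example below runs the summary query from Python so the results could feed a scheduled email or cleanup workflow; it assumes the azure-identity and azure-mgmt-resourcegraph packages, and the subscription ID is a placeholder.

from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

QUERY = """
advisorresources
| where type == 'microsoft.advisor/recommendations'
| where properties.category == 'Cost'
| summarize
    resources = dcount(tostring(properties.resourceMetadata.resourceId)),
    savings = sum(todouble(properties.extendedProperties.savingsAmount))
    by
    solution = tostring(properties.shortDescription.solution),
    currency = tostring(properties.extendedProperties.savingsCurrency)
| order by savings desc
"""

client = ResourceGraphClient(DefaultAzureCredential())
response = client.resources(
    QueryRequest(subscriptions=["<subscription-id>"], query=QUERY)
)

# response.data is a list of dicts, one row per recommendation type
for row in response.data:
    print(f"{row['solution']}: {row['resources']} resources, "
          f"{row['savings']} {row['currency']} potential savings")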

Four cost optimization strategies with Microsoft Azure

We’ve seen many businesses make significant shifts toward cloud computing in the last decade. The Microsoft Azure public cloud offers many benefits to companies, such as increased flexibility, scalability, and availability of resources. However, with the increased usage of resources, implementing best practices in cloud efficiency is a necessity to validate spending and avoid waste.

Paulo Annis explores how right-sizing, cleaning up resources, leveraging commitment-based discounts, and tuning databases and applications can help you achieve your optimization and efficiency goals in 4 cloud cost optimization strategies with Azure.

Help shape the future of Cost Management

Are you responsible for managing cost using Microsoft Cost Management and Billing? We’re exploring new capabilities to improve your experience and would love to hear from you in two 10-minute surveys about your use of and interest in AI systems and your experience with cost monitoring.

Please share these surveys with others involved in cost management and optimization, and if you’re interested in participating in future research topics, we encourage you to join our research panel.

What’s new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what’s coming in Microsoft Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

Update: Settings in the Cost analysis preview—Now available in the public portal. Get quick access to cost-impacting settings from the Cost analysis preview. You will see this by default in Labs and can enable the option from the Try preview menu.

Update: Customers view for Cloud Solution Provider (CSP) partners—Now available in the public portal. View a breakdown of costs by customer and subscription in the Cost analysis preview. Note this view is only available for CSP billing accounts and billing profiles. You will see this by default in Labs and can enable the option from the Try preview menu.

Update: Merge cost analysis menu items—Now enabled by default in Labs. Only show one cost analysis item in the Cost Management menu. All classic and saved views are one click away, making them easier than ever to find and access. You can enable this option from the Try preview menu.

Recommendations view. View a summary of cost recommendations that help you optimize your Azure resources in the cost analysis preview. You can opt in using the Try preview menu.

Forecast in the cost analysis preview. Show your forecast cost for the period at the top of the cost analysis preview. You can opt in using Try preview.

Group related resources in the cost analysis preview. Group related resources, like disks under virtual machines or web apps under App Service plans, by adding a “cm-resource-parent” tag to the child resources with a value of the parent resource ID (see the tagging sketch after this list).

Charts in the cost analysis preview. View your daily or monthly cost over time in the cost analysis preview. You can opt in using Try preview.

View cost for your resources. The cost for your resources is one click away from the resource overview in the preview portal. Just click View cost to quickly jump to the cost of that resource.

Change scope from the menu. Change scope from the menu for quicker navigation. You can opt in using Try preview.
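For the resource-grouping feature above, the cm-resource-parent tag can also be applied programmatically. Below is a minimal Python sketch that merges the tag onto a child resource through the Azure Resource Manager Tags API; the resource IDs are hypothetical and the api-version is an assumption, so adjust both for your environment:

import requests
from azure.identity import DefaultAzureCredential

# Hypothetical IDs: tag a data disk as a child of its virtual machine.
PARENT_ID = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/vm1"
CHILD_ID = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/vm1-datadisk"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# PATCH with operation=Merge adds the tag without touching existing tags.
resp = requests.patch(
    f"https://management.azure.com{CHILD_ID}/providers/Microsoft.Resources/tags/default",
    params={"api-version": "2021-04-01"},  # assumed; any recent Tags API version should work
    headers={"Authorization": f"Bearer {token}"},
    json={"operation": "Merge", "properties": {"tags": {"cm-resource-parent": PARENT_ID}}},
)
resp.raise_for_status()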

Of course, that’s not all. Every change in Microsoft Cost Management is available in Cost Management Labs a week before it’s in the full Azure portal or Microsoft 365 admin center. We’re eager to hear your thoughts and understand what you’d like to see next. What are you waiting for? Try Cost Management Labs today.

New ways to save money in the Microsoft Cloud

Six new and updated offers to help you save:

General availability: Ebsv5 and Ebdsv5 NVMe-enabled VM sizes.

General availability: Serverless SQL for Azure Databricks.

Preview: Azure Cold Storage.

Preview: Palo Alto Networks SaaS Cloud NGFW Integration with Virtual WAN.

Preview: Cloud Next-Generation Firewall (NGFW) Palo Alto Networks—an Azure Native ISV Service.

Preview: DCesv5 and ECesv5-series Confidential VMs with Intel TDX.

New videos and learning opportunities

Lots of videos helping you manage and optimize costs this month:

Block storage options with Azure Disk Storage and Elastic SAN (11 minutes).

Azure Backup for SAP HANA Databases on Azure VM (19 minutes).

Azure Backup for SQL Server Databases on Azure VM (19 minutes).

How to Leverage Centrally-managed Azure Hybrid Benefit to Save Money, Manage Cost and Stay Compliant (10 minutes).

Onboarding and Partner Management in the Azure Portal (4 minutes).

Managing Enrollments in the Azure Portal (5 minutes).

Managing Partner Administrators in the Azure Portal (4 minutes).

Managing Markup in the Azure Portal (3 minutes).

Managing Purchase Order (PO) Number in the Azure portal (3 minutes).

Follow the Microsoft Cost Management YouTube channel to stay in the loop with new videos as they’re released and let us know what you’d like to see next.

Want a more guided experience? Start with Control Azure spending and manage bills with Microsoft Cost Management.

Documentation updates

Here are a few documentation updates you might be interested in:

New: Copy billing roles from one MCA to another MCA across tenants with a script.

New: Reservation utilization alerts.

New: EA billing administration for partners in the Azure portal.

Updated: Azure EA agreements and amendments.

Updated: SQL IaaS extension registration options for Cost Management administrators.

Updated: Tutorial – Optimize centrally managed Azure Hybrid Benefit for SQL Server.

15 updates based on your feedback.

Want to keep an eye on all documentation updates? Check out the Cost Management and Billing documentation change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request. You can also submit a GitHub issue. We welcome and appreciate all contributions!

What’s next?

These are just a few of the big updates from last month. Don’t forget to check out the previous Microsoft Cost Management updates. We’re always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @MSCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. You can also share ideas and vote up others in the Cost Management feedback forum or join the research panel to participate in a future study and help shape the future of Microsoft Cost Management.

We know these are trying times for everyone. Best wishes from the Microsoft Cost Management team. Stay safe and stay healthy.

Increase gaming performance with NGads V620-series virtual machines

Gaming customers across the world tend to look for the same critical components when choosing their playing environment: Performance, Affordability, and Timely Content. And for gaming in the cloud, there’s a fourth: Reliability.

With these clear guidelines in mind, we are excited to announce the public preview of our new NGads V620-series virtual machines (VMs). This VM series has GPU, CPU, and memory resources balanced to generate and stream high-quality graphics for a performant, interactive gaming experience hosted on Microsoft Azure. The new NGads instances give online gaming providers the power and stability that they need, at an affordable price.   

The NGads V620-series consists of GPU-enabled virtual machines powered by AMD Radeon PRO V620 GPUs and AMD EPYC 7763 CPUs. The AMD Radeon PRO V620 GPUs have a maximum frame buffer of 32 GB, which can be divided up to four ways through hardware partitioning, or by providing multiple users with access to shared, session-based operating systems such as Windows Server 2022 or Windows 11 EMS. The AMD EPYC CPUs have a base clock speed of 2.45 GHz and a boost speed of 3.5 GHz.¹ VMs are assigned full cores instead of threads, enabling full access to AMD’s powerful Zen 3 cores.

NGads instances come in four sizes, allowing customers to right-size their gaming environments for the performance and cost that best fits their business needs.

The two smallest instances rely on industry-standard SR-IOV technology to partition the GPUs into one-fourth and one-half instances, enabling customers to run workloads with no interference or security concerns between users sharing the same physical graphics card.

The VMs also feature AMD Software Cloud Edition, which delivers the same optimizations as the consumer gaming version of the Adrenalin driver but is further tested and optimized for the cloud environment.

Instance Config | vCPU (Physical Cores) | GPU Memory (GiB) | GPU Partition Size | Memory (GiB) | Azure Network (Gbps)
Standard_NG8ads_V620_v1 | 8 | 8 | ¼ GPU | 16 | 10
Standard_NG16ads_V620_v1 | 16 | 16 | ½ GPU | 32 | 20
Standard_NG32ads_V620_v1 | 32 | 32 | 1x GPU | 64 | 40
Standard_NG32adms_V620_v1 | 32 | 32 | 1x GPU | 176 | 40
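As a concrete illustration of picking a size from this table, the sketch below provisions the quarter-GPU Standard_NG8ads_V620_v1 size with the azure-mgmt-compute Python SDK. The resource group, VM name, NIC ID, and credentials are all placeholders, and a real deployment would need the usual supporting resources (virtual network, NIC, and so on) created first:

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create a Windows VM on the smallest (quarter-GPU) NGads V620 size.
poller = compute.virtual_machines.begin_create_or_update(
    "game-rg",          # hypothetical resource group
    "ng8-game-host",    # hypothetical VM name
    {
        "location": "eastus2",
        "hardware_profile": {"vm_size": "Standard_NG8ads_V620_v1"},
        "storage_profile": {
            "image_reference": {
                "publisher": "MicrosoftWindowsServer",
                "offer": "WindowsServer",
                "sku": "2022-datacenter-azure-edition",
                "version": "latest",
            }
        },
        "os_profile": {
            "computer_name": "ng8-game-host",
            "admin_username": "azureuser",
            "admin_password": "<password>",
        },
        "network_profile": {
            "network_interfaces": [{"id": "<existing-nic-resource-id>"}]
        },
    },
)
print(poller.result().provisioning_state)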

The NGads V620-series VMs will support a new AMD Cloud Software driver that comes in two editions: A Gaming driver with regular updates to support the latest titles, as well as a Professional driver for accelerated Virtual Desktop environments, with Radeon PRO optimizations to support high-end workstation applications.

Microsoft Azure, do more with less

Deployment in Azure enables gaming and desktop providers to take advantage of the infrastructure investments Microsoft has put in place in data centers across the world. This gives our customers the ability to pay only for what they use. They can depend on an infrastructure framework that is constantly kept up to date with highly reliable uptime. They can innovate faster to differentiate their offerings and provide their own customers with a richer experience. As their business needs expand, they can benefit from the economies of scale available from Azure. In addition, customers can build a more complete and robust solution through integration with the broad range of cloud services for storage, networking, and application management available as part of the Azure offerings.

Flexible workloads, flexible costs

High-performance GPU-accelerated workloads have always ranged from workstation design apps to VDI and simulation rendering. Each of these has the potential to tax even powerful graphics boards. Gaming workloads bring the additional challenges of requiring very fast graphics remoting—the interactive transfer of graphics and user controls over the internet. Further, there is a wide variety of games, connection types, and resolutions available to the user.

The NGads V620-series helps resolve these challenges by providing support for a range of visualization applications so that gaming or desktop service providers can optimize for precisely the experiences expected by the end users. Service provider customers can choose the right-sized VM that will best serve their needs without over-allocating resources. As the needs of their offering change, the common software support across VMs allows service providers to shift to a VM size with either a higher or lower GPU partition, or to shift capacity to other regions of the world as their business footprint expands.

Performance powered by AMD GPU and CPU

The NGads V620-series combines AMD Radeon™ GPU and Epyc™ CPU technology to provide a powerful and well-balanced environment for hosting rich and highly-interactive cloud services. 

The AMD Radeon PRO V620 GPU is based on AMD’s RDNA™ 2 Architecture, AMD Software, and AMD Graphics Virtualization technology. 

Each AMD Radeon PRO V620 GPU is equipped with 32 GB of dedicated GDDR6 memory, a 256-bit memory interface with up to 512 GB/s of bandwidth, and ECC support for data correction. To enhance the user experience, they are designed with hardware ray tracing using 72 Ray Accelerators, 4,608 Stream Processors, and a peak engine clock of 2,200 MHz.

The AMD software supports the DirectX® 12.0, OpenGL® 4.6, OpenCL™ 2.2, and Vulkan® 1.1 APIs for broad compatibility with gaming and graphics applications. This enables the NG series VMs to support a very broad range of workloads, from cloud gaming and GPU-enhanced VDI to GPU-intensive Workstation-as-a-Service solutions.

The NGads V620-series uses GPU partitioning to virtualize the GPU and provide partitions of the full 32 GB memory (1x GPU), 16 GB (one-half GPU), or 8 GB (one-fourth GPU). Azure GPU partitioning is based on the PCIe-standard SR-IOV extension, which provides a highly predictable and secure method to host multiple independent user environments on the same hardware GPU board.

The AMD EPYC 7763 CPU is built on the 7nm process technology, featuring AMD Zen 3 cores, Infinity Architecture, and the AMD Infinity Guard suite of security features. The AMD EPYC CPUs have a base clock speed of 2.45 GHz and a boost clock speed of 3.5 GHz to allow the user to take advantage of a single powerful core when required by the application.

Learn more about NGads V620-series

Customers can sign up for the NGads V620-series preview today. NGads V620-series VMs are initially available in the East US 2, West Europe, and West US 3 Azure regions.

Footnotes

¹ EPYC-018: Max boost for AMD EPYC processors is the maximum frequency achievable by any single core on the processor under normal operating conditions for server systems.

Azure Virtual WAN now supports full mesh secure hub connectivity

In May 2023, we announced the general availability of Routing Intent and routing policies for all Virtual WAN customers. This feature is powered by the Virtual WAN routing infrastructure and enables Azure Firewall customers to set up policies for private and internet traffic. We are also extending the same routing capabilities to all firewall solutions deployed within Azure Virtual WAN, including Network Virtual Appliances and software-as-a-service (SaaS) solutions that provide firewall capabilities.

Routing Intent also completes two secured hub use cases: users can secure traffic between Virtual WAN hubs, and they can inspect traffic between different on-premises networks (branch, ExpressRoute, or SD-WAN) that transits through Virtual WAN hubs.

Azure Virtual WAN (vWAN), networking-as-a-service, brings networking, security, and routing functionalities together to simplify networking in Azure. With ease of use and simplicity built in, vWAN is a one-stop shop to connect, protect, route traffic, and monitor your wide area network.

In this blog, we will first describe routing intent use cases and product experiences, then summarize with some additional considerations and resources for using routing intent with Virtual WAN.

Use cases for Virtual WAN

You can use Routing Intent to engineer traffic within Virtual WAN in multiple ways. Here are the main use cases:

Apply routing policies for Virtual Networks and on-premises

Customers implementing hub-and-spoke network architectures with large numbers of routes often find their networks hard to understand, maintain, and troubleshoot. In Virtual WAN, these routes can be simplified for traffic between Azure Virtual Networks and on-premises networks (ExpressRoute, VPN, and SD-WAN).

Virtual WAN makes this easier by allowing customers to configure simple, declarative private routing policies. It is assumed that private routing policies will be applied for all Azure Virtual Networks and on-premises networks connected to Virtual WAN. Further customizations for Virtual Network and on-premises prefixes are currently not supported. Private routing policies instruct Virtual WAN to program the underlying Virtual WAN routing infrastructure to enable transit between two different on-premises networks (1) via a security solution deployed in the Virtual Hub. They also enable traffic transiting between two Azure Virtual Networks (2) or between an Azure Virtual Network and an on-premises endpoint (3) via a security solution deployed in the Virtual Hub. The same traffic use cases are supported for Azure Firewall, Network Virtual Appliances, and software-as-a-service solutions deployed in the hub.

Figure 1: Diagram of a Virtual Hub showing sample private traffic flows (between on-premises and Azure).

Apply routing policies for internet traffic

Virtual WAN lets you set up routing policies for internet traffic in order to advertise a default (0.0.0.0/0) route to your Azure Virtual Networks and on-premises networks. Internet traffic routing configurations allow you to configure Azure Virtual Networks and on-premises networks to send internet outbound traffic (1) to security appliances in the hub. You can also leverage the Destination Network Address Translation (DNAT) features of your security appliance if you want to provide external users access to applications in an Azure Virtual Network or on-premises (2).

Figure 2: Diagram of a Virtual Hub showing internet outbound and inbound DNAT traffic flows.

Apply routing policies for inter-hub cross-region traffic

Virtual WAN automatically deploys all Virtual Hubs across your Virtual WAN in a full mesh, providing zero-touch any-to-any connectivity region-to-region and hub-to-hub using the Microsoft global backbone. Routing policies program Virtual WAN to inspect inter-hub and inter-region traffic between two Azure Virtual Networks (1), between two on-premises networks (2), and between Azure Virtual Networks and on-premises networks (3) connected to different hubs. Every packet entering or leaving the hub is routed to the security solution deployed in the Virtual Hub before being routed to its final destination.

Figure 3: Diagram of inter-region and inter-hub traffic flows inspected by security solutions in the hub.

User experience for routing intent

To use routing intent, navigate to your Virtual WAN hub. Under Routing, select Routing Intent and routing policies.

Configure an Internet or Private Routing Policy to send traffic to a security solution deployed in the hub by selecting the next hop type (Azure Firewall, Network Virtual Appliance, or SaaS solution) and corresponding next hop resource.

Figure 4: Example configuration of routing intent with both Private and Internet routing policy in Virtual WAN Portal.
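For teams automating hub configuration, routing intent can also be set through the Azure Resource Manager REST API rather than the portal. The sketch below is a minimal Python example that writes both a private and an internet routing policy to a hub's routingIntent resource; the subscription, resource group, hub, and firewall names are placeholders, and the api-version is an assumption to verify against current documentation:

import requests
from azure.identity import DefaultAzureCredential

SUB, RG, HUB = "<subscription-id>", "<resource-group>", "<virtual-hub-name>"
FIREWALL_ID = (f"/subscriptions/{SUB}/resourceGroups/{RG}"
               "/providers/Microsoft.Network/azureFirewalls/<firewall-name>")

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# Send both private and internet traffic to the firewall deployed in the hub.
body = {
    "properties": {
        "routingPolicies": [
            {"name": "PrivateTraffic", "destinations": ["PrivateTraffic"], "nextHop": FIREWALL_ID},
            {"name": "InternetTraffic", "destinations": ["Internet"], "nextHop": FIREWALL_ID},
        ]
    }
}

resp = requests.put(
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.Network/virtualHubs/{HUB}/routingIntent/hubRoutingIntent",
    params={"api-version": "2023-02-01"},  # assumed version
    headers={"Authorization": f"Bearer {token}"},
    json=body,
)
resp.raise_for_status()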

Azure Firewall customers can also configure routing intent using Azure Firewall Manager by enabling the ‘inter-hub’ setting.

Figure 5: Enabling Routing Intent through Azure Firewall Manager.

After configuring routing intent, you can view the effective routes of the security solution: navigate to your Virtual Hub, select Routing, and click Effective Routes. The effective routes of the security solution provide additional visibility to troubleshoot how Virtual WAN routes traffic that has been inspected by the Virtual Hub’s security solution.

Figure 6: View of getting the effective routes on a security solution deployed in the hub.

Before you get started with this feature, here are some key considerations:

The feature caters to users that consider Virtual Network and on-premises traffic as private traffic. Virtual WAN applies private routing policies to all Virtual Networks and on-premises traffic.

Routing intent is mutually exclusive with custom routing and with static routes in the defaultRouteTable pointing to a Network Virtual Appliance (NVA) deployed in a Virtual Network spoke connected to Virtual WAN. As a result, custom route table and NVA-in-spoke scenarios are not supported with routing intent.

Routing Intent advertises prefixes corresponding to all connections to Virtual WAN towards on-premises networks. Users may use Route Maps to summarize and aggregate routes and filter based on defined match conditions.

Learn more about Azure Virtual WAN

We look forward to continuing to build out Azure Virtual WAN and adding more capabilities in the future. We encourage you to try out the Routing Intent feature in Azure Virtual WAN and to share your experiences so we can incorporate your feedback into the product.

How to configure Virtual WAN Hub routing policies

What’s new in Azure Virtual WAN?

Tutorial: Secure your virtual hub using Azure Firewall Manager

Fortinet Next-Generation Firewall

Check Point Cloud Guard for Virtual WAN

Install Palo Alto Networks Cloud NGFW in a Virtual WAN hub


Explore the latest features for Datadog—An Azure Native ISV Service

Datadog—An Azure Native ISV Service, which brings the power of Datadog’s observability capabilities to Azure, has been generally available since 2021. The natively integrated service allows you to monitor and diagnose issues with your Azure resources by automatically sending logs and metrics to your Datadog organization.

The service is easy to provision and manage, like any other Azure resource, using the Azure Portal, Azure Command-Line Interface (CLI), software development kits (SDKs), and more. You do not need any custom code or connectors to start viewing your logs and metrics on the Datadog portal.

The service has continued to grow and has been well adopted by our joint customers. It is developed and managed by Microsoft and Datadog, and based on your feedback, we continue to invest in deeper integrations to make the experience smoother for you. Here are some of the top features shipped recently that we would like to highlight:

Monitor multiple subscriptions with a single Datadog Resource

We are excited to announce a scalable multi-subscription monitoring capability that allows you to configure monitoring for all your subscriptions through a single Datadog resource. This simplifies the process of monitoring numerous subscriptions, as you no longer need to set up a separate Datadog resource in every subscription you wish to monitor.

To start monitoring multiple subscriptions through a single “Datadog—An Azure Native ISV Service” resource, click on the Monitored Subscriptions blade under the Datadog organization configurations section.

The subscription in which the Datadog resource is created is monitored by default. To include additional subscriptions, click on the “Add subscriptions” button and on the window that opens, select the subscriptions that you want to monitor using the same resource.

We recommend deleting redundant Datadog resources linked to the same organization and consolidating multiple subscriptions into a single Datadog resource wherever possible. This would help avoid duplicate data flow and issues like throttling. For example, in the image shown below, there is a resource named DatadogLinkingTest linked to the same organization in one of the subscriptions. You should ideally delete the resource before proceeding to add the subscription.

Click on Add to include the chosen subscriptions to the list of subscriptions being monitored through the Datadog resource.

The set of tag rules for metrics and logs defined for the Datadog resource apply to all subscriptions that are added for monitoring. If you wish to reconfigure the tag rules at any point, check Reconfigure rules for metrics and logs.

And now you are done. Go to the “Monitored Resources” blade in your Datadog resource and filter the subscription of your choice to check the status of logs and metrics being sent to Datadog for the resources in that subscription.

Likewise, the agent management experience for App Services and virtual machines (VMs) now spans multiple subscriptions.

Check out Monitor virtual machines using the Datadog agent and Monitor App Services using the Datadog agent as an extension.

If at any point you wish to stop monitoring resources in a subscription via the Datadog resource, you can remove the subscription from the Monitored subscriptions list. In the Monitored Subscriptions blade, choose the subscription you no longer wish to monitor and click on “Remove subscriptions”. The default subscription (the one in which the Datadog resource is created) can’t be removed.

Log forwarder

The automatic log forwarding capability available out of the box with Datadog’s native integration on Azure eliminates time-consuming steps that would otherwise require you to set up additional infrastructure and write custom code.

We are constantly working to support all resource categories on Azure Monitor to ship logs to Datadog. For customers who have set up monitoring tag rules in an Azure subscription, new resource types and categories are automatically enrolled for sending logs, without the need for customers to manually make any changes. As of today, the native integration on Azure supports logs from 126 resource types flowing to Datadog.
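Those monitoring tag rules live on the Datadog monitor resource itself and can be managed programmatically as well as through the portal. Below is a minimal Python sketch that writes a log and metric rule set via the Azure Resource Manager REST API; the monitor resource ID and the env=prod filtering tag are hypothetical, and the api-version is an assumption to check against current documentation:

import requests
from azure.identity import DefaultAzureCredential

# Hypothetical Datadog monitor resource ID.
MONITOR_ID = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Datadog/monitors/<monitor>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

rules = {
    "properties": {
        "logRules": {
            "sendSubscriptionLogs": True,
            "sendResourceLogs": True,
            # Only resources tagged env=prod ship logs; everything else is skipped.
            "filteringTags": [{"name": "env", "value": "prod", "action": "Include"}],
        },
        "metricRules": {"filteringTags": []},
    }
}

resp = requests.put(
    f"https://management.azure.com{MONITOR_ID}/tagRules/default",
    params={"api-version": "2021-03-01"},  # assumed version
    headers={"Authorization": f"Bearer {token}"},
    json=rules,
)
resp.raise_for_status()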

Cloud Security Posture Management

In the Datadog Azure Native integration, enabling Cloud Security Posture Management (CSPM) for your Azure Resources is a straightforward operation in your Datadog resource. Navigate to the Cloud Security Posture Management blade, click on the checkbox to enable CSPM and click Save. The setting can be disabled at any point.

You can learn more about Datadog’s CSPM product here. 

Mute monitor for expected virtual machine shutdowns

Imagine alerts being sent for expected VM shutdowns and waking you up in the middle of the night. Yikes! Now, with just the click of a checkbox, you can avoid scenarios where Datadog’s disaster prevention alert notifications get triggered during scheduled shutdowns. To mute the monitor for expected Azure Virtual Machine shutdowns, select the checkbox shown below in the Metrics and Logs blade.

Hope you are excited to try out all the cool features highlighted in this blog!

Next steps

If you would like to subscribe to the service, check out Datadog – An Azure Native ISV Service from Azure marketplace.

If you already use the Datadog—an Azure Native ISV Service, and have feedback or feature requests, please share below in the comments.

To learn more about the service, check out our documentation—Get started with Datadog – an Azure Native ISV Service.

Share additional information about how you use resource and subscription logs to monitor and manage your cloud infrastructure and applications by responding to this survey.


New and upcoming capabilities with Elastic Cloud (Elasticsearch)—An Azure Native ISV Service

Microsoft and Elastic partnered in 2020 to build Elastic Cloud (Elasticsearch)—An Azure Native ISV Service, creating deeply integrated, cloud-native experiences that help Azure and Elastic customers power their digital transformation. Since general availability, thanks to you, the service has grown rapidly while improving efficiency for its customers.

Case in point: Mr. Turing’s cognitive intelligence software as a service (SaaS) product “Alan” greatly benefited from the native Elasticsearch offering on Azure and the deep integration between products like Azure, GitHub, and Visual Studio Code, as elaborated in their story here:

“On Microsoft Azure, Alan is twice as fast and less costly to operate compared to when he was running on our previous cloud provider. In addition, because of the strong integration between Azure, GitHub, and Visual Studio Code, we can deliver new features faster than we could before.”—Marcelo Noronha, Chief Executive Officer of Mr. Turing (October 2022).

Microsoft and Elastic are continuously striving to bring more delightful experiences to our customers and enable newer capabilities to usher in an era of superfast speed, massive scale, and trustworthy reliability.

Better together with Azure and Elastic Cloud

The core setup of Elastic Cloud (Elasticsearch)—An Azure Native ISV Service makes it simpler for developers and IT administrators to manage their Elastic deployments right from Azure. Users no longer have to go through multiple manual steps to integrate Azure with Elastic or manage their own infrastructure.

While this is immensely beneficial, the true power lies in continuing to bring Elastic’s newer capabilities natively to Azure customers.

Here are a few of the newest capabilities added since announcing general availability:

Elastic 8.X version support

The Elastic 8.X versions bring enhancements to Elasticsearch’s vector search capabilities, native support for natural language processing models, increasingly simplified data onboarding, and a streamlined security experience. This helps people and teams connect quickly and search enterprise content to find relevant information and insights, enable observability to keep mission critical applications and infrastructure running, and protect entire digital ecosystems from increasingly sophisticated cyber threats. New Elastic deployments created using the Azure native service are automatically set up with the latest Elastic version, so that customers can leverage these enhanced capabilities easily out of the box.

Cluster and user management

Setting up Elastic clusters using the Azure native service ensures provisioning of the right configuration as part of deployment itself. Apart from automated cluster provisioning, we have also enabled user management capabilities where the primary owner or creator of the initial cluster can now add multiple users from the organization to manage the deployments. This helps ensure easy management of production workloads, even when the primary owner changes roles or moves out of the organization.

Private link

For customers interested in sending Azure resource and subscription logs to Elastic clusters set up behind a private link endpoint inside an Azure VNet, we have enabled easy configuration from right within the native experience. Users can set traffic filters for Azure private links to manage how Elastic deployments can be accessed.

Observability resource types

We are constantly working to support all resource categories on Azure Monitor to ship logs to Elastic. For customers who have set up monitoring tag rules in an Azure subscription, new resource types and categories are automatically enrolled for log shipping, without the need for customers to manually make any changes. As of now, the Azure native service supports log shipping from 126 resource types to Elastic.

Region expansion

Azure and Elastic teams have been continuously partnering to add support for additional regions, so the native offering is available closer to where customers need it and can meet data residency requirements. As of now, we support 16 Azure regions (including four new regions: South Africa North, Central India, Brazil South, and Canada Central) for the Elastic Cloud (Elasticsearch)—An Azure Native ISV Service, and we are on the path to grow to additional regions.

Looking at the future

Here are some of the key capabilities that Microsoft and Elastic teams are working together to bring to you in the next six months:

Elastic version selection

Currently, the Elastic Cloud (Elasticsearch)—An Azure Native ISV Service automatically takes care of setting up Elastic with the right configuration and the latest cluster version. We heard from customers that there might be situations where the user consciously wants to create new resources leveraging an older Elastic version to support compatibility with their overall technology architecture. We are planning to address that by offering the flexibility to customers to select the Elastic version from right within the Azure portal experience.

Billing visibility enhancements

Given that today we support Elastic deployments set up across multiple Azure subscriptions, while still retaining the ability for customers to receive a unified bill, we are planning multiple enhancements to the native offering that bring visibility into which resources and deployments each usage and billing line corresponds to, so that customers can correlate costs better, optimize spend, or raise support requests if something looks out of place.

Native experience for Elastic customers on standard Azure marketplace listing

Customers who started using Elastic on Azure by subscribing to the standard marketplace offer before the native offering went live are missing out on the deep integration capabilities that the native Elastic Cloud (Elasticsearch) service brings to the table. Microsoft and Elastic are working together to migrate these customers to the Azure native service seamlessly, so they can get the added integration benefits.

There are many more exciting capabilities being planned for customers beyond the next six months. Stay up to date with the latest news on the Microsoft Azure blog.

Next steps

Subscribe to the Elastic Cloud (Elasticsearch)—An Azure Native ISV Service from Azure marketplace.

To learn more about the Elastic Cloud (Elasticsearch)—An Azure Native ISV Service, check out our documentation on the Elastic integration with Azure.

Watch the Microsoft Ignite session The Elastic on Microsoft Azure Native Integration Story: Helping Customers Turn Challenges to Advantages presented by Elastic.

Share additional information about how you use resource and subscription logs to monitor and manage your cloud infrastructure and applications by responding to this survey.
