Microsoft Azure delivers purpose-built cloud infrastructure in the era of AI

This year’s Microsoft Ignite brings us together to experience AI transformation in action. AI is driving a new wave of innovation, rapidly changing what applications look like, how they’re designed and built, and how they’re delivered. At the same time, business leaders continue to face challenges, needing to juggle various priorities to offset rising costs, be sustainable, and outmaneuver economic uncertainty. Today’s customers are looking for AI solutions that will meet all their needs.

At Ignite, we’re announcing innovation in Microsoft Azure that is powering more AI capabilities for our customers and helping enterprises with their cloud management and operations. We’re committed to bringing your AI ambitions to production and meeting you where you are. Whether you choose to build hybrid, cloud-native, or open source solutions, we’re rapidly expanding our infrastructure and adding intuitive tools for customers to help take your ideas to production safely and responsibly in this new era of AI. 

With Azure, you can trust that you are on a secure and well-managed foundation to utilize the latest advancements in AI and cloud-native services. Azure is adaptive and purpose-built for all your workloads, helping you seamlessly unify and manage all your infrastructure, data, analytics, and AI solutions. 

Powering groundbreaking AI solutions

The era of AI has largely been shaped by the exponential growth in the sophistication of large language models like OpenAI’s GPT, with hundreds of billions of parameters, and by groundbreaking generative AI services like Bing Chat Enterprise and Microsoft Copilot, used by millions of people globally. Azure’s leadership in optimizing cloud infrastructure for AI workloads is pioneering this innovation and is why customers like OpenAI, Inflection, and Adept choose Azure to build and run their AI solutions. 

Learn more: Deliver high-powered performance to your most compute-intensive AI workloads.

In this new era of AI, we are redefining cloud infrastructure, from silicon to systems, to prepare for AI in every business, in every app, for everyone. At Ignite, we’re introducing our first custom AI accelerator series, Azure Maia, designed to run cloud-based training and inferencing for AI workloads such as OpenAI models, Bing, GitHub Copilot, and ChatGPT. Maia 100 is the first generation in the series, with 105 billion transistors, making it one of the largest chips on 5nm process technology. The innovations for Maia 100 span across the silicon, software, network, racks, and cooling capabilities. This equips the Azure AI infrastructure with end-to-end systems optimization tailored to meet the needs of groundbreaking AI such as GPT.

Alongside the Maia 100, we’re introducing our first custom in-house central processing unit series, Azure Cobalt, built on Arm architecture for optimal performance-per-watt efficiency, powering common cloud workloads for the Microsoft Cloud. From in-house silicon to systems, Microsoft now optimizes and innovates at every layer of the infrastructure stack. Cobalt 100, the first generation in the series, is a 64-bit, 128-core chip that delivers up to 40 percent performance improvement over current generations of Azure Arm chips and is already powering services such as Microsoft Teams and Azure SQL. 

Networking innovation runs across our first-generation Maia 100 and Cobalt 100 chips. From hollow core fiber technology to the general availability of Azure Boost, we’re enabling faster networking and storage solutions in the cloud. You can now achieve up to 12.5 GB/s of throughput and 650K input/output operations per second (IOPS) in remote storage performance to run data-intensive workloads, and up to 200 Gbps of networking bandwidth for network-intensive workloads. 

We continue to build our AI infrastructure in close collaboration with silicon providers and industry leaders, incorporating the latest innovations in software, power, models, and silicon. Azure works closely with NVIDIA to provide virtual machines (VMs) based on NVIDIA H100 Tensor Core graphics processing units (GPUs) for mid- to large-scale AI workloads, including Azure Confidential VMs. On top of that, we are adding the latest NVIDIA H200 Tensor Core GPU to our fleet next year to support larger model inferencing with no increase in latency. 

As we expand our partnership with AMD, customers can access AI-optimized VMs powered by AMD’s new MI300 accelerator early next year. This demonstrates our commitment to adding optionality for customers in price, performance, and power for all of their unique business needs. 

These investments have allowed Azure to pioneer performance for AI supercomputing in the cloud and have consistently ranked us as the number one cloud in the TOP500 list of the world’s supercomputers. With these additions to the Azure infrastructure hardware portfolio, our platform enables us to deliver the best performance and efficiency across all workloads.

Being adaptive and purpose-built for your workloads

We’ve heard about your challenges in migrating workloads to the public cloud, especially mission-critical workloads. We continue to work with the technology vendors you’ve relied on to run your workloads, such as SAP, VMware, NetApp, Red Hat, Citrix, and Oracle, to ensure Azure supports your needs. We’re excited about our recent partnership to bring Oracle Database Services into Azure to help keep your business efficient and resilient.

At Ignite, we’re announcing the general availability of Oracle Database@Azure in the US East Azure region as of December 2023. Customers will now have direct access to Oracle database services running on Oracle Cloud Infrastructure (OCI) deployed in Azure data centers. The new service will deliver all the performance, scale, and workload availability advantages of Oracle Exadata Database Service on OCI combined with the security, flexibility, and best-in-class services of Azure. Microsoft is the only other hyperscaler to offer OCI Database Services, simplifying cloud migration, multicloud deployment, and management.

As we’ve observed through our interactions with customers, the durable state of the cloud is evolving to one where customer workloads need to be supported wherever they’re needed. We realize that cloud migration is not a one-size-fits-all approach, and that’s why we’re committed to meeting you where you are on your cloud journey. An adaptive cloud enables you to thrive in dynamic environments by unifying siloed teams, distributed sites, and sprawling systems into a single operations, application, and data model in Azure.

Our vision for adaptive cloud builds on the work we’ve already started through Azure Arc. With Azure Arc, customers can project their on-premises, edge, and multicloud resources to Azure, deploy Azure native services on those resources, and extend Azure services to the edge.  
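
For a sense of how Arc-enabled resources show up alongside native ones, here is a minimal sketch (not an official sample) that lists Arc-enabled servers through the standard Azure Resource Manager APIs. The subscription ID is a placeholder, and it assumes the azure-identity and azure-mgmt-resource packages plus a signed-in identity with Reader access.

```python
# A minimal sketch: once servers are connected via Azure Arc, they appear as
# Microsoft.HybridCompute/machines resources in Azure Resource Manager and
# can be queried like any other Azure resource.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(
    DefaultAzureCredential(), "<subscription-id>"  # placeholder subscription
)

# Enumerate Arc-enabled servers projected into Azure Resource Manager.
arc_machines = client.resources.list(
    filter="resourceType eq 'Microsoft.HybridCompute/machines'"
)
for machine in arc_machines:
    print(machine.name, machine.location)
```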

We’re excited to make some new announcements that will help customers implement their adaptive cloud strategies. For VMware customers, we’re announcing the general availability of VMware vSphere enabled by Azure Arc. Azure Arc brings together Azure and the VMware vSphere infrastructure, enabling VM administrators to empower their developers to use Azure technologies with their existing server-based workloads and new Kubernetes workloads, all from Azure. Additionally, we’re delighted to share the preview of Azure IoT Operations enabled by Azure Arc. With Azure IoT Operations, customers can greatly reduce the complexity and time it takes to build an end-to-end solution, empowering them to make near real-time decisions backed by AI-driven insights and to run agile, resilient, and sustainable operations with both Microsoft and partner technologies.

Amplifying your impact with AI-enhanced operations

Every day, cloud administrators and IT professionals are being asked to do more. We consistently hear from customers that they’re tasked with a wider range of operations: collaborating with and managing more users, supporting more complex needs to deliver on increasing customer demand, and integrating more workloads into their cloud environment. 

That’s why we’re excited to introduce the public preview of Microsoft Copilot for Azure, a new solution built into Azure that helps simplify how you design, operate, or troubleshoot apps and infrastructure from cloud to edge. Learn how to apply for access to Microsoft Copilot for Azure to see how this new AI companion can help you generate deep insights instantly, discover new cloud functionality, and do complex tasks faster.

Enabling limitless innovation in the era of AI

Delivering on the promise of advanced AI for our customers requires high-performance computing infrastructure, services, and expertise, which can only be delivered with the scale and agility of the Microsoft Cloud. Our unique equipment and system designs help us and customers like you meet the challenges of an ever-changing technological landscape. From extending the lifecycle of our hardware and running efficient supply chain operations to providing purpose-built infrastructure for this new era of AI, we can ensure we’re always here to bring your ideas to life in a safe and responsible way.

Learn more about the benefits of Azure infrastructure capabilities at Ignite

Attend these sessions at Ignite to learn more: 

Do more with Windows Server and SQL Server on Azure

Simplifying cloud operations with Microsoft Copilot for Azure

Unlock AI innovation with Azure AI infrastructure 

Check out these resources to help you get started:

Learn more about Azure Migrate and Modernize and Azure Innovate and how they can help you from migration to AI innovation.

Check out the new and free Azure Migrate application and code assessment feature to save on application migrations.

Find out how to take your AI ambitions from ideation to reality with Azure. 

Explore what’s next at Ignite.


Microsoft Azure AI, data, and application innovations help turn your AI ambitions into reality

Welcome to Microsoft Ignite 2023! The past year has been one of true transformation. Companies are seeing real benefits today and are eager to explore what’s next—including how they can do more with their data investments, build intelligent applications, and uncover what AI can do for their business.

We recently commissioned a study through IDC and uncovered insights into how AI is driving business results and economic impact for organizations worldwide. More than 2,000 business leaders surveyed confirmed they’re already using AI for employee experiences, customer engagement, and to bend the curve on innovation.  

The study illustrates the business value of AI but it really comes to life through the stories of how our customers and partners are innovating today. Customers like Heineken, Thread, Moveworks, the National Basketball Association (NBA), and so many more are putting AI technologies to work for their businesses and their own customers and employees. 

From modern data solutions uniquely suited for the era of AI to beloved developer tools and application services, we’re building Microsoft Azure as the AI supercomputer for customers, no matter the starting point.

Learn more: Microsoft Ignite 2023.

This week at Ignite, the pace of innovation isn’t slowing down. We’ll share more stories about how organizations are turning to new solutions to drive their business forward. We’re also announcing many new capabilities and updates to make it easier than ever to use your favorite tools, maximize existing investments, save time, and innovate on Azure as a trusted platform.

Modern data solutions to power AI transformation

Every intelligent app starts with data—and your AI is only as good as your data—so a modern data and analytics platform is increasingly important. The integration of data and AI services and solutions can be a unique competitive advantage because every organization’s data is unique.

Last year, we introduced the Microsoft Intelligent Data Platform as an integrated platform to bring together operational databases, analytics, and governance and enable you to integrate all your data assets seamlessly in a way that works for your business.

At Ignite this week, we are announcing the general availability of Microsoft Fabric, our most integrated data and AI solution yet, as part of the Intelligent Data Platform. Microsoft Fabric can empower you in ways that weren’t possible before with a unified data platform. This means you can bring AI directly to your data, no matter where it lives. This helps foster an AI-centered culture to scale the value you create from your data, so you can spend more time innovating and less time integrating.

EDP is a global energy company that aims to transform the world through renewable energy sources. They’re using Microsoft Fabric and OneLake to simplify data access across data storage, processing, visualization, and AI workflows. This allows them to fully embrace a data-driven culture where they have access to high-value insights and decisions are made with a comprehensive view of the data environment.

We’re also announcing Fabric as an open and extensible platform. We will showcase integrations with many of our partners, like LSEG, Esri, Informatica, Teradata, and SAS, who have been demonstrating the possibilities of bringing their product experiences as workloads into Fabric, widening their reach and breadth of capabilities.

Every organization is eager to save time and money as they transform. We’re announcing several new features and updates for Azure SQL that make Azure the ideal and most cost-effective place for your data. Updates include lower pricing for Azure SQL Database Hyperscale compute, Azure SQL Managed Instance free trial offer, and a wave of other new features. 

Lufthansa Technik AG has been running Azure SQL to support its application platform and data estate, leveraging fully managed capabilities to empower teams across functions. They’re joining on stage during a breakout session on cloud-scale databases, so you can learn more about their experience directly. 

Easily build, scale, and deploy multimodal generative AI experiences responsibly with Azure

The AI opportunity for businesses is centered on the incredible power of generative AI. We’re inspired by customers who are now nimbly infusing content generation capabilities to transform all kinds of apps into intuitive, contextual experiences that impress and captivate their own customers and employees.

Siemens Digital Industries is one company using Azure AI to enhance its manufacturing processes by enabling seamless communication on the shop floor. Their newest solution helps field engineers report issues in their native language, promoting inclusivity, efficient problem resolution, and faster response times. 

Today organizations need more comprehensive, unified tools to build for this next wave of generative AI-based applications. This is why we’re announcing new updates that push the boundaries of AI innovation and make it easier for customers to responsibly deploy AI at scale across their business.

Everything you need to build, test, and deploy AI innovations in one convenient location

At Ignite, we’re thrilled to introduce the public preview of Azure AI Studio, a groundbreaking platform for AI developers by Microsoft. Everything organizations need to tackle generative AI is now in one place: cutting-edge models, data integration for retrieval augmented generation (RAG), intelligent search capabilities, full-lifecycle model management, and content safety. 

We continue to expand choice and flexibility in generative AI models beyond Azure OpenAI Service. We announced the model catalog at Build, and at Ignite we’re announcing Models as a Service, coming soon to the model catalog as managed API endpoints. This will enable pro developers to easily integrate new foundation models like Meta’s Llama 2, G42’s Jais, Cohere’s Command, and Mistral’s premium models into their applications as an API endpoint, and to fine-tune models with custom training data, without having to manage the underlying GPU infrastructure. This functionality will help eliminate the complexity for our customers and partners of provisioning resources and managing hosting. 
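
As a rough illustration of the developer experience this points to, the sketch below calls a hypothetical managed API endpoint over REST. The URL, authentication header, and payload shape are placeholder assumptions, not the final contract; consult the model catalog documentation once the capability is available.

```python
# A hedged sketch of calling a model deployed behind a managed API endpoint.
# Every endpoint detail here is an illustrative placeholder.
import requests

ENDPOINT = "https://<your-endpoint>.<region>.inference.ai.azure.com/v1/completions"  # hypothetical
API_KEY = "<your-endpoint-key>"  # hypothetical

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json={"prompt": "Summarize the benefits of managed model endpoints.", "max_tokens": 128},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```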

Large language model (LLM) orchestration and grounding with RAG are top of mind as momentum for LLM-based AI applications grows. Prompt flow, a tool for managing prompt engineering and LLM operations (LLMOps), is now in preview in Azure AI Studio and generally available in Azure Machine Learning. Prompt flow provides a comprehensive solution that simplifies the process of prototyping, experimenting, iterating on, and deploying your AI applications.
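
To make the workflow concrete, here is a minimal sketch using the prompt flow Python SDK to test a locally authored flow. The flow directory and input names are placeholders for a flow you have already created, and the SDK surface may evolve while the tool is in preview.

```python
# A minimal sketch using the prompt flow Python SDK (`pip install promptflow`).
from promptflow import PFClient

pf = PFClient()

# Run a single test iteration of a local flow with sample inputs.
result = pf.test(
    flow="./my_chat_flow",               # path to a flow directory (placeholder)
    inputs={"question": "What is RAG?"}  # input names depend on your flow
)
print(result)
```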

We’re also announcing at Ignite that Azure AI Search, formerly Azure Cognitive Search, is now available in Azure AI Studio so everything remains in one convenient location for developers to save time and boost productivity.

Azure AI Content Safety is also available in Azure AI Studio so developers can easily evaluate model responses all in one unified development platform. We’re also announcing the preview of new features inside Azure AI Studio powered by Azure AI Content Safety to address harms and security risks that are introduced by large language models. The new features help identify and prevent attempted unauthorized modifications, and identify when large language models generate material that leverages third-party intellectual property and content. 

With Azure AI Content Safety, developers can monitor human and AI-generated content across languages and modalities and streamline workflows with customizable severity levels and built-in blocklists.
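
As a minimal sketch of that workflow, the snippet below analyzes a piece of text with the Azure AI Content Safety SDK. The endpoint and key are placeholders, and the exact response shape can vary by SDK version.

```python
# A minimal sketch with the Azure AI Content Safety SDK
# (`pip install azure-ai-contentsafety`).
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    AzureKeyCredential("<your-key>"),                        # placeholder key
)

# Analyze a piece of user- or model-generated text for harmful content.
result = client.analyze_text(AnalyzeTextOptions(text="Text to screen goes here."))

# Each analyzed category carries a severity score you can gate on
# (field names may differ in older SDK versions).
for category in result.categories_analysis:
    print(category.category, category.severity)
```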

It’s great to see customers already leveraging this to build their AI solutions. In just six months, Perplexity brought Perplexity Ask, a conversational answer engine, to market with Azure AI Studio. They were able to streamline and expedite AI development, get to market faster, scale quickly to support millions of users, and cost-effectively deliver security and reliability.

If you’re creating a custom copilot, improving search, enhancing call centers, developing bots, or a blend of all of this, Azure AI Studio offers everything you need. You can check out Eric Boyd’s blog to learn more about Azure AI Studio.

Generative AI is now multimodal

We are excited to enable a new chapter in the generative AI journey for our customers with GPT-4 Turbo with Vision, in preview, coming soon to Azure OpenAI Service and Azure AI Studio. With GPT-4 Turbo with Vision, developers can deliver multimodal capabilities in their applications. 

We are also adding several new updates to Azure AI Vision. GPT-4 Turbo with Vision, in combination with our Azure AI Vision service, can see, understand, and reason over visual inputs paired with text-based prompt instructions, enabling scenarios such as video analysis and video Q&A.

In addition to GPT-4 Turbo with Vision, we are happy to share other new innovations in Azure OpenAI Service, including GPT-4 Turbo in preview, GPT-3.5 Turbo 16K (1106) reaching general availability at the end of November, and the DALL-E 3 image model, in preview now.
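
As a hedged sketch of what a multimodal call can look like, the snippet below sends text plus an image to a vision-enabled Azure OpenAI deployment using the OpenAI Python SDK (v1+). The deployment name, API version, and image URL are placeholder assumptions.

```python
# A hedged sketch of a multimodal chat completion against Azure OpenAI.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
    api_key="<your-key>",                                        # placeholder
    api_version="2023-12-01-preview",  # assumption; use the version your resource supports
)

response = client.chat.completions.create(
    model="<gpt-4-vision-deployment>",  # your deployment name (placeholder)
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what is happening in this image."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
    max_tokens=300,
)
print(response.choices[0].message.content)
```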

Search in the era of AI

Effective retrieval techniques, like those powered by search, can improve the quality of responses and reduce response latency. A common practice for knowledge retrieval (the retrieval step in RAG) is to use vector search. Retrieval is essential for generative AI apps, which must be grounded in content from data or websites to augment the responses generated by LLMs. 

Azure AI Search is a robust information retrieval and search platform that enables organizations to use their own data to deliver hyper-personalized experiences in generative AI applications. We’re announcing the general availability of vector search for fast, highly relevant results from data.

Vector search is a method of searching for information within various data types, including images, audio, text, video, and more. It’s one of the most critical elements of AI-powered, intelligent apps, and the addition of this capability is our latest AI-ready functionality to come to our Azure databases portfolio.

Semantic ranker, formerly known as semantic search, is also generally available and provides access to the same machine learning-powered search re-ranking technology used to power Bing. Your generative AI applications can deliver the highest quality responses to every user Q&A with a feature-rich vector database integrated with state-of-the-art relevance technology.
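
To illustrate, here is a minimal sketch of a pure vector query using the azure-search-documents SDK. The index name, vector field name, and embedding values are placeholders; in practice the query vector comes from the same embedding model used to index your content.

```python
# A minimal sketch of a vector query with the Azure AI Search SDK
# (`pip install azure-search-documents>=11.4`).
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery

search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",  # placeholder
    index_name="<your-index>",                                    # placeholder
    credential=AzureKeyCredential("<your-query-key>"),            # placeholder
)

# Placeholder embedding; real vectors must match your index dimensions.
query_embedding = [0.011, -0.023, 0.047]

results = search_client.search(
    search_text=None,  # pure vector query; pass text as well for hybrid search
    vector_queries=[
        VectorizedQuery(
            vector=query_embedding,
            k_nearest_neighbors=5,
            fields="contentVector",  # the vector field defined in your index
        )
    ],
)
for doc in results:
    # "@search.score" is returned by the service; other keys depend on your schema.
    print(doc.get("@search.score"), doc)
```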

Accelerate your AI journey responsibly and with confidence

At Microsoft, we’re committed to safe and responsible AI. It goes beyond ethical values and foundational principles, which are critically important. We’re integrating this into the products, services, and tools we release so organizations can build on a foundation of security, risk management, and trust. 

We are pleased to announce new updates at Ignite to help customers pursue AI responsibly and with confidence.

Setting the standard for responsible AI innovation—expanding our Copilot Copyright Commitment

Microsoft has set the standard with services and tools like Azure AI Content Safety, the Responsible AI Dashboard, model monitoring, and our industry-leading commitment to defend and indemnify commercial customers from lawsuits for copyright infringement.   

Today, we are announcing the expansion of the Copilot Copyright Commitment, now called Customer Copyright Commitment (CCC), to customers using Azure OpenAI Service. As more customers build with generative AI inside their organizations, they are inspired by the potential of this technology and are eager to commercialize it externally.   

By extending the CCC to Azure OpenAI Service, Microsoft is broadening our commitment to defend our commercial customers and pay for any adverse judgments if they are sued for copyright infringement for using the outputs generated by Azure OpenAI Service. This benefit will be available starting December 1, 2023. 

As part of this expansion, we’ve published new documentation to help Azure OpenAI Service customers implement technical measures and other best practices to mitigate the risk of infringing content. Customers will need to comply with the documentation to take advantage of the benefit. Azure OpenAI Service is a developer service and comes with a shared commitment to build responsibly. We look forward to customers leveraging it as they build their own copilots. 

Announcing the Azure AI Advantage offer

We want to be your trusted partner as you deliver next-gen, transformative experiences with pioneering AI technology, a deeply integrated platform, and leading cloud security.  

Azure offers a full, integrated stack purpose-built for cloud-native, AI-powered applications, accelerating your time to market and giving you a competitive edge and superior performance. To help on that journey, we are happy to introduce a new offer to help new and existing Azure AI and GitHub Copilot customers realize the value of Azure AI and Azure Cosmos DB together and get on the fast track to developing AI-powered applications. You can learn more about the Azure AI Advantage offer and register here. 

Azure Cosmos DB and Azure AI combined deliver many benefits, including enhanced reliability of generative AI applications through the speed of Azure Cosmos DB, a world-class infrastructure and security platform to grow your business while safeguarding your data, and provisioned throughput to scale seamlessly as your application grows.

Azure AI services and GitHub Copilot customers deploying their AI apps to Azure Kubernetes Service may be eligible for additional discounts. Speak to your Microsoft representative to learn more. 

Empowering all developers with AI-powered tools

There is so much in store this week at Ignite to improve the developer experience, save time, and increase productivity as developers build intelligent applications. Let’s dive into what’s new.

Updates for Azure Cosmos DB—the database for the era of AI

For developers to deliver apps more efficiently and with reduced production costs, at Ignite we’re sharing new features in Azure Cosmos DB.

Now in preview, dynamic scaling provides developers new flexibility to scale databases up or down and brings cost savings to customers, especially those with operations around the globe. We’re also bringing AI deeper into the developer experience and increasing productivity with the preview of Microsoft Copilot for Azure enabling natural language queries in Azure Cosmos DB.  

Bond Brand Loyalty turned to Azure Cosmos DB to scale to more than two petabytes of transaction data while maintaining security and privacy for their own customers. On Azure, Bond built a modern offering to support extensive security configurations, reducing onboarding time for new clients by 20 percent.

We’re announcing two exciting updates to enable developers to build intelligent apps: general availability of both Azure Cosmos DB for MongoDB vCore and vector search in Azure Cosmos DB for MongoDB vCore.

Azure Cosmos DB for MongoDB vCore allows developers to build intelligent applications with full support for MongoDB data stored in Azure Cosmos DB, which unlocks opportunities for app development thanks to deep integration with other Azure services. That means developers can enjoy the benefits of native Azure integrations, low total cost of ownership (TCO), and a familiar vCore architecture when migrating existing applications or building new ones. 

Vector search in Azure Cosmos DB for MongoDB vCore allows developers to seamlessly integrate data stored in Azure Cosmos DB into AI-powered applications, including those using Azure OpenAI Service embeddings. Built-in vector search enables you to efficiently store, index, and query high-dimensional vector data, and eliminates the need to transfer the data outside of your Azure Cosmos DB database.
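
As a hedged sketch of that flow, the snippet below creates a vector index and runs a nearest-neighbor query with PyMongo, following the shapes in the public documentation at the time of writing. The connection string, collection, field names, and dimensions are placeholders.

```python
# A hedged sketch using PyMongo against Azure Cosmos DB for MongoDB vCore.
from pymongo import MongoClient

client = MongoClient("<your-vcore-connection-string>")  # placeholder
db = client["mydb"]

# One-time setup: create an IVF vector index over the 'contentVector' field.
db.command({
    "createIndexes": "documents",
    "indexes": [{
        "name": "vectorSearchIndex",
        "key": {"contentVector": "cosmosSearch"},
        "cosmosSearchOptions": {
            "kind": "vector-ivf",
            "numLists": 1,          # tune for your data size
            "similarity": "COS",    # cosine similarity
            "dimensions": 1536,     # must match your embedding model
        },
    }],
})

# Query for the 5 nearest neighbors of a query embedding (placeholder vector).
query_vector = [0.011, -0.023, 0.047]
pipeline = [{
    "$search": {
        "cosmosSearch": {"vector": query_vector, "path": "contentVector", "k": 5},
        "returnStoredSource": True,
    }
}]
for doc in db["documents"].aggregate(pipeline):
    print(doc)
```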

PostgreSQL developers have used built-in vector search in Azure Database for PostgreSQL and Azure Cosmos DB for PostgreSQL since this summer. Now, they can take advantage of the public preview of the Azure AI extension in Azure Database for PostgreSQL to build rich, LLM-powered generative AI solutions.
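
For illustration, here is a hedged sketch of calling the azure_ai extension from Python via psycopg2 to generate an embedding with an Azure OpenAI deployment. Connection details and the deployment name are placeholders, and the one-time endpoint/key configuration calls are omitted.

```python
# A hedged sketch of the azure_ai extension (public preview) in
# Azure Database for PostgreSQL, driven from Python with psycopg2.
import psycopg2

conn = psycopg2.connect(
    "host=<server> dbname=<db> user=<user> password=<pw> sslmode=require"  # placeholders
)
with conn, conn.cursor() as cur:
    # One-time setup: enable the extension (endpoint/key settings omitted here).
    cur.execute("CREATE EXTENSION IF NOT EXISTS azure_ai;")

    # Generate an embedding for a row of text with an Azure OpenAI deployment.
    cur.execute(
        "SELECT azure_openai.create_embeddings(%s, %s);",
        ("<embedding-deployment-name>", "PostgreSQL meets generative AI."),
    )
    print(cur.fetchone()[0])
```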

KPMG Australia used the vector search capability when they turned to Azure OpenAI Service and Azure Cosmos DB to build their own copilot application. The KymChat app has helped employees speed up productivity and streamline operations. The solution is also being made available to KPMG customers through an accelerator that combines KymChat’s use cases, features, and lessons learned, helping customers accelerate their AI journey.

Building cloud-native and intelligent applications

Intelligent applications combine the power of AI and cloud-scale data with cloud-native app development to create highly differentiated digital experiences. The synergy between cloud-native technologies and AI is a tangible opportunity for evolving traditional applications, making them intelligent, and delivering more value to end users. We’re dedicated to continually enhancing Azure Kubernetes Service to meet these evolving demands of AI for customers who are just getting started as well as those who are more advanced.

Customers can now run specialized machine learning workloads like LLMs on Azure Kubernetes Service (AKS) more cost-effectively and with less manual configuration. The Kubernetes AI toolchain operator automates LLM deployment on AKS across available CPU and GPU resources by selecting optimally sized infrastructure for the model. It makes it possible to easily split inferencing across multiple lower-GPU-count virtual machines (VMs), increasing the number of Azure regions where workloads can run, eliminating wait times for higher-GPU-count VMs, and lowering overall cost. Customers can also run preset open-source models hosted on AKS, significantly reducing costs and overall inference service setup time while eliminating the need for teams to be experts on available infrastructure. 

Azure Kubernetes Fleet Manager is now generally available and enables multi-cluster and at-scale scenarios for Azure Kubernetes Service clusters. Fleet Manager provides global scale for admins to manage workload distribution across clusters and facilitates platform and application updates, so developers can rest assured they are running on the latest and most secure software. 

We’ve also been sharing learnings about how to help engineering organizations enable their own developers to get started and be productive quickly, while still ensuring systems are secure, compliant, and cost-controlled. Microsoft is providing a core set of technology building blocks and learning modules to help organizations get started on their journey to establish a platform engineering practice. 

New Microsoft Dev Box capabilities to improve the developer experience

Maintaining a developer workstation that can build, run, and debug your application is critical to keeping up with the pace of modern development teams. Microsoft Dev Box provides developers with secure, ready-to-code developer workstations for hybrid teams of any size. 

We’re introducing new preview capabilities to give development teams more granular control over their images, the ability to connect to Hosted Networks to simplify connecting to your resources securely, and templates to make it easier to get up and running. Paired with new capabilities coming to Azure Deployment Environments, it’s easier than ever to deploy those projects to Azure.

Build upon a reliable and scalable foundation with .NET 8

.NET 8 is a big leap forward toward making .NET one of the best platforms for building intelligent cloud-native applications, with the first preview of .NET Aspire, an opinionated, cloud-ready stack for building observable, production-ready, distributed cloud-native applications. It includes curated components for cloud-native fundamentals, including telemetry, resilience, configuration, and health checks. The stack makes it easier to discover, acquire, and configure essential dependencies for cloud-native applications on day 1 and day 100. 

.NET 8 is also the fastest version of .NET ever, with developer productivity enhancements across the stack, whether you are building for the cloud, a full-stack web app, a desktop or mobile app using .NET MAUI, or integrating AI to build the next copilot for your app. These are available in Visual Studio, which also releases today.  

Azure Functions and Azure App Service fully support .NET 8 on both Linux and Windows, and both Azure Kubernetes Service and Azure Container Apps also support .NET 8 today.  

There are no limits to your innovation potential with Azure

There’s so much rolling out this week with data, AI, and digital applications so I hope you’ll tune into the virtual Ignite experience and hear about the full slate of announcements and more about how you can put Azure to work for your business. 

This week’s announcements are proof of our commitment to helping customers take that next step of innovation and stay future-ready. I can’t wait to see how your creativity and new innovations unfold for your business. 

You can check out these resources to learn more about everything shared today. We hope you have a great Ignite week!

Attend these sessions to learn more about how Azure can help you, no matter the starting point: 

Keynote session with Scott Guthrie and friends: AI transformation for your organization with the Microsoft Cloud 

Make your data AI ready with Microsoft Fabric and Azure Databricks 

Build ISV apps with Microsoft Fabric in the Intelligent Data Platform 

AI and Kubernetes: A winning combination for Modern App Development 

Build your own Copilot with Azure AI Studio

What’s new in generative AI? 

Vector search and state of the art retrieval for Generative AI apps

Master Platform Engineering: Architecting Scalable and Resilient Systems 

Explore all the Microsoft Azure announcements in the Book of News.

Learn how Microsoft Azure delivers purpose-built cloud infrastructure in the era of AI

Explore Microsoft’s Responsible AI playbook to learn more about our approach, what we learned, and how it can apply to your business. 

Learn more about Azure Migrate and Modernize and Azure Innovate and how they can help you from migration to AI innovation. 

Get ready for what’s next by visiting the AI Learning and Community Hub on Microsoft Learn for AI skilling opportunities.


Advancing hybrid cloud to adaptive cloud with Azure

The pace of change in the world around us is incredible. Between the impact of new, transformational technology, fluctuations in the economic landscape, and the hybrid nature of the post-COVID-19 world—we see customers across every industry taking action to innovate and adapt at this important inflection point. Entering the era of AI, the pace of change will only accelerate.

Our customers constantly innovate to keep pace with rapid market shifts and technological advancements, but they require a common innovation and operations platform that spans business initiatives. They’re finding that in the rush to innovate, they’ve spun up different projects and initiatives throughout the organization—each with its own approach. Specifically, they are asking us for help in three large areas: 

Sprawling systems: Most companies are dealing with an explosion of resources. Servers and devices in the operational edge and IoT, in addition to multicloud deployments, can be overwhelming. Basic tasks like patching, configuring, and securing get exponentially harder with every new location and technology.

Siloed teams: Rapid innovation is happening in every business unit, usually in an uncoordinated way. Often, there’s little chance to share work or learnings. Compounding matters, IT, development, and operational technology (OT) teams also tend to run separate technology initiatives and roll out new technology in an uncoordinated manner, resulting in duplicated effort and increased financial and security risk. Over time, these silos unintentionally entrench talent in a single project or tech stack, artificially limiting its impact.

Technical debt: Short-term solutions, without a comprehensive long-term strategy, often result in systems incompatibility that keeps valuable data trapped where it’s created and can’t be leveraged to improve the business.

The need for a unified platform and system to address these challenges is evident. We believe Azure is the platform that can help, and we have been investing in Azure Arc to solve these problems. We see an opportunity to do more by bringing together agility and intelligence so that our customers can proactively adapt to change rather than react to it, and maintain a competitive edge in a dynamic landscape. 

The adaptive cloud approach for Azure

We are excited about the momentum we have with Azure Arc. There are over 21,000 active customers, and we’re continuing to see excellent growth. With Azure Arc, we crossed the boundaries of existing market categories, whether that is hybrid, multicloud, edge, or IoT. Customers aspire to the next level of modularity, integration, and simplicity. We believe that our customers can achieve this aspiration with a new approach which we call adaptive cloud.

The adaptive cloud approach unifies siloed teams, distributed sites, and sprawling systems into a single operations, security, application, and data model, enabling organizations to leverage cloud-native and AI technologies to work simultaneously across hybrid, multicloud, edge, and IoT.

An adaptive cloud approach shifts organizations from a reactive posture to one of proactive evolution, enabling people to anticipate and act upon changes in market trends, customer needs, and technological advancements ahead of time. This strategic foresight enables businesses to pivot quickly, embrace continuous improvement, and integrate new technologies seamlessly. By building resilience into their operational models, businesses can optimize resource usage and mitigate security and business risks before they manifest.

Innovative Azure capabilities

Azure is adopting the adaptive cloud approach by building on the work we have started with Azure Arc, as an extension of Azure Resource Manager (ARM). Azure Resource Manager keeps track of a rich set of configurations, logs, and metrics for every resource in Azure; it is a single source of truth for your Azure investments. Azure Arc enables you to project hybrid, multicloud, edge, and IoT resources into Azure Resource Manager. Not only can you create a single source of truth, but you can also easily apply cloud services across your globally distributed digital estate. For example, observability tools in Azure, like Azure Monitor, give you visibility across thousands of assets in one place. Our security offerings and features, like Microsoft Defender for Cloud, Microsoft Sentinel, and Azure Policy for security enforcement, enable you to develop and improve your security posture. You can accomplish a lot with Azure Arc today, but you will be able to do even more: we envision a world where you can leverage AI to amplify your impact across existing and new scenarios.

Figure 1: A mix of Azure and Azure Arc-enabled servers in Microsoft Defender for Cloud

Operate with AI-enhanced central management and security

Over the last few years, we’ve been investing in making many core Azure management and security capabilities available through Azure Arc. Having central visibility and security is the most common scenario our customers are taking advantage of. Features and services like role-based access control, Azure Policy, Key Vault, Microsoft Defender for Cloud, Microsoft Sentinel, and Azure Monitor are all available today to use across your digital estate.
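
As a small illustration of that central visibility, the sketch below runs a Kusto query across a Log Analytics workspace with the Azure Monitor Query SDK; Arc-connected machines report in alongside native Azure VMs once the agent is configured. The workspace ID and query are placeholders.

```python
# A minimal sketch of querying logs centrally with the Azure Monitor Query SDK
# (`pip install azure-monitor-query azure-identity`).
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query="Heartbeat | summarize last_seen=max(TimeGenerated) by Computer",
    timespan=timedelta(hours=24),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```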

We just announced the preview of Microsoft Copilot for Azure. Going forward, Copilot will be able to reason over the information you put directly into Azure Resource Manager or other services, like Azure Monitor, regardless of the location of those resources.

From there, you will be able to transform the way you work across the digital estate. Troubleshooting, for example, can be tedious. Going through logs, sending emails, and reading documentation can be monotonous. Copilot enables you to analyze the resource metrics and explore resource configuration and status at scale. Copilot also enables deeper exploration and intelligent assessment, such as anomaly detection of unexpected changes, and provides recommendations to address the issues from cloud to edge.

Figure 2: Using Copilot to get status across heterogeneous environments during active troubleshooting.

For example, with Azure Stack HCI version 23H2 preview, you can use Microsoft Copilot for Azure (preview) to identify problems and get information about your Azure Stack HCI clusters. When you ask Copilot for information about your edge infrastructure, it automatically pulls context when possible, based on the current conversation or based on the page you’re viewing in the Azure portal.

At VMware Explore EU, we announced the general availability of VMware vSphere capabilities enabled by Azure Arc. Customers can simplify the management of their VMware vSphere resources with the Azure Resource Manager functionality.

AI-enhanced management significantly elevates IT, enabling teams to discover new capabilities and new scenarios, so they can focus on strategic tasks and less on administrative chores. Copilot is your universal AI assistant that facilitates a streamlined and consistent set of functionalities for collaboration and is integrated seamlessly with the Azure portal and a variety of management tools.

The World Bank is one of the organizations whose vast and distributed operations really called for a central and standardized approach. It is among the world’s largest sources of funding and financial knowledge for 189 developing countries. The World Bank employs a diverse workforce representing more than 170 countries in more than 130 locations. Recognizing an opportunity to improve efficiency and reduce costs, they were looking for a cloud-based solution that would offer centralized monitoring, performance, resource consumption, and security management, all in a single package. They chose Azure Arc to build their solution, in particular because of their investment in SQL Server.

“We wanted to implement Azure Arc so we could utilize all the features and manage all our on-premises and cloud servers, including AWS, from one location,” Kala Macha explains. “With Azure Arc, we can manage everything at the operating system level and on the SQL Server side as well—all from a single pane of glass. It’s made a huge difference in our efficiency.”

Rapidly develop and scale applications across boundaries

We aim to break free from outdated system limitations and make it effortless for you to adopt advanced, flexible technologies. The ability to easily use your choice of cloud-native tools, containers, and other services will help accelerate digital transformation throughout your organization. With a standardized application model based on Kubernetes and Azure services, you can scale applications from massive cloud platforms to on-site production without complex rewrites. A unified application deployment strategy, complemented by streamlined DevOps integration processes, promotes collaboration and efficiency.

Figure 3: Connect, manage, and operate Kubernetes from Azure

New applications are increasingly being packaged and distributed as container images, making Kubernetes clusters among the most popular Azure Arc-enabled resources. Today, you can use Azure Arc to connect Cloud Native Computing Foundation (CNCF) Kubernetes to Azure to be operated centrally and at scale. Azure Kubernetes Service (AKS) has been enabled by Azure Arc to provide options to run a managed Kubernetes solution at the edge, with built-in support for Linux and Windows containers.

At Ignite we announced more deployment options for AKS to run with significantly reduced overhead when creating clusters. Every AKS cluster provisioned through Azure Arc is automatically configured with the Kubernetes agents inside, enabling access to extensions like Microsoft Defender, Azure Monitor, GitOps, and others. You can easily provision and manage Kubernetes clusters in the Azure portal, directly from the Azure Kubernetes Service resource view, with the Azure CLI, or with ARM or Bicep templates for automation. You can also provision AKS clusters from the Azure Stack HCI resource view and, in the future, from third-party infrastructure that has been enabled using Azure Arc.

Recently we spoke with one of our cutting-edge customers who’s driving innovation at scale, DICK’S Sporting Goods, a leading retailer with over 800 locations. DICK’S Sporting Goods is enhancing in-store experiences through apps—such as one that analyzes golf and baseball bat swings and recommends products best suited to each individual athlete. By integrating their stores with Azure, they can swiftly roll out new features built in the cloud to customers everywhere. Watch here to learn more.

“Our unified store strategy is all about rapid innovation. It lets us respond to market shifts and customer feedback in minutes, ensuring our stores are always current,” says Jon Hathaway, Senior Director of Platform and Infrastructure.

Cultivate data and insights across physical operations

Physical operations environments, like factories and retail storefronts, are the backbone of many businesses. Given the dispersed nature of these sites, customers are eager to deploy AI on a global basis to enable higher levels of productivity, efficiency, and sustainability. A unified data foundation cultivates data and insights across physical operations, driving more efficient workflows, predictive insights, and resource optimization.

For customers across industries, the ability to connect the physical world to the digital world is a foundational step in the digital transformation journey. To help them scale the transformation of their physical operations, we are announcing a new offering in public preview called Azure IoT Operations. Enabled by Azure Arc, Azure IoT Operations expands on our IoT portfolio with a composable set of services that help organizations onboard IT and OT assets, capture, evolve, and unify insights, and take actions to drive transformation at scale.

Azure IoT Operations empowers our customers with a unified, enterprise-wide technology architecture and data plane that supports repeatable solution deployment and comprehensive AI-enhanced decision-making. It enables a cloud-to-edge data plane with local data processing and analytics to transfer clean, useful data to hyperscale cloud services such as Microsoft Fabric, Azure Event Grid, and Azure Digital Twins. A common data foundation is essential to democratize data, enable cross-team collaboration, and accelerate decision-making.

One of the customers we are working with on their physical operations data strategy is Grupo Bimbo. Bimbo Bakeries USA (BBU) is part of Grupo Bimbo, a multinational food company with 217 plants in 34 countries. BBU takes pride in its product, zealously safeguarding high standards and quality. To turn out millions of loaves every day, they depend on metrics that illustrate everything from machine speeds and temperatures to downtime. Bimbo is always looking for innovative ways to produce the quality products their customers expect from them. 

BBU is leveraging Azure IoT Operations to improve its current Industrial IoT (IIoT) solution and tackle the challenge of IT/OT convergence. Azure IoT Operations provides seamless data flow from processes and equipment: everything from machine speeds and oven temperatures to equipment downtime. This new platform will enable robust data processing so that BBU can get visibility into near real-time production data, allowing them to make timely adjustments and maximize production efficiency.

The future is now

Customers can start applying the adaptive cloud approach to drive seamless transformation from cloud to edge today. Experience the latest offerings below and see them in action by visiting Azure Arc Jumpstart where you can learn and try many different scenarios.

Microsoft is now a FinOps Certified Service Provider

In an era where cloud computing has become the backbone of modern business operations, efficient financial management is the linchpin that keeps organizations agile and cost-effective. The FinOps Framework has emerged as a powerful approach to optimize cloud costs, allowing organizations to efficiently manage their cloud expenditure. Today, we are thrilled to announce that Microsoft has achieved a milestone that reaffirms our commitment to empowering our customers and partners in their journey towards optimized cloud spending. We are now a FinOps Certified Service Provider. This certification is a testament to our unwavering dedication to providing you with the best-in-class solutions for managing your cloud finances and ensuring that your organization thrives in this era of digital transformation.

FinOps consulting journey at Microsoft

Our journey in FinOps consulting dates back to the early days of Microsoft Azure, where we embarked on a mission to assist organizations in navigating the complex landscape of cloud cost management. Over the years, we have had the privilege of collaborating with countless organizations, ensuring they unlock the full potential of their cloud investments. What truly excites us, however, is the remarkable momentum that the FinOps Foundation has generated. This foundation has played a pivotal role in cultivating a vibrant and inclusive community of FinOps professionals, united by a shared passion for optimizing cloud expenditures.

Together with this dynamic community, we are poised to take the world of FinOps to the next level. Our continued collaboration, knowledge-sharing, and dedication to the cause will not only enhance our collective understanding of cloud financial management but also drive innovation and excellence in this critical domain. With the power of collaboration and the momentum of the FinOps community, we are prepared to shape the future of FinOps, making it more accessible, efficient, and beneficial for all.

At Microsoft, our commitment to you extends throughout the entire service lifecycle. Whether you are a Unified Enterprise Support customer receiving Proactive Services, or a Microsoft Industry Solutions Delivery (ISD) customer receiving modernization and innovation enablement for the Microsoft Cloud, we are here to provide the expertise and guidance you need to meet your FinOps goals. 

Your goals may be focused on enablement or on long-term optimization. We receive many questions from our customers that correspond to each of these goal categories:

Enablement:

“I’m looking to improve our financial forecast for the upcoming year.”—Chief Financial Officer.

“I’ve been meaning to make our cloud spending more efficient but haven’t had the time.”—Chief Technical Officer.

“I’m setting our unit’s KPIs and want to make our operations for the coming quarter leaner.”—Business Unit Lead.

“I need to support our leadership in achieving our quarterly goals and make operations more efficient.”—Product/Application Owner.

Long-term optimization:

“I’m concerned about the economic downturn and need to improve our bottom line.”—Chief Financial Officer.

“I need to reduce my operational cost so that I can free up money for innovative projects.”—Chief Technology Officer.

“I need to make sure our group’s strategy is aligned to company goals.”—Business Unit Lead.

“I work closely with the product and am responsible for the changes.”—Product/Application Owner.

With these questions and requirements in mind, we have developed a series of offerings that provide the solutions.

FinOps solution offerings at Microsoft

Our Unified Enterprise Support portfolio currently includes the following FinOps offerings:

FinOps Introduction

FinOps Assessment

FinOps Operations for Azure

Azure Cost Management Tool Chain

Azure Billing Mechanics

Azure Cost Management Mechanics

Azure Cost Optimization Opportunities

Our Industry Solutions Delivery Journey to FinOps offering helps our customers optimize their existing landscape, establish a cost-conscious culture, and put supporting governance controls in place to maximize the value of their Azure spend. This offer helps our customers:

Understand the underlying drivers of cost by cloud resource types.

Uncover the link between current cost and performance and cost optimization levers.

Achieve tangible impact/savings through a systematic and continuous cost optimization process while aligning with performance, scalability, and stability goals.

Develop or accelerate a cost-conscious organization.

You can also leverage publicly available FinOps information including:

Review your capability using the FinOps Assessments.

Learn more through the available FinOps documentation.

We’ll share more details on these and other service offerings in future blog posts.

The Microsoft vision for the future of FinOps

Looking forward, we are excited to share our vision for the future of FinOps. Engaging with the FinOps community through the FinOps Foundation Slack and active participation in working groups is a vital part of our strategy. Some of the working groups we actively contribute to include FinOps Open Cost and Usage Specification (FOCUS), aimed at building and maintaining a common specification for cloud cost, usage, and billing data, and FinOps Champion, focused on creating a FinOps Champion Program. These initiatives demonstrate our commitment to shaping the future of cloud cost management.

We are continuously exploring new ways to enhance your FinOps experience. Our goal is to bring you the latest tools, best practices, and thought leadership that will empower you in the ever-evolving cloud ecosystem. As the cloud landscape continues to evolve, we are dedicated to ensuring that your organization remains at the forefront of FinOps, equipped with the knowledge and tools needed to thrive in this dynamic environment.

Microsoft as a FinOps Certified Service Provider

Microsoft’s certification as a FinOps Certified Service Provider is a significant milestone in the realm of cloud cost management. It highlights the growing importance of FinOps in the cloud industry and underscores the critical role of financial discipline in optimizing cloud spending. By achieving this certification, Microsoft has positioned itself as a leader in cloud cost management, benefiting its customers, the industry, and cloud users around the world.

As organizations continue to embrace cloud computing, the need for effective cloud cost management will only grow. With Microsoft’s commitment to FinOps, businesses can expect greater control over their cloud expenses, ensuring that their cloud investments align with their financial goals and operational needs. The FinOps Foundation recently launched the State of FinOps Survey for 2024 to collect industry data that helps organizations better understand FinOps trends and common challenges. Please consider taking the time to complete this survey and check out past years’ results.

Your success is our utmost priority, and we encourage you to take action today. Reach out to your dedicated account representative to discover how we can help you achieve your FinOps objectives. Additionally, we invite you to review our publicly available FinOps documentation, which is a valuable resource for in-depth insights into optimizing your cloud finances. You can also actively engage with us in the FinOps Foundation community, where you can connect with fellow professionals, share valuable insights, and stay updated on the latest industry trends.

What’s next for FinOps?

We look forward to the opportunity to meet you in person at Microsoft Ignite and the FinOps Foundation Seattle Roadshow on November 15, 2023. These events provide an excellent platform to network, share experiences, and continue building a brighter and more cost-efficient cloud future together. Your journey to optimized cloud spending starts here, and we are here to support you every step of the way.

Come build with us: Microsoft and OpenAI partnership unveils new AI opportunities

At OpenAI’s first DevDay Conference on November 6, 2023, Microsoft Chairman and CEO Satya Nadella made a surprise appearance during OpenAI CEO Sam Altman’s keynote to deliver a powerful message: “Our job number one is to build the best systems, so you can build the best models and deliver those to developers.” This was a testament to the deep partnership between Microsoft and OpenAI. We’re excited about the latest announcements from OpenAI’s first DevDay event and want to highlight the opportunities it presents for all AI builders.

New models: GPT-4 Turbo on Azure OpenAI Service

We are very enthusiastic about all the new models introduced, including GPT-3.5 Turbo and updates to models such as DALL-E 3 and Whisper 3. Among them, the eagerly awaited GPT-4 Turbo offers lower pricing, extended prompt length, and structured JSON formatting with improved efficiency and control. We’re looking forward to making these great Turbo models available on Azure OpenAI Service by the end of this year, in keeping with our standard practice of bringing new model innovation from our partners at OpenAI to the Azure OpenAI Service.

Increasing access for all AI Builders

OpenAI’s announcement of lower pricing is significant. It will make the models more accessible and increase their utilization, allowing a broader range of applications to harness their power and ushering in a new era of generative AI. On Azure OpenAI Service, token pricing for the new models will be at parity with OpenAI’s prices.

And in an exciting development, Microsoft made GitHub Enterprise available free for 90 days to all in-person DevDay conference attendees. GitHub Enterprise is a powerful tool for developers, assisting in code completion and development. Its integration with Microsoft’s ecosystem aligns with the mission of helping developers easily bring ideas to life on Azure.

GPTs: New ways to create and monetize

GPTs are a new way for anyone to create a tailored version of ChatGPT to be more helpful in their daily life, at specific tasks, at work, or at home—and then share that creation with others. No coding is required. You can make them for yourself, just for your company’s internal use, or for everyone. Just like with plug-ins, we are looking forward to building deep ecosystem support for GPTs, which we’ll share more on next week at our Microsoft Ignite conference.

Microsoft and OpenAI partnership

OpenAI’s introduction of a Custom Models program will be of particular interest to enterprises, and Microsoft will continue to offer the convenience of integrating OpenAI’s services seamlessly within Microsoft’s existing ecosystem and support infrastructure, providing a comprehensive solution for all enterprise needs.

Sam Altman, OpenAI’s CEO, echoed the sentiment of a strong and productive partnership with Microsoft. “I think we have the best partnership in tech,” Altman told Nadella onstage.

Nadella went on to talk about the companies’ alignment. “Our mission is to empower every person and every organization on the planet to achieve more. And to me, ultimately, AI is only going to be useful if it truly does empower…it’s about being able to get the benefits of AI broadly disseminated to everyone,” Nadella said.

With these announcements, developers and enterprises are now poised to explore new horizons, empowered by the combined strengths of Microsoft and OpenAI, and the limitless possibilities of generative AI.

Get started with Azure OpenAI Service today

Apply now for access to Azure OpenAI Service. 

Review the new documentation for Azure OpenAI Service.

Explore the playground and customization in Azure AI Studio.

Learn more about Data, Privacy, and Security for Azure OpenAI Service.

Bookmark the What’s New page.


Azure sets a scale record in large language model training

Azure empowers intelligent services like Microsoft Copilot, Bing, and Azure OpenAI Service that have captured our imaginations in recent months. These services, which bring generative AI to applications like Microsoft Office 365, chatbots, and search engines, owe their magic to large language models (LLMs). While the latest LLMs are transformational, bringing a generational change in how we apply artificial intelligence in our daily lives and reason about its evolution, we have merely scratched the surface. We still need to create more capable, fair, foundational LLMs that consume and present information more accurately.

How Microsoft maximizes the power of LLMs

However, creating new LLMs or improving the accuracy of existing ones is no easy feat. Creating and training improved versions of LLMs requires supercomputers with massive computational capabilities. It is paramount that both the hardware and software in these supercomputers are utilized efficiently at scale, without leaving performance on the table. This is where the sheer scale of Azure’s supercomputing infrastructure shines, and why setting a new scale record in LLM training matters.

Figure 1: Scale records on the model GPT-3 (175 billion parameters) from MLPerf Training v3.0 in June 2023 (3.0-2003) and Azure on MLPerf Training v3.1 in November 2023 (3.1-2002). 

Customers need reliable and performant infrastructure to bring the most sophisticated AI use cases to market in record time. Our objective is to build state-of-the-art infrastructure that meets these demands. The latest MLPerf™ Training v3.1 results1 are a testament to our unwavering commitment to building high-quality, high-performance systems in the cloud that achieve unparalleled efficiency in training LLMs at scale. The idea is to use massive workloads to stress every component of the system and accelerate our build process in order to achieve high quality.

The GPT-3 LLM model, with its 175 billion parameters, was trained to completion in four minutes on 1,344 ND H100 v5 virtual machines (VMs), representing 10,752 NVIDIA H100 Tensor Core GPUs (eight per VM), connected by the NVIDIA Quantum-2 InfiniBand networking platform (as shown in Figure 1). This training workload uses close-to-real-world datasets and restarts from 2.4 terabytes of checkpoints, closely resembling a production LLM training scenario. The workload stresses the H100 GPUs’ Tensor Cores, direct-attached Non-Volatile Memory Express disks, and the NVLink interconnect that provides fast communication to the high-bandwidth memory in the GPUs, as well as the cross-node 400 Gb/s InfiniBand fabric.

“Azure’s submission, the largest in the history of MLPerf Training, demonstrates the extraordinary progress we have made in optimizing the scale of training. MLCommons’ benchmarks showcase the prowess of modern AI infrastructure and software, underlining the continuous advancements that have been achieved, ultimately propelling us toward even more powerful and efficient AI systems.”—David Kanter, Executive Director of MLCommons 

Microsoft’s commitment to performance

In March 2023, Microsoft introduced the ND H100 v5-series, which completed training a 350-million-parameter Bidirectional Encoder Representations from Transformers (BERT) language model in 5.4 minutes, beating our previous record. This represents a fourfold improvement in time to train BERT within just 18 months, highlighting our continuous endeavor to bring the best performance to our users.

Figure 2: Relative size of the models BERT (350 million parameters) and GPT-3 (175 billion parameters) from MLPerf Training v3.1.  

Today’s results are with GPT-3, a large language model in the MLPerf Training benchmarking suite featuring 175 billion parameters, a remarkable 500 times larger than the previously benchmarked BERT model (Figure 2). The latest training time from Azure represents a 2.7x improvement over the previous record from MLPerf Training v3.0. The v3.1 submission underscores the ability to decrease training time and cost by optimizing a model that accurately represents current AI workloads.

The power of virtualization

NVIDIA’s submission to the MLPerf Training v3.1 LLM benchmark on 10,752 NVIDIA H100 Tensor Core GPUs achieved a training time of 3.92 minutes. This amounts to just a 2 percent increase in training time for Azure VMs relative to the NVIDIA bare-metal submission (3.92 minutes × 1.02 ≈ 4.0 minutes, consistent with the four-minute result above), demonstrating best-in-class virtual machine performance across all HPC instance offerings in the cloud (Figure 3).

Figure 3: Relative training times on the model GPT-3 (175 billion parameters) from MLPerf Training v3.1 between the NVIDIA submission on the bare-metal platform (3.1-2007) and Azure on virtual machines (3.1-2002). 

The latest AI inferencing results on Azure ND H100 v5 VMs, from MLPerf Inference v3.1, show leadership as well. The ND H100 v5-series delivered 0.99x to 1.05x relative performance compared to the bare-metal submissions on the same NVIDIA H100 Tensor Core GPUs (Figure 4), echoing the efficiency of virtual machines.

Figure 4: Performance of the ND H100 v5-series (3.1-0003) compared to on-premises and bare metal offerings of the same NVIDIA H100 Tensor Core GPUs (3.1-0107 and 3.1-0121). All the results were obtained with the GPT-J benchmark from MLPerf Inference v3.1, scenarios: Offline and Server, accuracy: 99 percent.

In conclusion, built for performance, scalability, and adaptability, the Azure ND H100 v5-series offers exceptional throughput and minimal latency for both training and inferencing tasks in the cloud, providing the highest-quality infrastructure for AI.

Learn more about Azure AI Infrastructure

ND H100 v5 

Azure AI Infrastructure

References

MLCommons® is an open engineering consortium of AI leaders from academia, research labs, and industry. They build fair and useful benchmarks that provide unbiased evaluations of training and inference performance for hardware, software, and services—all conducted under prescribed conditions. MLPerf™ Training benchmarks consist of real-world compute-intensive AI workloads to best simulate customers’ needs. Tests are transparent and objective, so technology decision-makers can rely on the results to make informed buying decisions. 


Evolving Microsoft Azure Data Manager for Agriculture to transform data into intuitive insights

The quiet revolution of data and analytics in agriculture 

As AGRITECHNICA 2023—the world’s leading trade fair for agricultural machinery—makes a triumphant return after nearly four years, over 450,000 attendees from 130 countries will come together to witness the latest and greatest agriculture innovations firsthand. However, not all of these breakthrough innovations take up large exhibition spaces. Some are quietly revolutionizing the industry through data and analytics, equipping farmers with tools for smarter, data-driven decision-making.  

These data-driven tools—including transformative AI that is reshaping industries—depend on clean, unified data. That’s why we announced Microsoft Azure Data Manager for Agriculture in March 2023, a data platform that leverages industry-specific data connectors and capabilities to connect and unify farm data from disparate sources. With Azure Data Manager for Agriculture, organizations can leverage high-quality datasets for digital agriculture solutions, allowing customers and partners to focus on product innovation rather than data management.

Today, alongside Bayer at the AGRITECHNICA 2023 conference, we’re thrilled to announce the latest updates to Azure Data Manager for Agriculture that are ushering in an era of AI in agriculture.

A growing ecosystem of partners and data with Microsoft Fabric integrations

To start, Azure Data Manager for Agriculture is evolving to include new integrations with Microsoft Fabric. This begins with the inclusion of Microsoft Fabric Data Factory, bringing cloud-scale data movement and transformation to agriculture-specific scenarios. Now, third-party data partners can build connectors to ingest data from more sources into a unified database. Leveraging these Fabric integrations, Microsoft and our partners are expanding Azure Data Manager for Agriculture with more agriculture-specific connectors and capabilities so that insights are no longer limited by specific data types and sources.  

We’ve also built connector patterns that others can use as references and expanded the common data model to incorporate geospatial data, making more data integrations seamless and efficient. Given the importance of the ability to search through time and space when looking at observation data, geospatial ground truth data has been elevated to a first-class component.  

We are also excited to expand support for Bayer’s Climate FieldView as a built-in data source. Once initiated by a farmer, Azure Data Manager for Agriculture provides a straightforward path to retrieving both historic and up-to-date activity files, from which further aggregated insights can be derived. Users can auto-sync planting, application, and harvest activity files from Climate FieldView accounts directly into Azure Data Manager for Agriculture.  

Microsoft and Bayer: Leveraging generative AI to enable interaction with data through language

But it doesn’t stop at data. We believe access to insights and data should be more interactive, intuitive, and entirely human-centric. Imagine, as a farmer, being able to simply ask the question: “Which fields had rain last night?” or “How many acres have been harvested?” and receiving a precise and accurate answer, immediately.  

With new large language model APIs in Azure Data Manager for Agriculture, generative AI in agriculture is now a reality. The large language model APIs enable seamless retrieval of data mapped to farm operations, sensors, weather, and imagery, so that farming-related context and insights can be queried conversationally. These capabilities enable others to build their own agriculture copilots that deliver insights to customers and farmers about disease, yield, labor needs, harvest windows, and more, leveraging actual planning and observational data.  
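
To make the idea concrete, here is a hedged sketch of what asking such a question against an Azure Data Manager for Agriculture instance could look like. The request route, payload shape, and token scope below are illustrative assumptions, not the documented contract; consult the service documentation for the real API surface.

```python
# Hypothetical sketch of a conversational query against the new large language
# model APIs in Azure Data Manager for Agriculture. Route and payload are
# placeholders; authentication uses Azure AD via azure-identity.
import requests
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
token = credential.get_token("https://farmbeats.azure.net/.default")  # assumed scope

endpoint = "https://YOUR-INSTANCE.farmbeats.azure.net"  # your ADMA instance
resp = requests.post(
    f"{endpoint}/llm/query",  # hypothetical route
    headers={"Authorization": f"Bearer {token.token}"},
    json={
        "partyId": "contoso-farm-001",  # hypothetical farmer identifier
        "question": "Which fields had rain last night?",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```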

Bayer is the first partner bringing these large language model capabilities to life with a copilot for agriculture. In its early stages, Bayer is presenting an agriculture copilot and testing multiple scenarios with internal teams to discover where large language model capabilities can add value through the ability to interact with agronomic data using natural language.

The Bayer copilot for agriculture is a querying system that helps end users like dealers and farmers get actionable insights from their farm and environment data. Users ask the AI-powered chatbot about different scenarios and the copilot provides accurate responses almost instantly.

Building on a foundation of innovation 

Azure Data Manager for Agriculture has already enabled incredible evolutions in agriculture with partners like Accenture, Bayer, and Land O’Lakes.  

By integrating disparate data sources and streamlining them into a unified interface, Accenture’s Farm of the Future empowers farmers to track and manage their sustainability initiatives more effectively. Built on Microsoft Azure Data Manager for Agriculture, the solution provides comprehensive oversight of a farm’s sustainability practices. Advanced analytics deliver actionable insights that help farmers optimize resource allocation, minimize environmental impact, and maximize overall agricultural productivity and profitability.  

Bayer’s Climate FieldView platform uses Azure Data Manager for Agriculture’s satellite and weather pipelines and common data model to enable insights on potential yield-limiting factors in growers’ fields. Thanks to Microsoft Azure Data Manager for Agriculture, Bayer can remain focused on building intelligent solutions for growers, rather than investing time and resources in data management. 

Land O’Lakes is using Azure Data Manager for Agriculture to reduce time spent on data integrations, thereby cutting down engineering efforts and costs.

“Our job is to bring all the information together to make sense of it. Azure Data Manager for Agriculture is helping us do that.”
Tom Ryan, President at Truterra, the sustainability division of Land O’Lakes

Land O’Lakes data scientists are now able to derive better insights through comprehensive analytics and intelligent modeling—ultimately supporting more efficient, sustainable farming.  

Learn more about our agriculture solutions

If you’re interested in learning more about our commitment to innovation in agriculture, come visit us at Bayer’s booth (Hall 8, Booth C15) at AGRITECHNICA 2023 in Hanover, Germany, the week of November 12 to 18, 2023. We’ll be presenting several sessions, including:  

November 12, 2023—Recent advancements in large language modelling applied to agriculture: Learn how AI technology can be applied to agriculture through an improved and efficient enhancement to analytics processing, offering increased insights at the speed of a question. This session will discuss what’s available today and what the near-term future holds.

November 13, 2023—Advancements in technology targeting the agriculture and food value chain: Hear various organizational perspectives about recent advancements in agriculture that support data interoperability, improved transparency across agricultural value chains, accelerated farm and food innovation, and the partnerships driving this change. 

For additional information, visit the Microsoft Azure Data Manager for Agriculture website. The future is bright, as generative AI and other analytics solutions enable insights around optimizing resource allocation, minimizing environmental impact, maximizing agricultural productivity, and much more.  

Microsoft Cost Management updates—November 2023

Whether you’re a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you’re spending, where it’s being spent, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Cost Management comes in.

We’re always looking for ways to learn more about your challenges and how Microsoft Cost Management can help you better understand where you’re accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less.

In this month’s edition, we will cover some of the updates from the previous month and give you a preview of what we are planning for Microsoft Ignite 2023. Enjoy reading!

Introducing cost views for Azure Kubernetes Service (AKS)

Improvements to cost management exports

Pricing updates on Azure.com

What’s new in Cost Management Labs

New ways to save money in the Microsoft Cloud

Documentation updates

What’s next

Introducing cost views for Azure Kubernetes Service (AKS)

We know that Kubernetes comes with a lot of cost benefits and continues to gain popularity. Teams run their applications on a single cluster, sharing the infrastructure costs and scaling capacity only when load demands. Talking to our customers, we understand that while this shared computing model is great for cost savings, it makes it hard to gain visibility into granular costs based on Kubernetes entities. For example, teams running multiple applications on a cluster have visibility into the infrastructure costs of the cluster using cost analysis or cost reports; however, they lack visibility into the costs incurred by each application, making cost allocation and optimization very difficult.

To address these challenges, we are thrilled to announce that the preview of Azure Kubernetes Service (AKS) cost views will be available starting on November 14, 2023. You will gain visibility into the costs of namespaces, along with an aggregated view of the costs of assets within the cluster. These views will be available in Cost analysis within the Azure portal, an experience you are already familiar with, making it much easier to analyze and allocate costs for your cluster.

This is just the beginning of our journey to enable granular visibility into AKS costs; we will continue to add capabilities for AKS cost management.
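
For readers who prefer to pull cost data programmatically alongside the new portal views, a minimal sketch against the existing Cost Management Query API follows. The subscription scope is a placeholder, and namespace-level grouping itself remains a portal experience for now, so this sketch groups by resource group instead.

```python
# Sketch: query month-to-date cost grouped by resource group with the
# Cost Management Query API. Scope below is a placeholder subscription.
import requests
from azure.identity import DefaultAzureCredential

SCOPE = "/subscriptions/00000000-0000-0000-0000-000000000000"  # your scope
credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default")

body = {
    "type": "ActualCost",
    "timeframe": "MonthToDate",
    "dataset": {
        "granularity": "None",
        "aggregation": {"totalCost": {"name": "Cost", "function": "Sum"}},
        "grouping": [{"type": "Dimension", "name": "ResourceGroup"}],
    },
}
resp = requests.post(
    f"https://management.azure.com{SCOPE}/providers/Microsoft.CostManagement/query",
    params={"api-version": "2023-03-01"},
    headers={"Authorization": f"Bearer {token.token}"},
    json=body,
    timeout=30,
)
resp.raise_for_status()
for row in resp.json()["properties"]["rows"]:
    print(row)  # [cost, resource group, currency]
```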

Please look for an Azure update regarding this launch in the week of November 13, 2023.

Improvements to cost management exports 

Announcing an improved exports experience designed to streamline your FinOps practice. With automatic exports of additional cost-impacting datasets, the updated exports are optimized to handle large datasets while enhancing the user experience.   

You can export additional datasets, including price sheets, reservation recommendations, reservation details, and reservation transactions. Furthermore, you can download cost and usage details using the open-source FinOps Open Cost and Usage Specification (FOCUS) format, which combines actual and amortized costs and reduces data processing times and storage and compute costs.  
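
As a concrete illustration, a scheduled export can already be created through the Cost Management Exports REST API; the sketch below uses the long-standing ActualCost dataset with placeholder identifiers, since the exact request shape for the new FOCUS-format dataset is part of the preview and may differ.

```python
# Sketch: create a daily month-to-date cost export to a storage account via
# the Cost Management Exports REST API. Scope, storage ID, and dates are
# placeholders; the new FOCUS dataset (in preview) is not shown here.
import requests
from azure.identity import DefaultAzureCredential

SCOPE = "/subscriptions/00000000-0000-0000-0000-000000000000"  # your scope
STORAGE_ID = (SCOPE + "/resourceGroups/finops-rg/providers"
              "/Microsoft.Storage/storageAccounts/finopsexports")  # placeholder

credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default")

export = {
    "properties": {
        "schedule": {
            "status": "Active",
            "recurrence": "Daily",
            "recurrencePeriod": {"from": "2023-11-15T00:00:00Z",
                                 "to": "2024-11-15T00:00:00Z"},
        },
        "format": "Csv",
        "deliveryInfo": {"destination": {"resourceId": STORAGE_ID,
                                         "container": "exports",
                                         "rootFolderPath": "daily"}},
        "definition": {"type": "ActualCost", "timeframe": "MonthToDate"},
    }
}
resp = requests.put(
    f"https://management.azure.com{SCOPE}/providers/Microsoft.CostManagement"
    "/exports/daily-actual-cost",
    params={"api-version": "2021-10-01"},  # a documented stable version
    headers={"Authorization": f"Bearer {token.token}"},
    json=export,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["id"])
```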

With the enhanced user interface, you can easily create multiple exports for various cost management datasets using a single, simplified create experience. Moreover, you have the option to selectively rerun an existing Export job for a historical date range instead of creating a new one-time export of the required date range.  

FinOps datasets can be large and challenging to manage. You can handle large datasets more easily through features like file partitioning, which breaks a file into manageable chunks; file overwrite for daily exports, which replaces the previous day’s file with an updated file each day; and the upcoming support for Parquet format and file compression. These optimizations improve file manageability, reduce download latency, and help you save on storage and network charges.  

You can choose the latest or any of the previous dataset schema versions during the export creation. This ensures that the data processing layers that you have built on top of these datasets can be reused without compromising on the latest API functionality.  

You can enhance security and compliance by configuring exports to storage accounts behind a firewall which provides access control for the public endpoint of the storage account.  

The exports preview will be rolled out over the coming months. If you’re interested in getting access to the upcoming preview, sign up today. Also, stay tuned for updates in Azure updates to remain informed about the latest enhancements and features. 

Pricing updates on Azure.com

Many new additions have made it into our Azure pricing experiences, and we’re thrilled to share with you some notable updates that will make it even easier for you to estimate the costs of your solutions. Here are some highlights:

Azure Container Apps now includes Savings Plan, giving you more options to save on your costs.

We’re excited to offer limited-time discounts on select Linux Dv5 and Ev5 virtual machine (VM) series in the Sweden Central region and on Dv3 VMs in the West US region, with up to 50 percent savings on one-year reserved pricing.

We’ve launched pricing pages for two new services: Microsoft Playwright Testing and HDInsight on AKS. Additionally, the Form Recognizer page has been rebranded to Azure AI Document Intelligence.

We’ve added four new services to the Azure Pricing Calculator, including Dev Box, Microsoft Playwright Testing, Azure AI Content Safety, and Microsoft Graph Data Connect.

Managed Lustre and the Bpsv2, Bsv2, and Basv2 VM series now show their new generally available prices.

Azure Communication Services has revamped their pricing page and added pricing for several new services, including Advanced Messaging (in preview), Job routing, and number lookup.

Many services have exciting updates too, including Azure AI Speech (new Text Batch pricing offers), Azure VMware Solution (five-year reserved instance (RI) pricing added), Azure Notification Hubs (new availability zone feature), App Service (new Isolated v2 Memory Optimized SKUs), Microsoft Defender (malware scanning add-on offer), Azure OpenAI (new AI models added), Azure NetApp Files (cool tier added), Ubuntu Pro Linux (new license offer), Event Grid (added Standard (namespace) capabilities), ExpressRoute (added the Traffic Collector offer), Azure Cache for Redis (new Enterprise SKUs), Azure Arc (new Extended Security Updates), Data Lake Storage Gen2 (added GZRS and RA-GZRS redundancies), SQL DB (a new Freemium offer), Microsoft Fabric (new “OneLake” storage offer), Azure Databricks (new pre-purchase plan and new compute SKUs available for Photon jobs), API Management (new Standard v2 offers), Azure Monitor (new Commitment Tiers), Language Service (new Custom NER and Text Analytics for health offers), and Cosmos DB (new multi-node storage options).

We’re committed to continually improving our pricing tools to make them more accessible and user-friendly for our customers. We hope these changes will help you estimate the costs for your Azure solutions more accurately. As always, we welcome your feedback and suggestions for future improvements. Thank you for choosing Azure as your cloud platform!

What’s new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what’s coming in Microsoft Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

Update: Drill down in Cost analysis smart views—Now enabled by default in the public portal.  

Drill into your cost data with one click using Cost analysis smart views. You can drill into a row to view the full details, view related resources from the context menu (three dots), open the resource to manage it from the Go to menu, remove filters using the Customize command, and use the Back command to undo a change. You can enable this option from the Try preview menu.   

Currency selection in Cost analysis smart views: View your non-USD charges in USD, or switch between the currencies you have charges in to view the total cost for that currency only. To change currency, select Customize at the top of the view and select the currency you would like to apply. Currency selection is not applicable to those with only USD charges. Currency selection is enabled by default in Labs. If it is not enabled for you, enable it from the Try preview menu. 

Streamlined Cost Management menu: Organizes Cost Management tools into related sections for reporting, monitoring, optimization, and configuration settings. You can enable this option from the Try preview menu. 

Recommendations view: View a summary of cost recommendations that help you optimize your Azure resources in the cost analysis preview. You can opt in using the Try preview menu. 

Forecast in Cost analysis smart views: Shows your forecast cost for the period at the top of the Cost analysis preview. You can opt in using Try preview.

Group related resources in Cost analysis smart views: Group related resources, like disks under virtual machines or web apps under App Service plans, by adding a “cm-resource-parent” tag to the child resources with a value of the parent resource ID (see the sketch after this list). 

Charts in Cost analysis smart views: View your daily or monthly cost over time in Cost analysis smart views. You can opt in using Try preview.

View cost for your resources: The cost for your resources is one click away from the resource overview in the preview portal. Just select View cost to quickly jump to the cost of that resource. 
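
Here is a minimal sketch of applying that grouping tag with the ARM Tags API from Python; the resource IDs below are placeholders for your own child and parent resources.

```python
# Sketch: tag a child resource (e.g., a disk) with "cm-resource-parent"
# pointing at its parent VM, so Cost analysis smart views group them together.
import requests
from azure.identity import DefaultAzureCredential

child_id = ("/subscriptions/00000000-0000-0000-0000-000000000000"
            "/resourceGroups/demo-rg/providers/Microsoft.Compute/disks/demo-disk")
parent_id = ("/subscriptions/00000000-0000-0000-0000-000000000000"
             "/resourceGroups/demo-rg/providers/Microsoft.Compute"
             "/virtualMachines/demo-vm")

credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default")

# PATCH with operation "Merge" preserves the resource's existing tags.
resp = requests.patch(
    f"https://management.azure.com{child_id}"
    "/providers/Microsoft.Resources/tags/default",
    params={"api-version": "2022-09-01"},
    headers={"Authorization": f"Bearer {token.token}"},
    json={"operation": "Merge",
          "properties": {"tags": {"cm-resource-parent": parent_id}}},
    timeout=30,
)
resp.raise_for_status()
```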

Try Cost Management labs today.

New ways to save money in the Microsoft Cloud

Here are new and updated offers you might be interested in for your cost optimization needs.

Generally available: Zone Redundant Storage for Azure Disks is now available in more regions.

Generally Available: Azure Dedicated Host – Resize.

Public preview: Announcing the new Azure Bastion Developer SKU.

The availability of Azure compute reservation exchanges has been extended until at least July 1, 2024.

Azure Container Apps is now eligible for Azure savings plan for compute.

Documentation updates

Here are a few documentation updates for cost management you might be interested in:

Savings plan scopes.

Buy an Azure savings plan.

Understand usage details fields.

Migrate from Azure Enterprise Reporting to Microsoft Cost Management APIs overview.

Transfer Azure product billing ownership to your Microsoft Partner Agreement (MPA).

Azure product transfer hub—Microsoft Cost Management.

Self-service exchanges and refunds for Azure Reservations.

Want to keep an eye on all documentation updates? Check out the Cost Management and Billing documentation change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request. You can also submit a GitHub issue. We welcome and appreciate all contributions!

What’s next?

These are just a few of the big updates from last month. Don’t forget to check out the previous Microsoft Cost Management updates. We’re always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow Microsoft Cost Management on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. You can also share ideas and vote up others in the Cost Management feedback forum or join the research panel to participate in a future study and help shape the future of Microsoft Cost Management.

Best wishes from the Microsoft Cost Management team.

Fostering AI infrastructure advancements through standardization

Since joining in 2014, Microsoft has actively engaged with the Open Compute Project (OCP) community to apply the benefits of open-source collaboration to hardware, resulting in hyperscale innovation across the industry. At last year’s OCP Global Summit, we introduced Project Caliptra, a new security offering in partnership with industry leaders including Google, AMD, and NVIDIA, and a new modular chassis design (Mt. Shasta) to bring form factor, power, and management interface into one converged design. These built upon many other contributions in the areas of rack-level architecture, security, and systems-level design.

With the rise of generative AI, the computing industry now faces a significant challenge: to evolve our underlying building blocks to meet the increasing infrastructure demands. At this year’s OCP Global Summit, Microsoft will share our latest contributions to supercomputing architecture and hardware intended to support this new era through standardization and innovation.

GPU and accelerator standardization for rapid adoption in hyperscaler fleets

Thanks to the growing number of generative AI applications, datacenters have increasingly adopted graphics processing units (GPUs) and accelerators. The resulting range of new products and corresponding integration requirements have created a new need for hyperscalers to invest in bespoke processes and tools to adapt various AI hardware for their fleets. We are pleased to join a collaborative effort with AMD, Google, Meta, and NVIDIA to create OCP standard requirements for GPU management to address this challenge.

Standardization allows suppliers to collaborate seamlessly with hyperscalers and enables hyperscalers to host hardware from various suppliers in their datacenters within an accelerated timeframe. This new OCP initiative focuses on two models of accelerators and GPU cards: Universal Base Board and Discrete. Initial specifications have been driven through different OCP workgroups focused on GPU firmware update requirements, interfaces, and Reliability, Availability, and Serviceability (RAS) requirements for hardware.

This is a pioneering approach that treats compliance as a fundamental catalyst for driving innovation via standardized requirements in a common OCP tool, which provides acceptance testing for accelerator management in cloud datacenters. 

Optimizing AI performance and efficiency with MX data formats

As AI continues to be applied to every aspect of our lives, the need for more efficient, scalable, and cost-effective AI systems is evident. This includes optimization across the AI stack, including advancements in narrow-precision AI data formats to address the rapidly growing complexity and requirements of current AI models. Advances in AI hardware technology such as these narrow-precision formats and associated optimized algorithms create opportunities like never before to address fundamental challenges in maintaining scalable and sustainable AI solutions.

Earlier this year, Microsoft partnered with AMD, Arm, Intel, Meta, NVIDIA, and Qualcomm to form the Microscaling Formats (MX) Alliance with the goal of creating and standardizing next-generation 6- and 4-bit data types for AI training and inferencing. Building on years of design space exploration and research at Microsoft, Microscaling technology enables sub-8-bit formats while also enhancing the strength and ease of use of existing 8-bit formats such as FP8 and INT8. These advancements also contribute to broader sustainability goals, such as reducing the environmental impact of AI technologies as demand continues to grow, by improving the energy efficiency of AI in datacenters as well as on many AI endpoints. 

The Microscaling Specification v1.0 released through OCP introduces four common data formats (MXFP8, MXFP6, MXFP4, and MXINT8) that are compatible with current AI stacks, support implementation flexibility across both hardware and software, and enable fine-grain Microscaling at the hardware level. Extensive studies from Microsoft’s AI team confirm that MX formats can be easily deployed for many diverse, real-world cases such as language models, computer vision, and recommender systems. MX technology also enables LLM pre-training at 6- and 4-bit precisions without modifications to conventional training recipes. In addition to the initial specification, a whitepaper and emulation libraries have also been published with more details. 
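
To illustrate what microscaling means in practice, here is a conceptual NumPy sketch of an MXINT8-style quantizer, independent of the published emulation libraries: each block of 32 values shares one power-of-two scale, mirroring the spec’s E8M0 shared exponent, while the per-block scaling heuristic used here is our own simplification rather than the normative algorithm.

```python
# Conceptual sketch (not the official emulation library) of MXINT8-style
# microscaling: each block of k elements shares a single power-of-two scale,
# and elements are stored as 8-bit integers. Block size k=32 follows the
# Microscaling Specification v1.0.
import numpy as np

def quantize_mxint8(x: np.ndarray, k: int = 32):
    """Quantize a 1-D float array into per-block int8 values plus shared scales."""
    assert x.size % k == 0, "pad input to a multiple of the block size"
    blocks = x.reshape(-1, k)
    # Shared scale: a power of two mapping each block's max magnitude
    # into the int8 representable range.
    max_mag = np.abs(blocks).max(axis=1, keepdims=True)
    max_mag[max_mag == 0] = 1.0
    shared_exp = np.floor(np.log2(max_mag)) - 6  # simplified heuristic
    scale = 2.0 ** shared_exp
    ints = np.clip(np.round(blocks / scale), -128, 127).astype(np.int8)
    return ints, scale

def dequantize_mxint8(ints: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return (ints.astype(np.float32) * scale).reshape(-1)

x = np.random.randn(128).astype(np.float32)
q, s = quantize_mxint8(x)
err = np.abs(dequantize_mxint8(q, s) - x).max()
print(f"max abs reconstruction error: {err:.5f}")
```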

OCP-SAFE: Strengthening datacenter security and transparency

Today’s datacenter infrastructure includes a diverse array of processing devices and peripherals that run firmware. Ensuring the security of this firmware is of paramount importance, demanding rigorous verification of the code quality and supply chain provenance. 

To meet the unique security demands of Cloud Service Providers and other market segments, many datacenter providers have opted for in-house or third-party security audits on device firmware. However, this approach often confines security assurances to individual cloud providers. 

To address this challenge, Microsoft and Google collaborated with OCP to introduce the OCP Security Appraisal Framework Enablement (OCP-SAFE). This framework standardizes security requirements and integrates Security Review Providers (SRP) to offer independent assurances, empowering hardware manufacturers to meet security standards across markets while enhancing product quality. 

OCP-SAFE also opens doors for end-users by providing concise assessment results, eliminating barriers to obtaining hardware security assurance. Datacenter operators and consumers alike can utilize these assessments to make informed deployment decisions about the security of components. Several companies, including AMD and SK-Hynix, have embraced OCP-SAFE, publishing concise security audits.  

For more information on OCP-SAFE review, visit our technical blog.  

We welcome attendees of this year’s OCP Global Summit to visit Microsoft at booth #B7 to explore our latest cloud hardware demonstrations featuring contributions with partners in the OCP community, including:  

Virtual Client library for Azure: an open source, standardized library of industry benchmarks and cloud customer workloads from Microsoft.

Caliptra 1.0: The newest design for our specification of Caliptra, an open source, reusable silicon IP block for Root of Trust for Measurement (RTM).

Shasta Open Rack V3 Modular Chassis: The latest open source modular chassis design for the Shasta Open Rack.

QSFPDD 1.6T: A new backwards-compatible form factor specification providing aggregate bandwidth capacity of 1.6 Tbps and mated performance at 224 Gbps using PAM4.

Connect with Microsoft at the OCP Global Summit 2023 and beyond:

Visit Microsoft at the OCP Global Summit at booth #B7.

Check out sessions delivered by Microsoft and partners at OCP’s 2023 Global Summit.

Take a virtual tour of Microsoft datacenters.

Learn more about Microsoft’s global infrastructure.

Learn more about cloud hardware innovation at Microsoft.


Introducing Azure Bastion Developer: Secure and cost-effective access to your Azure Virtual Machines

Microsoft Azure is constantly evolving to meet the needs of its growing user base. In response to the feedback and requirements of developers, we have announced a new SKU for Azure Bastion: Bastion Developer. This service, now in public preview, will be a game-changer for developers seeking secure, cost-effective, and hassle-free connectivity to their Azure Virtual Machines. In this blog post, we’ll explore what Azure Bastion Developer is, the problems this new SKU addresses, and why it’s a must-try solution for developers.

What is Azure Bastion Developer?

Azure Bastion Developer is a new low-cost, zero-configuration, always-on SKU of the Azure Bastion service. Its primary mission is to provide secure-by-default Remote Desktop Protocol (RDP) and Secure Shell (SSH) access to Azure Virtual Machines, allowing users to establish secure connections to a single Virtual Machine at a time without the need for additional network configurations or public IP addresses on Virtual Machines. This service is designed to simplify and enhance the process of accessing your Azure Virtual Machines by eliminating the complexities, high costs, and security concerns often associated with alternative methods.

Addressing developer pain points

Azure Bastion Developer has been developed with the aim of addressing three common issues that developers encounter when connecting to Azure Virtual Machines:

1. Discovery

When developers create standalone Virtual Machines, they may not actively seek out Azure Bastion, and it might not be readily apparent during the Virtual Machine creation process. While IT professionals are familiar with the concept of a bastion host or jump-box server, the average Azure user may not be. This could lead to the use of less secure public IP-based access methods. Azure Bastion Developer solves this problem by providing secure and seamless access directly in the Virtual Machine blade. In the coming months, Bastion Developer will populate as the recommended connectivity option in the Virtual Machine connect experience for available regions.

2. Usability

Setting up Azure Bastion has traditionally required users to deploy a new resource and follow a series of configuration steps, including the creation of a dedicated subnet. While these steps might be manageable for technically savvy users, they can be complex and time-consuming for many. Azure Bastion Developer simplifies the process by offering an easy-to-use, zero-configuration solution. Users can opt-in to use it during Virtual Machine connection, making secure access a breeze.

3. Cost

Azure Bastion Basic, while a powerful tool, may be a potentially expensive choice for developers who spend a few hundred dollars or less in Azure each month, leading them to connect with less secure public IP-based options. Azure Bastion Developer addresses this concern by providing an option at a more affordable price point than a public IP. This cost-effective pricing will make Azure Bastion Developer the default private connectivity option in Azure, enabling developers to enjoy secure access without breaking the bank. The public preview of Bastion Developer will be free, with more details on pricing to come when it is generally available.

Connectivity Options with Azure Bastion Developer

Portal-based access (public preview). Bastion Developer will offer support for RDP connections for Windows Virtual Machines and SSH connections for Linux Virtual Machines in the Azure portal.

Native client-based access for SSH (roadmap). Bastion Developer will offer support for SSH connections for Linux Virtual Machines via Azure Command Line Interface (CLI) in the coming months.

Feature comparison of Azure Bastion offerings

Bastion Developer will be a lightweight SKU of the Bastion service, allowing a single connection per user directly through the Virtual Machines connect experience. Bastion Developer is ideal for Dev/Test users who want to securely connect to their Virtual Machines without the need for additional features or scaling. The feature matrix below outlines the differences between Bastion Developer and Bastion Basic and Standard SKUs.

| Feature | Developer | Basic | Standard |
| --- | --- | --- | --- |
| Private connectivity to Virtual Machines | Yes | Yes | Yes |
| Dedicated host agent | No | Yes | Yes |
| Support for multiple connections per user | No | Yes | Yes |
| Linux Virtual Machine private key in AKV | No | Yes | Yes |
| Support for Network Security Groups | No | Yes | Yes |
| Audit logging | No | Yes | Yes |
| Kerberos support | No | Yes | Yes |
| VNET peering support | No | Yes | Yes |
| Host scaling (2-50 instances) | No | No | Yes |
| Custom port and protocol | No | No | Yes |
| Native SSH support via Azure CLI | Roadmap | Roadmap | Yes |
| Native RDP support via Azure CLI | No | No | Yes |
| Azure Active Directory login for RDP/SSH via native client | No | No | Yes |
| IP-based connection | No | No | Yes |
| Shareable links | No | No | Yes |

How to get started

We invite you to preview Azure Bastion Developer in your cloud environment.

Navigate to the Azure portal.

Deploy a Windows or Linux Virtual Machine in one of the regions below. Note that Bastion Developer is currently only available in the following regions:

Central US EUAP

East US 2 EUAP

West Central US

North Central US

West US

North Europe

Navigate to the Bastion tab in the Virtual Machine blade and click Deploy Bastion Developer. (Bastion Basic and Standard deployments will be moved under “Dedicated Deployment Options”).

Once your Bastion Developer resource is deployed, enter your Virtual Machine username and password and select Connect to securely connect to your Virtual Machine in the browser.

Learn to configure Bastion Developer.
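
For the curious, the sketch below guesses at what a programmatic deployment could look like through the ARM REST API. The Microsoft.Network/bastionHosts resource type and the Developer SKU name come from this announcement, but the api-version and the properties payload (a bare virtual network reference, with no dedicated subnet or public IP) are assumptions about the preview; treat the portal flow above as the supported path today.

```python
# Purely illustrative: a guess at deploying Bastion Developer via ARM REST.
# Subscription, resource group, and VNet IDs are placeholders.
import requests
from azure.identity import DefaultAzureCredential

SUB = "00000000-0000-0000-0000-000000000000"  # placeholder subscription
RG = "demo-rg"                                # placeholder resource group
VNET_ID = (f"/subscriptions/{SUB}/resourceGroups/{RG}"
           "/providers/Microsoft.Network/virtualNetworks/demo-vnet")

credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default")

resp = requests.put(
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    "/providers/Microsoft.Network/bastionHosts/demo-bastion-dev",
    params={"api-version": "2023-06-01"},  # assumed api-version
    headers={"Authorization": f"Bearer {token.token}"},
    json={
        "location": "northeurope",               # one of the preview regions
        "sku": {"name": "Developer"},            # the new Developer SKU
        "properties": {"virtualNetwork": {"id": VNET_ID}},  # assumed shape
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["properties"].get("provisioningState"))
```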

Learn more about Azure Bastion Developer

Azure Bastion Developer is a groundbreaking solution that simplifies secure access to Virtual Machines for developers. By addressing the common issues of discovery, usability, and cost, Microsoft Azure is once again demonstrating its commitment to user satisfaction and innovation. With Azure Bastion Developer, you can enjoy secure-by-default access to your Azure Virtual Machines without the complexity and high costs associated with traditional solutions. Try it out today and experience a new level of convenience and security in your Azure development workflow.