New models added to the Phi-3 family, available on Microsoft Azure

Read more announcements from Azure at Microsoft Build 2024: New ways Azure helps you build transformational AI experiences and The new era of compute powering Azure AI solutions.

At Microsoft Build 2024, we are excited to add new models to the Phi-3 family of small, open models developed by Microsoft. We are introducing Phi-3-vision, a multimodal model that brings together language and vision capabilities. You can try Phi-3-vision today.

Phi-3-small and Phi-3-medium, announced earlier, are now available on Microsoft Azure, giving developers models for generative AI applications that require strong reasoning under limited compute and in latency-bound scenarios. Lastly, the previously available Phi-3-mini and Phi-3-medium are now also available through Azure AI’s models-as-a-service offering, allowing users to get started quickly and easily.

The Phi-3 family

Phi-3 models are the most capable and cost-effective small language models (SLMs) available, outperforming models of the same size and the next size up across a variety of language, reasoning, coding, and math benchmarks. They are trained using high-quality training data, as explained in Tiny but mighty: The Phi-3 small language models with big potential. The availability of Phi-3 models expands the selection of high-quality models for Azure customers, offering more practical choices as they compose and build generative AI applications.

There are four models in the Phi-3 model family; each model is instruction-tuned and developed in accordance with Microsoft’s responsible AI, safety, and security standards to ensure it’s ready to use off-the-shelf.

Phi-3-vision is a 4.2B parameter multimodal model with language and vision capabilities.

Phi-3-mini is a 3.8B parameter language model, available in two context lengths (128K and 4K).

Phi-3-small is a 7B parameter language model, available in two context lengths (128K and 8K).

Phi-3-medium is a 14B parameter language model, available in two context lengths (128K and 4K).

Find all Phi-3 models on Azure AI and Hugging Face.

Phi-3 models have been optimized to run across a variety of hardware. Optimized variants are available with ONNX Runtime and DirectML, providing developers with support across a wide range of devices and platforms, including mobile and web deployments. Phi-3 models are also available as NVIDIA NIM inference microservices with a standard API interface that can be deployed anywhere, and they have been optimized for inference on NVIDIA GPUs and Intel accelerators.
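As a rough illustration of running Phi-3 locally, the sketch below formats a prompt with the Phi-3 chat template and shows, commented out, how it might be fed to an ONNX-optimized model through the onnxruntime-genai package. The model path and the commented API calls are assumptions for illustration, not a definitive reference.

```python
from typing import Optional

def phi3_prompt(user_message: str, system_message: Optional[str] = None) -> str:
    """Format a single-turn prompt using the Phi-3 chat template."""
    parts = []
    if system_message:
        parts.append(f"<|system|>\n{system_message}<|end|>")
    parts.append(f"<|user|>\n{user_message}<|end|>")
    parts.append("<|assistant|>")
    return "\n".join(parts)

# Hypothetical local inference with the ONNX-optimized weights via the
# onnxruntime-genai package (model path and call shape are assumptions):
#
#   import onnxruntime_genai as og
#   model = og.Model("phi-3-mini-4k-instruct-onnx")
#   tokenizer = og.Tokenizer(model)
#   # ...encode phi3_prompt(...) and run the generator loop...

prompt = phi3_prompt("Summarize the Phi-3 family in one sentence.")
```

The same template string works regardless of which runtime ultimately serves the model, which is why it is factored out as a pure helper here.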

It’s inspiring to see how developers are using Phi-3 to do incredible things. ITC, an Indian conglomerate, has built a copilot that lets Indian farmers ask questions about their crops in their own vernacular. Khan Academy currently leverages Azure OpenAI Service to power its Khanmigo for teachers pilot and is experimenting with Phi-3 to improve math tutoring in an affordable, scalable, and adaptable manner. Healthcare software company Epic is also looking to use Phi-3 to summarize complex patient histories more efficiently. Seth Hain, senior vice president of R&D at Epic, explains: “AI is embedded directly into Epic workflows to help solve important issues like clinician burnout, staffing shortages, and organizational financial challenges. Small language models, like Phi-3, have robust yet efficient reasoning capabilities that enable us to offer high-quality generative AI at a lower cost across our applications that help with challenges like summarizing complex patient histories and responding faster to patients.”

Digital Green, used by more than 6 million farmers, is introducing video to their AI assistant, Farmer.Chat, adding to their multimodal conversational interface. “We’re excited about leveraging Phi-3 to increase the efficiency of Farmer.Chat and to enable rural communities to leverage the power of AI to uplift themselves,” said Rikin Gandhi, CEO, Digital Green.

Bringing multimodality to Phi-3

Phi-3-vision is the first multimodal model in the Phi-3 family, bringing together text and images: it can reason over real-world images and extract and reason over text within them. It has also been optimized for chart and diagram understanding and can be used to generate insights and answer questions. Phi-3-vision builds on the language capabilities of Phi-3-mini, continuing to pack strong language and image reasoning quality into a small model.

Phi-3-vision can generate insights from charts and diagrams.
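To make the image-plus-text flow concrete, here is a minimal sketch of how a chart question might be assembled in the OpenAI-style chat format commonly accepted by Azure chat completions endpoints. The deployment name and payload shape are illustrative assumptions, not Phi-3-vision’s definitive wire format.

```python
import base64

def chart_question_payload(image_bytes: bytes, question: str) -> dict:
    """Assemble a multimodal chat request body (illustrative shape only)."""
    # Images are commonly passed inline as a base64 data URL.
    data_url = "data:image/png;base64," + base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "phi-3-vision",  # assumed deployment name
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
    }

payload = chart_question_payload(b"fake-png-bytes", "What trend does this chart show?")
```

The text part carries the question and the image part carries the chart; the model answers in plain text, as with a text-only chat request.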

Groundbreaking performance at a small size

As previously shared, Phi-3-small and Phi-3-medium outperform language models of the same size as well as those that are much larger.

Phi-3-small with only 7B parameters beats GPT-3.5T across a variety of language, reasoning, coding, and math benchmarks.1

Phi-3-medium with 14B parameters continues the trend and outperforms Gemini 1.0 Pro.2

Phi-3-vision with just 4.2B parameters continues that trend and outperforms larger models such as Claude-3 Haiku and Gemini 1.0 Pro V across general visual reasoning tasks, OCR, table, and chart understanding tasks.3

All reported numbers are produced with the same pipeline to ensure that the numbers are comparable. As a result, these numbers may differ from other published numbers due to slight differences in the evaluation methodology. More details on benchmarks are provided in our technical paper.

See detailed benchmarks in the footnotes of this post.

Prioritizing safety

Phi-3 models were developed in accordance with the Microsoft Responsible AI Standard and underwent rigorous safety measurement and evaluation, red-teaming, sensitive use review, and adherence to security guidance to help ensure that these models are responsibly developed, tested, and deployed in alignment with Microsoft’s standards and best practices.

Phi-3 models are also trained using high-quality data and were further improved with safety post-training, including reinforcement learning from human feedback (RLHF), automated testing and evaluation across dozens of harm categories, and manual red-teaming. Our approach to safety training and evaluation is detailed in our technical paper, and we outline recommended uses and limitations in the model cards.

Finally, developers using the Phi-3 model family can also take advantage of a suite of tools available in Azure AI to help them build safer and more trustworthy applications.

Choosing the right model

With the evolving landscape of available models, customers are increasingly looking to leverage multiple models in their applications; choosing the right one depends on the use case and business needs.

Small language models are designed to perform well for simpler tasks, are more accessible and easier to use for organizations with limited resources, and they can be more easily fine-tuned to meet specific needs. They are well suited for applications that need to run locally on a device, where a task doesn’t require extensive reasoning and a quick response is needed.

The choice among Phi-3-mini, Phi-3-small, and Phi-3-medium depends on the complexity of the task and available computational resources. They can be employed across a variety of language understanding and generation tasks such as content authoring, summarization, question answering, and sentiment analysis. Beyond traditional language tasks, these models have strong reasoning and logic capabilities, making them good candidates for analytical tasks. The longer context window available across all models enables taking in and reasoning over large text content—documents, web pages, code, and more.

Phi-3-vision is great for tasks that require reasoning over image and text together. It is especially good for OCR tasks including reasoning and Q&A over extracted text, as well as chart, diagram, and table understanding tasks.

Get started today

To experience Phi-3 for yourself, start by playing with the models in the Azure AI Playground. Learn more about building with and customizing Phi-3 for your scenarios using Azure AI Studio.

Footnotes

1Table 1: Phi-3-small with only 7B parameters

2Table 2: Phi-3-medium with 14B parameters

3Table 3: Phi-3-vision with 4.2B parameters

The post New models added to the Phi-3 family, available on Microsoft Azure appeared first on Azure Blog.
Source: Azure

From code to production: New ways Azure helps you build transformational AI experiences

We’re witnessing a critical turning point in the market as AI moves from the drawing boards of innovation into the concrete realities of everyday life. The leap from potential to practical application marks a pivotal chapter, and you, as developers, are key to making it happen.

The news at Build is focused on the top demands we’ve heard from all of you as we’ve worked together to turn this promise of AI into reality:

Empowering every developer to move with greater speed and efficiency, using the tools you already know and love.

Expanding and simplifying access to the AI, data, and application platform services you need to be successful so you can focus on building transformational AI experiences.

And, helping you focus on what you do best—building incredible applications—with responsibility, safety, security, and reliability features, built right into the platform. 

I’ve been building software products for more than two decades now, and I can honestly say there’s never been a more exciting time to be a developer. What was once a distant promise is now manifesting—and not only through the type of apps that are possible, but how you can build them.

With Microsoft Azure, we’re meeting you where you are today—and paving the way to where you’re going. So let’s jump right into some of what you’ll learn over the next few days. Welcome to Microsoft Build 2024!

Create the future with Azure AI: offering you tools, model choice, and flexibility  

The number of companies turning to Azure AI continues to grow as the list of what’s possible expands. We’re helping more than 50,000 companies around the globe achieve real business impact using it—organizations like Mercedes-Benz, Unity, Vodafone, H&R Block, PwC, SWECO, and so many others.  

To make it even more valuable, we continue to expand the range of models available to you and simplify the process for you to find the right models for the apps you’re building. You can learn more about all Azure AI updates we’re announcing this week over on the Tech Community blog. 

Azure AI Studio, a key component of the copilot stack, is now generally available. The pro-code platform empowers responsible generative AI development, including the development of your own custom copilot applications. The seamless development approach includes a friendly user interface (UI) and code-first capabilities, including Azure Developer CLI (AZD) and AI Toolkit for VS Code, enabling developers to choose the most accessible workflow for their projects.

Developers can use Azure AI Studio to explore AI tools; orchestrate multiple interoperating APIs and models; ground models in their own data using retrieval-augmented generation (RAG) techniques; test and evaluate models for performance and safety; and deploy at scale with continuous monitoring in production.
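The grounding step that RAG performs can be sketched end to end with toy data: retrieve the most relevant snippet by cosine similarity, then prepend it to the prompt the model sees. A real Azure AI Studio flow would use an embedding model and a vector index; both are stubbed out here with hand-written vectors, so this is a minimal sketch of the technique, not the product API.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy corpus with hand-written stand-in embeddings; a real app would call an
# embedding model and store the vectors in a vector index.
corpus = {
    "Phi-3-mini has 3.8B parameters.": [0.9, 0.1, 0.0],
    "Azure AI Studio is generally available.": [0.1, 0.9, 0.0],
}

def ground_prompt(question: str, query_embedding) -> str:
    """Retrieve the closest snippet and prepend it as context."""
    best = max(corpus, key=lambda doc: cosine(corpus[doc], query_embedding))
    return f"Context: {best}\n\nQuestion: {question}"

prompt = ground_prompt("How big is Phi-3-mini?", [1.0, 0.0, 0.0])
```

The grounded prompt is then what gets sent to the chat model, so its answer can cite retrieved facts rather than rely on parametric memory alone.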

Empowering you with a broad selection of small and large language models  

Our model catalog is the heart of Azure AI Studio. With more than 1,600 models available, we continue to innovate and partner broadly to bring you the best selection of frontier and open large language models (LLMs) and small language models (SLMs) so you have flexibility to compare benchmarks and select models based on what your business needs. And, we’re making it easier for you to find the best model for your use case by comparing model benchmarks, like accuracy and relevance.

I’m excited to announce that OpenAI’s latest flagship model, GPT-4o, is now generally available in Azure OpenAI Service. This groundbreaking multimodal model integrates text, image, and audio processing in a single model and sets a new standard for generative and conversational AI experiences. Pricing for GPT-4o is $5 per 1M tokens for input and $15 per 1M tokens for output.
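Based on the listed prices ($5 per million input tokens, $15 per million output tokens), a small helper can estimate the cost of a single call from its token counts. The rates below are the ones quoted above and may change over time.

```python
GPT4O_INPUT_PER_MTOK = 5.00    # USD per 1M input tokens, as quoted above
GPT4O_OUTPUT_PER_MTOK = 15.00  # USD per 1M output tokens, as quoted above

def estimate_gpt4o_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one GPT-4o call from token counts."""
    return (input_tokens / 1_000_000) * GPT4O_INPUT_PER_MTOK \
         + (output_tokens / 1_000_000) * GPT4O_OUTPUT_PER_MTOK

# A ~2,000-token prompt with a ~500-token reply:
cost = estimate_gpt4o_cost(2_000, 500)  # 0.01 + 0.0075 = 0.0175 USD
```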

Earlier this month, we enabled GPT-4 Turbo with Vision through Azure OpenAI Service. With these new models, developers can build apps with inputs and outputs that span text, images, and more, for a richer user experience.

We’re announcing new models through Models-as-a-Service (MaaS) in Azure AI Studio: the leading Arabic language model Core42 JAIS and TimeGen-1 from Nixtla are now available in preview. Models from AI21, Bria AI, Gretel Labs, NTT DATA, and Stability AI, as well as Cohere Rerank, are coming soon.

Phi-3: Redefining what’s possible with SLMs

At Build we’re announcing Phi-3-small, Phi-3-medium, and Phi-3-vision, a new multimodal model, in the Phi-3 family of small language models (SLMs) developed by Microsoft. Phi-3 models are powerful, cost-effective, and optimized for resource-constrained environments, including on-device, edge, and offline inference, and for latency-bound scenarios where fast response times are critical.

Introducing Phi-3: Groundbreaking performance at a small size

Sized at 4.2 billion parameters, Phi-3-vision supports general visual reasoning tasks and chart/graph/table reasoning. The model offers the ability to input images and text, and to output text responses. For example, users can ask questions about a chart or ask an open-ended question about specific images. Phi-3-mini and Phi-3-medium are also now generally available as part of Azure AI’s MaaS offering.

In addition to new models, we are adding new capabilities across APIs to enable multimodal experiences. Azure AI Speech has several new features in preview including Speech analytics and Video translation to help developers build high-quality, voice-enabled apps. Azure AI Search now has dramatically increased storage capacity and up to 12X increase in vector index size at no additional cost to run RAG workloads at scale.

Bring your intelligent apps and ideas to life with Visual Studio, GitHub, and the Azure platform

The tools you choose to build with should make it easy to go from idea to code to production. They should adapt to where and how you work, not the other way around. We’re sharing several updates to our developer and app platforms that do just that, making it easier for all developers to build on Azure. 

Access Azure services within your favorite tools for faster app development

By extending Azure services natively into the tools and environments you’re already familiar with, you can more easily build and be confident in the performance, scale, and security of your apps.  

We’re also making it incredibly easy for you to interact with Azure services from where you’re most comfortable: a favorite dev tool like VS Code, or even directly on GitHub, regardless of previous Azure experience or knowledge. Today, we’re announcing the preview of GitHub Copilot for Azure, extending GitHub Copilot to increase its usefulness for all developers. You’ll see other examples of this from Microsoft and some of the most innovative ISVs at Build, so be sure to explore our sessions.  

Also in preview today is the AI Toolkit for Visual Studio Code, an extension that provides development tools and models to help developers acquire and run models, fine-tune them locally, and deploy to Azure AI Studio, all from VS Code.  

Updates that make cloud native development faster and easier

.NET Aspire has arrived! This new cloud-native stack simplifies development by automating configurations and integrating resilient patterns. With .NET Aspire, you can focus more on coding and less on setup while still using your preferred tools. This stack includes a developer dashboard for enhanced observability and diagnostics right from the start for faster and more reliable app development. Explore more about the general availability of .NET Aspire on the DevBlogs post.   

We’re also raising the bar on ease of use in our application platform services, introducing Azure Kubernetes Service (AKS) Automatic, the easiest managed Kubernetes experience for taking AI apps to production. In preview now, AKS Automatic builds on our expertise running some of the world’s largest and most advanced Kubernetes applications, from Microsoft Teams and Bing to Xbox online services, Microsoft 365, and GitHub Copilot. It distills that experience into best practices that automate everything from cluster setup and management to performance and security safeguards and policies.

As a developer you now have access to a self-service app platform that can move from container image to deployed app in minutes while still giving you the power of accessing the Kubernetes API. With AKS Automatic you can focus on building great code, knowing that your app will be running securely with the scale, performance and reliability it needs to support your business.

Data solutions built for the era of AI

Developers are at the forefront of a pivotal shift in application strategy which necessitates optimizations at every tier of an application—including databases—since AI apps require fast and frequent iterations to keep pace with AI model innovation. 

We’re excited to unveil new data and analytics features this week designed to assist you in the critical aspects of crafting intelligent applications and empowering you to create the transformative apps of today and tomorrow.

Enabling developers to build faster with AI built into Azure databases 

Vector search is core to any AI application, so we’re adding native vector search capabilities to Azure Cosmos DB, starting with Azure Cosmos DB for NoSQL. Powered by DiskANN, a powerful approximate nearest neighbor algorithm library, this makes Azure Cosmos DB the first cloud database to offer low-latency vector search at cloud scale without the need to manage servers.
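As a sketch of what this enables, the snippet below assembles a Cosmos DB for NoSQL query using the `VectorDistance` system function against a DiskANN-indexed `embedding` path. The container schema and index policy shown are illustrative assumptions; check the Cosmos DB documentation for the exact syntax before relying on them.

```python
# Illustrative vector index policy fragment for a container
# (the exact shape is an assumption for this sketch):
vector_index_policy = {
    "vectorIndexes": [
        {"path": "/embedding", "type": "diskANN"},
    ]
}

def top_k_similar_query(k: int) -> str:
    """Build a parameterized NoSQL query ordering items by vector distance."""
    return (
        f"SELECT TOP {k} c.id, c.text, "
        "VectorDistance(c.embedding, @queryVector) AS score "
        "FROM c "
        "ORDER BY VectorDistance(c.embedding, @queryVector)"
    )

query = top_k_similar_query(5)
```

The `@queryVector` parameter would be bound to the query embedding at execution time, so the same query text can be reused for every search.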

We’re also announcing the Azure Database for PostgreSQL extension for Azure AI, which makes it even easier to bring AI capabilities to data in PostgreSQL. Now generally available, the extension enables developers who prefer PostgreSQL to plug their data directly into Azure AI for a simplified path to leveraging LLMs and building rich generative AI experiences on PostgreSQL.

Embeddings enable AI models to better understand relationships and similarities between data, which is key for intelligent apps. Azure Database for PostgreSQL in-database embedding generation is now in preview so embeddings can be generated right within the database—offering single-digit millisecond latency, predictable costs, and the confidence that data will remain compliant for confidential workloads. 
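In practice, in-database generation means an embedding column can be populated with a single SQL statement instead of a round trip through application code. The helper below builds such a statement; the `azure_openai.create_embeddings` function name and signature are assumptions based on the azure_ai extension, so verify them against the current documentation.

```python
def embedding_update_sql(table: str, text_col: str, vec_col: str,
                         deployment: str) -> str:
    """Build SQL that fills a vector column from a text column in-database.

    The azure_openai.create_embeddings call is an assumed signature from the
    azure_ai extension; table and column names here are illustrative.
    """
    return (
        f"UPDATE {table} "
        f"SET {vec_col} = azure_openai.create_embeddings("
        f"'{deployment}', {text_col})"
    )

sql = embedding_update_sql("listings", "description", "embedding",
                           "text-embedding-deployment")
```

Keeping the call inside the database is what yields the single-digit millisecond latency mentioned above: the text never leaves the server before being embedded.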

Making developer life easier through in-database Copilot capabilities

These databases are not only helping you build your own AI experiences. We’re also applying AI directly in the user experience so it’s easier than ever to explore what’s included in a database. Now in preview, Microsoft Copilot capabilities in Azure SQL DB convert natural language into SQL queries, so developers can use everyday language to interact with data. And Copilot capabilities are coming to Azure Database for MySQL to provide summaries of technical documentation in response to user questions, creating an all-around easier and more enjoyable management experience.

Microsoft Copilot capabilities in the database user experience

Microsoft Fabric updates: Build powerful solutions securely and with ease

We have several Fabric updates this week, including the introduction of Real-Time Intelligence. This completely redesigned workload enables you to analyze, explore, and act on your data in real time. Also coming at Build: the Workload Development Kit in preview, making it even easier to design and build apps in Fabric. And our Snowflake partnership expands with support for the Iceberg data format and bi-directional read and write between Snowflake and Fabric’s OneLake. Get the details and more in Arun Ulag’s blog: Fuel your business with continuous insights and generative AI. And for an overview of Fabric data security, download the Microsoft Fabric security whitepaper.

Spend a day in the life of a piece of data and learn exactly how it moves from its database home to do more than ever before with the insights of Microsoft Fabric, real-time assistance by Microsoft Copilot, and the innovative power of Azure AI.  

Build on a foundation of safe and responsible AI

What began with our principles and a firm belief that AI must be used responsibly and safely has become an integral part of the tooling, APIs, and software you use to scale AI responsibly. Within Azure AI, we have 20 Responsible AI tools with more than 90 features. And there’s more to come, starting with updates at Build.

New Azure AI Content Safety capabilities

We’re equipping you with advanced guardrails that help protect AI applications and users from harmful content and security risks, and this week we’re announcing new features for Azure AI Content Safety. Custom Categories are coming soon, so you can create custom filters for specific content filtering needs. This feature also includes a rapid option, enabling you to deploy new custom filters within an hour to protect against emerging threats and incidents.

Prompt Shields and Groundedness Detection, both available now in preview in Azure OpenAI Service and Azure AI Studio, help fortify AI safety. Prompt Shields mitigate both indirect and jailbreak prompt injection attacks on LLMs, while Groundedness Detection detects ungrounded material, or hallucinations, in generated responses.

Features to help secure and govern your apps and data

Microsoft Defender for Cloud now extends its cloud-native application protection to AI applications, from code to cloud. And AI security posture management capabilities enable security teams to discover their AI services and tools, identify vulnerabilities, and proactively remediate risks. Threat protection for AI workloads in Defender for Cloud leverages a native integration with Azure AI Content Safety to let security teams monitor their Azure OpenAI applications for direct and indirect prompt injection attacks, sensitive data leaks, and other threats so they can quickly investigate and respond.

With easy-to-use APIs, app developers can integrate Microsoft Purview into line-of-business apps to get industry-leading data security and compliance for custom-built AI apps. You can empower your app customers and their end users to discover data risks in AI interactions, protect sensitive data with encryption, and govern AI activities. These capabilities are available for Copilot Studio in public preview and will soon (in July) be available in public preview for Azure AI Studio and via the Purview SDK, so developers can benefit from data security and compliance controls for their AI apps built on Azure AI. Read more here.

Two final security notes. We’re also announcing a partnership with HiddenLayer to scan open models that we onboard to the catalog, so you can verify that the models are free from malicious code and signs of tampering before you deploy them. We are the first major AI development platform to provide this type of verification to help you feel more confident in your model choice. 

Second, Facial Liveness, a feature of the Azure AI Vision Face API that Windows Hello for Business has used for nearly a decade, is now available in preview for the browser. Facial Liveness is a key element in multi-factor authentication (MFA) to prevent spoofing attacks, for example, when someone holds a picture up to the camera to thwart facial recognition systems. Developers can now easily add liveness checks and optional verification to web applications using Face Liveness with the Azure AI Vision SDK.

Our belief in the safe and responsible use of AI is unwavering. You can read our recently published Responsible AI Transparency Report for a detailed look at Microsoft’s approach to developing AI responsibly. We’ll continue to deliver more innovation here and our approach will remain firmly rooted in principles and put into action with built-in features.

Move your ideas from a spark to production with Azure

Organizations are rapidly moving beyond AI ideation and into production. We see and hear fresh examples every day of how our customers are unlocking business challenges that have plagued industries for decades, jump-starting the creative process, making it easier to serve their own customers, or even securing a new competitive edge. We’re curating an industry-leading set of developer tools and AI capabilities to help you, as developers, create and deliver the transformational experiences that make this all possible.

Learn more at Microsoft Build 2024

Join us at Microsoft Build 2024 to experience the keynotes and learn more about how AI could shape your future.

Enhance your AI skills.

Discover innovative AI solutions through the Microsoft commercial marketplace.

Read more about Microsoft Fabric updates: Fuel your business with continuous insights and generative AI.

Read more about Azure Infrastructure: Unleashing innovation: How Microsoft Azure powers AI solutions.

Try Microsoft Azure for free

The post From code to production: New ways Azure helps you build transformational AI experiences appeared first on Azure Blog.
Source: Azure

Unleashing innovation: The new era of compute powering Azure AI solutions

As AI continues to transform industries, Microsoft is expanding its global cloud infrastructure to meet the needs of developers and customers everywhere. At Microsoft Build 2024, we’re unveiling our latest progress in developing tools and services optimized for powering your AI solutions. Microsoft’s cloud infrastructure is unique in how it provides choice and flexibility in performance and power for customers’ unique AI needs, whether that’s doubling deployment speeds or lowering operating costs.

That’s why we’ve enhanced our adaptive, powerful, and trusted platform with the performance and resilience you’ll need to build intelligent AI applications. We’re delivering on our promise to support our customers by providing them with exceptional cost-performance in compute and advanced generative AI capabilities.

Powerful compute for general purpose and AI workloads

Microsoft has the expertise and scale to run the AI supercomputers that power some of the world’s biggest AI services, such as Microsoft Azure OpenAI Service, ChatGPT, Bing, and more. Our focus as we continue to expand our AI infrastructure is on optimizing performance, scalability, and power efficiency.

Microsoft takes a systems approach to cloud infrastructure, optimizing both hardware and software to efficiently handle workloads at scale. In November 2023, Microsoft introduced its first in-house designed cloud compute processor, Azure Cobalt 100, which enables general-purpose workloads on the Microsoft Cloud. We are announcing the preview of Azure virtual machines built to run on Cobalt 100 processors. Cobalt 100-based virtual machines (VMs) are Azure’s most power-efficient compute offering and deliver up to 40% better performance than our previous generation of Arm-based VMs. And we’re delivering that same Arm-based performance and efficiency to customers like Elastic, MongoDB, Siemens, Snowflake, and Teradata. The new Cobalt 100-based VMs are expected to enhance efficiency and performance for both Azure customers and Microsoft products. Additionally, IC3, the platform that powers billions of customer conversations in Microsoft Teams, is adopting Cobalt 100 to serve its growing customer base more efficiently, achieving up to 45% better performance on Cobalt 100 VMs.

We’re combining the best of industry and the best of Microsoft in our AI infrastructure. Alongside our custom Azure Cobalt 100 and Maia series silicon and our industry silicon partnerships, we’re also announcing the general availability of the ND MI300X VM series, making Microsoft the first cloud provider to bring AMD’s most powerful Instinct MI300X accelerator to Azure. With the ND MI300X VM combining eight AMD Instinct MI300X accelerators, Azure delivers customers unprecedented cost-performance for inferencing frontier models like GPT-4. Our infrastructure supports different AI supercomputing scenarios, such as building large models from scratch, running inference on pre-trained models, using models-as-a-service providers, and fine-tuning models for specific domains.

One of Microsoft’s advantages in AI is our ability to combine thousands of virtual machines with tens of thousands of GPUs, using the best of InfiniBand- and Ethernet-based networking topologies, into cloud supercomputers that can run large-scale AI workloads at lower cost. With a diversity of silicon across AMD, NVIDIA, and Microsoft’s Maia AI accelerators, Azure’s AI infrastructure delivers the most complete compute platform for AI workloads. It is this combination of advanced AI accelerators, datacenter designs, and optimized compute and networking topology that drives cost efficiency per workload. That means whether you use Microsoft Copilot or build your own copilot apps, the Azure platform ensures you get the best AI performance at optimized cost.

Microsoft is further extending our cloud infrastructure with Azure Compute Fleet, a new service that simplifies provisioning of Azure compute capacity across different VM types, availability zones, and pricing models. By letting users control VM group behavior automatically and programmatically, Compute Fleet makes it easier to achieve the desired scale, performance, and cost. As a result, it can greatly optimize your operational efficiency and increase core compute flexibility and reliability for both AI and general-purpose workloads at scale.

AI-enhanced central management and security

As businesses continue to expand their computing estate, managing and governing the entire infrastructure can become overwhelming. We keep hearing from developers and customers that they spend more time searching for information and are less productive. Microsoft is focused on simplifying this process through AI-enhanced central management and security. Our adaptive cloud approach takes innovation to the next level with a single, intelligent control plane that spans from cloud to edge, making it easier for customers to manage their entire computing estate in a consistent way. We’re also aiming to improve your experience with managing these distributed environments through Microsoft Copilot in Azure.

We created Microsoft Copilot in Azure to act as an AI companion, helping your teams manage operations seamlessly across both cloud and edge environments. By using natural language, you can ask Copilot questions and receive personalized recommendations related to Azure services. Simply ask, “Why is my app slow?” or “How do I fix this error?” and Copilot will walk you through potential causes and fixes.

Starting today, we are opening the preview of Copilot in Azure to all customers, rolling out over the next couple of weeks. With this update, customers can choose to give all their users access to Copilot or grant access to specific users or groups within a tenant. With this flexibility, you can tailor your approach and control which groups of users or departments within your organization have access to it. You can feel secure knowing you can deploy and use the tool in a controlled manner, ensuring it aligns with your organization’s operational standards and security policies.

We’re continually enhancing Copilot and making the product better with every release to help developers be more productive. One of the ways we’ve simplified the developer’s experience is by making databases and analytics services easier to configure, manage, and optimize through AI-enhanced management. Several new skills are available for Azure Kubernetes Service (AKS) in Copilot for Azure that simplify common management tasks, including the ability to configure AKS backups, change tiers, locate YAML files for editing, and construct kubectl commands.
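As an illustration of the kubectl-construction skill, a natural-language ask can map to commands like the ones below. The resource and namespace names here are placeholders, and the exact commands Copilot constructs will depend on your cluster:

```shell
# Ask Copilot: "Scale my web deployment to 5 replicas"
# Copilot might construct (names are hypothetical):
kubectl scale deployment web --replicas=5 --namespace production

# Ask Copilot: "Show me pods that are not running in the production namespace"
kubectl get pods --namespace production --field-selector=status.phase!=Running
```

These are standard kubectl invocations; Copilot's value is in constructing them from plain language so you don't have to recall the flag syntax.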

We’ve also added natural language to SQL conversion and self-help for database administration to support your Azure SQL database-driven applications. Developers can ask questions about their data in plain text, and Copilot generates the corresponding T-SQL query. Database administrators can independently manage databases, resolve issues, and learn more about performance and capabilities. Developers benefit from detailed explanations of the generated queries, helping them write code faster.
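To make the natural-language-to-SQL flow concrete, here is a hypothetical exchange; the table and column names are invented for illustration, and Copilot's actual output depends on your schema:

```sql
-- Prompt: "Show the top 10 customers by total order value in 2023"
-- Copilot might generate T-SQL along these lines:
SELECT TOP 10 c.CustomerName, SUM(o.OrderTotal) AS TotalValue
FROM Customers AS c
JOIN Orders AS o ON o.CustomerID = c.CustomerID
WHERE o.OrderDate >= '2023-01-01' AND o.OrderDate < '2024-01-01'
GROUP BY c.CustomerName
ORDER BY TotalValue DESC;
```

Alongside the query, Copilot supplies an explanation of each clause, which is the "detailed explanations" benefit described above.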

Lastly, you’ll notice a few new security enhancements to the tool. Copilot now includes Microsoft Defender for Cloud prompting capabilities to streamline risk exploration, remediation, and code fixes. Defender External Attack Surface Management (EASM) leverages Copilot to help surface risk-related insights and convert natural language to corresponding inventory queries across data discovered by Defender EASM. These features make risk and inventory queries more user-friendly, enabling our customers to use natural language for these tasks. We’ll continue to expand Copilot capabilities in Azure so you can be more productive and focused on writing code.

Cloud infrastructure built for limitless innovation

Microsoft is committed to helping you stay ahead in this new era by giving you the power, flexibility, and performance you need to achieve your AI ambitions. Our unique approach to cloud and AI infrastructure helps us and developers like you meet the challenges of the ever-changing technological landscape head-on so you can continue working efficiently while innovating at scale.

Discover new ways to transform with AI

Learn how Azure helps build AI experiences

Read more about AI-powered analytics

Key Microsoft Build sessions

BRK126: Adaptive cloud approach: Build and scale apps from cloud to edge

BRK124: Building AI applications that leverage your data in object storage

BRK129: Building applications at hyper scale with the latest Azure innovations

BRK133: Unlock potential on Azure with Microsoft Copilot

BRK127: Azure Monitor: Observability from code to cloud

The post Unleashing innovation: The new era of compute powering Azure AI solutions appeared first on Azure Blog.
Source: Azure

Enhance your security capabilities with Azure Bastion Premium

At Microsoft Azure, we are unwavering in our commitment to providing robust and reliable networking solutions for our customers. In today’s dynamic digital landscape, seamless connectivity, uncompromising security, and optimal performance are non-negotiable. As cyber threats have grown more frequent and severe, the demand for security in the cloud has increased drastically. In response, we are announcing a new SKU for Microsoft Azure Bastion—Azure Bastion Premium. This service, now in public preview, provides advanced recording, monitoring, and auditing capabilities for customers handling highly sensitive workloads. In this blog post, we’ll explore what Azure Bastion Premium is, the benefits this SKU offers, and why it is a must-use for customers with highly regulated security policies.

Azure Bastion

Protect your virtual machines with more secure remote access

Discover solutions

What is Azure Bastion Premium?

Azure Bastion Premium is a new SKU for customers that handle highly sensitive virtual machine workloads. Its mission is to offer enhanced security features that ensure customer virtual machines are connected securely and to monitor those virtual machines for any anomalies that arise. Our first set of features focuses on private connectivity and graphical recording of virtual machine sessions connected through Azure Bastion.

Two key security advantages

Enhanced security: With the existing Azure Bastion SKUs, customers can protect their virtual machines by using Azure Bastion’s public IP address as the point of entry to their target virtual machines. The Azure Bastion Premium SKU takes security to the next level by eliminating that public IP: instead, customers can now connect to a private endpoint on Azure Bastion. This approach removes the need to secure a public IP address, effectively eliminating one point of attack.

Virtual machine monitoring: The Azure Bastion Premium SKU allows customers to graphically record their virtual machine sessions. Customers can retain virtual machine sessions in alignment with their internal policies and compliance requirements. Additionally, keeping a record of virtual machine sessions allows customers to identify anomalies or unexpected behavior. Whether it is unusual activity, a security breach, or data exfiltration, having a visual record opens the door to investigation and mitigation.

Features offered in Azure Bastion Premium

Graphical session recording

Graphical session recording allows Azure Bastion to graphically record all virtual machine sessions that connect through the enabled Azure Bastion. These recordings are stored in a customer-designated storage account and can be viewed directly in the Azure Bastion resource blade. We see this feature as a value add for customers that want an additional layer of monitoring on their virtual machine sessions. With this feature enabled, if an anomaly occurs within a virtual machine session, customers can go back and review the recording to see exactly what happened. For customers with data retention policies, session recording keeps a complete record of all recorded sessions, and customers maintain access and control over the recordings within their storage account to keep them compliant with their policies.

Setting up session recording is easy and intuitive. All you need is a designated container within a storage account, a virtual machine, and an Azure Bastion to connect through. For more information about setting up and using session recording, see our documentation.

Private Only Azure Bastion

In Azure Bastion’s current generally available SKUs, inbound connection to the virtual network where Azure Bastion has been provisioned is only possible through a public IP address. With Private Only Azure Bastion, we are enabling customers to connect inbound to their Azure Bastion through a private IP address. We see this as a must-have feature for customers who want to minimize the use of public endpoints. For customers with strict policies surrounding public endpoints, Private Only Azure Bastion ensures that Azure Bastion remains a compliant service under organizational policies. For customers with on-premises machines connecting to Azure, using Private Only Azure Bastion with ExpressRoute private peering enables private connectivity from their on-premises machines straight to their Azure virtual machines.

Setting up Private Only Azure Bastion is very easy. When you create an Azure Bastion, under Configure IP address, select Private IP address instead of Public IP address, and then click Review + create. Note: Private Only can only be enabled on net-new Azure Bastion deployments, not on pre-existing ones.

Feature comparison of Azure Bastion offerings

| Features | Developer | Basic | Standard | Premium |
|---|---|---|---|---|
| Private connectivity to virtual machines | Yes | Yes | Yes | Yes |
| Dedicated host agent | No | Yes | Yes | Yes |
| Support for multiple connections per user | No | Yes | Yes | Yes |
| Linux Virtual Machine private key in AKV | No | Yes | Yes | Yes |
| Support for network security groups | No | Yes | Yes | Yes |
| Audit logging | No | Yes | Yes | Yes |
| Kerberos support | No | Yes | Yes | Yes |
| VNET peering support | No | No | Yes | Yes |
| Host scaling (2 to 50 instances) | No | No | Yes | Yes |
| Custom port and protocol | No | No | Yes | Yes |
| Native RDP/SSH client through Azure CLI | No | No | Yes | Yes |
| AAD login for RDP/SSH through native client | No | No | Yes | Yes |
| IP-based connection | No | No | Yes | Yes |
| Shareable links | No | No | Yes | Yes |
| Graphical session recording | No | No | No | Yes |
| Private Only Azure Bastion | No | No | No | Yes |

How to get started

1. Navigate to the Azure portal.

2. Deploy Azure Bastion, configuring it manually to select the Premium SKU.

3. Under Configure IP Address, choose whether to enable Azure Bastion on a public or a private IP address (Private Only Azure Bastion).

4. In the Advanced tab, select the checkbox for Session recording (Preview).
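If you prefer the command line, the steps above can be sketched with the Azure CLI. This is a minimal, hedged sketch: the resource names are placeholders, and it assumes your CLI version's `az network bastion create` command accepts the Premium SKU (the portal flow above is the documented path for enabling session recording):

```shell
# Hypothetical resource names; assumes Premium SKU support in your Azure CLI version
az network bastion create \
  --name myBastion \
  --resource-group myResourceGroup \
  --vnet-name myVnet \
  --public-ip-address myBastionIP \
  --sku Premium
```

For a Private Only deployment, omit the public IP and select the private IP option as described in step 3; flag support for this may vary while the feature is in preview.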

Stay updated on the latest

Our commitment extends beyond fulfilling network security requirements; we are committed to collaborating with internal teams to integrate our solution with other products within our security portfolio. As upcoming features and integrations roll out in the coming months, we are confident that Azure Bastion will seamlessly fit into the “better together” narrative, effectively addressing customer needs related to virtual machine workload security.
The post Enhance your security capabilities with Azure Bastion Premium appeared first on Azure Blog.
Source: Azure

Announcing Docker Desktop Support for Windows on Arm: New AI Innovation Opportunities

Docker Desktop now supports running on Windows on Arm (WoA) devices. This exciting development was unveiled during Microsoft’s “Introducing the Next Generation of Windows on Arm” session at Microsoft Build. Docker CTO, Justin Cormack, highlighted how this strategic move will empower developers with even more rapid development capabilities, leveraging Docker Desktop on Arm-powered Windows devices.

The Windows on Arm platform is redefining performance and user experience for applications. With this integration, Docker Desktop extends its reach to a new wave of hardware architectures, broadening the horizons for containerized application development.

Justin Cormack announcing Docker Desktop support for Windows on Arm devices with Microsoft Principal TPM Manager Jamshed Damkewala in the Microsoft Build session “Introducing the next generation of Windows on Arm.” 

Docker Desktop support for Windows on Arm

Read on to learn why Docker Desktop support for Windows on Arm is a game changer for developers and organizations.

Broader accessibility

By supporting Arm devices, Docker Desktop becomes accessible to a wider audience, including users of popular Arm-based devices such as Microsoft’s Arm-powered Windows machines. This inclusivity fosters a larger, more diverse Docker community, enabling more developers to harness the power of containerization on their preferred devices.

Enhanced developer experience

Developers can seamlessly work on the newest Windows on Arm devices, streamlining the development process and boosting productivity. Docker Desktop’s consistent, cross-platform experience ensures that development workflows remain smooth and efficient, regardless of the underlying hardware architecture.

Future-proofing development

As the tech industry gradually shifts toward Arm architecture for its efficiency and lower power consumption, Docker Desktop’s support for WoA devices ensures we remain at the forefront of innovation. This move future-proofs Docker Desktop, keeping it relevant and competitive as this transition accelerates.

Innovation and experimentation

With Docker Desktop on a new architecture, developers and organizations have more opportunities to innovate and experiment. Whether designing applications for traditional x64 or the emerging Arm ecosystems, Docker Desktop offers a versatile platform for creative exploration.

Market expansion

Furthering compatibility in the Windows Arm space opens new markets and opportunities for Docker, including new relationships with device manufacturers and increased adoption in sectors that prioritize energy efficiency and portability. It also supports Docker’s users and customers in leveraging the development environments that best fit their goals.

Accelerating developer innovation with Microsoft’s investment in WoA dev tooling

Windows on Arm is arguably as successful as it has ever been. Today, multiple Arm-powered Windows laptops and tablets are available, capable of running nearly the entire range of Windows apps thanks to x86-to-Arm code translation. While Windows on Arm still represents a small fraction of the entire Windows ecosystem, the development of native Arm apps provides a wealth of fresh opportunities for AI innovation.

Microsoft’s investments align with Docker’s strategic goals of cross-platform compatibility and user-centric development, ensuring Docker remains at the forefront of containerization technologies in a diversifying hardware landscape.

Expand your development landscape with Docker Desktop on Windows Arm devices. Update to Docker Desktop 4.31 or consider upgrading to Pro or Business subscriptions to unlock the full potential of cross-platform containerization. Embrace the future of development with Docker, where innovation, efficiency, and cross-platform compatibility drive progress.

Learn more

Watch the Docker breakout session Optimizing the Microsoft developer experience with Docker to learn more about Docker and Microsoft “better together” opportunities.

Authenticate and update to receive the newest Docker Desktop features per your subscription level.

New to Docker? Create an account.

Learn about Docker Build Cloud and how you can leverage cloud resources directly from Docker Desktop.

Subscribe to the Docker Newsletter.

Source: https://blog.docker.com/feed/

Introducing Amazon EC2 C7i-flex instances

AWS announces the general availability of Amazon EC2 C7i-flex instances, which deliver up to 19% better price performance compared to C6i instances. C7i-flex instances expand the EC2 flex-instance portfolio and offer the easiest way to get price-performance benefits for a majority of compute-intensive workloads. The new instances are powered by custom 4th Generation Intel Xeon Scalable processors (Sapphire Rapids), available only on AWS, and are priced 5% lower than C7i.
Source: aws.amazon.com

AWS Fault Injection Service is now available in the Europe (Spain) Region

Starting today, customers can use AWS Fault Injection Service (FIS) in the Europe (Spain) Region. FIS is a fully managed service for running fault injection experiments to improve an application’s performance, observability, and resilience. FIS simplifies the process of setting up and running controlled fault injection experiments across a range of AWS services, so teams can build confidence in their application’s behavior.
Source: aws.amazon.com

Amazon EventBridge now supports customer managed keys (CMK) for event buses

Amazon EventBridge announces support for AWS Key Management Service (KMS) customer managed keys (CMK) on event buses. With this capability, you can encrypt your events with your own keys instead of an AWS owned key (which is used by default). CMK support gives you more fine-grained security control over your events, helping you meet your organization’s security requirements and governance policies.
Source: aws.amazon.com
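As a sketch of how this looks in practice, the AWS CLI lets you attach a customer managed key when creating an event bus. The bus name and key ARN below are placeholders, and the parameter name is an assumption based on the EventBridge CLI at the time of writing:

```shell
# Create an event bus encrypted with a customer managed KMS key
# (bus name and key ARN are placeholders)
aws events create-event-bus \
  --name my-encrypted-bus \
  --kms-key-identifier arn:aws:kms:eu-south-2:123456789012:key/11111111-2222-3333-4444-555555555555
```

Omitting the key identifier falls back to the default AWS owned key described above.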

AWS CodeBuild now supports connecting to an Amazon VPC from reserved capacity

AWS CodeBuild now supports connecting your fleet of dedicated Linux hosts to your Amazon VPC. With reserved capacity, you can provision a fleet of CodeBuild hosts that persist your build environment. These hosts remain available to receive subsequent build requests, minimizing build start-up latencies.
Source: aws.amazon.com