What’s new in Data & AI: Prioritize AI safety to scale with confidence

A few months ago, I had the opportunity to speak to some of our partners about what we’re bringing to market with Azure AI. It was a fast-paced hour and the session was nearly done when someone raised their hand and acknowledged that there was no questioning the business value and opportunities ahead—but what they really wanted to hear more about was Responsible AI and AI safety.

This exchange stays with me because it shows how top of mind AI safety is as we move further into the era of AI, with the launch of powerful tools that advance humankind’s critical thinking and creative expression. That partner question reminded me how important it is that AI systems are responsible by design. This means the development and deployment of AI must be guided by a responsible framework from the very beginning. It can’t be an afterthought.

Our investment in responsible AI innovation goes beyond principles, beliefs, and best practices. We also invest heavily in purpose-built tools that support responsible AI across our products. Through Azure AI tooling, we can help data scientists and developers alike build, evaluate, deploy, and monitor their applications for responsible AI outcomes like fairness, privacy, and explainability. We know what a privilege it is for customers to place their trust in Microsoft.

Brad Smith authored a blog about this transformative moment we find ourselves in as AI models continue to advance. In it, he discussed Microsoft’s investments and journey as a company to build a responsible AI foundation. This began in earnest with the creation of our AI principles in 2018, built on a foundation of transparency and accountability. We quickly realized that principles are essential, but they aren’t self-executing. A colleague described it best by saying “principles don’t write code.” This is why we operationalize those principles with tools and best practices and help our customers do the same. In 2022, we shared our internal playbook for responsible AI development, our Responsible AI Standard, to invite public feedback and provide a framework that could help others get started.

I’m proud of the work Microsoft has done in this space. As the landscape of AI evolves rapidly and new technologies emerge, safe and responsible AI will continue to be a top priority. To echo Brad, we’ll approach whatever comes next with humility, listening, and sharing our learning along the way.

In fact, our recent commitment to customers for our first-party copilots is a direct reflection of that. Microsoft will stand behind customers from a legal perspective if they face copyright infringement lawsuits arising from their use of our Copilots, putting that commitment to responsible AI innovation into action.

In this month’s blog, I’ll focus on a few things: how we’re helping customers operationalize responsible AI with purpose-built tools, and a few products and updates designed to empower our customers to innovate safely and confidently on our trusted platform. I’ll also share the latest on what we’re delivering to help organizations prepare their data estates to succeed in the era of AI. Finally, I’ll highlight some fresh stories of organizations putting Azure to work. Let’s dive in!

Availability of Azure AI Content Safety delivers better online experiences

One of my favorite innovations that reflects the constant collaboration between research, policy, and engineering teams at Microsoft is Azure AI Content Safety. This month we announced the general availability of Azure AI Content Safety, a state-of-the-art AI system that helps keep user-generated and AI-generated content safe, ultimately creating better online experiences for everyone.

The blog shares the story of how South Australia’s Department of Education is using this solution to protect students from harmful or inappropriate content in their new, AI-powered chatbot, EdChat. The chatbot has built-in safety features that block inappropriate queries and harmful responses, allowing teachers to focus on the educational benefits rather than on content oversight. It’s fantastic to see this solution at work helping create safer online environments!

As organizations look to deepen their generative AI investments, many are concerned about trust, data privacy, and the safety and security of AI models and systems. That’s where Azure AI can help. With Azure AI, organizations can build the next generation of AI applications safely by seamlessly integrating responsible AI tools and practices developed through years of AI research, policy, and engineering.

All of this is built on Azure’s enterprise-grade foundation for data privacy, security, and compliance, so organizations can confidently scale AI while managing risk and reinforcing transparency. Microsoft even relies on Azure AI Content Safety to help protect users of our own AI-powered products. It’s the same technology helping us responsibly release large language model-based experiences in products like GitHub Copilot, Microsoft Copilot, and Azure OpenAI Service, which all have safety systems built in.

New model availability and fine-tuning for Azure OpenAI Service models

This month, we shared two new base inference models (Babbage-002 and Davinci-002) that are now generally available, and fine-tuning capabilities for three models (Babbage-002, Davinci-002, and GPT-3.5-Turbo) are in public preview. Fine-tuning is one of the methods available to developers and data scientists who want to customize large language models for specific tasks.

Since we launched Azure OpenAI Service, it’s been amazing to see the power of generative AI applied to new applications! Now you can customize your favorite OpenAI models for completion use cases using the latest base inference models to solve your specific challenges, and easily and securely deploy those custom models on Azure.

Unlike methods such as Retrieval Augmented Generation (RAG) and prompt engineering, which work by adding information and instructions to prompts, fine-tuning works by modifying the large language model itself.

With Azure OpenAI Service and Azure Machine Learning, you can use Supervised Fine Tuning, which lets you provide custom data (prompt/completion or conversational chat, depending on the model) to teach new skills to the base model.

We suggest companies begin with prompt engineering or RAG to establish a baseline before they embark on fine-tuning—it’s the quickest way to get started, and we make it simple with tools like Prompt Flow and On Your Data. Starting with prompt engineering and RAG gives you a baseline to compare against, so your fine-tuning effort is not wasted.
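To make that concrete, here’s a minimal sketch of kicking off a supervised fine-tuning job with the openai Python package (the v0.28-era API) pointed at Azure OpenAI Service. The endpoint, API version, file name, and model name are placeholders, so check the current documentation before running:

    import openai

    openai.api_type = "azure"
    openai.api_base = "https://<your-resource>.openai.azure.com/"  # placeholder endpoint
    openai.api_version = "2023-10-01-preview"  # illustrative; use the version documented for fine-tuning
    openai.api_key = "<your-api-key>"

    # Training data is JSONL; chat models expect a "messages" format, for example:
    # {"messages": [{"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}
    training_file = openai.File.create(file=open("train.jsonl", "rb"), purpose="fine-tune")

    # Create the fine-tuning job against a supported base model.
    job = openai.FineTuningJob.create(training_file=training_file["id"], model="gpt-35-turbo-0613")
    print(job["id"], job["status"])  # poll until the job succeeds, then deploy the custom model

Once the job succeeds, you deploy the resulting custom model like any other Azure OpenAI deployment and call it by its deployment name.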

Recent news from Azure Data and AI

We’re constantly rolling out new solutions to help customers maximize their data and successfully put AI to work for their businesses. Here are some product announcements from the past month:

The new Synapse Data Science experience in Microsoft Fabric is now in public preview. We plan to release even more experiences going forward to help you build data science solutions as part of your analytics workflows.

Azure Cache for Redis recently introduced Vector Similarity Search, which enables developers to build generative AI-based applications using Azure Cache for Redis Enterprise as a robust, high-performance vector database (see the sketch after these announcements). From there, you can use Azure AI Content Safety to screen and filter results to help ensure safer content for users.

Data Activator is now in public preview for all Microsoft Fabric users. This Fabric experience lets you drive automatic alerts and actions from your Fabric data, eliminating the need for constant manual monitoring of dashboards.
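To illustrate the Redis announcement above, here’s a hedged sketch of vector similarity search with the redis-py library against an Azure Cache for Redis Enterprise instance; the host, index name, field names, and vector dimensions are all illustrative:

    import numpy as np
    import redis
    from redis.commands.search.field import TextField, VectorField
    from redis.commands.search.indexDefinition import IndexDefinition, IndexType
    from redis.commands.search.query import Query

    r = redis.Redis(host="<your-cache-host>", port=10000, password="<access-key>", ssl=True)

    # Define an index over hash keys prefixed "doc:" with an HNSW vector field.
    r.ft("docs").create_index(
        [
            TextField("content"),
            VectorField("embedding", "HNSW", {"TYPE": "FLOAT32", "DIM": 1536, "DISTANCE_METRIC": "COSINE"}),
        ],
        definition=IndexDefinition(prefix=["doc:"], index_type=IndexType.HASH),
    )

    # KNN query: find the three documents whose embeddings are closest to the query vector.
    query_vec = np.zeros(1536, dtype=np.float32).tobytes()  # stand-in for a real query embedding
    q = (
        Query("*=>[KNN 3 @embedding $vec AS score]")
        .sort_by("score")
        .return_fields("content", "score")
        .dialect(2)
    )
    results = r.ft("docs").search(q, query_params={"vec": query_vec})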

Limitless innovation with Azure Data and AI

Customers and partners are experiencing the transformative power of AI

One of my favorite parts of this job is getting to see how businesses are using our data and AI solutions to solve business challenges and real-world problems. Seeing an idea go from pure possibility to real-world solution never gets old.

With the AI safety tools and principles we’ve infused into Azure AI, businesses can move forward with confidence that whatever they build is done safely and responsibly. This means the innovation potential for any organization is truly limitless. Here are a few recent stories showing what’s possible.

For any pet parents out there, the MetLife Pet app offers a one-stop shop for all your pet’s medical care needs, including records and a digital library full of care information. The app uses Azure AI services to apply advanced machine learning to automatically extract key text from documents, making it easier than ever to access your pet’s health information.

HEINEKEN has begun using Azure OpenAI Service, built-in ChatGPT capabilities, and other Azure AI services to build chatbots for employees and to improve their existing business processes. Employees are excited about its potential and are even suggesting new use cases that the company plans to roll out over time.

Consultants at Arthur D. Little turned to Azure AI to build an internal, generative AI-powered solution to search across vast amounts of complex document formats. Using natural language processing from Azure AI Language and Azure OpenAI Service, along with Azure Cognitive Search’s advanced information retrieval technology, the firm can now transform difficult document formats, such as 100-plus-slide PowerPoint decks with fragmented text and images, making them immediately human-readable and searchable.

SWM is the municipal utility company serving Munich, Germany—and it relies on Azure IoT, AI, and big data analysis to drive every aspect of the city’s energy, heating, and mobility transition forward more sustainably. The scalability of the Azure cloud platform removes all limits when it comes to using big data.

Generative AI has quickly become a powerful tool for businesses to streamline tasks and enhance productivity. Check out this recent story of five Microsoft partners—Commerce.AI, Datadog, Modern Requirements, Atera, and SymphonyAI—powering their own customers’ transformations using generative AI. Microsoft’s layered approach for these generative models is guided by our AI Principles to help ensure organizations build responsibly and comply with the Azure OpenAI Code of Conduct.

Opportunities to enhance your AI skills and expertise

New learning paths for business decision makers are available from the Azure Skilling team for you to hone your skills with Microsoft’s AI technologies and ensure you’re ahead of the curve as AI reshapes the business landscape. We’re also helping leaders from the Healthcare and Financial Services industries learn more about how to apply AI in their everyday work. Check out the new learning material on knowledge mining and developing generative AI solutions with Azure OpenAI Service.

The Azure AI and Azure Cosmos DB University Hackathon kicked off this month. The hackathon is a call to students worldwide to reimagine the future of education using Azure AI and Azure Databases. Read the blog post to learn more and register.

If you’re already using generative AI and want to learn more—or if you haven’t yet and aren’t sure where to start—I have a new resource for you. Here are 25 tips to help you unlock the potential of generative AI through better prompts. These tips help you write more specific inputs and get more accurate, relevant results from language models like ChatGPT.

Just like in real life, HOW you ask for something can limit what you get in response, so it’s important to get the inputs right. Whether you’re conducting market research, sourcing ideas for your child’s Halloween costume, or (my personal favorite) creating marketing narratives, these prompt tips will help you get better results.

Embrace the future of data and AI with upcoming events

PASS Data Community Summit 2023 is just around the corner, from November 14, 2023, through November 17, 2023. This is an opportunity to connect, share, and learn with your peers and industry thought leaders—and to celebrate all things data! The full schedule is now live so you can register and start planning your Summit week.

I hope you’ll join us for Microsoft Ignite 2023, which also takes place November 14 through 17, 2023. If you’re not headed to Seattle, be sure to register for the virtual experience. There’s so much in store for you to experience AI transformation in action!

Protect your web apps from modern threats with Microsoft Defender for Cloud

This blog was co-written with Loren Lachapelle, Dotan Patrich, and Assaf Berenson. 

In this era of AI-driven competition, enterprises of all sizes have prioritized migrating their app development from on-premises to the cloud. As developers rapidly publish new cloud applications, bad actors are equally relentless in seeking new ways to exploit misconfigured resources. One question that comes up for enterprise cloud architects is: how can you best protect your cloud deployments from attacks? More importantly, how do you incorporate security practices for cloud systems that may differ from on-premises systems and vary between cloud service providers?

That’s where the power of a managed platform as a service (PaaS) with integrated cloud security comes in. Azure App Service provides native security integration with Defender for App Service in Microsoft Defender for Cloud to help protect multicloud and hybrid environments with comprehensive security across the full lifecycle, from development to runtime. In this blog, we will explore another well-kept secret: how seamless and worry-free it can be to safeguard your web applications using the integration with Defender for App Service.

Native security integration with a Zero Trust approach 

Defender for App Service is a Microsoft first-party solution that uses the scale of the cloud to identify attacks targeting applications running in Azure App Service, providing more robust security when you migrate your web apps from on-premises. With this migration to App Service, you receive automatic platform maintenance and security patching, so you’re always running the latest versions of the operating system, language frameworks, and runtime software.

By enabling Defender for App Service, you get an extra layer of protection for your App Service plan that assesses the resources and generates security recommendations based on its findings. Since it seamlessly integrates with Azure App Service, it minimizes the need for deployment and onboarding overhead on your end and requires no alterations to your apps to detect threats.  

Attackers routinely probe web applications to find and exploit weaknesses. Before being routed to specific environments, requests to applications running in Azure go through several gateways, where they’re inspected and logged. Our Zero Trust approach collects signals from your organization’s cloud app usage without any reconfiguration, with Azure Web Application Firewall optionally safeguarding data transmission between your environment and these applications. Defender for App Service then works to detect harmful exploits and malicious behavioral patterns in web apps and web app runtime activity. 

Your team also gets complete behind-the-scenes visibility into potential threats and misconfigurations. With Defender for App Service integrated into your Azure App Service deployment and managed by Microsoft, your web apps receive the latest security protection without requiring you to first become a hands-on Zero Trust expert.

Enhanced detection and response capabilities at scale 

Security in the cloud provides scalable defenses that are constantly updated and expertly managed. By enabling Defender for App Service in Defender for Cloud, you can implement robust security practices early in the software development process, secure code management environments, and gain valuable insights into your development environment’s security posture.  

Defender for Cloud provides a centralized view of security alerts across all your Azure resources, including App Service. It generates cloud-centric security recommendations after assessing these resources, based on the Microsoft cloud security benchmark. You can then use the detailed instructions in these recommendations to harden your App Service resources. 

Our customers have found that using security benchmarks can help them quickly secure cloud deployments. A comprehensive security best practice framework from cloud service providers gives you a starting point for selecting specific security configuration settings across multiple service providers, and allows you to monitor those configurations through a single pane of glass.

These recommendations include two key aspects: 

Security controls: These recommendations are generally applicable across your cloud workloads. Each recommendation identifies a list of stakeholders that are typically involved in the planning, approval, or implementation of the benchmark. 

Service baselines: These apply the controls to individual cloud services to provide recommendations on that specific service’s security configuration.  

Defender for App Service provides tools to help you investigate and respond to security incidents, and because it is natively integrated with Azure App Service, it’s easy to enable with just a few clicks. By using the two services together, your IT team can quickly identify and fix the root cause of an attack so that your apps can be brought back online as quickly as possible.

A playbook for staying ahead of digital threats 

Defender for App Service maps threats according to the MITRE ATT&CK framework. The MITRE ATT&CK framework is a comprehensive list of ways that cyber attackers can try to break into and exploit computer systems. The framework helps cybersecurity experts understand and defend against these attacks by giving them a clear idea of what tactics and techniques bad actors might use.  

Defender for Cloud can also detect ongoing attacks, even if it is deployed after a web app has been exploited. This is because it can analyze log data and infrastructure data together to identify suspicious activity, such as new attacks circulating in the wild or compromises in customer applications. 

Defender for App Service also partners with the Microsoft Threat Intelligence community, incorporating the expertise of our extended team of security professionals to detect threats.

Improve the security posture of your web apps running on App Service 

Migrating apps to Azure App Service can help improve security posture in several ways. To recap some of the benefits: 

A secure and hardened platform: Because Microsoft actively monitors and updates the platform, you don’t have to worry about managing the underlying infrastructure, network, or software components.

HTTPS and TLS encryption: Supported for all communication, both inbound and outbound. You can also enforce HTTPS and disable outdated protocols to prevent unencrypted or insecure connections (see the CLI sketch after this list).

Restricted app access based on IP addresses, client certificates, or user identities: You can also use the App Service authentication feature to integrate with various identity providers, such as Microsoft Entra ID (formerly Azure Active Directory), Facebook, Google, or OpenID Connect providers. 

Managed identities: Securely access other Azure resources, such as SQL Database or Storage, without storing any secrets in your code or configuration files. You can also store sensitive app settings and connection strings as secrets in Azure Key Vault, and then monitor your Key Vault using Defender for Key Vault. 

Integrated with additional security products: App Service works with industry-leading features and tools that can help you detect and mitigate threats, such as web application firewall (WAF), Microsoft Defender for Cloud, and Azure Sentinel. 
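As a hedged illustration of the HTTPS and TLS point above, the following Azure CLI commands (with placeholder resource names) enforce HTTPS-only traffic and set a minimum TLS version for an App Service app:

    az webapp update --resource-group <rg> --name <app-name> --https-only true
    az webapp config set --resource-group <rg> --name <app-name> --min-tls-version 1.2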

Enable Defender for App Service in your App Service plan today 

Defender for App Service provides continuous security assessment and recommendations to help you harden your Azure App Service resources and improve your secure score. It detects and alerts you to various attacks, such as user-agent injection, web shell activity, and dangling DNS. You can also view attack details and mitigation steps in the Azure portal, or use Azure Sentinel to investigate and respond to incidents.

Since Defender for App Service is natively integrated with App Service, you don’t have to install or configure anything. Simply enable it on your App Service subscription and refer to the pricing options to customize your plan.
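For example, here’s one hedged way to enable the plan at the subscription level with the Azure CLI (verify the plan name and tier against current documentation):

    az security pricing create --name AppServices --tier standard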

Discover more of Defender for Cloud’s product portfolio by visiting our homepage.  

New to Azure App Service? Learn more about the features and benefits and try Azure for free. Visit product documentation to learn more about protecting your web applications with Microsoft Defender for Cloud.   

Realize the full potential of your cloud investment with Azure optimization

The advantages of the cloud are widely recognized, including enhanced scalability, growth opportunities, and innovation potential. These all hold significant value for most businesses. However, what might be less clear are the optimal strategies and resources that allow you to confidently operate in the cloud while prioritizing long-term efficiency and productivity. That’s why we’re dedicated to helping our customers improve their Azure workloads with resources, tools, and guidance promoting optimization of their cloud investment.

At Microsoft, we understand that customers strive to improve workload reliability and security and optimize their cloud spend so they can accelerate business innovation without worrying about workload vulnerabilities or ballooning cloud costs. We want to help you foster a cycle of continual improvement of your Microsoft Azure workloads and realize the cost benefits of the cloud. Once the key elements of Azure optimization are in place, customers are able to consistently extract maximum value from their cloud investment.

Improve workload reliability and security

Your company can accelerate development in the cloud by leveraging a comprehensive set of guidance that helps you design reliable and secure workloads. The Azure Well-Architected Framework empowers organizations to design, build, and implement cloud deployments that are optimized for security, reliability, performance, sustainability, and cost-efficiency. Importantly, the Well-Architected Framework includes guidance to help you understand the security and reliability tradeoffs associated with cost optimization and reduce unnecessary expenses.

In tandem, the Microsoft Cloud Adoption Framework for Azure provides clear directions for creating and deploying cloud environments that are precisely tailored to meet specific business needs while adhering to best practices. To ensure the seamless operation of these cloud environments and workloads, organizations can establish a Cloud Center of Excellence, facilitating effective management and governance.  

These two frameworks also include documentation specific to hybrid or multi-cloud environments so you can integrate reliability and security anywhere you run Azure workloads. The Cloud Adoption Framework hybrid scenario provides comprehensive guidance for organizations to accelerate cloud adoption, and hybrid Azure Well-Architected guidance helps reduce workload complexity.

Manage cloud spend and optimize costs

Adhering to FinOps best practices can be critical for helping your business manage cloud spend and optimize costs to improve efficiency of cloud operations. FinOps is a framework that establishes a cross-functional team that includes finance, IT, engineers, and business leaders to create a culture of financial accountability where everyone takes ownership of their cloud usage. This collaboration increases visibility into your cloud investment to all levels of the organization, while minimizing costs and maintaining accountability.

Strategically managing your cloud spend can produce long-term gains in efficiency, innovation, and competitiveness. Azure’s optimization products and tools can effectively manage your organization’s cloud expenses and enhance cost optimization through a range of strategic measures. These include taking advantage of various pricing and licensing options, such as the Azure Hybrid Benefit, Azure savings plan, and Azure Reservations, all of which can contribute to reducing the cost of operating in the cloud.

Additionally, Microsoft Cost Management is available to every customer with an Azure subscription and allows organizations to closely monitor, allocate, and optimize cloud expenditures. Earlier this year, we introduced the GPT-powered AI chat capability for Microsoft Cost Management. This feature, currently in preview, makes it even easier to optimize your cloud costs. Further cost optimization insights can be gained through Azure Advisor, another free tool that offers personalized recommendations to optimize cloud resource usage and costs.

For a proactive approach to cost management, the Azure Pricing Calculator and Azure Total Cost of Ownership (TCO) Calculator empower businesses to comprehend projected cloud costs prior to deployment, aiding in informed decision-making.

Achieve continuous improvement in the cloud

Once you’ve deployed Azure workloads, we recommend several practices to optimize those workloads and ensure lasting improvement. Aligned to the Well-Architected Framework, Azure Advisor analyzes your workloads, identifies opportunities for optimization, and monitors progress effectively.

Another tool to identify and recommend workload improvements is the Well-Architected Review, an assessment that provides curated and personalized recommendations to guide identified remediations. Azure service recommendations are directly accessible in the portal and are a helpful tool when deploying new services.

Furthermore, you can align your growth trajectory with your business objectives by leveraging proactive services provided by Microsoft Unified. With Microsoft Unified, you can get scenario-specific support services that help you maintain, onboard, and optimize your Azure solution with prescriptive and tailored guidance. It also includes access to experts who proactively help tailor Azure solutions to meet your unique needs.

Optimize your cloud investment and accelerate innovation

The convergence of optimized workloads and effective cost management forms the bedrock of successful cloud adoption. Once you can consistently and confidently design, build, and manage optimized workloads, then you should start thinking about your future business priorities. At that point, you’ll be able to consider how to reallocate cloud spend toward modernization or new business innovation. We call this “an optimization mindset”, and it is what enables future cloud success.

Depending on where you are in your cloud journey, these next steps can help you start optimizing your Azure investment.

If you’re considering adopting Azure, Azure Migrate and Modernize and Azure Innovate are new offerings that provide access to centralized, comprehensive resources with optimization guidance built in, access to experts, and opportunities for partner funding. These offerings support you at every stage of your cloud journey, from migration to innovating with AI.

If you want to optimize your existing Azure workloads, start by using Azure Advisor and the Well-Architected Review.
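For instance, one quick, hedged way to pull Advisor’s cost recommendations is the Azure CLI:

    az advisor recommendation list --category Cost --output table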

Keep in mind that no one is expecting you to do this on your own. To help with the decisions and tradeoffs that will arise, we always recommend using the expertise of a Microsoft partner, reaching out to a Microsoft consultant, or engaging with your Unified Support account manager about services for optimization.  

Visit the Azure Optimization Collection to explore a range of optimization tools and resources, and discover training opportunities to support your optimization journey.

Building for the future: The enterprise generative AI application lifecycle with Azure AI

In our previous blog, we explored the emerging practice of large language model operations (LLMOps) and the nuances that set it apart from traditional machine learning operations (MLOps). We discussed the challenges of scaling large language model-powered applications and how Microsoft Azure AI uniquely helps organizations manage this complexity. We touched on the importance of considering the development journey as an iterative process to achieve a quality application.  

In this blog, we’ll explore these concepts in more detail. The enterprise development process requires collaboration, diligent evaluation, risk management, and scaled deployment. By providing a robust suite of capabilities supporting these challenges, Azure AI affords a clear and efficient path to generating value in your products for your customers.

Enterprise LLM Lifecycle

Ideating and exploring loop

The first loop typically involves a single developer searching a model catalog for large language models (LLMs) that align with their specific business requirements. Working with a subset of data and prompts, the developer will try to understand the capabilities and limitations of each model through prototyping and evaluation. Developers usually explore altering prompts to the models, different chunking sizes and vector indexing methods, and basic interactions while trying to validate or refute business hypotheses. For instance, in a customer support scenario, they might input sample customer queries to see if the model generates appropriate and helpful responses. They can validate this first by typing in examples, but quickly move to bulk testing with files and automated metrics.


Beyond Azure OpenAI Service, Azure AI offers a comprehensive model catalog, which empowers users to discover, customize, evaluate, and deploy foundation models from leading providers such as Hugging Face, Meta, and OpenAI. This helps developers find and select optimal foundation models for their specific use case. Developers can quickly test and evaluate models using their own data to see how the pre-trained model would perform for their desired scenarios.  

Building and augmenting loop 

Once a developer discovers and evaluates the core capabilities of their preferred LLM, they advance to the next loop, which focuses on guiding and enhancing the LLM to better meet their specific needs. Traditionally, a base model is trained with point-in-time data. However, the scenario often requires enterprise-local data, real-time data, or more fundamental alterations.

For reasoning on enterprise data, Retrieval Augmented Generation (RAG) is preferred, which injects information from internal data sources into the prompt based on the specific user request. Common sources are document search systems, structured databases, and non-SQL stores. With RAG, a developer can “ground” their solution using the capabilities of their LLMs to process and generate responses based on this injected data. This helps developers achieve customized solutions while maintaining relevance and optimizing costs. RAG also facilitates continuous data updates without the need for fine-tuning as the data comes from other sources.  
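To make the RAG pattern concrete, here is a minimal Python sketch that grounds a chat completion with results from a search index. The index name, field names, deployment name, and API versions are placeholders rather than a prescribed implementation:

    import openai
    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents import SearchClient

    openai.api_type = "azure"
    openai.api_base = "https://<your-openai-resource>.openai.azure.com/"
    openai.api_version = "2023-05-15"
    openai.api_key = "<openai-key>"

    search = SearchClient(
        "https://<your-search-service>.search.windows.net",
        "enterprise-docs",  # placeholder index name
        AzureKeyCredential("<search-key>"),
    )

    question = "What is our parental leave policy?"

    # Retrieve: pull the top matching chunks from the internal index.
    hits = search.search(question, top=3)
    context = "\n".join(doc["content"] for doc in hits)

    # Augment and generate: inject the retrieved context into the prompt.
    response = openai.ChatCompletion.create(
        engine="<chat-deployment-name>",  # your GPT deployment name
        messages=[
            {"role": "system", "content": "Answer using only the provided context:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    print(response["choices"][0]["message"]["content"])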

During this loop, the developer may find cases where the output accuracy doesn’t meet desired thresholds. Another method to alter the outcome of an LLM is fine-tuning. Fine-tuning helps most when the nature of the system itself needs to be altered. Generally, an LLM will answer any prompt in a similar tone and format. But if, for example, the use case requires code output, JSON, or another consistent change or restriction in the output, fine-tuning can be employed to better align the system’s responses with the specific requirements of the task at hand. By adjusting the parameters of the LLM during fine-tuning, the developer can significantly improve the output accuracy and relevance, making the system more useful and efficient for the intended use case.

It is also feasible to combine prompt engineering, RAG augmentation, and a fine-tuned LLM. Since fine-tuning necessitates additional data, most users initiate with prompt engineering and modifications to data retrieval before proceeding to fine-tune the model. 

Most importantly, continuous evaluation is an essential element of this loop. During this phase, developers assess the quality and overall groundedness of their LLMs. The end goal is to facilitate safe, responsible, and data-driven insights to inform decision-making while ensuring the AI solutions are primed for production. 


Azure AI prompt flow is a pivotal component in this loop. Prompt flow helps teams streamline the development and evaluation of LLM applications by providing tools for systematic experimentation and a rich array of built-in templates and metrics. This ensures a structured and informed approach to LLM refinement. Developers can also effortlessly integrate with frameworks like LangChain or Semantic Kernel, tailoring their LLM flows based on their business requirements. The addition of reusable Python tools enhances data processing capabilities, while simplified and secure connections to APIs and external data sources afford flexible augmentation of the solution. Developers can also use multiple LLMs as part of their workflow, applied dynamically or conditionally to work on specific tasks and manage costs.  

With Azure AI, evaluating the effectiveness of different development approaches becomes straightforward. Developers can easily craft and compare the performance of prompt variants against sample data, using insightful metrics such as groundedness, fluency, and coherence. In essence, throughout this loop, prompt flow is the linchpin, bridging the gap between innovative ideas and tangible AI solutions. 

Operationalizing loop 

The third loop captures the transition of LLMs from development to production. This loop primarily involves deployment, monitoring, incorporating content safety systems, and integrating with CI/CD (continuous integration and continuous deployment) processes. This stage of the process is often managed by production engineers who have existing processes for application deployment. Central to this stage is collaboration, facilitating a smooth handoff of assets between application developers and data scientists building on the LLMs, and production engineers tasked with deploying them.

Deployment allows for a seamless transfer of LLMs and prompt flows to endpoints for inference without the need for a complex infrastructure setup. Monitoring helps teams track and optimize their LLM application’s safety and quality in production. Content safety systems help detect and mitigate misuse and unwanted content, both on the ingress and egress of the application. Combined, these systems fortify the application against potential risks, improving alignment with risk, governance, and compliance standards.  

Unlike traditional machine learning models that might classify content, LLMs fundamentally generate content. This content often powers end-user-facing experiences like chatbots, with the integration often falling on developers who may not have experience managing probabilistic models. LLM-based applications often incorporate agents and plugins that extend models’ capabilities to trigger actions, which can also amplify risk. These factors, combined with the inherent variability of LLM outputs, underscore why risk management is critical in LLMOps.


Azure AI prompt flow ensures a smooth deployment process to managed online endpoints in Azure Machine Learning. Because prompt flows are well-defined files that adhere to published schemas, they are easily incorporated into existing productization pipelines. Upon deployment, Azure Machine Learning invokes the model data collector, which autonomously gathers production data. That way, monitoring capabilities in Azure AI can provide a granular understanding of resource utilization, ensuring optimal performance and cost-effectiveness through token usage and cost monitoring. More importantly, customers can monitor their generative AI applications for quality and safety in production, using scheduled drift detection using either built-in or customer-defined metrics. Developers can also use Azure AI Content Safety to detect and mitigate harmful content or use the built-in content safety filters provided with Azure OpenAI Service models. Together, these systems provide greater control, quality, and transparency, delivering AI solutions that are safer, more efficient, and more easily meet the organization’s compliance standards.

Azure AI also helps to foster closer collaboration among diverse roles by facilitating the seamless sharing of assets like models, prompts, data, and experiment results using registries. Assets crafted in one workspace can be effortlessly discovered in another, ensuring a fluid handoff of LLMs and prompts. This not only enables a smoother development process but also preserves the lineage across both development and production environments. This integrated approach ensures that LLM applications are not only effective and insightful but also deeply ingrained within the business fabric, delivering unmatched value.

Managing loop 

The final loop in the Enterprise Lifecycle LLM process lays down a structured framework for ongoing governance, management, and security. AI governance can help organizations accelerate their AI adoption and innovation by providing clear and consistent guidelines, processes, and standards for their AI projects.


Azure AI provides built-in AI governance capabilities for privacy, security, compliance, and responsible AI, as well as extensive connectors and integrations to simplify AI governance across your data estate. For example, administrators can set policies to allow or enforce specific security configurations, such as whether your Azure Machine Learning workspace uses a private endpoint. Or, organizations can integrate Azure Machine Learning workspaces with Microsoft Purview to publish metadata on AI assets automatically to the Purview Data Map for easier lineage tracking. This helps risk and compliance professionals understand what data is used to train AI models, how base models are fine-tuned or extended, and where models are used across different production applications. This information is crucial for supporting responsible AI practices and providing evidence for compliance reports and audits.

Whether you’re building generative AI applications with open-source models, Azure’s managed OpenAI models, or your own pre-trained custom models, Azure AI makes it easier to deliver safe, secure, and reliable AI solutions on purpose-built, scalable infrastructure.

Explore the harmonized journey of LLMOps at Microsoft Ignite

As organizations delve deeper into LLMOps to streamline processes, one truth becomes abundantly clear: the journey is multifaceted and requires a diverse range of skills. While tools and technologies like Azure AI prompt flow play a crucial role, the human element—and diverse expertise—is indispensable. It’s the harmonious collaboration of cross-functional teams that creates real magic. Together, they ensure the transformation of a promising idea into a proof of concept and then a game-changing LLM application.

As we approach our annual Microsoft Ignite conference this month, we will continue to post updates to our product line. Join us for more groundbreaking announcements and demonstrations and stay tuned for our next blog in this series.

What’s new in Data & AI: Expanding choices for generative AI app builders

Generative AI is no longer just a buzzword or something that’s just “tech for tech’s sake.” It’s here and it’s real, today, as small and large organizations across industries are adopting generative AI to deliver tangible value to their employees and customers. This has inspired and refined new techniques like prompt engineering, retrieval augmented generation, and fine-tuning so organizations can successfully deploy generative AI for their own use cases and with their own data. We see innovation across the value chain, whether it’s new foundation models or GPUs, or novel applications of preexisting capabilities, like vector similarity search or machine learning operations (MLOps) for generative AI. Together, these rapidly evolving techniques and technologies will help organizations optimize the efficiency, accuracy, and safety of generative AI applications. Which means everyone can be more productive and creative!

We also see generative AI inspiring a wellspring of new audiences to work on AI projects. For example, software developers who may have seen AI and machine learning as the realm of data scientists are getting involved in the selection, customization, evaluation, and deployment of foundation models. Many business leaders, too, feel a sense of urgency to ramp up on AI technologies to better understand not only the possibilities, but also the limitations and risks. At Microsoft Azure, this expansion in addressable audiences is exciting and pushes us to provide more integrated and customizable experiences that make responsible AI accessible to different skillsets. It also reminds us that investing in education is essential, so that all our customers can reap the benefits of generative AI—safely and responsibly—no matter where they are in their AI journey.

We have a lot of exciting news this month, much of it focused on providing developers and data science teams with expanded choice in generative AI models and greater flexibility to customize their applications. And in the spirit of education, I encourage you to check out some of these foundational learning resources:

For business leaders

Building a Foundation for AI Success: A Leader’s Guide: Read key insights from Microsoft, our customers and partners, industry analysts, and AI leaders to help your organization thrive on your path to AI transformation.

Transform your business with Microsoft AI: In this 1.5-hour learning path, business leaders will find the knowledge and resources to adopt AI in their organizations. It explores planning, strategizing, and scaling AI projects in a responsible way.

Career Essentials in Generative AI: In this 4-hour course, you will learn the core concepts of AI and generative AI functionality, how you can start using generative AI in your own day-to-day work, and considerations for responsible AI.

For builders

Introduction to generative AI: This 1-hour course for beginners will help you understand how LLMs work, how to get started with Azure OpenAI Service, and how to plan for a responsible AI solution. 

Start Building AI Plugins With Semantic Kernel: This 1-hour course for beginners will introduce you to Microsoft’s open source orchestrator, Semantic Kernel, and how to use prompts, semantic functions, and vector databases.

Work with generative AI models in Azure Machine Learning: This 1-hour intermediate course will help you understand the Transformer architecture and how to fine-tune a foundation model using the model catalog in Azure Machine Learning.

Access new, powerful foundation models for speech and vision in Azure AI

We’re constantly looking for ways to help machine learning professionals and developers easily discover, customize, and integrate large pre-trained AI models into their solutions. In May, we announced the public preview of foundation models in the Azure AI model catalog, a central hub to explore collections of various foundation models from Hugging Face, Meta, and Azure OpenAI Service. This month brought another milestone: the public preview of a diverse suite of new open-source vision models in the Azure AI model catalog, spanning image classification, object detection, and image segmentation capabilities. With these models, developers can easily integrate powerful, pre-trained vision models into their applications to improve performance for predictive maintenance, smart retail store solutions, autonomous vehicles, and other computer vision scenarios.

In July we announced that the Whisper model from OpenAI would also be coming to Azure AI services. This month, we officially released Whisper in Azure OpenAI Service and Azure AI Speech, now in public preview. Whisper can transcribe audio into text in an astounding 57 languages. The foundation model can also translate all those languages to English and generate transcripts with enhanced readability, making it a powerful complement to existing capabilities in Azure AI. For example, by using Whisper in conjunction with the Azure AI Speech batch transcription application programming interface (API), customers can quickly transcribe large volumes of audio content at scale with high accuracy. We look forward to seeing customers innovate with Whisper to make information more accessible for more audiences.
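As a hedged sketch of calling the new capability, here’s a request to the Azure OpenAI audio transcriptions REST endpoint using Python’s requests library; the API version and deployment name are illustrative and should be checked against current documentation:

    import requests

    endpoint = "https://<your-resource>.openai.azure.com"
    deployment = "<whisper-deployment-name>"
    url = f"{endpoint}/openai/deployments/{deployment}/audio/transcriptions?api-version=2023-09-01-preview"

    with open("meeting.mp3", "rb") as audio:
        response = requests.post(url, headers={"api-key": "<your-api-key>"}, files={"file": audio})

    print(response.json()["text"])  # the transcribed text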


Operationalize application development with new code-first experiences and model monitoring for generative AI

As generative AI adoption accelerates and matures, MLOps for LLMs, or simply “LLMOps,” will be instrumental in realizing the full potential of this technology at enterprise scale. To expedite and streamline the iterative process of prompt engineering for LLMs, we introduced our prompt flow capabilities in Azure Machine Learning at Microsoft Build 2023, providing a way to design, experiment, evaluate, and deploy LLM workflows. This month, we announced a new code-first prompt flow experience through our SDK, CLI, and VS Code extension, available in preview. Now, teams can more easily apply rapid testing, optimization, and version control techniques to generative AI projects, for more seamless transitions from ideation to experimentation and, ultimately, production-ready applications.
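As a small example of the code-first workflow, the prompt flow CLI lets you run a flow locally with test inputs; the flow path and input name here are placeholders:

    pf flow test --flow ./my-chat-flow --inputs question="What can you tell me about your tents?"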

Of course, once you deploy your LLM application in production, the job isn’t finished. Changes in data and consumer behavior can influence your application over time, resulting in outdated AI systems, which negatively impact business outcomes and expose organizations to compliance and reputational risks. This month, we announced model monitoring for generative AI applications, now available in preview in Azure Machine Learning. Users can now collect production data, analyze key safety, quality, and token consumption metrics on a recurring basis, receive timely alerts about critical issues, and visualize the results over time in a rich dashboard.


Enter the new era of corporate search with Azure Cognitive Search and Azure OpenAI Service

Microsoft Bing is transforming the way users discover relevant information across the world wide web. Instead of providing a lengthy list of links, Bing will now intelligently interpret your question and source the best answers from various corners of the internet. What’s more, the search engine presents the information in a clear and concise manner along with verifiable links to data sources. This shift in online search experiences makes internet browsing more user-friendly and efficient.

Now, imagine the transformative impact if businesses could search, navigate, and analyze their internal data with a similar level of ease and efficiency. This new paradigm would enable employees to swiftly access corporate knowledge and harness the power of enterprise data in a fraction of the time. This architectural pattern is known as Retrieval Augmented Generation (RAG). By combining the power of Azure Cognitive Search and Azure OpenAI Service, organizations can now make this streamlined experience possible.

Combine Hybrid Retrieval and Semantic Ranking to improve generative AI applications

Speaking of search, through extensive testing on both representative customer indexes and popular academic benchmarks, Microsoft found that a combination of the following techniques creates the most effective retrieval engine for a majority of customer scenarios, and is especially powerful in the context of generative AI:

Chunking long form content

Employing hybrid retrieval (combining BM25 and vector search)

Activating semantic ranking

Any developer building generative AI applications will want to experiment with hybrid retrieval and reranking strategies to improve the accuracy of outcomes to delight end users.
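Here’s a hedged sketch of combining hybrid retrieval and semantic ranking with the azure-search-documents Python library. Class and parameter names have shifted across preview releases, so verify against the version you install; the index, fields, and semantic configuration are placeholders:

    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents import SearchClient
    from azure.search.documents.models import VectorizedQuery

    search = SearchClient(
        "https://<your-search-service>.search.windows.net",
        "chunks",  # placeholder index of chunked long-form content
        AzureKeyCredential("<search-key>"),
    )

    query = "how do I rotate my storage account keys?"
    query_embedding = [0.0] * 1536  # stand-in for a real embedding of the query

    results = search.search(
        search_text=query,  # BM25 keyword side of the hybrid query
        vector_queries=[VectorizedQuery(vector=query_embedding, k_nearest_neighbors=50, fields="embedding")],
        query_type="semantic",  # activate the semantic reranker
        semantic_configuration_name="default",
        top=5,
    )
    for doc in results:
        print(doc["content"])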

Improve the efficiency of your Azure OpenAI Service application with Azure Cosmos DB vector search

We recently expanded our documentation and tutorials with sample code to help customers learn more about the power of combining Azure Cosmos DB and Azure OpenAI Service. Applying Azure Cosmos DB vector search capabilities to Azure OpenAI applications enables you to store long term memory and chat history, improving the quality and efficiency of your LLM solution for users. This is because vector search allows you to efficiently query back the most relevant context to personalize Azure OpenAI prompts in a token-efficient manner. Storing vector embeddings alongside the data in an integrated solution minimizes the need to manage data synchronization and helps accelerate your time-to-market for AI app development.
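Here’s a hedged sketch of that pattern with pymongo against Azure Cosmos DB for MongoDB vCore, which exposes vector search through the $search aggregation stage; the connection string, collection, and field names are placeholders:

    from pymongo import MongoClient

    client = MongoClient("<your-cosmos-mongo-vcore-connection-string>")
    collection = client["chatdb"]["memories"]  # placeholder database and collection

    query_embedding = [0.0] * 1536  # stand-in for a real embedding of the user's message
    pipeline = [
        {
            "$search": {
                "cosmosSearch": {"vector": query_embedding, "path": "embedding", "k": 5},
                "returnStoredSource": True,
            }
        }
    ]
    for doc in collection.aggregate(pipeline):
        print(doc["text"])  # the most relevant stored context to inject into the next prompt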


Embrace the future of data and AI at upcoming Microsoft events

Azure continuously improves as we listen to our customers and advance our platform for excellence in applied data and AI. We hope you will join us at one of our upcoming events to learn about more innovations coming to Azure and to network directly with Microsoft experts and industry peers.

Enterprise scale open-source analytics on containers: Join Arun Ulagaratchagan (CVP, Azure Data), Kishore Chaliparambil (GM, Azure Data), and Balaji Sankaran (GM, HDInsight) for a webinar on October 3rd to learn more about the latest developments in HDInsight. Microsoft will unveil a full-stack refresh with new open-source workloads, container-based architecture, and pre-built Azure integrations. Find out how to use our modern platform to tune your analytics applications for optimal costs and improved performance, and integrate it with Microsoft Fabric to enable every role in your organization.

Microsoft Ignite is one of our largest events of the year for technical business leaders, IT professionals, developers, and enthusiasts. Join us November 14-17, 2023, virtually or in person, to hear the latest innovations around AI, learn from product and partner experts, build in-demand skills, and connect with the broader community.


Microsoft Azure achieves HITRUST CSF v11 certification

The healthcare industry is undergoing a rapid transformation, driven by the increasing need for cloud computing to improve patient outcomes, capture cost efficiencies, and make it easier to coordinate care, especially for patients in remote areas. Cloud computing enables healthcare organizations to leverage advanced technologies such as artificial intelligence, machine learning, big data analytics, and Internet of Things to enhance their services and operations. However, cloud computing also brings new challenges and risks for securing and protecting sensitive healthcare data, such as electronic health records, medical images, genomic data, and personal health information. Healthcare organizations need to ensure that their cloud service providers meet the highest standards of security and compliance, as well as adhere to the complex and evolving regulations and frameworks that govern the healthcare industry.

Microsoft Azure committed to security and compliance in the healthcare industry

One of the most widely adopted and recognized frameworks for information protection in the healthcare industry is the HITRUST Common Security Framework (CSF). The HITRUST CSF is a comprehensive and scalable framework that integrates multiple authoritative sources, such as HIPAA, NIST, ISO, PCI, and COBIT, into a single set of harmonized controls. The HITRUST CSF provides a prescriptive and flexible approach for assessing and certifying the security and compliance posture of cloud service providers and their customers. Achieving HITRUST CSF certification demonstrates that a cloud service provider has implemented the best practices and controls to safeguard sensitive healthcare data in the cloud.

As healthcare organizations converge on the Dallas area for the HITRUST Collaborate 2023 event, providing secure and compliant cloud services for the healthcare industry is more important than ever. Microsoft Azure is committed to being a trusted partner for healthcare organizations in their digital transformation journey. Azure provides a comprehensive portfolio of cloud services that enable healthcare organizations to build innovative solutions that improve the entire healthcare experience. Azure also offers a range of capabilities that make it easier for healthcare organizations to achieve and maintain security and compliance in the cloud.

We are therefore proud to announce that Microsoft Azure has achieved HITRUST CSF v11.0.1 certification across 162 Azure services and 115 Azure Government services. All GA Azure regions across Azure and Azure Government clouds are included within this certification. This achievement reflects the continuous efforts by Azure to enhance its security and compliance offerings for customers in the healthcare industry.

HITRUST CSF v11.0.1 is the latest version of the framework that incorporates new requirements and updates from various authoritative sources, such as NIST SP 800-53 Rev 5, NIST Cybersecurity Framework v1.1, PCI DSS v3.2.1, FedRAMP High Baseline Rev 5, CSA CCM v3.0.1, GDPR, CCPA, and others. HITRUST CSF v11.0.1 also introduces new features and enhancements, such as maturity scoring model, risk factor analysis, inheritance program expansion, assessment scoping tool improvement, and more. Achieving HITRUST CSF v11.0.1 certification demonstrates the increasing commitment Azure has to providing secure and compliant cloud services for customers in the healthcare industry.

The HITRUST CSF v11.0.1 r2 Validated Assessment for Azure was performed by an independent third-party audit firm licensed under the HITRUST External Assessor program. The audit firm evaluated Azure for security policies, procedures, processes, and controls against the HITRUST CSF requirements applicable to cloud service providers. The audit firm also verified that security controls for Azure are implemented effectively and operate as intended. Azure customers can obtain the HITRUST CSF Letter of Certification, which contains the full scope of certified Azure offerings and regions, at the Service Trust Portal.

Microsoft Azure partners with HITRUST Alliance

In addition to today’s certification, Azure has also partnered in the past with HITRUST Alliance to release the HITRUST Shared Responsibility Matrix for Azure, which provides clarity around security and privacy responsibilities between Azure and its customers, making it easier for organizations to achieve their own HITRUST CSF certification. The matrix outlines which HITRUST CSF controls are fully managed by Azure, which are shared between Azure and customers, and which are solely the customers’ responsibility. The matrix also provides guidance on how customers can leverage the capabilities in Azure to meet their own security and compliance obligations.

Azure also supports the HITRUST Inheritance Program which empowers organizations to achieve more by significantly reducing the compliance cost and burden by enabling customers to externally inherit requirements from the Azure HITRUST CSF certification. The program allows customers to inherit up to 75 percent of applicable HITRUST CSF controls from the Azure certification scope without additional testing or validation by an external assessor. This reduces the time, effort, and resources required for customers to obtain their own HITRUST CSF certification or report on their compliance status using other frameworks or standards based on the HITRUST CSF. Azure has reviewed over 23,450 inheritance requests from customers since the program’s inception.

Azure has maintained the HITRUST CSF certification since November 2016. Azure was one of the first cloud service providers to achieve HITRUST CSF certification and has been continuously expanding its scope of certified services and regions. Azure is also one of the few cloud service providers that offer HITRUST CSF certified services in both public and government clouds. The Azure HITRUST CSF v11.0.1 certification is backward compatible with HITRUST CSF v9.1, v9.2, v9.3, v9.4, v9.5, and v9.6 certifications, offering support to a wide range of customers.

Learn more about the Azure HITRUST CSF certification

Azure is dedicated to helping healthcare organizations accelerate their digital transformation while ensuring security and compliance in the cloud. Azure provides a secure and compliant cloud platform that enables healthcare organizations to build innovative solutions that improve patient care, operational efficiency, and business agility. Azure also offers a variety of tools and resources that make it easier for healthcare organizations to achieve and maintain security and compliance in the cloud. The Azure HITRUST CSF certification is a testament to Azure's commitment to being a trusted partner for healthcare organizations in their cloud journey.

Announcing Microsoft Playwright Testing: Scalable end-to-end testing for modern web apps

This blog has been co-authored by Ashish Shah, Partner Director of Engineering, Azure Developer Experience.

We are excited to announce the preview of Microsoft Playwright Testing, a new service for running Playwright tests easily at scale. Playwright, a fast-growing, open-source framework, enables reliable end-to-end testing and automation for modern web apps. Microsoft Playwright Testing is a fully managed service that uses cloud-hosted browsers to run Playwright tests with much higher parallelization across different operating system and browser combinations. This means faster test runs with broader scenario coverage, which helps speed up delivery of features without sacrificing quality.

Ready to jump in? Get your free Azure trial and start running your tests at cloud-scale with Microsoft Playwright Testing.

Get test suite results faster

Adding Playwright tests to your continuous integration (CI) workflow helps ensure that as the app evolves, your web app experiences continue to work the way you expect. But as the app becomes more complex, the test suite required for comprehensive testing across multiple browser and operating system combinations also increases in size. This leads to longer test suite completion times, potentially delaying your feature delivery. Development teams are already under pressure to quickly deploy app enhancements. To work around long wait times for test completion, it is common practice for development teams to selectively run only a small subset of tests. In a more detrimental scenario, a team may choose to execute tests less frequently, such as only a few times a week in an integration environment instead of with every pull request. This approach can potentially delay catching issues, complicate the process of pinpointing the cause of problems, and adversely affect the overall productivity of the development team.

With the @playwright/test runner, your tests run in independent, parallel worker processes with each process starting its own browser.  Increasing the number of parallel workers can reduce the time it takes to complete the full test suite. You can set the number of workers using the command line:

npx playwright test --workers=4
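
If you prefer configuration over command-line flags, the same setting can live in your Playwright config file. Here is a minimal sketch using the standard @playwright/test API; the worker count is illustrative:

// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Run test files in parallel, spreading them across four worker processes.
  fullyParallel: true,
  workers: 4,
});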

However, when you run tests locally or in your CI pipeline, you’re limited to the number of central processing unit (CPU) cores on your local machine or CI agent machine. At some point adding more workers will lead to resource contention, slowing down each worker and introducing test flakiness.

By using the Microsoft Playwright Testing service, you can scale the number of workers well beyond what local hardware allows. The worker processes orchestrated by @playwright/test continue to run locally, but the browser instances, which are resource-intensive, now run in the cloud. You can see in the demo video below how thousands of tests run on 50 parallel browsers in the cloud managed by Microsoft Playwright Testing, significantly reducing the wait time for test results.
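
Under this model, the test runner stays on your machine while each worker drives a cloud-hosted browser over a WebSocket connection. As a hedged sketch of the idea, Playwright's built-in connectOptions setting can point workers at a remote browser endpoint; the environment variable names and endpoint format below are illustrative placeholders rather than the service's documented configuration:

// playwright.service.config.ts (sketch; variable names are hypothetical)
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Far more workers than local CPU cores could sustain on their own.
  workers: 50,
  use: {
    connectOptions: {
      // Each worker connects to a cloud-hosted browser over WebSocket.
      wsEndpoint: process.env.PLAYWRIGHT_SERVICE_URL ?? '',
      headers: {
        Authorization: `Bearer ${process.env.PLAYWRIGHT_SERVICE_ACCESS_TOKEN}`,
      },
    },
  },
});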

Consistent test results across multiple operating systems and browser combinations

App complexity isn't the only factor in increasing test suite size. Modern web apps need to work flawlessly across numerous browsers, operating systems, and devices. Testing across all these variables increases the amount of time it takes to run your test suite. With Microsoft Playwright Testing, you can use the scalable parallelism provided by the service to run these tests simultaneously across all modern rendering engines, including Chromium, WebKit, and Firefox on Windows and Linux, plus mobile emulation of Google Chrome for Android and Mobile Safari. The service-managed browsers also ensure consistent and reliable results for both functional and visual regression testing, whether tests are run from your CI pipeline or a development machine. This extensive cross-compatibility testing helps ensure your web app delivers consistent performance and functionality across all platforms, optimizing the experience for any user, regardless of their browser or operating system.

Figure 1: Use Microsoft Playwright Testing service from your CI pipelines and code editors.

No test code changes required

If you're using Playwright today, getting started with Microsoft Playwright Testing is easy! The service is designed to integrate seamlessly with your Playwright test suite, with no changes to existing test code required. In just a few steps you can connect your test suite to the service and unlock the full potential of cloud-powered parallel testing. Plus, the service supports multiple versions of Playwright and updates with each new Playwright release, ensuring your tests run against the latest browser versions and technologies while helping to keep your app current, robust, and secure. Now you can focus on thorough application testing without the worry of managing a complex test infrastructure.

Get started with a free trial

Discover all Microsoft Playwright Testing has to offer using the free trial today. Sign in using your Azure account (or create one for free), then follow our Quickstart guide to configure your Playwright tests and run them at cloud-scale.

Next you can explore our flexible consumption-based pricing where you pay only for what you use.

Share your feedback

What would you like to see? We’d love to hear your feedback to help shape the future of this service.

Learn more about Microsoft Playwright Testing.

Learn more about using the Playwright Testing service for your web application testing.

Explore the features and benefits that Microsoft Playwright Testing offers for scalable and reliable web app testing.

Learn how to run your existing Playwright tests with highly parallel cloud browsers to reduce time waiting for test suite completion.

Learn how to set up continuous end-to-end testing to validate that your web app runs correctly across different browsers and operating systems with every code commit.

Learn about our flexible pricing.

Use the pricing calculator to determine your costs based on your business needs.

Learn how Playwright enables reliable end-to-end testing for modern web apps.

See Playwright on GitHub.

Interact with the Playwright community on Discord.

Stay up to date with Playwright releases.


Get inspired: Five Microsoft partners using generative AI to enhance productivity

Generative AI has become a critical tool for businesses seeking to streamline tasks and enhance productivity. Today, generative AI can generate everything from written content and music to product designs and programming code, paving the way for unprecedented levels of automation.

The demand for cutting down on repetitive tasks, and the impact of doing so, is real. Generative AI solutions such as Azure OpenAI Service improve productivity and are useful for content creation, scientific advancement, customer service, and marketing automation.

According to a 2023 report by McKinsey Global Institute:

Combining generative AI with all other technologies could add 0.2 to 3.3 percentage points annually to productivity growth.  

Generative AI could further reduce the volume of human-serviced contacts by up to 50 percent, depending on a company’s existing level of automation.

AI agents integrated through APIs could act nearly autonomously or as copilots, giving real-time suggestions to customer service agents during interactions.

In the lead identification stage of drug development, scientists can use foundation models to automate the preliminary screening of chemicals to search for those that will produce specific effects on drug targets. 

A 2023 BCG (Boston Consulting Group) article reported that emails drafted by a generative AI application achieved 18 percent higher customer happiness scores than email responses written by humans.

By streamlining tasks that were previously manual, time-consuming, and error-prone, generative AI is not just a tool for businesses—but a critical driver of future success.

How partners benefit from generative AI

Partners around the globe have benefited from the Microsoft AI Cloud Partner Program, which helps them build quickly, scale growth, and sell worldwide. AI-specific benefits give partners access to Microsoft's extensive resources, such as AI expertise and its global network, to develop, market, and sell AI solutions, enhancing their competitiveness in the rapidly growing field of artificial intelligence.

Let’s have a look at how five partners (Commerce.AI, Datadog, Modern Requirements, Atera, and SymphonyAI) have powered their customers’ transformations and derived value from using generative AI.

Commerce.AI

Commerce.AI uses a combination of its own technology, Microsoft, and OpenAI solutions to streamline productivity in customer support centers using automation. When a customer call is received, Azure AI Services transcribes it in real-time using a custom Commerce.AI model, and, if necessary, translates it. Post-call, the system utilizes OpenAI technology to automatically generate a summary, determining any follow-up actions and exporting the data to Microsoft Dynamics 365, thereby eliminating the need for agents to write post-call notes. Commerce.AI’s solutions are lauded for enhancing productivity by up to 50 percent, increasing efficiency, and delivering instant insights.

“With OpenAI models, the answers to our questions are ready the moment we ask them,” says Andy Pandharikar, Founder and Chief Executive Officer at Commerce.AI. “What we’ve done at Commerce.AI using Azure OpenAI Service is supercharge our customers’ ability to take action based on those insights through automation.”

By automating workflows, Commerce.AI can respond swiftly to shifts in customer priorities, analyzing unstructured data and using the insights to automate new ad campaigns or launch new products. Commerce.AI predicts that the cost of generative AI in business will decrease as its applications expand.

Datadog

Since launching its first monitoring solution for Azure Virtual Machines in 2015, Datadog has expanded its capabilities by embedding observability solutions within the Azure portal and developing over 600 in-house integrations. With many of Datadog’s customers leveraging Azure OpenAI Service, Datadog developed a seamless Azure OpenAI Service integration to expedite monitoring operations and improve efficiency.

This integration provides comprehensive monitoring for cloud-native and hybrid workloads, accelerates cloud adoption journeys, brings innovative monitoring capabilities, and guarantees best-in-class service quality.

“We have everything configured for autoscaling,” explains Benjamin Pineau, Senior Software Engineer at Datadog, “and Azure will always adapt to our needs, upping capacity by several hundreds of high-memory instances to ingest a spike and then slowing back down in a matter of minutes.”

Since launching this solution at Microsoft Build 2023, hundreds of organizations, including Fortune 50 and large global multinational companies, have adopted Datadog to monitor their AI applications. The solution enables these companies to monitor analytics, optimize costs, and troubleshoot issues in AI-powered applications, freeing up their development teams to focus on customer-centric product development. With this integration, customers can access metrics from Azure Virtual Machines, tag Azure metrics with resource-specific data, gain unique insights into their Azure environment, and correlate data across various Azure applications.

Modern Requirements

Modern Requirements is committed to optimizing the requirements processes of its customers through automation. Their key services include providing the tools necessary for effective project management throughout project life cycles, hastening time to market, and enhancing project quality. Their target sectors range from healthcare and financial services to automotive, aviation, and government, all of which share a common need for regulatory compliance, auditability, and seamless workflow solutions.

The foundation of Modern Requirements’ solution is Microsoft Azure DevOps, chosen for its scalability and security. The integration with Azure OpenAI Service further enhances this with its multifaceted model capable of handling various tasks while ensuring data privacy and security. This integration requires minimal training and opens doors for significant enhancements through OpenAI.

Designed with intent, Modern Requirements4DevOps serves both Modern Requirements’ and Microsoft’s clients in their product development life cycles by automating numerous functions. It further enriches this service with the introduction of Copilot4DevOps, an implementation of ChatGPT in Modern Requirements4DevOps. This tool automates several phases in the product development life cycle, freeing users to focus more on analytical and collaborative tasks.

Modern Requirements4DevOps relieves workflow and data management burdens, storing all information in a single source of truth in Azure DevOps. The extension also transforms Azure DevOps into a knowledge management system, moving away from just record-keeping and workflow management.

The solution is used by requirements engineers, business analysts, test leads, compliance leads, project managers, and project architects for information provision, reuse, and collaboration. It effectively replaces up to half a dozen costly tools, providing an integrated, supportive, and affordable alternative for clients.

Atera

Atera, an Israeli software company, has set a mission to increase IT efficiency tenfold through its AI-powered IT Platform. Developed in collaboration with Microsoft using Azure OpenAI Service, this groundbreaking tool offers a comprehensive view of IT activities and proactively identifies and resolves issues, allowing IT professionals to concentrate on critical tasks.

The platform, serving 11,000 customers across 105 countries, is revolutionizing the way IT issues are handled. It collects metrics continuously, offers immediate solutions, and remotely fixes machines. When customers contact IT support, the autopilot responds instantly with solutions, while a co-pilot takes over in case of complex issues, offering a summarized problem description and recommended solutions to technicians.

“Instead of spending 20 minutes trying to understand the problem, 15 minutes deciding on a solution, and then possibly 40 minutes to remotely fix the issue or two hours writing a script to run it, the technician can focus directly on fixing the issue,” says Oshri Moyal, Co-Founder and CTO of Atera. “All it takes is a few clicks, and the problem is solved. This change means a single technician can go from handling seven to 70 cases per day.”

SymphonyAI

Financial crime, which includes fraud and money laundering, is a major global concern, costing around five percent of global GDP, and is linked to crimes like human trafficking and terrorism, among others. SymphonyAI is taking innovative steps to address this problem. The company's Sensa-NetReveal division offers AI-powered solutions designed to detect financial crime and assist financial investigators. They have integrated AI algorithms and machine learning models into their platform to identify previously undetected risk areas, aiming to complete investigations up to 70 percent faster and with 70 percent less effort from human investigators.

Their Sensa Copilot—built on Azure infrastructure, Azure Kubernetes Service (AKS), Azure AI solutions, and Azure OpenAI—was introduced in May 2023, and is designed to assist financial crime investigators by automatically collecting, collating, and summarizing financial and third-party information, identifying behaviors associated with money laundering, and efficiently analyzing these activities. Investigators can also use it to draft suspicious activity reports (SARs).

In early testing, the Sensa Copilot was shown to increase the productivity of a financial institution’s compliance department by approximately 60 percent, given the volume of alerts these institutions receive daily. This marks a significant shift in the financial crime investigation landscape. In a world where time and efficiency are of the essence, the five above-mentioned Microsoft Partners serve as an inspiration for all businesses, irrespective of their sector or size, to embrace the opportunities offered by generative AI.

Our commitment to responsible AI

Microsoft has a layered approach for generative models, guided by the Microsoft AI Principles. In Azure OpenAI, an integrated safety system provides protection from undesirable inputs and outputs and monitors for misuse. In addition, Microsoft provides guidance and best practices to help customers responsibly build applications using these models and expects customers to comply with the Azure OpenAI Code of Conduct.

Get started with Azure OpenAI Service 

Apply for access to Azure OpenAI Service by completing this form. 

Learn about Azure OpenAI Service and the latest enhancements. 

Get started with OpenAI GPT-4 in Azure OpenAI Service in Microsoft Learn. 

Read our Partner announcement blog, ”Empowering partners to develop AI-powered apps and experiences with ChatGPT in Azure OpenAI Service.” 

Learn how to use the new Chat Completions API (preview) and model versions for ChatGPT and GPT-4 models in Azure OpenAI Service. 


Microsoft empowers health organizations with generative AI and actionable data insights

This post was co-authored by Naveen Valluri, General Manager, Health Data & AI, Microsoft Health & Life Sciences.

In the past year, AI has transformed what we thought was possible and opened up new avenues for groundbreaking transformations. From creating personalized treatment plans to extracting insights from X-rays and MRIs, generative AI has made the concept of artificial intelligence real—and accessible, such as with Azure AI Health Bot. For the healthcare industry, this might mean the beginning of a transformative era that changes how healthcare is delivered and accessed—making precision medicine truly individualized, speeding up groundbreaking research for life-threatening diseases, and finding new and innovative ways to improve patient care.

Making AI and machine learning real and actionable starts with the data being analytics ready. Healthcare data has been growing at an exponential rate, and most healthcare organizations don't know where to start with organizing it. It is usually on-premises, siloed, and hard to navigate. The very first step is to make this data accessible and normalize it in a way that makes it ready for analytics and AI in the cloud. Industry-specific solutions in Microsoft Fabric unify data and insights for healthcare organizations through one common architecture and experience. Now available in preview, healthcare data solutions in Microsoft Fabric eliminate the costly, time-consuming process of stitching together a complex set of disconnected, multi-modal health data sources—text, images, video, and more—and provide a secure and governed way for organizations to access, analyze, and visualize data-driven insights across their organization.

We're making several exciting announcements about new data and AI capabilities that will be introduced across the Microsoft Cloud for Healthcare to help health organizations improve patient experience, gain new insights with machine learning and AI, and handle health information securely. Features like the de-identification service and insights from unstructured text will also be available in Fabric soon. We're pleased to announce:

General availability of multi-language support in Text Analytics for health, an Azure AI Language service. Healthcare organizations can use the Text Analytics for health service to extract meaningful insights in six languages in addition to English—Spanish, French, German, Italian, Portuguese, and Hebrew—making this technology more accessible to health organizations worldwide and improving health equity on a global scale (see the usage sketch after this list).

De-identification service (in preview) in Microsoft Fabric and Azure Health Data Services so organizations can de-identify medical data such that the resulting data retains its clinical relevance and distribution while also adhering to the HIPAA privacy rule. Our service supports unstructured text and will soon cover various other data types (structured, imaging, and MedTech). The service uses state-of-the-art machine learning models to automatically extract, redact, or surrogate over 30 entities—including HIPAA’s 18 protected health information (PHI) identifiers—from unstructured text such as clinical notes, messages, or clinical trial studies.

Expansion of our Azure AI Health Bot in preview to allow healthcare organizations to build copilots for their healthcare professionals to further manage administrative and clinical workloads as well as improve patient experiences. Azure AI Health Bot is designed to help healthcare organizations create specialized chatbot experiences which are now powered by generative AI, enabling high-value conversational scenarios for the health and life sciences industry.

Adding three new built-in models in preview to Azure AI Health Insights. These built-in models create actionable, chronological patient timelines based on clinical data and evidence, provide simplified, patient-friendly versions of clinical notes and reports, and surface radiology insights from radiology reports to help radiologists improve their workflow.
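
To make the multi-language support concrete, the sketch below runs Text Analytics for health over a Spanish clinical note using the @azure/ai-language-text client for JavaScript/TypeScript. The endpoint and key are placeholders for your own Language resource, and the exact result shapes may differ slightly across SDK versions:

import { TextAnalysisClient, AzureKeyCredential } from '@azure/ai-language-text';

const client = new TextAnalysisClient(
  'https://<your-language-resource>.cognitiveservices.azure.com',
  new AzureKeyCredential(process.env.LANGUAGE_KEY ?? '')
);

async function extractClinicalEntities(): Promise<void> {
  // A Spanish clinical note; the 'es' argument selects Spanish processing.
  const documents = ['El paciente toma 100 mg de ibuprofeno dos veces al día.'];

  // Text Analytics for health runs as a long-running batch action.
  const poller = await client.beginAnalyzeBatch([{ kind: 'Healthcare' }], documents, 'es');
  const results = await poller.pollUntilDone();

  for await (const actionResult of results) {
    if (actionResult.kind !== 'Healthcare' || actionResult.error) continue;
    for (const doc of actionResult.results) {
      if (doc.error) continue;
      for (const entity of doc.entities) {
        // For example: "ibuprofeno -> MedicationName (0.99)"
        console.log(`${entity.text} -> ${entity.category} (${entity.confidenceScore})`);
      }
    }
  }
}

extractClinicalEntities().catch(console.error);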

Building a healthcare ecosystem with a partner network 

In addition to our exciting product announcements, Wolters Kluwer also announced that its Health Language Platform, a Fast Healthcare Interoperability Resources (FHIR®) terminology server, will work with Microsoft Azure and Azure Health Data Services to help customers enrich and standardize their healthcare data with medical ontologies on Microsoft Azure.  

Customers onboarding to Azure will be able to access Wolters Kluwer’s Health Language Platform via Azure Marketplace to validate and translate their FHIR data so that it is ready for future analysis. Organizations can achieve semantic interoperability across multi-modal data sources to propel a range of use cases across healthcare. 

Working with our partner ecosystem, Microsoft is committed to continuing to develop healthcare technology that helps our customers use the Microsoft Cloud to derive insights from their data and responsibly use AI. By connecting our customers with the right partners in our ecosystem and giving them access to Azure Marketplace, we want to ensure they have access to the right building blocks for their organization's use cases.

Real-world innovation in healthcare

By combining Microsoft Cloud for Healthcare services and tools, health organizations are coming up with new and innovative solutions to meet their unique needs.

For example, let's say a researcher is working on a new drug for Alzheimer's disease and needs to find suitable patients with specific symptoms and diagnoses to work on a hypothesis. First, they would de-identify their raw data so that they can use it for their research. From the clinician's perspective, they may want to look at a specific set of patients to see if there are any similarities and patterns that could help them build treatment plans for specific patients. Once they have established this, they can gather information from clinical notes that they may have missed to ensure they have the full picture. When writing their report and prescriptions for the patient, the clinician can opt to simplify the note using AI, which exchanges complicated terminology for something easier to understand, making the note much easier for the patient to read.

Next, a patient who has been diagnosed with Alzheimer's disease takes the leading role. They are interested in finding more information about their prescription medications in the report, which they were able to understand much more easily than before thanks to the reduced medical jargon. They find that their hospital website has a chatbot, and they are easily able to interact with it to get answers about their medications and set up appointments if they want.

And that’s just one possibility. Whether it’s finding new treatments, enhancing patient engagement, or optimizing workflows, Microsoft Cloud for Healthcare can help healthcare organizations achieve more.

Helping solve healthcare’s biggest problems

At Microsoft, we want to empower you to solve the challenges you face on a day-to-day basis—whether that means reducing clinician burnout or delighting your patients with personalized care—by allowing you to gain insights from your data and to develop and deploy AI at scale.

With the help of Dataside, a Microsoft partner in Brazil, Oncoclínicas is using Microsoft's Text Analytics for health to extract data from unstructured fields like medical notes, anatomic pathology, genomic, and imaging reports such as MRI. Dataside then used this data for various use cases such as clinical trial feasibility, a better understanding of scenarios for pharmacoeconomics, and gaining a deeper understanding of group epidemiology and outcomes of interest.

“Text Analytics for health was a turning point for Grupo Oncoclínicas to scale our processes and to structure our clinical notes, exam reports and field analysis, which previously only depended on manual curation. Having a solution that works in Portuguese is key—most global solutions tend to only cater to English, thereby neglecting other languages. Accuracy in the native Portuguese allowed us to maintain a high level of accuracy while analyzing the unstructured text.”—Marcio Guimaraes Souza, Head of Data and AI at Grupo Oncoclínicas.

“We are excited to be collaborating with Microsoft to explore the potential of generative AI through the Azure AI Health Bot. This partnership aims to enhance healthcare content utilization at Ramsay Healthcare, offering a transformative way for healthcare professionals to engage with the vast clinical knowledge base. Our innovative solution facilitates seamless and efficient interactions, providing healthcare teams with quick access to answers, recommendations, and inventive troubleshooting solutions, all delivered through an intuitive chat interface. We are confident that it holds the promise to play a pivotal role in our daily operations, reducing time to find relevant content, and potentially revolutionizing the way we provide patient care.”—Towa Jexmark, Head of Innovation and Strategic Partnerships at Ramsay Santé.

Do more with your data with Microsoft Cloud for Healthcare

With Microsoft Cloud for Healthcare, organizations can transform their patient experience, discover new insights with the power of machine learning and AI, and manage PHI data with confidence. Enable your data for the future of healthcare innovation with Microsoft Cloud for Healthcare.

We look forward to working with you as you build the future of health.

Introducing Microsoft Fabric and Copilot in Microsoft Power BI.

Learn more about Azure Health Data Services.

What is Azure Text Analytics for health?

Learn about Azure AI Health Bot.

Discover more about Azure AI Health Insights.

Learn more about Microsoft Cloud for Healthcare.

Discover how health companies are using Azure to drive better health outcomes.

FHIR® is the registered trademark of HL7 and is used with the permission of HL7.

How we interact with information: The new era of search

In today’s rapidly evolving technological landscape, generative AI, and especially Large Language Models (LLMs), are ushering in a significant inflection point. These models stand at the forefront of change, reshaping how we interact with information.

The utilization of LLMs for content consumption and generation holds immense promise for businesses. They have the potential to automate content creation, enhance content quality, diversify content offerings, and even personalize content. This is an inflection point and a great opportunity to discover innovative ways to accelerate your business's potential; explore the transformative impact and shape your business strategy today.

LLMs are finding practical applications in various domains. Take, for example, Microsoft 365 Copilot—a recent innovation aiming to reinvent productivity for businesses by simplifying interactions with data. It makes data more accessible and comprehensible by summarizing email threads in Microsoft Outlook, highlighting key discussion points, suggesting action items in Microsoft Teams, and enabling users to automate tasks and create chatbots in Microsoft Power Platform.

Data from GitHub demonstrates the tangible benefits of GitHub Copilot, with 88 percent of developers reporting increased productivity and 73 percent reporting less time spent searching for information or examples.

Transforming how we search

Remember the days when we typed keywords into search bars and had to click on several links to get the information we needed?

Today, search engines like Bing are changing the game. Instead of providing a lengthy list of links, they intelligently interpret your question and pull information from various corners of the internet. What's more, they present the information in a clear and concise manner, complete with sources.

The shift in online search is making the process more user-friendly and helpful. We are moving from endless lists of links towards direct, easy-to-understand answers. The way we search online has undergone a true evolution.

Now, imagine the transformative impact if businesses could search, navigate, and analyze their internal data with a similar level of ease and efficiency. This new paradigm would enable employees to swiftly access corporate knowledge and harness the power of enterprise data. The architectural pattern behind this is known as Retrieval Augmented Generation (RAG); a combination of Azure Cognitive Search and Azure OpenAI Service makes this streamlined experience possible.

The rise of LLMs and RAG: Bridging the gap in information access

RAG is a natural language processing technique that combines the capabilities of large pre-trained language models with external retrieval or search mechanisms. It introduces external knowledge into the generation process, allowing models to pull in information beyond their initial training.

Here’s a detailed breakdown of how RAG works:

Input: The system receives an input sequence, such as a question that needs an answer.

Retrieval: Prior to generating a response, the RAG system searches for (or “retrieves”) relevant documents or passages from a predefined corpus. This corpus could encompass any collection of texts containing pertinent information related to the input.

Augmentation and generation: The retrieved documents merge with the original input to provide context. This combined data is fed into the language model, which generates a response or output.

RAG can tap into dynamic, up-to-date internal and external data sources, accessing and utilizing newer information without requiring extensive retraining. The ability to incorporate the latest knowledge leads to more precise, informed, and contextually relevant responses, which is a key advantage.
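
To ground these three steps, here is a minimal TypeScript sketch of the pattern that pairs the @azure/search-documents and @azure/openai clients. The index name, field schema, deployment name, endpoints, and environment variables are illustrative placeholders, and a production system would add error handling and prompt hardening:

import { SearchClient, AzureKeyCredential as SearchKeyCredential } from '@azure/search-documents';
import { OpenAIClient, AzureKeyCredential } from '@azure/openai';

// Hypothetical index schema: each document carries a text 'content' field.
interface KnowledgeDoc { id: string; content: string; }

const searchClient = new SearchClient<KnowledgeDoc>(
  'https://<your-search-service>.search.windows.net',
  'knowledge-index',
  new SearchKeyCredential(process.env.SEARCH_KEY ?? '')
);
const openaiClient = new OpenAIClient(
  'https://<your-openai-resource>.openai.azure.com',
  new AzureKeyCredential(process.env.AOAI_KEY ?? '')
);

async function answerWithRag(question: string): Promise<string> {
  // 1. Retrieval: fetch the passages most relevant to the input question.
  const hits = await searchClient.search(question, { top: 3 });
  const passages: string[] = [];
  for await (const hit of hits.results) {
    passages.push(hit.document.content);
  }

  // 2. Augmentation: merge the retrieved context with the original input.
  const systemPrompt =
    'Answer using only the context below.\n\nContext:\n' + passages.join('\n---\n');

  // 3. Generation: the model produces a grounded, contextually relevant answer.
  const completion = await openaiClient.getChatCompletions('gpt-35-turbo', [
    { role: 'system', content: systemPrompt },
    { role: 'user', content: question },
  ]);
  return completion.choices[0]?.message?.content ?? '';
}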

RAG in action: A new era of business productivity

Here are some scenarios where the RAG approach can enhance employee productivity:

Summarization and Q&A: Summarize massive quantities of information for easier consumption and communication.

Data-driven decisioning: Analyze and interpret data to uncover patterns and identify trends that yield valuable insights.

Personalization: Tailor interactions with individualized information to deliver personalized recommendations.

Automation: Automate repetitive tasks to streamline workflows and boost productivity.

As AI continues to evolve, its applications across various fields are becoming increasingly pronounced.

The RAG approach for financial analysis

Consider the world of financial data analysis for a major corporation—an arena where accuracy, timely insights, and strategic decision-making are paramount. Let’s explore how RAG use cases can enhance financial analysis with a fictitious company called Contoso.

1. Summarization and Q&A

Scenario: ‘Contoso’ has just concluded its fiscal year, generating a detailed financial report that spans hundreds of pages. The board members want a summarized version of this report, highlighting key performance indicators.

Sample prompt: “Summarize the main financial outcomes, revenue streams, and significant expenses from ‘Contoso’s’ annual financial report.”

Result: The model provides a concise summary detailing ‘Contoso’s total revenue, major revenue streams, significant costs, profit margins, and other key financial metrics for the year.

2. Data-driven decisioning

Scenario: With the new fiscal year underway, ‘Contoso’ wants to analyze its revenue sources and compare them to its main competitors to better strategize for market dominance.

Sample prompt: “Analyze ‘Contoso’s revenue breakdown from the past year and compare it to its three main competitors’ revenue structures to identify any market gaps or opportunities.”

Result: The model presents a comparative analysis, revealing that while ‘Contoso’ dominates in service revenue, it lags in software licensing, an area where competitors have seen growth.

3. Personalization

Scenario: ‘Contoso’ plans to engage its investors with a personalized report, showcasing how the company’s performance directly impacts their investments.

Sample prompt: “Given the annual financial data, generate a personalized financial impact report for each investor, detailing how ‘Contoso’s’ performance has affected their investment value.”

Result: The model offers tailored reports for each investor. For instance, an investor with a significant stake in service revenue streams would see how the company’s dominance in that sector has positively impacted their returns.

4. Automation

Scenario: Every quarter, ‘Contoso’ receives multiple financial statements and reports from its various departments. Manually consolidating these for a company-wide view would be immensely time-consuming.

Sample prompt: “Automatically collate and categorize the financial data from all departmental reports of ‘Contoso’ for Q1 into overarching themes like ‘Revenue’, ‘Operational Costs’, ‘Marketing Expenses’, and ‘R&D Investments’.”

Result: The model efficiently combines the data, providing ‘Contoso’ with a consolidated view of its financial health for the quarter, highlighting strengths and areas needing attention.

LLMs: Transforming content generation for businesses

Leveraging RAG-based solutions, businesses can boost employee productivity, streamline processes, and make data-driven decisions. As we continue to embrace and refine these technologies, the possibilities for their application are virtually limitless.

Where to start?

Microsoft provides a series of tools to suit your needs and use cases.

Learn more about using your data with Azure OpenAI Service.

What is Azure Machine Learning prompt flow?

Orchestrate your AI with Semantic Kernel.

Discover a sample app for the RAG pattern using Azure Cognitive Search and Azure OpenAI.

Learn more

Check out the partner solutions below for a jumpstart.

Learn about generative AI with Avanade.

Discover generative AI technology services through Accenture.

Explore EY’s AI consulting Services.

PwC provides AI everywhere.

KPMG presents speed to modern data, analytics, and AI.

Integration of RAG into business operations is not just a trend, but a necessity in today’s data-driven world. By understanding and leveraging these solutions, businesses can unlock new avenues for growth and productivity.
