How hollow core fiber is accelerating AI  

This blog is part of the ‘Infrastructure for the era of AI’ series that focuses on emerging technology and trends in large-scale computing. This piece dives deeper into one of our newest technologies, hollow core fiber (HCF). 

AI is at the forefront of people’s minds, and innovations are happening at lightning speed. But to sustain the pace of AI innovation, companies need the right infrastructure for the compute-intensive AI workloads they are trying to run. This is what we call ‘purpose-built infrastructure’ for AI, and it’s a commitment Microsoft has made to its customers. This commitment doesn’t just mean taking hardware that was developed by partners and placing it in its datacenters; Microsoft is dedicated to working with partners, and occasionally on its own, to develop the newest and greatest technology to power scientific breakthroughs and AI solutions.

One of the technologies highlighted at Microsoft Ignite in November was hollow core fiber (HCF), an innovative optical fiber set to optimize Microsoft Azure’s global cloud infrastructure, offering superior network quality, improved latency, and secure data transmission.

Transmission by air 

HCF technology was developed to meet the heavy demands of workloads like AI and to improve global latency and connectivity. It uses a proprietary design in which light propagates through an air core, which has significant advantages over traditional fiber built with a solid core of glass. The HCF structure features nested tubes that reduce unwanted light leakage and keep the light travelling in a straight path through the core.

Because light travels faster through air than through glass, HCF is 47% faster than standard silica fiber, delivering increased overall speed and lower latency. It also offers higher bandwidth per fiber. But what is the difference between speed, latency, and bandwidth? Speed is how quickly data travels over the fiber medium. Latency is the time it takes for data to travel between two end points across the network; the lower the latency, the faster the response time. Bandwidth is the amount of data that can be sent and received over the network. Imagine two vehicles travelling from point A to point B, setting off at the same time. The first vehicle is a car (representing single mode fiber, or SMF) and the second is a van (HCF). Both vehicles carry passengers (the data): the car can take four passengers, whereas the van can take 16 (higher bandwidth). The van also travels faster than the car, so it takes less time to reach point B and arrives at its destination first (lower latency).
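To put rough numbers on that speed difference, here is a minimal sketch that estimates one-way propagation delay for a solid silica core versus an air core over the same route. The refractive indices are illustrative assumptions, not vendor specifications:

```python
# Rough one-way propagation delay: solid silica core vs. air core.
# Refractive indices are illustrative assumptions, not vendor specifications.
C = 299_792.458  # speed of light in vacuum, km/s

def propagation_delay_ms(distance_km: float, refractive_index: float) -> float:
    """Time for light to traverse the fiber, in milliseconds."""
    return distance_km / (C / refractive_index) * 1000

distance = 1000  # km between two points in the network
smf = propagation_delay_ms(distance, 1.47)   # solid silica core
hcf = propagation_delay_ms(distance, 1.003)  # air core, close to vacuum

print(f"SMF: {smf:.2f} ms, HCF: {hcf:.2f} ms")    # SMF: 4.90 ms, HCF: 3.35 ms
print(f"HCF takes {hcf / smf:.0%} of the SMF time")  # about two thirds
```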

For over half a century, the industry has been dedicated to making steady, yet small, advancements in silica fiber technology. Despite the progress, the gains have been modest due to the limitations of silica loss. A significant milestone with HCF technology was reached in early 2024, attaining the lowest optical fiber loss (attenuation) ever recorded at a 1550 nm wavelength, even lower than pure silica core SMF.1 Along with low attenuation, HCF offers higher launch power handling, broader spectral bandwidth, and improved signal integrity and data security compared to SMF.

The need for speed 

Imagine you’re playing an online video game. The game requires quick reactions and split-second decisions. If you have a high-speed connection with low latency, your actions are transmitted quickly to the game server and to your friends, allowing you to react in real time and enjoy a smooth gaming experience. On the other hand, if you have a slow connection with high latency, there will be a delay between your actions and what happens in the game, making it difficult to keep up with the fast-paced gameplay. Whether you’re missing key moments or falling behind other players, lag is highly annoying and can seriously disrupt gameplay. Similarly, in AI models, lower latency and high-speed connections help the models process data and make decisions faster, improving their performance.

Reducing latency for AI workloads

So how can HCF help the performance of AI infrastructure? AI workloads are tasks that involve processing large amounts of data using machine learning algorithms and neural networks, ranging from image recognition and natural language processing to computer vision and speech synthesis. AI workloads require fast networking and low latency because they often involve multiple steps of data processing, such as data ingestion, preprocessing, training, inference, and evaluation. Each step can involve sending and receiving data from different sources, such as cloud servers, edge devices, or other nodes in a distributed system. The speed and quality of the network connection affect how quickly and accurately the data can be transferred and processed. If the network is slow or unreliable, it can cause delays, errors, or failures in the AI workflow, resulting in poor performance, wasted resources, or inaccurate outcomes. These models often need huge amounts of processing power and ultra-fast networking and storage to handle increasingly sophisticated workloads with billions of parameters, so low latency and high-speed networking can help speed up model training and inference, improve performance and accuracy, and foster AI innovation.

Helping AI workloads everywhere

Fast networking and low latency are especially important for AI workloads that require real-time or near-real-time responses, such as autonomous vehicles, video streaming, online gaming, or smart devices. These workloads need to process data and make decisions in milliseconds or seconds, which means they cannot afford any lag or interruption in the network. Low latency and high-speed connections help ensure that the data is delivered and processed in time, allowing the AI models to provide timely and accurate results. Autonomous vehicles exemplify AI’s real-world application, relying on AI models to swiftly identify objects, predict movements, and plan routes amid unpredictable surroundings. Rapid data processing and transmission, facilitated by low latency and high-speed connections, enable near real-time decision-making, enhancing safety and performance. HCF technology can accelerate AI performance, providing faster, more reliable, and more secure networking for AI models and applications. 

Regional implications 

Beyond the hardware that runs your AI models, there are broader implications. Datacenter regions are expensive, and both the distance between regions and the distance between a region and the customer make a world of difference, to the customer and to Azure as it decides where to build these datacenters. When a region is located too far from a customer, it results in higher latency because the model is waiting for data to travel to and from a datacenter that is farther away.

Returning to the car versus van example: with the combination of higher bandwidth and faster transmission speed, more data can be transmitted between two points in a network in two thirds of the time. Alternatively, HCF offers longer reach, extending the transmission distance in an existing network by up to 1.5x with no impact on network performance. Ultimately, you can cover a greater distance within the same latency envelope as traditional SMF, and carry more data while doing it. This has huge implications for Azure customers, minimizing the need for datacenter proximity without increasing latency or reducing performance.
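The reach figure follows from the same arithmetic as the earlier delay sketch. Under the same illustrative refractive indices, a route roughly 1.47x longer in HCF incurs the same propagation delay as the original SMF route, which lines up with the up-to-1.5x claim:

```python
# Equivalent reach at a fixed latency budget (same illustrative indices as above).
N_SMF, N_HCF = 1.47, 1.003

smf_distance = 1000  # km of installed solid-core fiber
# Distance an air-core fiber can cover in the same propagation time:
hcf_distance = smf_distance * N_SMF / N_HCF
print(f"{hcf_distance:.0f} km of HCF fits the same latency budget")  # ~1466 km
```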

The infrastructure for the era of AI 

HCF technology was developed to improve Azure’s global connectivity and meet the demands of AI and future workloads. It offers several benefits to end users, including higher bandwidth, improved signal integrity, and increased security. In the context of AI infrastructure, HCF technology can enable fast, reliable, and secure networking, helping to improve the performance of AI workloads. 

As AI continues to evolve, infrastructure technology remains a critical piece of the puzzle, ensuring efficient and secure connectivity for the digital era. As AI advancements continue to place additional strain on existing infrastructure, AI users are increasingly seeking to benefit from new technologies like HCF, virtual machines like the recently announced ND H100 v5, and silicon like Azure’s own first-party AI accelerator, Azure Maia 100. These advancements collectively enable more efficient processing, faster data transfer, and ultimately, more powerful and responsive AI applications.

Keep up with our “Infrastructure for the Era of AI” series to get a better understanding of these new technologies, why we are investing where we are, what these advancements mean for you, and how they enable AI workloads.

More from the series

Navigating AI: Insights and best practices 

New infrastructure for the era of AI: Emerging technology and trends in 2024 

A year in review for AI Infrastructure 

Tech Pulse: What the rise of AI means for IT Professionals 

Sources

1 Hollow Core DNANF Optical Fiber with <0.11 dB/km Loss

Microsoft is a Leader in the 2024 Gartner® Magic Quadrant™ for Data Science and Machine Learning Platforms 

Microsoft is a Leader in this year’s Gartner® Magic Quadrant™ for Data Science and Machine Learning Platforms. Azure AI provides a powerful, flexible end-to-end platform for accelerating data science and machine learning innovation while providing the enterprise governance that every organization needs in the era of AI. 

In May 2024, Microsoft was also named a Leader for the fifth year in a row in the Gartner® Magic Quadrant™ for Cloud AI Developer Services, where we placed furthest for our Completeness of Vision. We’re pleased by these recognitions from Gartner as we continue helping customers, from large enterprises to agile startups, bring their AI and machine learning models and applications into production securely and at scale. 

Azure AI is at the forefront of purpose-built AI infrastructure and responsible AI tooling, and it helps cross-functional teams collaborate effectively using Machine Learning Operations (MLOps) for generative AI and traditional machine learning projects. Azure Machine Learning provides access to a broad selection of foundation models in the Azure AI model catalog—including the recent releases of Phi-3, JAIS, and GPT-4o—and tools to fine-tune or build your own machine learning models. Additionally, the platform supports a rich library of open-source frameworks, tools, and algorithms so that data science and machine learning teams can innovate in their own way, all on a trusted foundation.

Accelerate time to value with Azure AI infrastructure 

“We’re now able to get a functioning model with relevant insights up and running in just a couple of weeks thanks to Azure Machine Learning. We’ve even managed to produce verified models in just four to six weeks.”
—Dr. Nico Wintergerst, Staff AI Research Engineer at relayr GmbH 

Azure Machine Learning helps organizations build, deploy, and manage high-quality AI solutions quickly and efficiently, whether building large models from scratch, running inference on pre-trained models, consuming models as a service, or fine-tuning models for specific domains. Azure Machine Learning runs on the same powerful AI infrastructure that powers some of the world’s most popular AI services, such as ChatGPT, Bing, and Azure OpenAI Service. Additionally, Azure Machine Learning’s compatibility with ONNX Runtime and DeepSpeed can help customers further optimize training and inference time for performance, scalability, and power efficiency.
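As a flavor of the ONNX Runtime path mentioned above, here is a minimal sketch in which a small PyTorch model is exported to ONNX and served with ONNX Runtime. The model architecture and file name are illustrative assumptions, not part of any Azure sample:

```python
# Minimal sketch: export a PyTorch model to ONNX and run it with ONNX Runtime.
# The model architecture and file name are illustrative assumptions.
import torch
import onnxruntime as ort

model = torch.nn.Sequential(torch.nn.Linear(4, 2), torch.nn.Softmax(dim=1)).eval()
dummy = torch.randn(1, 4)
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["probs"])

# ONNX Runtime selects an execution provider; CPU here, GPU providers are options.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
print(session.run(None, {"input": dummy.numpy()})[0])
```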

Whether your organization is training a deep learning model from scratch using open source frameworks or bringing an existing model into the cloud, Azure Machine Learning enables data science teams to scale out training jobs using elastic cloud compute resources and seamlessly transition from training to deployment. With managed online endpoints, customers can deploy models across powerful CPU and graphics processing unit (GPU) machines without needing to manage the underlying infrastructure—saving time and effort. Similarly, customers do not need to provision or manage infrastructure when deploying foundation models as a service from the Azure AI model catalog. This means customers can easily deploy and manage thousands of models across production environments—from on-premises to the edge—for batch and real-time predictions.  
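For illustration, deploying a model to a managed online endpoint with the Azure Machine Learning Python SDK v2 looks roughly like the sketch below. The subscription, workspace, model path, and instance size are placeholders, and real deployments may also need environment and scoring-script configuration:

```python
# Minimal sketch: deploy a model to a managed online endpoint (Azure ML SDK v2).
# All identifiers below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment, Model

ml_client = MLClient(DefaultAzureCredential(), "<subscription-id>",
                     "<resource-group>", "<workspace-name>")

# Create the endpoint, then a deployment behind it; Azure manages the compute.
endpoint = ManagedOnlineEndpoint(name="demo-endpoint", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="demo-endpoint",
    model=Model(path="./model"),      # local MLflow-format model folder
    instance_type="Standard_DS3_v2",  # CPU SKU; GPU SKUs are also supported
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```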

Streamline operations with flexible MLOps and LLMOps 

“Prompt flow helped streamline our development and testing cycles, which established the groundedness we required for making sure the customer and the solution were interacting in a realistic way.”   
—Fabon Dzogang, Senior Machine Learning Scientist at ASOS

Machine learning operations (MLOps) and large language model operations (LLMOps) sit at the intersection of people, processes, and platforms. As data science projects scale and applications become more complex, effective automation and collaboration tools become essential for achieving high-quality, repeatable outcomes.  

Azure Machine Learning is a flexible MLOps platform, built to support data science teams of any size. The platform makes it easy for teams to share and govern machine learning assets, build repeatable pipelines using built-in interoperability with Azure DevOps and GitHub Actions, and continuously monitor model performance in production. Data connectors for Microsoft sources such as Microsoft Fabric and external sources such as Snowflake and Amazon S3 further simplify MLOps. Interoperability with MLflow also makes it seamless for data scientists to scale existing workloads from local execution to the cloud and edge, while storing all MLflow experiments, run metrics, parameters, and model artifacts in a centralized workspace.
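The MLflow interoperability can be as simple as pointing the tracking URI at the workspace. A minimal sketch, with placeholder workspace identifiers:

```python
# Minimal sketch: log local MLflow runs to an Azure Machine Learning workspace.
# Workspace identifiers are placeholders.
import mlflow
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient

ml_client = MLClient(DefaultAzureCredential(), "<subscription-id>",
                     "<resource-group>", "<workspace-name>")
mlflow.set_tracking_uri(
    ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri)

mlflow.set_experiment("local-to-cloud-demo")
with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("accuracy", 0.93)
    # Runs, metrics, and artifacts land in the centralized workspace.
```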

Azure Machine Learning prompt flow helps streamline the entire development cycle for generative AI applications with its LLMOps capabilities, orchestrating executable flows composed of models, prompts, APIs, Python code, and tools for vector database lookup and content filtering. Azure AI prompt flow can be used together with popular open-source frameworks like LangChain and Semantic Kernel, enabling developers to bring experimental flows into prompt flow to scale those experiments and run comprehensive evaluations. Developers can debug, share, and iterate on applications collaboratively, integrating built-in testing, tracing, and evaluation tools into their CI/CD system to continually reassess the quality and safety of their application. Then, developers can deploy applications when ready with one click and monitor flows for key metrics such as latency, token usage, and generation quality in production. The result is end-to-end observability and continuous improvement.

Develop more trustworthy models and apps 

“The responsible AI dashboard provides valuable insights into the performance and behavior of computer vision models, providing a better level of understanding into why some models perform differently than others, and insights into how various underlying algorithms or parameters influence performance. The benefit is better-performing models, enabled and optimized with less time and effort.” 
—Teague Maxfield, Senior Manager at Constellation Clearsight 

AI principles such as fairness, safety, and transparency are not self-executing. That’s why Azure Machine Learning provides data scientists and developers with practical tools to operationalize responsible AI right in their flow of work, whether they need to assess and debug a traditional machine learning model for bias, protect a foundation model from prompt injection attacks, or monitor model accuracy, quality, and safety in production. 

The Responsible AI dashboard helps data scientists assess and debug traditional machine learning models for fairness, accuracy, and explainability throughout the machine learning lifecycle. Users can also generate a Responsible AI scorecard to document and share model performance details with business stakeholders, for more informed decision-making. Similarly, developers in Azure Machine Learning can review model cards and benchmarks and perform their own evaluations to select the best foundation model for their use case from the Azure AI model catalog. Then they can apply a defense-in-depth approach to mitigating AI risks using built-in capabilities for content filtering, grounding on fresh data, and prompt engineering with safety system messages. Evaluation tools in prompt flow enable developers to iteratively measure, improve, and document the impact of their mitigations at scale, using built-in metrics and custom metrics. That way, data science teams can deploy solutions with confidence while providing transparency for business stakeholders. 
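As one concrete illustration, the open-source responsibleai and raiwidgets packages that underpin the dashboard can be driven from a notebook. This sketch uses a public scikit-learn dataset and model as stand-ins for real data:

```python
# Minimal sketch: assemble a Responsible AI dashboard for a tabular classifier.
# The dataset and model are illustrative stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

df = load_breast_cancer(as_frame=True).frame  # features plus a "target" column
train, test = train_test_split(df, test_size=0.2, random_state=0)
model = RandomForestClassifier().fit(train.drop(columns="target"), train["target"])

rai = RAIInsights(model, train, test, target_column="target",
                  task_type="classification")
rai.explainer.add()       # feature-importance explanations
rai.error_analysis.add()  # find cohorts where the model underperforms
rai.compute()

ResponsibleAIDashboard(rai)  # interactive debugging and fairness views
```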

Read more on Responsible AI with Azure.

Deliver enterprise security, privacy, and compliance 

“We needed to choose a platform that provided best-in-class security and compliance due to the sensitive data we require and one that also offered best-in-class services as we didn’t want to be an infrastructure hosting company. We chose Azure because of its scalability, security, and the immense support it offers in terms of infrastructure management.”
—Michael Calvin, Chief Technical Officer at Kinectify

In today’s data-driven world, effective data security, governance, and privacy require every organization to have a comprehensive understanding of their data and AI and machine learning systems. AI governance also requires effective collaboration between diverse stakeholders, such as IT administrators, AI and machine learning engineers, data scientists, and risk and compliance roles. In addition to enabling enterprise observability through MLOps and LLMOps, Azure Machine Learning helps organizations ensure that data and models are protected and compliant with the highest standards of security and privacy.  

With Azure Machine Learning, IT administrators can restrict access to resources and operations by user account or groups, control incoming and outgoing network communications, encrypt data both in transit and at rest, scan for vulnerabilities, and centrally manage and audit configuration policies through Azure Policy. Data governance teams can also connect Azure Machine Learning to Microsoft Purview, so that metadata on AI assets—including models, datasets, and jobs—is automatically published to the Microsoft Purview Data Map. This enables data scientists and data engineers to observe how components are shared and reused and examine the lineage and transformations of training data to understand the impact of any issues in dependencies. Likewise, risk and compliance professionals can track what data is used to train models, how base models are fine-tuned or extended, and where models are employed across different production applications, and use this as evidence in compliance reports and audits. 

Lastly, with the Azure Machine Learning Kubernetes extension enabled by Azure Arc, organizations can run machine learning workloads on any Kubernetes clusters, ensuring data residency, security, and privacy compliance across hybrid public clouds and on-premises environments. This allows organizations to process data where it resides, meeting stringent regulatory requirements while maintaining flexibility and control over their MLOps. Customers using federated learning techniques along with Azure Machine Learning and Azure confidential computing can also train powerful models on disparate data sources, all without copying or moving data from secure locations. 

Get started with Azure Machine Learning 

Machine learning continues to transform the way businesses operate and compete in the digital era—whether you want to optimize your business operations, enhance customer experiences, or innovate. Azure Machine Learning provides a powerful, flexible machine learning and data science platform to operationalize AI innovation responsibly.  

Read the 2024 Gartner® Magic Quadrant™ for Data Science and Machine Learning Platforms report.

Learn more about Microsoft’s placement in the blog post “Gartner® Magic Quadrant™ for Cloud AI Developer Services.”

Explore more on the Microsoft Customer Stories blog. 

Gartner, Magic Quadrant for Data Science and Machine Learning Platforms, by Afraz Jaffri, Aura Popa, Peter Krensky, Jim Hare, Raghvender Bhati, Maryam Hassanlou, Tong Zhang, 17 June 2024.

Gartner, Magic Quadrant for Cloud AI Developer Services, by Jim Scheibmeir, Arun Batchu, Mike Fang, 29 April 2024.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates and is used herein with permission. All rights reserved. 

Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s Research & Advisory organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. 

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request.

Build exciting career opportunities with new Azure skilling options 

Microsoft Build is more than just a tech conference—it’s a celebration of innovation, a catalyst for growth, and a gateway to unlocking your professional potential through skilling opportunities on Microsoft Learn. In this blog, we’ll look back at some of the most exciting Microsoft Azure tools that were featured at Build 2024 and put you on the path to attain proficiency.  

Jump to a section: 

Unleash the power of AI by mastering intelligent app development 

Empower your developers to achieve improved productivity 

Accelerate your cloud journey with seamless Azure migration 

Master cloud-scale data analysis for insightful decision making 

Unlock maximum cloud efficiency and savings with Azure 

Unleash the power of AI by mastering intelligent app development 

Azure provides a comprehensive ecosystem of services, tools, and infrastructure tailored for the entire AI lifecycle. At Build we highlighted how your team can efficiently develop, scale, and optimize intelligent solutions that use cutting-edge technologies. 

This year at Build, Microsoft announced the general availability of Microsoft Azure AI Studio, where developers can build and customize models. We recently dropped an Azure Enablement Show episode that guides viewers through building their own Copilot using AI Studio. Watch a demonstration of how to use prompt flow to create a custom Copilot, how to chat with the AI model, and how to deploy it as an endpoint.

Another episode focuses on new Microsoft Azure Cosmos DB developer guides for Node.js and Python, as well as a learning path for building AI chatbots using Azure Cosmos DB and Microsoft Azure OpenAI. You’ll learn how to set up, migrate, manage, and utilize vCore-based Azure Cosmos DB for MongoDB to create generative AI apps, culminating in a live demo of an AI chatbot.

If that Azure Enablement Show episode piques your interest to learn more about Azure Cosmos DB, check out the Microsoft Developers AI Learning Hackathon, where you’ll further explore the world of AI and how to build innovative apps using Azure Cosmos DB, plus get the chance to win prizes! To help you prepare for the hackathon, we have a two-part series to guide you through building AI apps with Azure Cosmos DB, which includes deep dives into AI fundamentals, the Azure OpenAI API, vector search, and more.

You can also review our official collection of Azure Cosmos DB learning resources, which includes lessons, technical documentation, and reference sample codes.  

Looking for a more structured lesson plan? Our newly launched Plans on Microsoft Learn now provides guided learning for top Azure tools and solutions, including Azure Cosmos DB. Think of it as a structured roadmap for you or your team to acquire new skills, offering focused content, clear milestones, and support to speed up the learning process. Watch for more official Plans on Microsoft Learn over the coming months! 

There’s even more to learn about building intelligent AI apps with other exciting Azure tools, with two official collections on Azure Kubernetes Service—Build Intelligent Apps with AI and cloud-native technologies and Taking Azure Kubernetes Service out of the Cloud and into your World—and Build AI Apps with Azure Database for PostgreSQL.  

Empower your developers to achieve improved productivity 

Accelerating developer productivity isn’t just about coding faster; it’s about unlocking innovation, reducing costs, and delivering high-quality software that drives business growth. Azure developer tools and services empower you to streamline processes, automate workflows, and use advanced technologies like AI and machine learning.

Join another fun episode of the Azure Enablement Show to discover Microsoft’s skilling resources and tools to help make Python coding more efficient. Learn how to build intelligent apps with Azure’s cloud, AI, and data capabilities and follow along with hands-on modules covering Python web app deployment and machine learning model building on Azure. 

We also have three official collections of learning resources that tackle different aspects of developer productivity:  

Microsoft Developer Tools @ Build 2024: With cutting-edge developer tools and insights, we’ll show you how to create the next generation of modern, intelligent apps. Learn how you can build, test, and deploy apps from the cloud with Microsoft Dev Box and Microsoft Visual Studio, and how Microsoft Azure Load Testing and Microsoft Playwright Testing make it easy to test modern apps.

Accelerate Developer Productivity with GitHub and Azure for Developers: Continue unlocking the full coding potential in the cloud with GitHub Copilot. Through a series of videos, articles, and activities, you’ll see how GitHub Copilot can assist you and speed up your productivity across a variety of programming languages and projects.  

Secure Developer Platforms with GitHub and Azure: Learn how to elevate your code security with GitHub Advanced Security, an add-on to GitHub Enterprise. Safeguard your private repositories at every development stage with advanced features like secret scanning, code scanning, and dependency management. 

Accelerate your cloud journey with seamless Azure migration

Migrating to Azure empowers organizations to unlock a world of opportunities. At Build we demonstrated how, by using the robust and scalable Azure cloud platform, businesses can modernize their legacy systems, enhance security and compliance, and integrate with AI.  

Looking to get more hands-on with Azure migration tools? Check out our lineup of Microsoft Azure Virtual Training Days. These free, two-day, four-hour sessions are packed with practical knowledge and hands-on exercises for in-demand skills.  

Data Fundamentals: In this foundational-level course, you’ll learn core data concepts and skills in Azure cloud data services. Find out the difference between relational and non-relational databases, explore Azure offerings like Azure Cosmos DB and Microsoft Azure Storage, and gain insights into large-scale analytics solutions such as Microsoft Azure Synapse Analytics and Microsoft Azure Databricks.

Migrate and Secure Windows Server and SQL Server Workloads: This comprehensive look at migrating and securing on-premises Windows Server and SQL Server workloads to Azure offers insights into assessing workloads, selecting appropriate migration options, and using Azure flexibility, scalability, and cost-saving features.  

Microsoft Azure SQL is an intelligent, scalable, and secure cloud database service that simplifies your operations and unlocks valuable insights for your business. The curated learning paths in our official Azure SQL collection will enable you to focus on the domain-specific database administration and optimization activities that are critical for your business. 

For an even more structured learning experience, there’s our official Plans on Microsoft Learn offering, Migrate and Modernize with Azure Cloud-Scale Database to Enable AI.  Designed to equip you with the expertise needed to harness the full potential of Azure SQL, Microsoft Azure Database for MySQL, Microsoft Azure Database for PostgreSQL, and Microsoft SQL Server enabled by Microsoft Azure Arc for hybrid and multi-cloud environments, this plan will immerse you in the latest capabilities and best practices.  

Master cloud-scale data analysis for insightful decision making 

Cloud-scale analytics help businesses gain valuable insights and make data-driven decisions at an unprecedented speed. Our unified analytics platform, Microsoft Fabric, simplifies data integration, enables seamless collaboration, and democratizes access to AI-powered insights, all within a single, integrated environment. 

Looking to take the Fabric Analytics Engineer Associate certification exam? Get ready with Microsoft Fabric Learn Together, a series of live, expert-led sessions designed to help you build proficiency in tools such as Apache Spark and Data Factory and understand concepts from medallion architecture design to lakehouses.   

There’s still time to register for our Virtual Training Day session, Implementing a Data Lakehouse with Microsoft Fabric, which aims to give data pros hands-on technical experience unifying data analytics using AI and extracting critical insights. Key objectives include identifying Fabric core workloads to deliver insights faster and setting up a data lakehouse foundation for ingestion, transformation, modeling, and visualization.

And of course, don’t miss out on our official collection of learning resources for Microsoft Fabric and Azure Databricks, featuring modules on implementing a data lakehouse and using Copilot in Fabric, and workshops on building retrieval augmented generation (RAG) applications and Azure Cosmos DB for MongoDB vCore. For a more curated experience, our Plans on Microsoft Learn collection will get you started on how to ingest data with shortcuts, pipelines, or dataflows; how to transform data with dataflows, procedures, and notebooks; and how to store data in the Lakehouse and Data Warehouse.

Unlock maximum cloud efficiency and savings with Azure 

Promoting efficiency on Azure is a strategic approach to managing your cloud resources, ensuring optimal performance while minimizing costs. By right-sizing virtual machines (VMs), utilizing reserved instances or savings plans, and taking advantage of automation tools like Microsoft Azure Advisor, you can maximize the value of your Azure investment.

On another fun episode of our Azure Enablement Show, we explore the Learn Live resources available to help you optimize your cloud adoption journey. Confident cloud operations require an understanding of how to manage cost efficiency, reliability, security, and sustainability. Whether you’re an IT pro or just testing the waters, this two-part episode will point you to the learning resources you need.  

There’s always more to explore at Microsoft Learn 

Like every year, Microsoft Build delivered exciting new products and advancements in Azure technology. Don’t get left behind! Start your skilling journey today at Microsoft Learn.  

How to Measure DevSecOps Success: Key Metrics Explained

DevSecOps involves the integration of security throughout the entire software development and delivery lifecycle, representing a cultural shift where security is a collective responsibility for everyone building software. By embedding security at every stage, organizations can identify and resolve security issues earlier in the development process rather than during or after deployment.

Organizations adopting DevSecOps often ask, “Are we making progress?” To answer this, it’s crucial to implement metrics that provide clear insights into how an organization’s security posture evolves over time. Such metrics allow teams to track progress, pinpoint areas for improvement, and make informed decisions to drive continuous improvement in their security practices. By measuring the changing patterns in key indicators, organizations can better understand the impact of DevSecOps and make data-driven adjustments to strengthen their security efforts. 

Organizations typically have many DevSecOps metrics to draw from. In this blog post, we explore two foundational metrics for assessing DevSecOps success.

Key DevSecOps metrics

1. Number of security vulnerabilities over time

Vulnerability analysis is a foundational practice for any organization embarking on a software security journey. This metric tracks the volume of security vulnerabilities identified in a system or software project over time. It helps organizations spot trends in vulnerability detection and remediation, signaling how promptly security gaps are being remediated or mitigated. It can also be an indicator of the effectiveness of an organization’s vulnerability management initiatives and their adoption, both of which are crucial to reducing the risk of cyberattacks and data breaches.
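As a sketch of what this metric looks like in practice, the aggregation below counts findings by month and severity. The scan records are made up; in a real pipeline they would come from your scanner’s output:

```python
# Minimal sketch: vulnerability counts over time, grouped by severity.
# Scan records are illustrative; real data would come from your scanner.
from collections import Counter
from datetime import date

findings = [
    {"found": date(2024, 5, 3), "severity": "critical"},
    {"found": date(2024, 5, 17), "severity": "high"},
    {"found": date(2024, 6, 2), "severity": "high"},
    {"found": date(2024, 6, 20), "severity": "medium"},
]

by_month = Counter((f["found"].strftime("%Y-%m"), f["severity"]) for f in findings)
for (month, severity), count in sorted(by_month.items()):
    print(month, severity, count)
```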

2. Compliance with security policies

Many industries are subject to cybersecurity frameworks and regulations that require organizations to maintain specific security standards. Policies provide a way for organizations to codify the rules for producing and using software artifacts. By tracking policy compliance over time, organizations can verify consistent adherence to established security requirements and best practices, promoting a unified approach to software development.
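Tracked over time, this metric reduces to a simple ratio per evaluation window. A hypothetical sketch:

```python
# Minimal sketch: daily policy compliance rate across evaluated images.
# Evaluation records are illustrative.
from collections import defaultdict

evaluations = [  # (day, did the image pass the policy?)
    ("2024-06-01", True), ("2024-06-01", False),
    ("2024-06-02", True), ("2024-06-02", True),
]

daily = defaultdict(list)
for day, compliant in evaluations:
    daily[day].append(compliant)

for day in sorted(daily):
    rate = sum(daily[day]) / len(daily[day])
    print(day, f"{rate:.0%} compliant")
```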

The above metrics are a good starting point for most organizations looking to measure the impact of their DevSecOps transformation. Once these metrics are implemented, the next step is to invest in an observability system that enables relevant stakeholders, such as security engineering, to easily consume the data.

DevSecOps insights with Docker Scout

Organizations interested in evaluating their container images against these metrics can get started in a few simple steps with Docker Scout. The Docker Scout web interface provides a comprehensive dashboard for CISOs, security teams, and software developers, offering an overview of vulnerability trends and policy compliance status (Figure 1). The web interface is a one-stop shop where users can drill down into specific images for deeper investigations and customize out-of-the-box policies to meet their specific needs.

Figure 1: Docker Scout dashboard.

Furthermore, the Docker Scout metrics exporter is a powerful addition to the Docker Scout ecosystem, bringing vulnerability and policy compliance metrics into existing monitoring systems. This HTTP endpoint enables users to configure Prometheus-compatible tools to scrape Docker Scout data, allowing organizations to integrate with popular observability tools like Grafana and Datadog to achieve centralized security observability.
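As a rough sketch of consuming the endpoint outside of Prometheus, the snippet below fetches and filters the exporter’s text-format output. The URL, metric name, and token handling are assumptions based on the exporter being a standard Prometheus-style endpoint; check the Docker Scout documentation for the exact details:

```python
# Minimal sketch: read Docker Scout exporter metrics in Prometheus text format.
# The endpoint URL, metric name, and auth scheme are assumptions; consult the
# Docker Scout documentation for the exact values.
import requests

ORG = "my-org"  # hypothetical Docker organization
url = f"https://api.scout.docker.com/v1/exporter/org/{ORG}/metrics"
resp = requests.get(url, headers={"Authorization": "Bearer <access-token>"},
                    timeout=30)
resp.raise_for_status()

# One "metric_name{labels} value" line per time series.
for line in resp.text.splitlines():
    if line.startswith("scout_stream_vulnerabilities"):
        print(line)
```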

Figures 2 and 3 show two sample Grafana dashboards illustrating the vulnerability trends and policy compliance insights that Docker Scout can provide.

Figure 2: Grafana Dashboard — Policy compliance.

Figure 2 displays a dashboard that illustrates the compliance posture for each policy configured within a Docker Scout organization. This visualization shows the proportion of images in a stream that comply with the defined policies. At the top of the dashboard, you can see the current compliance rate for each policy, while the bottom section shows compliance trends over the past 30 days.

Figure 3 shows a second Grafana dashboard illustrating the number of vulnerabilities by severity over time within a given stream. In this example, you can see notable spikes across all severity levels, indicating the need for deeper investigation and prioritized remediation.

Figure 3: Grafana Dashboard — Vulnerabilities by severity trends.

Conclusion

Docker Scout metrics exporter is designed to help security engineers improve containerized application security posture in an operationally efficient way. To get started, follow the instructions in the documentation. The instructions will get you up and running with the current public release of the metrics exporter.

Our product team is always open to feedback on social channels such as X and Slack and is looking for ways to evolve the product to align with our customers’ use cases.

Learn more

Visit the Docker Scout product page.

Looking to get up and running? Use our Docker Scout quickstart guide.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Subscribe to the Docker Newsletter.
