Are you an Elite DevOps performer? Find out with the Four Keys Project

Through six years of research, the DevOps Research and Assessment (DORA) team has identified four key metrics that indicate the performance of a software development team:

Deployment Frequency – How often an organization successfully releases to production
Lead Time for Changes – The amount of time it takes a commit to get into production
Change Failure Rate – The percentage of deployments causing a failure in production
Time to Restore Service – How long it takes an organization to recover from a failure in production

At a high level, Deployment Frequency and Lead Time for Changes measure velocity, while Change Failure Rate and Time to Restore Service measure stability. By measuring these values, and continuously iterating to improve on them, a team can achieve significantly better business outcomes. DORA, for example, uses these metrics to identify Elite, High, Medium, and Low performing teams, and finds that Elite teams are twice as likely to meet or exceed their organizational performance goals.1

Baselining your organization's performance on these metrics is a great way to improve the efficiency and effectiveness of your own operations. But how do you get started? The journey starts with gathering data. To help you generate these metrics for your team, we created the Four Keys open source project, which automatically sets up a data ingestion pipeline from your GitHub or GitLab repos through Google Cloud services and into Google Data Studio. It then aggregates your data and compiles it into a dashboard with these key metrics, which you can use to track your progress over time.

To use the Four Keys project, we've included a setup script in the repo to make it easy to collect data from the default sources and view your DORA metrics. For anyone interested in contributing to the project or customizing it to their own team's use cases, we've outlined the three key components below: the pipeline, the metrics, and the dashboard.

The Four Keys pipeline

The Four Keys pipeline is the ETL pipeline which collects your DevOps data and transforms it into DORA metrics. One of the challenges of gathering these DORA metrics, however, is that, for any one team (let alone all the teams in an organization), deployment, change, and incident data usually live in disparate systems. How do we develop an open-source tool that can capture data from these different sources—as well as from sources that you may want to use in the future? With Four Keys, our solution was to create a generalized pipeline that can be extended to process inputs from a wide variety of sources. Any tool or system that can output an HTTP request can be integrated into the Four Keys pipeline, which receives events via webhooks and ingests them into BigQuery.
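To make the webhook-to-BigQuery flow concrete, here is a minimal event handler sketched in Python. This is illustrative only, not the actual Four Keys code: it assumes a Flask app receiving webhook POSTs and streaming each payload into a hypothetical events_raw table via the google-cloud-bigquery client, and it skips the signature verification a production handler would need.

# Illustrative sketch only, not the Four Keys event handler itself.
# Assumes Flask and google-cloud-bigquery are installed and that a BigQuery
# table "your-project.four_keys.events_raw" with matching columns exists.
import json
from datetime import datetime, timezone

from flask import Flask, request
from google.cloud import bigquery

app = Flask(__name__)
bq = bigquery.Client()
TABLE = "your-project.four_keys.events_raw"  # hypothetical table id

@app.route("/", methods=["POST"])
def handle_webhook():
    # Store the raw payload plus minimal metadata; categorization into
    # changes, deployments, and incidents happens later, in SQL.
    row = {
        "source": request.headers.get("X-GitHub-Event", "unknown"),
        "event_timestamp": datetime.now(timezone.utc).isoformat(),
        "metadata": json.dumps(request.get_json(silent=True) or {}),
    }
    errors = bq.insert_rows_json(TABLE, [row])
    return ("", 500) if errors else ("", 204)

if __name__ == "__main__":
    app.run(port=8080)

With the defaults in the repo, the setup script wires up this kind of ingestion for you; the sketch is only meant to show how little the event handler needs to know about any particular source.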
In the Four Keys pipeline, known data sources are parsed into changes, incidents, and deployments. For example, GitHub commits are picked up by the changes script, Cloud Build deployments fall under deployments, and GitHub issues with an 'incident' label are categorized as incidents. If a new data source is added and the existing queries do not categorize it properly, the developer can recategorize it by editing the SQL script.

Data extraction and transformation

Once the raw data is in the data warehouse, there are two challenges: extraction and transformation. To optimize for business flexibility, both of these processes are handled with SQL. Four Keys uses BigQuery scheduled queries to create the downstream tables from the raw events table. It categorizes events into Changes, Deployments, and Incidents using `WHERE` statements, and normalizes and transforms the data with the `SELECT` statement. The precise definition of a change, deployment, or incident depends on a team's business requirements, making it all the more important to have a flexible way to include or exclude additional events. While the definitions may differ from team to team, the scripts do provide defaults to get you started.

As an example, the Deployments script uses a WHERE filter to pull only the relevant rows from the events_raw table, and a SELECT statement to map the corresponding fields in the JSON to the commit ID. One of the benefits of doing data transformations in BigQuery is that you don't need to re-run the pipeline to edit or recategorize the data: the JSON_EXTRACT_SCALAR function allows you to parse and manipulate the JSON data in the SQL itself. BigQuery even allows you to write custom JavaScript functions in SQL!

Calculating the metrics

This section discusses how to translate the DORA metrics into systems-level calculations. The original research done by the DORA team surveyed real people rather than gathering systems data, and bucketed each metric into a performance level. However, it's a lot easier to ask a person how frequently they deploy than it is to ask a computer! When asked if they deploy daily, weekly, monthly, etc., a DevOps manager usually has a gut feeling about which bucket their organization falls into. But when you demand the same information from a computer, you have to be very explicit about your definitions and make value judgments. Let's look at some of the nuances in the metrics definitions and calculations.

Deployment Frequency

`How often an organization successfully releases to production`

Deployment Frequency is the easiest metric to collect, because it only needs one table. However, the bucketing for frequency is also one of the trickier elements to calculate. It would be simple and straightforward to show daily deployment volume or to grab the average number of deployments per week, but the metric is deployment frequency, not volume. In the Four Keys scripts, Deployment Frequency falls into the Daily bucket when the median number of days per week with at least one successful deployment is equal to or greater than three. To put it more simply, to qualify for "deploy daily," you must deploy on most working days. Similarly, if you deploy most weeks, it will be Weekly, and then Monthly, and so forth.

Next you have to consider what constitutes a successful deployment to production. Do you include deployments that reach only 5% of traffic? 80%? Ultimately, this depends on your team's individual business requirements. By default, the dashboard includes any successful deployment to any level of traffic, but this threshold can be adjusted by editing the SQL scripts in the project.
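To make that bucketing rule concrete, here is a small, illustrative Python sketch of the Daily/Weekly/Monthly classification described above. It is an approximation written for this post rather than the SQL that ships with Four Keys, and the thresholds beyond the "median of at least three deploy days per week" rule are assumptions.

# Illustrative approximation of the bucketing described above, not the
# project's actual SQL. Input: dates of successful production deployments.
from collections import defaultdict
from datetime import date, timedelta
from statistics import median

def deployment_frequency_bucket(deploy_dates):
    if not deploy_dates:
        return "No deployments"
    # Distinct deploy days per ISO week, including weeks with no deployments.
    deploy_days = defaultdict(set)
    for d in deploy_dates:
        deploy_days[d.isocalendar()[:2]].add(d)
    counts, cursor, seen = [], min(deploy_dates), set()
    while cursor <= max(deploy_dates):
        week = cursor.isocalendar()[:2]
        if week not in seen:
            seen.add(week)
            counts.append(len(deploy_days[week]))
        cursor += timedelta(days=1)
    median_days = median(counts)
    if median_days >= 3:      # deploy on most working days -> Daily
        return "Daily"
    if median_days >= 1:      # assumption: at least one deploy in a typical week
        return "Weekly"
    return "Monthly or less"  # assumption: a coarser bucket for anything slower

# Example: deployments on most weekdays over three weeks classify as Daily.
print(deployment_frequency_bucket(
    [date(2020, 9, d) for d in (1, 2, 3, 7, 8, 9, 10, 14, 16, 17)]))

In the project itself this classification is done in SQL against the deployments table, so the same rule can be tweaked without touching the pipeline.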
Lead Time for Changes

`The amount of time it takes a commit to get into production`

The Lead Time for Changes metric requires two important pieces of data: when the commit happened, and when the deployment happened. This means that for every deployment, you need to maintain a list of all the changes included in that deployment, which is easily done by using triggers with a SHA mapping back to the commits. With the list of changes in the deploy table, you can join back to the changes table to get the timestamps, and then calculate the median lead time.

Change Failure Rate

`The percentage of deployments causing a failure in production`

The Change Failure Rate depends on two things: how many deployments were attempted, and how many of them resulted in failures in production. To get this number, Four Keys needs the total count of deployments—easily acquired from the deployment table—and then links it to incidents. An incident may come from bugs or 'incident' labels on GitHub issues, a form-to-spreadsheet pipeline, an issue management system, and so on. The only requirement is that it contain the ID of the deployment, so the two tables can be joined together.

Time to Restore Service

`How long it takes an organization to recover from a failure in production`

To measure Time to Restore Service, you need to know when the incident was created and when a deployment resolved that incident. As with the previous metric, this data could come from any incident management system.

The dashboard

With all the data now aggregated and processed in BigQuery, you can visualize it in the Four Keys dashboard. The Four Keys setup script uses a Data Studio connector, which allows you to connect your data to the Four Keys dashboard template. The dashboard is designed to give you high-level categorizations, based on the DORA research, for the four key metrics, and also to show you a running log of your recent performance. This allows developer teams to spot a dip in performance early on so they can mitigate it. Alternately, if performance is low, teams will see early signs of progress before the buckets are updated.

Ready to get started?

Please head over to the Four Keys project to try it out. The setup scripts will get you started setting up the architecture and integrating with your projects. We welcome feedback and contributions! To learn more about how to apply DevOps practices to improve your software delivery performance, visit cloud.google.com/devops. And be on the lookout for a follow-up post on gathering DORA metrics for applications that are hosted entirely in Google Cloud.

1. The 2019 Accelerate State of DevOps: Elite performance, productivity, and scaling

Related article: The 2019 Accelerate State of DevOps: Elite performance, productivity, and scaling – DORA and Google Cloud have published the 2019 Accelerate State of DevOps Report.
Source: Google Cloud Platform

Bring innovation anywhere with Azure’s multi-cloud, multi-edge hybrid capabilities

As businesses shift priorities to enable remote work, take advantage of cloud innovation, and maximize their existing on-premises investments, relying on an effective multi-cloud, multi-edge hybrid approach is more important than ever.

Microsoft Azure has been hybrid by design from the beginning, providing customers consistency and flexibility in meeting their business needs and empowering them to invent with purpose. This is one of the many reasons that the world's leading brands trust their businesses to run on Azure. As we expand our Azure hybrid capabilities, we give customers a holistic and seamless approach to run and manage their apps anywhere across on-premises, multi-cloud, and the edge. Today, we are releasing even more innovation in our Azure hybrid portfolio.

Azure Arc: Bring Azure to any infrastructure

In November 2019, we launched Azure Arc to give customers the flexibility to innovate anywhere with Azure. Azure Arc does two key things: first, it brings Azure management capabilities to any infrastructure, and second, it enables Azure services to run anywhere. Since its launch, Azure Arc has seen tremendous customer interest and adoption across all industries. Organizations such as Africa's Talking, Avanade, DexMach, Ferguson, Fujitsu, KPMG, and Siemens Healthineers are already realizing value with Azure Arc, using it to manage and govern their resources more efficiently in distributed environments and to bring Azure data services on-premises.

Today, we are announcing more innovation with Azure Arc:

Azure Arc enabled data services is now in preview. Azure SQL Managed Instance and Azure PostgreSQL Hyperscale can now run across on-premises datacenters, multi-cloud, and the edge. Customers can take advantage of the latest Azure managed database innovation, such as staying always current with evergreen SQL, elastic scale, and a unified data management experience, regardless of whether it's running in Azure, in their own datacenter, or in a different public cloud. These data services work in both connected and disconnected modes. Customers are seeing wide-ranging benefits in improving their IT productivity and business agility with Azure Arc enabled data services. Sign up for the preview of Azure Arc enabled data services.
Azure Arc enabled servers is now generally available. Customers can seamlessly organize and govern Windows and Linux servers—both physical and virtual machines (VMs)—across their multi-cloud, multi-edge environment, all from the Azure portal. Customers can now use Azure management services to monitor, secure, and update servers, and audit them with the same Azure Policy across multi-cloud and multi-edge deployments. In addition, customers can implement standardized role-based access control across all their servers to meet important compliance requirements. Learn more about Azure Arc.

Azure Stack HCI and Azure Stack Hub: Modernize on-premises datacenters

Over three years ago, we were first to market with Azure Stack, which enables customers to bring cloud innovation into their own datacenters to take advantage of cloud technology while meeting regulatory compliance requirements and retaining the ability to run disconnected. Since then, we've continued to grow the Azure Stack portfolio to provide cloud-consistent infrastructure and Azure services to a range of solutions within local datacenters and running at the edge.

Today, we’re launching new Azure Stack capabilities to help customers modernize their datacenters:

Preview of Azure Kubernetes Services (AKS) on Azure Stack HCI. AKS on Azure Stack HCI enables customers to deploy and manage containerized apps at scale on Azure Stack HCI, just as they can run AKS within Azure. This now provides a consistent, secure, and fully managed Kubernetes experience for customers who want to use Azure Stack HCI within their datacenters. Sign up for the preview of AKS on Azure Stack HCI.

Azure Stack Hub is now available with GPUs. To power visualization-intensive apps, we've partnered with AMD to bring the AMD MI25 GPU to Azure Stack Hub, which allows users to share the GPU in an efficient way. The NVIDIA V100 Tensor Core GPU enables customers to run compute-intensive machine learning workloads in disconnected or partially connected scenarios. The NVIDIA T4 Tensor Core GPU provides visualization, inferencing, and machine learning for less compute-intensive workloads. Learn more about Azure Stack Hub.

Azure VMware Solution: Seamlessly extend and migrate VMware workloads to Azure

Many customers want the ability to seamlessly integrate their existing VMware environments with Azure. Today, we are announcing that Azure VMware Solution is now generally available. Designed, built, and supported by Microsoft, Azure VMware Solution is cloud verified by VMware and enables customers to migrate VMware workloads to the cloud with minimal complexity. The Azure service includes the latest VMware Cloud Foundation components such as vSphere, NSX-T, HCX, and vSAN, and integrates with a rich set of partner solutions, so customers can continue to use existing tools and skills. In addition, with our Azure Hybrid Benefit licensing offer, Azure is the most cost-effective cloud to migrate your Windows Server and SQL workloads to, whether they run on VMware or elsewhere. Learn more about Azure VMware Solution.

New innovation to run compute and AI at the Edge

Organizations are extending compute and AI to the edge of their network to unlock new business scenarios. Imagine that a retail store always stocks the right products at the right places, a hospital extends patient care to the most remote areas in the world, or a factory optimizes its performance level against capacity in real time. It’s what we call the intelligent edge. Azure offers a comprehensive portfolio of cloud services and edge device support to help customers realize these new use-cases.

Today, we are releasing new edge capabilities:

Azure SQL Edge is now generally available, bringing the most secure Microsoft SQL data engine to IoT gateways and edge devices. Optimized for edge workloads, this small-footprint container supports built-in data streaming, storage, and AI in connected or disconnected environments. Built on the same codebase as SQL Server and Azure SQL Database, Azure SQL Edge provides the same industry-leading security, the same familiar developer experience, and the same tooling that many teams already know and trust. Learn more about Azure SQL Edge.

Two new Azure Stack Edge rugged devices are available. Customers can perform machine learning and gain quick insights at the edge by running the Azure Stack Edge Pro R with NVIDIA's powerful T4 GPU and the lightweight, portable Azure Stack Edge Mini R. Both devices are designed to operate in the harshest environments at remote locations. To check out these new devices through augmented reality in 3D, download the Microsoft Hardware Experience on iOS or on Android.

Azure Stack Edge is now available with GPUs. Customers can run visualization, inferencing, and machine learning at the edge with the Azure Stack Edge Pro series powered by the NVIDIA T4 Tensor Core GPU. This unlocks a broad set of new edge scenarios, such as automatically recognizing license plates for efficient retail curbside pickup, and detecting defects in real time in products on a manufacturing assembly line. Learn more about Azure Stack Edge.

AT&T builds cellular-enabled guardian module with Azure Sphere: AT&T and Microsoft are teaming up to enable enterprise customers to connect their machines and equipment to the cloud securely and seamlessly through Azure Sphere guardian devices over AT&T's cellular network, without needing to rely on Wi-Fi systems. This enables customers to connect their devices where Wi-Fi does not meet their security standards. For example, customers who operate franchises in third-party locations will be able to connect their machines directly to their own clouds, bypassing third-party-owned Wi-Fi. The AT&T powered guardian device expands Azure Sphere's reach with the AT&T Global SIM, which can operate in over 200 countries, and provides multi-layered, unified security from edge to cloud. Learn more about Azure Sphere.

We look forward to sharing even more updates on our innovation in multi-cloud, multi-edge hybrid at Microsoft Ignite this week! To learn more about our Azure hybrid offerings, visit the Azure hybrid solutions page. You can also register for our upcoming webinar series that will demonstrate use case scenarios and best practices using key Azure hybrid offerings.

Azure. Invent with purpose.
Source: Azure

AT&T powered guardian device with Azure Sphere enables highly secured, simple, and scalable connectivity from anywhere

In July 2019, Microsoft and AT&T entered a strategic alliance to lead innovation and deliver powerful new solutions in some of the most transformative technologies, including the Internet of Things (IoT). Today, we are sharing that as part of that partnership, AT&T is introducing a new IoT solution consisting of a cellular guardian device powered by their network, security, and support services, and built with Azure Sphere.

As a global leader in telecommunications and network security, AT&T is deeply invested in providing resilient solutions that free their customers to innovate with confidence. When looking for an IoT platform to build upon, they were uncompromising in their quest to balance ease of use with best-in-class device security. They wanted renewable security to guard against emerging threats and keep devices secured over time. They chose Microsoft Azure Sphere for device security.

The cellular-enabled guardian device powered by AT&T combines the fully supported multi-layered security of AT&T’s core network with Azure Sphere’s integrated silicon, software, and cloud services. The Azure Sphere components work seamlessly together to deliver ongoing device security updates for more than ten years. The guardian device physically attaches to brownfield equipment with little to no equipment redesign, delivering edge-to-cloud communication via the AT&T secured cellular network.

AT&T's guardian device is a transformative example of how Microsoft works with industry leaders like AT&T to achieve more and, in turn, deliver innovation and opportunity to their customers. Organizations in over 200 countries and territories can now connect their various devices directly to their own cloud network to securely manage and monitor them remotely, unlocking:

Direct connection to their own clouds bypassing the need for Wi-Fi.
Device connection in remote areas without Wi-Fi access.
Devices which are mobile without the need for disconnection and re-pairing to multiple Wi-Fi networks.

A new connectivity option for enterprises

Cellular connectivity is a great addition to the Wi-Fi connectivity option natively offered through Azure Sphere. Cellular presents a compelling opportunity for enterprise customers to solve real world business problems through data and insights.

For instance, retail organizations with embedded or franchised stores (such as convenience stores, fast food restaurants, and coffee shops located in grocery stores, airports, hospitals, or remote locations) that rely on third-party-owned Wi-Fi can now simply retrofit a cellular guardian module with Azure Sphere to their equipment and connect securely and directly to their own cloud environments to improve product quality and customer service and to reduce operating costs.

Delivering on the vision of securely connected devices and data

As enterprises set their sights on what can be achieved through innovation, the precautions organizations must take when connecting their essential equipment to the cloud are well warranted. If not secured adequately, infrastructure can become vulnerable to exploitation, leading to equipment being rendered useless or controlled for malicious purposes, data being polluted, or confidential information being compromised.

IoT attacks put organizations at risk and can be so disruptive that they jeopardize long-term business value and objectives. With fully supported network security from AT&T, and device level security from Azure Sphere, organizations can leverage the benefits of secured bi-directional data transfer between devices and any cloud. AT&T provides front-end security, including wireless network security, professional services, and support to the guardian device. Azure Sphere provides the back-end security, protecting the device or equipment connected to the guardian device. This combination empowers enterprises to protect their devices while transforming their business with connected experiences.

Streamlined and scalable connectivity​

AT&T’s extensive global network makes it possible for customers’ data to travel via cellular with fast activation out of the box. The cellular guardian device with Azure Sphere enables device connections in any market, with seamless and agile deployments. The AT&T Global SIM allows customers to utilize the same AT&T subscription across more than 200 countries and territories without the need to re-credential.

Azure Sphere’s mission is to empower every organization on the planet to connect and create secured and trustworthy IoT devices. Everything we do is organized around this mission. AT&T’s offer complements our mission; at its core, it’s designed to accelerate IoT transformation by delivering anywhere, anytime access to secured connectivity.

AT&T wants to make IoT easier for their customers. Mo Katibeh, EVP-Chief Product and Platform Officer, puts it nicely, “With the combined solutions from AT&T and Microsoft, we’re offering our customers the means to accelerate their innovation and business problem-solving – simply. This is about creating simple customer experiences that remove connectivity complexity. With cellular, devices just work without the customer having to do Wi-Fi pairing or managing issues with Wi-Fi networks that are not under their control.”

Organizations can streamline their IoT deployment process by leveraging the suite of AT&T's professional services through the AT&T powered guardian device with Azure Sphere. Features include enhanced support, priority care and monitoring from a single provider, and a simplified device connectivity process. By taking advantage of these services, organizations can scale their cellular connectivity deployment, ultimately reaching more end customers.

Get started today

AT&T has introduced a truly unique offering to the market by providing network security and services for a cellular guardian device. It’s exciting to see a network leader back their cellular guardian device with a commitment to security—AT&T is paving the way for customers to achieve more through scalable cellular-enabled IoT deployments.

Explore more about cellular connectivity with Azure Sphere and connect with the AT&T team to discuss the opportunities the AT&T powered guardian device with Azure Sphere can bring to your business.

Azure. Invent with purpose.
Source: Azure

Introducing Azure Orbital: Process satellite data at cloud-scale

Data collected from space to observe Earth is instrumental in helping address global challenges such as climate change and in furthering scientific discovery and innovation. The cloud is central both to modern communications scenarios for remote operations and to the gathering, processing, and distribution of the tremendous amounts of data coming from space.

Today we're announcing the preview of Azure Orbital. This new ground station service enables satellite operators to communicate with and control their satellites, process data, and scale operations directly with Microsoft Azure. With Azure Orbital, the ground segment, including the ground stations, network, and procedures, becomes a digital platform that is now integrated into Azure and complemented by many partners.

Amergint, Kratos, KSAT, Kubos, Viasat, and US Electrodynamics Inc. have joined the ecosystem of Azure Orbital partners, each of them bringing their unique value and expertise for the benefit of our customers.

Microsoft is well-positioned to support customer needs in gathering, transporting, and processing geospatial data. Our intelligent cloud and edge strategy currently extends across more than 60 announced cloud regions and pairs advanced analytics and AI capabilities with one of the fastest and most resilient networks in the world; security and innovation are at the core of everything we do.

Azure Orbital scenarios

Earth observation and IoT

Satellite images are used in many industries. Areas such as meteorology, oceanography, agriculture, geology, and defense and intelligence most often use satellites that are in a non-geostationary orbit (NGSO), including low-earth orbit (LEO) or medium-earth orbit (MEO). Because these satellites are orbiting, a substantial number of ground stations is required to establish contact within a specific time window to downlink the data to Earth.

Azure Orbital enables satellite operators to schedule contacts with their spacecraft and directly downlink data into their virtual network (VNet) in Azure. Azure Virtual Networks are isolated, highly secure, and governed by Microsoft's more than 90 compliance certifications covering applications and datasets.

Azure Orbital on-ramps your data directly into Azure, where it can immediately get processed with market-leading data analytics, geospatial tools, machine learning, and Azure AI services.

Contact scheduling will be available for Microsoft owned and operated ground stations in X, S, and UHF band frequencies via shared high gain antennas. We are also directly interconnecting our global network with our partner's ground station networks for easy scheduling with your preferred Teleport operators while maintaining the benefits of direct integration with Azure.

Whether you choose to use Microsoft or partner ground stations, the digitized Radio Frequency (RF) signal from the antenna to the cloud can be transmitted using the VITA Radio Transport (VRT) format (VITA-49) and then subsequently be demodulated using custom modems or cloud modems offered by the platform.

 ​

Global communications

In-flight connectivity (IFC), maritime, connected cruise, mobility, and video broadcasting are examples of communication scenarios addressed by the space industry. Leveraging Azure Orbital, satellite operators can go beyond selling network capacity by accelerating the build-out of managed services. Azure Orbital offers interconnection of your existing ground stations and colocation of dedicated antennas close to our network PoPs or datacenters. Orbital enables you to take full advantage of our global network and services infrastructure to build new product offerings and service chains with the edge, 5G, SD-WAN, and AI, while continuously optimizing your operations and footprint.

SES has selected Microsoft Azure to colocate the ground stations (including telemetry, tracking, and command systems) of their next-generation SES O3b mPOWER communication system. SES has designed a cloud-scale operational environment and will leverage Azure Orbital as a core platform to scale and build managed services and to streamline order and service delivery management processes.

"In the last 12 to 18 months, our focus has been to accelerate our customers' cloud adoption plans. We are pleased to have found an ideal partner in Microsoft with its new Azure Orbital system. This partnership leverages both companies' know-how—SES's experience in satellite infrastructure and Microsoft's cloud expertise—and is building blocks in developing new and innovative solutions for the future," said JP Hemingway, CEO of SES Networks."we are thrilled that we will be co-locating, deploying and operating our next-generation O3b mPOWER gateways alongside Microsoft's data centers. This one-hop connectivity to the cloud from remote sites will enable our MEO customers to enhance their cloud application performance, optimize business operations with much flexibility and agility needed to expand new markets."

The ground segment is a significant part of any satellite operator's investments. An ecosystem of partners has joined our Managed Service Providers (MSP) program to enrich the platform with select services and technology integrations. Cloud modems optimized for Azure and mission control operations are examples of third-party cloud-native services provisioned on demand when your business needs them.

Azure Orbital is now in preview. For questions or feedback, or to express interest in participating in the preview, please contact MsAzureOrbital@microsoft.com. You can also visit our product documentation to learn more.

Learn more about Azure Orbital

Introduction to Azure Orbital
What’s new in Azure networking

Azure. Invent with purpose.

Source: Azure

Build powerful and responsible AI solutions with Azure

As organizations assess safely reopening and continue navigating unexpected shifts in the world, getting insights to respond in an agile and conscientious manner is vital. Developers and data scientists of all skill levels are inventing with Microsoft Azure AI's powerful and responsible tools to meet these challenges.

Operating safely

To help organizations operate safely in today’s environment, we are introducing a new spatial analysis capability in the Computer Vision Azure Cognitive Service. Its advanced AI models aggregate insights from multiple cameras to count the number of people in the room, measure the distance between individuals, and monitor wait and dwell times. Organizations can now apply this technology to use their space in a safe, optimal way. For instance, RXR, one of New York City’s largest real estate companies, has embedded spatial analysis in their RxWell app to ensure occupants' safety and wellness.

“When it came to developing RxWell, there was simply no other company that had the capability and the infrastructure to meet our comprehensive data, analytics, and security needs than Microsoft. With our partnership, the RxWell program provides our customers the tools they need to safely navigate the ‘new abnormal’ of COVID-19 and beyond.” – Scott Rechler, Chairman and CEO, RXR Realty

Read more about the RXR customer story here.

Achieving agility and resiliency

To get timely insights into their business, organizations need to monitor metrics proactively and quickly diagnose issues as they arise. Metrics Advisor, a new Azure Cognitive Service, helps customers to do this through a powerful combination of real-time monitoring, auto-tuning AI models, alerting, and root cause analysis. It allows organizations to fix issues before they become significant problems. No machine learning expertise is required. Customers such as NOS telecommunications have been able to increase agility and improve customer service using Metrics Advisor. 

“Metrics Advisor helps capture potential network device failures in time so that we can react instantly. It reduces incoming customer call bottlenecks and improves customer satisfaction. “ – João Ferreira, Director of Product Development, NOS telecommunications company (Portugal)

To help customers build custom machine learning models without data science expertise, Azure Machine Learning’s no-code automated machine learning and drag and drop designer are now generally available. These capabilities empower citizen data scientists and developers to build machine learning solutions.

“By using Azure Machine Learning designer, we were able to quickly release a valuable tool built on machine learning insights, that predicted occupancy in trains, promoting social distancing in the fight against Covid-19. ” – Steffen Pedersen, Head of AI and advanced analytics, DSB 

We are also making machine learning more accessible by providing additional value at a lower cost. Azure Machine Learning customers will now get all the Enterprise edition capabilities in the Basic edition at no extra charge, helping them adopt and scale machine learning more cost-effectively. Learn more about updates to Azure Machine Learning.

Applying AI responsibly

Safe and responsible use of AI is essential as organizations, and the world, depend on technology more than ever before. Responsible AI practices and guidelines for safe use are infused into Azure AI’s services, such as spatial analysis, to ensure personal privacy, transparency, and trust. We’ve also seen the rapid adoption of Azure Machine Learning’s responsible ML capabilities and toolkits.

A recent example is Philips, a leading health technology company, which is using Azure and the Fairlearn toolkit to build unbiased machine learning models. Healthcare models can be biased depending on how different hospitals document symptoms and tasks. Using the Fairlearn toolkit, Philips was able to assess key fairness metrics to uncover model inaccuracies for different patient groups. By improving their models' overall fairness and mitigating biases, they were able to deliver valuable insights to their hospitals on patient wellbeing and care.

With these innovations, all developers and data scientists can harness the power of Azure AI responsibly to help their organizations move forward. For more on the latest, check out these resources:

Learn more about Azure AI.
Learn more about Metrics Advisor and spatial analysis, part of Azure Cognitive Services.
Learn more about Azure Machine Learning. 

Azure. Invent with purpose.

Source: Azure

Build rich communication experiences at scale with Azure Communication Services

The global situation today has truly transformed how we communicate with each other. While we woke up one day and the world was different, business hasn't stopped. Customers still need to connect with businesses. Whether it's providing real-time virtual assistance or enabling curbside pickup, we've had to rethink how we engage without physical interactions.

In this remote-first world, businesses are looking to quickly adapt to customers' needs and connect with them through engaging communication experiences. Building new communication solutions or integrating them into existing applications can be complex and time-consuming, often requiring considerable investment and specialized expertise. That's why we're excited to announce Azure Communication Services, the first fully managed communication platform offering from a major cloud provider.

Azure Communication Services is built natively on top of a global, reliable cloud—Azure. Businesses can confidently build and deploy on the same low-latency global communication network used by Microsoft Teams to support over 5 billion meeting minutes in a single day. It also enables developers to easily tap into other Azure services, such as Azure Cognitive Services for translation, sentiment analysis, and more. Additionally, companies benefit from all communications being encrypted to meet privacy and compliance needs, such as HIPAA and GDPR.

"One of our customers in the construction industry was looking for a solution that would give project managers more visibility and communication with people on site. Using Azure Communication Services we were able to get a proof of concept deployed in days vs. weeks, easily integrating voice, video and messaging for our customers in a secure way." -Erik Lagerway, Founder Snapsonic

Azure Communication Services makes it easy to add voice and video calling, chat, and SMS text message capabilities to mobile apps, desktop applications, and websites with just a few lines of code, while developer-friendly APIs and SDKs make it easy to create personalized communication experiences quickly, without having to worry about complex integrations. These capabilities can be used on virtually any platform and device.
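As an illustration of the "few lines of code" claim, here is a minimal sketch of sending an SMS with the Python SDK (azure-communication-sms). The connection string and phone numbers are placeholders, and the parameter and field names follow the generally available version of the SDK, so treat them as assumptions to verify against the current documentation rather than as the definitive API.

# Minimal sketch: sending an SMS with the Azure Communication Services
# Python SDK (azure-communication-sms). The connection string and phone
# numbers are placeholders; parameter names follow the GA SDK and may
# differ in earlier preview versions.
from azure.communication.sms import SmsClient

connection_string = "<your-ACS-resource-connection-string>"
sms_client = SmsClient.from_connection_string(connection_string)

responses = sms_client.send(
    from_="+18005551234",         # a number acquired on your ACS resource
    to=["+14255550123"],          # one or more recipient numbers
    message="Your technician is on the way.",
    enable_delivery_report=True,  # optional: request delivery reporting
)
for result in responses:
    print(result.message_id, result.successful)

Voice, video, and chat follow the same pattern through their own client libraries.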

Azure Communication Services capabilities

One example of how we see Azure Communication Services come to life in this remote-first world is customer service. Imagine a maintenance or installation call right now. There's a problem, but a technician is unable to go to the customer's home. While some problems can be addressed remotely, troubleshooting over the phone can be a challenge. There aren't many tools that are easy to use and deploy which connect a service rep and an end user over video, especially built right into a company's app or home page. With Azure Communication Services, integrating voice and video calling into a multichannel communication experience is simple.

Every day, we find a new challenge that changes customer, developer, and business needs. Our goal is to meet businesses where they are and provide solutions to help them be resilient and move their business forward in today’s market. We see rich communication experiences—enabled by voice, video, chat, and SMS—continuing to be an integral part in how businesses connect with their customers across devices and platforms. Azure Communication Services brings together the best of communication technology, development efficiency, cloud scale, and enterprise-grade security. So, businesses can start creating more meaningful customer interactions on a secure, global platform in days, not months.

Get started today:

Visit the Azure Communication Services website
Attend the Microsoft Ignite session on Azure Communication Services
See how it works in my Mechanics show with Jeremy Chapman
Try the APIs on GitHub

Azure. Invent with purpose.

Source: Azure

Docker Github Actions

In our first post in our series on CI/CD we went over some of the high-level best practices for using Docker. Today we are going to go a bit deeper and look at GitHub Actions.

We have just released a V2 of our GitHub Action to make using the cache easier as well! We also want to call out a huge THANK YOU to @crazy-max (Kevin :D) for the work he put into the V2 of the action; we could not have done this without him!

Right now let’s have a look at what we can do! 

To start we will need to get a project set up. I am going to use one of my existing simple Docker projects to test this out:

The first thing I need to do is ensure that I will be able to access Docker Hub from any workflow I create. To do this, I will need to add my Docker ID and a personal access token (PAT) as secrets in GitHub. I can get a PAT by going to https://hub.docker.com/settings/security and clicking 'new access token'; in this instance I will call my token 'whaleCI'.

I can then add this and my username as secrets into the GitHub secrets UI:

Great, we can now start to set up our action workflow to build and store our images in Hub. In this CI flow I am using two Docker actions: the first allows me to log in to Docker Hub using the secrets stored in my GitHub repository. The second is the build and push action, in which I am setting the push flag to true (as I want to push!) and setting my tag simply to always go to latest. Lastly, I am also going to echo my image digest to see what was pushed.

name: CI to Docker hub

on:
  push:
    branches: [ master ]

jobs:
  build:
    # Job scaffolding restored for a valid workflow; the runner image is assumed.
    runs-on: ubuntu-latest
    steps:
      # Check out the repo so the ./ build context and Dockerfile are available.
      - name: Checkout
        uses: actions/checkout@v2
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          context: ./
          file: ./Dockerfile
          push: true
          tags: bengotch/simplewhale:latest
      - name: Image digest
        run: echo ${{ steps.docker_build.outputs.digest }}

Great, now I will just let that run for the first time and then tweak my Dockerfile to make sure the CI is running and pushing the new image changes:

Next we can look at how we can optimize this; the first thing I want to do is look at using my build cache. This has two advantages: first, it will reduce my build time, as it will not have to re-download all of my images, and second, it will reduce the number of pulls I complete against Docker Hub. To do this we are going to leverage the GitHub cache, which means I need to set up my builder with a build cache.

The first thing I want to do is actually set up a builder. This uses BuildKit under the hood and is done very simply using the Buildx action.

steps:
  - name: Set up Docker Buildx
    id: buildx
    uses: docker/setup-buildx-action@master

Next I need to set up the cache for my builder; here I am adding the path and keys to store this under, using the GitHub cache action.


  - name: Cache Docker layers
    uses: actions/cache@v2
    with:
      path: /tmp/.buildx-cache
      key: ${{ runner.os }}-buildx-${{ github.sha }}
      restore-keys: |
        ${{ runner.os }}-buildx-

And lastly, having added these two bits to the top of my action file, I need to add the extra attributes to my build and push step. Here I am setting the builder to use the output of the Buildx step, and then using the cache I set up to store to and retrieve from.


  - name: Login to Docker Hub
    uses: docker/login-action@v1
    with:
      username: ${{ secrets.DOCKER_HUB_USERNAME }}
      password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
  - name: Build and push
    id: docker_build
    uses: docker/build-push-action@v2
    with:
      context: ./
      file: ./Dockerfile
      builder: ${{ steps.buildx.outputs.name }}
      push: true
      tags: bengotch/simplewhale:latest
      cache-from: type=local,src=/tmp/.buildx-cache
      cache-to: type=local,dest=/tmp/.buildx-cache
  - name: Image digest
    run: echo ${{ steps.docker_build.outputs.digest }}

Great, now we can run it again and I can see that I am using the cache!

Now we can look at how we can improve this more functionally by having the tagged versions we want to release to Docker Hub behave differently from commits to master (rather than everything updating latest on Docker Hub!). You might want to do something like this so your commits go to a local registry for use in nightly tests, letting you always test what is latest while reserving your tagged versions for release to Hub.

To start we will need to modify our previous GitHub workflow to only push to Hub if we get a particular tag:

on:
  push:
    tags:
      - "v*.*.*"

This now means our main CI will only fire if we tag our commit with vn.n.n (for example, v1.0.0). Let's have a quick go and test this:

And when I check my GitHub action: 

Great!

Now we need to set up a second GitHub action file to store our latest commit as an image in the GitHub registry. You may want to do this to run nightly or recurring tests against it, or to share work-in-progress images with colleagues. To start, I am going to clone my previous GitHub action and add back in our previous logic for all pushes.

Next I am going to change out our Docker Hub login to a GitHub container registry login

  - if: github.event_name != 'pull_request'
    uses: docker/login-action@v1
    with:
      registry: ghcr.io
      username: ${{ github.repository_owner }}
      password: ${{ secrets.ghcr_TOKEN }}

And I will also need to remember to change how my image is tagged. I have opted to just keep latest as my only tag, but you could always add in logic for this:

  tags: ghcr.io/nebuk89/simplewhale:latest

Now we will have two different flows, one for our changes to master and one for our pull requests. Next we will need to modify what we had before so we are pushing our PRs to the GitHub registry rather than to Hub. 

We could now look at how we set up nightly tests against our latest tag, how we want to test each PR, or whether we want to do something more elegant with the tags we are using and reuse the Git tag as the image tag. If you would like to look at how you can do one of these, or get a full example of how to set up what we have gone through today, please check out Chad's repo, which runs you through this and more details on our latest GitHub actions: https://github.com/metcalfc/docker-action-examples

And keep an eye on our blog for new posts coming in the next couple of weeks looking at how we can get this set up on other CIs. If there are some in particular you would like to see, reach out to us on Twitter at @docker. To get started setting up your GitHub CI with Docker Hub today, sign up for a Docker account and have a go with Docker's official GitHub Actions.
The post Docker Github Actions appeared first on Docker Blog.
Source: https://blog.docker.com/feed/