All together now: Fleet-wide monitoring for your Compute Engine VMs

Cloud Monitoring has always provided comprehensive visibility into, and management of, individual Compute Engine virtual machines (VMs). But many Google Cloud customers have hundreds, thousands, or tens of thousands of VMs that they need to manage. Cloud Monitoring now gives you zero-config, out-of-the-box visibility into your entire Compute Engine VM fleet, with quick access to advanced Monitoring features such as installing the Cloud Monitoring agent and configuring fleet-wide alerts. Our new Infrastructure Summary dashboard and expanded VM Instances dashboard jump-start your troubleshooting with no setup required!

Monitor your VM fleet's health with infrastructure summary

The new single-pane-of-glass Infrastructure Summary dashboard lets you see aggregate fleet-wide statistics at a glance, and provides insight into the top VMs for a select group of key CPU, disk, memory, and network metrics. You can use the quick links in the top left to jump into detailed troubleshooting dashboards for load balancers, network, and VM instances. The filter bar lets you narrow your view to a specific subset of VMs.

Troubleshoot issues with the VM instances fleet-wide view

You've always been able to view and filter all your VM instances in Cloud Monitoring, and now you can do much more. The VM Instances dashboard now includes agent visibility and installation, and its new tabs let you see fleet-wide information across key metrics.

View top VMs across key metrics for CPU, disk, memory, and network

Dedicated tabs for CPU, disk, memory, and network show you outlier VMs for key metrics in each category, so you can visually inspect for anomalies and quickly drill into problem areas and VMs.
Filtering allows you to narrow down the set of VMs displayed in any tab for detailed analysis.

View Monitoring agent status and install in the UI

The per-VM status of the Cloud Monitoring agent is now available on the main inventory page, and you can install the agent on a VM using our built-in wizard. Use the agent to track specified system and application metrics, including:

Memory and disk metrics
Advanced system metrics
Metrics for workloads like MySQL, Apache, Java virtual machine, and others

If you want to install and manage the agent across multiple VMs at once, you can use our new Ops Agent Policies.

Understand your advanced metrics

The "Explore" tab gives you insight into the advanced metrics you're currently collecting in Cloud Monitoring, plus quick links to information on how to send additional metrics, so you can see even more metrics in one place.

Enable recommended alerts

We've made it easy to enable predefined recommended alerts across your whole VM fleet. With one click, you can ensure that all the VMs in your fleet are continuously monitored for excessive utilization (memory, disk, network, etc.), and receive alert notifications across a variety of channels (email, SMS, Slack, PagerDuty, Cloud Console mobile app, Cloud Pub/Sub, and webhooks). You can also override recommended alert thresholds based on your needs.

A fleet of new capabilities

As with all our operations tools, we want Cloud Monitoring to include everything you need to manage your environment, whether it consists of one VM or thousands. To get started with Cloud Monitoring, check out this demo.
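To make the recommended utilization alerts concrete, here is a minimal alert-policy sketch for high memory utilization, in the shape of the Cloud Monitoring AlertPolicy API. The display name, threshold, and duration are illustrative choices, not the exact values the recommended alerts use; the metric is the agent's memory-utilization metric.

```yaml
# Illustrative alert policy sketch; values are examples only.
displayName: VM memory utilization above 90% (illustrative)
combiner: OR
conditions:
- displayName: Memory utilization
  conditionThreshold:
    # Agent-collected memory metric on Compute Engine instances.
    filter: >-
      metric.type="agent.googleapis.com/memory/percent_used"
      AND resource.type="gce_instance"
    comparison: COMPARISON_GT
    thresholdValue: 90
    duration: 300s          # must stay above threshold for 5 minutes
    aggregations:
    - alignmentPeriod: 60s
      perSeriesAligner: ALIGN_MEAN
```

A policy file in this shape can be supplied to tooling such as `gcloud alpha monitoring policies create --policy-from-file=...` (available at the time of writing), after adding the notification channels you want.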
Source: Google Cloud Platform

Anthos in depth: Easy load balancing for your on-prem workloads

For organizations that need to run their workloads on-prem, Anthos is a real game changer. As a hybrid multi-cloud platform that's managed by Google Cloud, Anthos includes all the innovations that we've developed for Google Kubernetes Engine (GKE) over the years, but running in the customer's data center. As such, Anthos can integrate with your existing on-prem networking stack. One of the key pieces of integration is getting traffic into the Anthos cluster, which often involves using an external load balancer. When running Anthos on Google Cloud, you create a Kubernetes service accessible from the internet through Ingress or a Service of type LoadBalancer, and Google Cloud takes care of assigning the virtual IP (VIP) and making it available to the rest of the world. In contrast, when running Anthos on-prem, advertising the service's VIP to your on-prem network happens through an external load balancer. Anthos provides three different options for deploying an external load balancer: the F5 Container Ingress Services (CIS) controller; manually mapping your load balancer to Kubernetes with static mapping; and Anthos' own bundled load balancer. In this post, we'll introduce these three options and dive deep into the Anthos bundled load balancer.

F5 load balancing

In this mode, Anthos integrates with F5 by including the F5 Container Ingress Services (CIS) controller with Anthos running on-prem. This approach is ideal if you have an existing investment in F5 load balancing and want to use it with your Anthos on-prem cluster.

Manual load balancing

If you have another third-party load balancer, you can manually map your external load balancer to your Kubernetes resources, allowing you to use the load balancer of your choice.
Because there is no controller to map the Kubernetes resources to the external load balancer, you need to perform static mapping of the load balancer service.

Anthos-bundled load balancing

In both of the above modes, there are costs (licensing and hardware) and expertise associated with managing the external load balancer. More importantly, there can be organizational friction, both technical and non-technical, as external load balancers and Anthos clusters are often managed by different teams. Anthos' bundled load balancer provides an option for customers who want to program the VIP dynamically, without having to configure or support a third-party option. The Anthos-bundled load balancer takes care of integrating external load balancer functionality as well as announcing the VIP to the external world. In contrast to the previous modes, Anthos itself now bridges the Kubernetes domain with the rest of your network. This approach brings several advantages:

The team managing the on-prem Anthos cluster also manages the advertisement of VIPs. This mitigates the need for tight collaboration and dependencies between different organizations, groups, and admins.
Costs are streamlined, as you don't have to manage a separate invoice, bill, or vendor for your external load balancing needs.
Management is simplified, as Anthos controls both the controller and the VIP announcement. This has benefits in operational management, support, provisioning, etc., making for a more seamless experience.

Multinational investment banking firm HSBC uses Anthos' bundled load balancer and reports that it's easy to install and configure, with minimal system requirements. "Anthos running on-premises has brought the best of Google's managed Kubernetes to our data centers. Specifically, the bundled load-balancer provides HSBC with a highly available, high performing, layer 4 load-balancer with minimal system requirements.
Configuration and installation are simple and automate deployment for each new on-prem cluster. This decreases our time to market, installation complexity, and costs for each cluster we deploy." – Scott Surovich, Global Container Engineering Lead, HSBC Operations, Services & Technology

Using the Anthos bundled load balancer

Using Anthos' bundled load balancer on-prem is a relatively straightforward process. The bundled load balancer uses the Seesaw load balancer, which Google created and open sourced. In high availability mode, two instances run as an active-passive pair speaking the standard Virtual Router Redundancy Protocol (VRRP). The passive instance becomes the active one if it does not receive an advertisement from the active instance for two seconds, based on today's default configuration.

You can create a Kubernetes Service of type LoadBalancer to expose your application through the bundled load balancer. In this configuration, the bundled load balancer exposes a service to clients at port 80. The service config is sent to the load balancer automatically, which begins to announce the service VIP (SVIP) by replying to ARP (Address Resolution Protocol) requests. The load balancer runs in IPVS gatewaying mode (also known as "direct routing" mode), leaving the IP layer of packets untouched and delivering packets to a Kubernetes node by modifying the destination MAC address. The advantage of running in this mode is that it doesn't add any additional IP headers to the traffic, and therefore does not impact performance. The Kubernetes data plane (iptables in this case) on the node then picks up the packets destined to SVIP:80 and routes them to backend pods. Thanks to the gatewaying mode, the load balancer achieves Direct Server Return (DSR), and responses bypass the load balancers, saving load balancer capacity. Also because of DSR, the client IP can be made visible to pods by setting "externalTrafficPolicy" to "Local" on the service.

No external load balancer?
No problem. If you don't have an external load balancer that's qualified for your network—or don't have the in-house expertise to set one up—Anthos' bundled load balancer can help. And thankfully, it's easy to set up and use. Click here to learn more about Anthos' networking capabilities, and stay tuned for our upcoming post, where we'll show you how to use GKE private clusters for increased security and compliance.
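For reference, the Service of type LoadBalancer described in the bundled load balancer discussion above might look like the following minimal sketch. The app name, ports, and VIP are illustrative; the loadBalancerIP field assumes you want to pin a specific on-prem VIP for the load balancer to announce.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # illustrative name
spec:
  type: LoadBalancer
  # Illustrative VIP; the bundled (Seesaw) load balancer announces
  # this address to the on-prem network by answering ARP requests.
  loadBalancerIP: 10.0.0.100
  # Preserve the client source IP, which DSR makes possible.
  externalTrafficPolicy: Local
  selector:
    app: my-app
  ports:
  - name: http
    port: 80              # clients reach the service at SVIP:80
    targetPort: 8080      # illustrative container port
```

With this in place, packets to SVIP:80 are forwarded in IPVS gatewaying mode to a node, where iptables routes them to the backend pods, and responses return directly to the client.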
Source: Google Cloud Platform

Migrate your custom ML models to Google Cloud in 3 steps

Building end-to-end pipelines is becoming more important as many businesses realize that having a machine learning model is only one small step toward getting their ML-driven application into production. Google Cloud offers a tool for training and deploying models at scale, Cloud AI Platform, which integrates with multiple orchestration tools like TensorFlow Extended (TFX) and Kubeflow Pipelines (KFP). However, it is often the case that businesses have models that they have built in their own ecosystem using frameworks like scikit-learn and XGBoost, and porting these models to the cloud can be complicated and time consuming. Even for experienced ML practitioners on Google Cloud Platform (GCP), migrating a scikit-learn model (or equivalent) to AI Platform can take a long time due to all the boilerplate involved. ML Pipeline Generator is a tool that allows users to easily deploy existing ML models on GCP, where they can then benefit from serverless model training and deployment and a faster time to market for their solutions. This blog will provide an overview of how this solution works and the expected user journey, along with instructions for orchestrating a TensorFlow training job on AI Platform.

Overview

ML Pipeline Generator allows users with pre-built scikit-learn, XGBoost, and TensorFlow models to quickly generate and run an end-to-end ML pipeline on GCP using their own code and data. To do this, users fill in a config file describing their code's metadata. The library takes this config file and, using a templating engine, generates all the necessary boilerplate for the user to train and deploy their model on the cloud in an orchestrated fashion. In addition, users who train TensorFlow models can use the Explainable AI feature to better understand their model. In the figure below, we highlight the architecture of the generated pipeline. The user brings their own data, defines how they perform data preprocessing, and adds their ML model file.
Once the user fills out the config file, they use a simple Python API to generate self-contained boilerplate code which takes care of any preprocessing specified, uploads their data to Google Cloud Storage (GCS), and launches a training job with hyperparameter tuning. Once this is completed, the model is deployed for serving and, depending on the model type, model explainability is performed. This whole process can be orchestrated using Kubeflow Pipelines.

Step-by-step instructions

We'll demonstrate how you can build an end-to-end Kubeflow Pipeline for training and serving a model, given the model config parameters and the model code. We will build a pipeline to train a shallow TensorFlow model on the Census Income Data Set. The model will be trained on Cloud AI Platform and can be monitored in the Kubeflow UI.

Before you begin

To ensure that you are able to fully use the solution, you need to set up a few items on GCP:

1. You'll need a Google Cloud project to run this demo. We recommend creating a new project; ensure the following APIs are enabled for the project: Compute Engine, AI Platform Training and Prediction, and Cloud Storage.
2. Install the Google Cloud SDK so that you can access the required GCP services via the command line. Once the SDK is installed, set up application default credentials with the project ID of the project you created above.
3. If you're looking to deploy your ML model on Kubeflow Pipelines using this solution, create a new KFP instance on AI Platform Pipelines in your project. Note down the instance's hostname (the Dashboard URL, of the form [vm-hash]-dot-[zone].pipelines.googleusercontent.com).
4. Lastly, create a bucket so that data and models can be stored on GCS. Note down the bucket ID.

Step 1: Setting up the environment

Clone the GitHub repo for the demo code, and create a Python virtual environment. Then install the ml-pipeline-gen package.

The following files are of interest for getting our model up and running:

1. The examples/ directory contains sample code for scikit-learn, TensorFlow, and XGBoost models. We will use examples/kfp/model/tf_model.py to deploy a TensorFlow model on Kubeflow Pipelines. However, if you are using your own model, you can modify the tf_model.py file with your model code.
2. examples/kfp/model/census_preprocess.py downloads the Census Income dataset and preprocesses it for the model. For your custom model, you can modify the preprocessing script as required.
3. The tool relies on a config.yaml file for the metadata required to build artifacts for the pipeline. Open the examples/kfp/config.yaml.example template file to see the sample metadata parameters; you can find the detailed schema here.
4. If you're looking to use Cloud AI Platform's hyperparameter tuning feature, you can include the parameters in an hptune_config.yaml file and add its path in config.yaml. You can check out the schema for hptune_config.yaml here.

Step 2: Setting up required parameters

1. Make a copy of the kfp/ example directory.
2. Create a config.yaml file using the config.yaml.example template and update it with the project ID, bucket ID, the KFP hostname you noted down earlier, and a model name.

Step 3: Building the pipeline and training the model

With the config parameters in place, we're ready to generate the modules that will build the pipeline to train the TensorFlow model. Run the demo.py file.

The first time you run the Kubeflow Pipelines demo, the tool provisions Workload Identity for the GKE cluster, which modifies the dashboard URL. To deploy your model, simply update the URL in config.yaml and run the demo again.
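To make Step 2 concrete, a filled-in config.yaml might look roughly like the sketch below. The field names and layout here are illustrative, not the exact schema; treat examples/kfp/config.yaml.example and the linked schema as the source of truth, and note that the project, bucket, hostname, and paths are placeholder values.

```yaml
# Illustrative sketch only; see examples/kfp/config.yaml.example
# and the published schema for the exact field names.
project_id: my-gcp-project          # project you created earlier
bucket_id: my-ml-pipeline-bucket    # GCS bucket for data and models
orchestration:
  kfp_host: 1a2b3c4d-dot-us-central1.pipelines.googleusercontent.com
model:
  name: census_tf_model             # name used for training and serving
  path: model/tf_model.py           # your model code
preprocess: model/census_preprocess.py
hptune_config: hptune_config.yaml   # optional hyperparameter tuning spec
```

Once these values are in place, running demo.py uses them to upload the data, build the pipeline graph, and submit it to your KFP instance.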
The demo.py script downloads the census dataset from a public Cloud Storage bucket, prepares the datasets for training and evaluation as per examples/kfp/model/census_preprocess.py, uploads the dataset to the Cloud Storage URLs specified in config.yaml, builds the pipeline graph for training, and uploads the graph to the Kubeflow Pipelines application instance as an experiment. Once the graph has been submitted for a run, you can monitor the progress of the run in the Kubeflow Pipelines UI: open the Cloud AI Platform Pipelines page and open the Dashboard for your Kubeflow Pipelines cluster.

Note: If you would like to use the scikit-learn or XGBoost examples, you can follow the same steps above, but modify examples/sklearn/config.yaml with similar changes, without the additional step of creating a Kubeflow Pipelines instance. For more details, refer to the instructions in the public repo or follow our end-to-end tutorial written in a Jupyter notebook.

Conclusion

In this post we showed you how to migrate your custom ML model for training and deployment to Google Cloud in three easy steps. Most of the heavy lifting is done by the solution; the user simply needs to bring their data and model definition and state how they would like training and serving to be handled. We went through one example in detail, and the public repository includes examples for the other supported frameworks. We invite you to try the tool and start realizing the many benefits of the cloud for your machine learning workloads. For more details, check out the public repo.
To learn more about Kubeflow Pipelines and its features, check out this session from Google Cloud Next '19.

Acknowledgements

This work would not have been possible without the hard work of the following people (in alphabetical order by last name): Chanchal Chatterjee, Stefan Hosein, Michael Hu, Ashok Patel, and Vaibhav Singh.
Source: Google Cloud Platform

Designing your cloud strategy to maximize value on Azure

The COVID-19 pandemic continues to challenge you to adjust your business strategies to maintain productive operations and processes. You need new ways to increase efficiency, optimize costs, and adopt new technologies at a faster rate. Your digital transformation is more critical than ever in these trying times.

Microsoft stands with you, our customers, moving quickly to adapt to the ongoing pace of global change. We believe technology will enable your success and allow you to adapt to and meet these challenges. Our goal is to provide a clear path for you to achieve benefits from the cloud, meeting your distinct business, security, and cost management requirements. We continue to invest in customer-proven, comprehensive guidance and learning resources, enabling you to successfully adopt Microsoft Azure and create ongoing cloud value across your organization.

At Microsoft Ignite, we are focused on three areas to help you:

Successfully achieve more value from the cloud.
Adapt your cloud journey to meet your needs.
Build expertise to confidently use Azure.

Successfully achieve more value from the cloud

Microsoft's technical guidance is based on industry best practices and the successful cloud adoption experiences of our customers, and you can use this proven guidance to enable your entire organization to achieve cloud value. Onboard stakeholders, prove your organization's cloud value, and reach your business goals with resources to guide your cloud journey—develop and deploy well-architected workloads and execute the operations of your company's cloud adoption strategy.

Plan your cloud journey with a clear path forward using the cross-team, organization-wide "people, process, and technology" approach of the Microsoft Cloud Adoption Framework for Azure. This technical guidance and these industry best practices offer business process-centric templates, assessments, and customer-tested documentation across your cloud adoption journey.
Configure and optimize your Azure environment for scale, security, governance, networking, and identity to enable application migrations and greenfield development with Azure landing zones. Modular Azure landing zones are based on common cloud design areas—enabling you to establish guardrails and policies for your environment’s compliant security and governance.
Deploy and optimize your workloads and implement best practices across your entire cloud estate. The newly released Microsoft Azure Well-Architected Framework provides technical guidance specifically at the workload level across five pillars: cost optimization, security, reliability, performance efficiency, and operational excellence. You can use Azure Advisor to evaluate resources deployed on Azure, getting personalized, actionable guidance to continually optimize your Azure resources. And starting at Microsoft Ignite, Azure Advisor is previewing Advisor Score, which lets you understand and improve your current optimization posture according to Azure best practices, with a prioritized list of the most impactful recommendations for your deployments, and report on your optimization progress over time.
Manage costs and unlock your cloud value with tools like Azure Cost Management + Billing, which analyzes, manages, and optimizes your cloud spend. We recently announced that you can now manage and analyze both your Azure and AWS spend from a single location with the Azure Cost Management + Billing connector for AWS. You can also enjoy favorable licensing terms and save with Azure Hybrid Benefit, Azure Spot Virtual Machines, and Azure Dev/Test pricing, or pay in advance to receive deep discounts with Azure Reservations. Also new in preview, Azure Hybrid Benefit features are now being extended to Red Hat and SUSE Linux customers, for easier cloud migration, a more integrated Azure user experience, and greater Linux subscription portability.
Adopt agile software development methods and securely ship code, faster. With remote work being the norm in many countries, you must change how you support developers with processes that increase productivity. Developers write code, keep systems running, and deliver continuous value to customers. Your teams can collaborate openly, implementing Azure DevOps across your organization using GitHub, with the Microsoft Cloud Adoption Framework for Azure and the Microsoft Azure Well-Architected Framework. Azure services are a key part of Microsoft's support for building resilience in your development teams, and our commitment to keeping you up to date extends through our entire DevOps product line, which includes the imminent availability of Azure DevOps Server 2020.

Adapt your cloud journey to meet your needs

Every organization has unique business challenges during the journey to cloud adoption. You may lack the necessary expertise needed to accelerate digital transformation, or you might encounter difficulties along the way. Microsoft Partners and programs provide access to a variety of resources and Azure technical expertise to meet your unique business, industry, and security requirements. Together, we will enable you to adopt the cloud at your pace—tailored to your needs.

Accelerate with Azure advanced specialization partners and Azure Expert Managed Service Providers, who offer expert technical assistance, advice, and support to enable success on Azure. You can find partners with expertise in all phases of the cloud adoption lifecycle that follow the Microsoft Cloud Adoption Framework for Azure to accelerate your cloud journey.
Get technical assistance with the Azure Migration Program for proactive guidance and expert help at each stage of the journey to successfully migrate infrastructure, databases, and application workloads with confidence. You can access free migration tools, step-by-step technical guidance, training, and help in finding a migration partner.
Move efficiently with FastTrack for Azure (for qualifying projects) for rapid, effective design and deployment of cloud solutions, with tailored guidance from Azure engineers, proven best practices, and architectural guidance.
Rely on Microsoft Consulting Services to help you adopt technology solutions across digital strategy, data insight, sales, and more. You can count on Microsoft expertise to guide your digital transformation. We will be with you—every step of the way.
Work with Microsoft account teams and Azure technical specialists to guide and support your cloud journey—from strategy, planning, solution design and readiness, to execution.

Build Azure expertise to confidently use Azure

Workforce skilling and technical knowledge building are critical factors for a successful cloud adoption effort, according to the World Economic Forum.1

Studies demonstrate that "more than half (54 percent) of all employees will require significant reskilling by 2022."

 

According to Gartner, “even before there was a coronavirus pandemic, boards ranked digital/technology disruption as their top business priority for 2020—followed by obtaining the talent needed to execute tech transformation.”2 In the current COVID-19 crisis, your digital transformation is no longer an initiative, but a business imperative—along with aligning skilling to drive your modernization effort: 

 

COVID-19 has escalated digital initiatives into digital imperatives, creating urgent pressure on HR leaders to work with their CEO, CFO and CIO to rethink skills needs as business models change at light speed.

 

With increased remote work and learning unfolding across the globe during the COVID-19 crisis, we continue to invest in Microsoft’s learning platforms, meeting your demand for digital literacies now, and the pace of future digital technology needs.

At Microsoft, we are moving in-person training to virtual instructor-led training, delivering a variety of free digital learning experiences. We want to enable everyone to have the right skills and expertise to successfully use Azure, with confidence.

Continuously build your skills on Microsoft Learn with free, on-demand, self-paced learning paths, to skill up at any level, and certify your skills. Get hands-on experience in a sandbox environment and watch original live and pre-recorded technical content from Microsoft and the Learn TV community. Begin your cloud journey with the Cloud Adoption Framework for Azure learning path and learn how to build great solutions with the Azure Well-Architected Framework learning path.
Attend free Azure Virtual Training Days to learn from Microsoft experts in an instructor-led one-day virtual event, with presentations, demonstrations, discussions, and hands-on workshops and explore the full Microsoft event catalogue.
Engage with Microsoft learning partners for training solutions to suit your learning needs—blended learning, in-person, and online learning to prepare for certification. At Microsoft Ignite, we are announcing in preview an easier way to schedule instructor-led training with our trusted Learning Partners. On Microsoft Learn, search for courses by location and time, filter for virtual deliveries, and complete the scheduling process with an integrated checkout process directly on the partner website.
Validate your skills with Microsoft Certifications and demonstrate your achievements with industry-recognized Microsoft certifications.

Working with our partners, we are ensuring, together, that people across the globe can reach their learning goals and become certified in Microsoft technologies, while staying safe at home. Learn more about our updated guidelines for Microsoft Training and Certification.

You can find more information and comprehensive technical resources to support all of these initiatives, giving you a clear path forward to achieve more cloud value with Azure.

 

1 World Economic Forum: The Future of Jobs Report, 2018

2 Gartner, Lack of Skills Threatens Digital Transformation, July 2020

Azure. Invent with purpose.
Source: Azure

Run your core applications on Azure

This week at Microsoft Ignite, we discussed how a growing number of customers and independent software vendors (ISVs) are running their mission- and business-critical applications on Microsoft Azure.

Across our customers, the common thread that ties mission- and business-critical systems together is that they support core business processes. Examples include enterprise resource planning (ERP), supply chain management (SCM), and customer relationship management (CRM) applications, but also analytics, e-commerce systems, systems of record like financial management, procurement, and payment, and more. Now more than ever, systems that support digitized customer experiences are also critical to business success.

Organizations like JetBlue rely on Azure for traditional business-critical applications like their e-commerce site, and for predictive models to improve the overall customer experience.
Companies like Allscripts deliver critical healthcare-related applications using open source software on Azure.
Manulife chose Azure to migrate and modernize its business-critical applications to improve agility, scalability, risk management, and cost-efficiency while accelerating the support of new business models.
Walgreens Boots Alliance runs SAP HANA on Azure, relying on virtual machines (VMs) with 12 TB of memory and 28 TB of storage to run their more than 100 TB scale-out SAP landscape.

Running your business on Azure can help you be future-ready and increase business resiliency, especially during uncertain times. For example, when cases of COVID-19 began to rise in the United States in February 2020, Adaptive Biotechnologies turned their immune medicine platform to map the immune response to COVID-19 to make this information publicly available to researchers around the world developing diagnostics, therapeutics, and vaccines.

Within weeks, Adaptive processed 500 million sequences, using 29 compute-years to identify immune signatures of infection. As a result, Adaptive demonstrated that the T-cell signal could be an optimal marker to assess exposure to the COVID-19 virus at certain time points during and after infection, enabling the company to start pursuing an Emergency Use Authorization from the FDA for the world's first T cell-based diagnostic for any disease.

“Azure’s cloud computing resources and machine learning capabilities are powering our Immune Medicine Platform, enabling us to rapidly map our immune cell receptors to diseases like COVID-19 and Lyme Disease, and fueling the next generation of diagnostics.” – Mark Adams, Chief Technical Officer, Adaptive Biotechnologies

Challenges and opportunities

Earlier in 2020, we had the pleasure of being joined by Dave Bartoletti, Vice President and Principal Analyst at Forrester, Ramki Ramaswamy, Vice President IT, Technology and Integrations at JetBlue, and Prakash Iyer, Senior Vice President, Software Architecture & Strategy at Trimble Inc. in a cloud migration webinar series. We discussed the top challenges that IT organizations face when managing current mission-critical infrastructures. Security issues, high costs, and the difficulties IT professionals face when they need to update their environments are top of mind.

 
Conversely, when we looked at the top benefits that companies have realized by migrating their mission-critical workloads to the cloud, it’s clear that the move to the cloud has been helpful in addressing many of these challenges. The top benefits they cited were:

Improved security and compliance.
Improved performance and latency of mission-critical systems.
Improved agility (including for modernizing existing systems and developing new capabilities).
Reduced overall IT costs.
Faster infrastructure implementation time.

That’s why we continue to deliver more across all these dimensions, and earlier at Microsoft Ignite, we announced the preview of several new infrastructure as a service (IaaS) capabilities to better meet our stakeholders’ needs.

New core IaaS capabilities to increase availability, security, scalability, and performance of your business-critical applications on Azure

We recently announced several new capabilities to meet the requirements of your business-critical workloads.

Azure Dedicated Hosts: More control, flexibility, and choice

Azure customers can now schedule platform maintenance operations on Dedicated Hosts and isolated virtual machines (VMs), and they can control when guest OS image updates are rolled out on Azure Virtual Machine Scale Sets. Azure Dedicated Hosts now support Virtual Machine Scale Sets, and customers can simplify the deployment of Azure Virtual Machines to Dedicated Hosts by letting the platform select the host group to which a VM will be deployed. All of these capabilities are now in preview. Finally, in the near future, customers will be able to run compute-intensive workloads on Azure Dedicated Hosts using our Fsv2-series VMs, which feature Intel Cascade Lake processors for greater performance.

Higher performance general-purpose and memory-optimized virtual machines

Microsoft has also recently made new Azure Virtual Machines available, featuring Intel Cascade Lake processors, for general-purpose and memory-intensive workloads. As a result, Azure now offers a new category of VMs with a lower price of entry, since they do not include a local temporary disk. These VM series provide up to 20 percent greater CPU performance than the prior generation.

New Disk Storage and networking capabilities enhance security, performance, and availability

New Azure Disk Storage updates, now generally available, include Azure Private Link integration, which enables secure import and export of data over a private virtual network, and support for 512E (512-byte sector emulation) on Azure Ultra Disks, which enables migration of legacy workloads, like older versions of Oracle® DB, to the cloud. Read the Azure Community post for additional details.

Finally, our customers can now use Azure Load Balancer with their globally distributed workloads, using cross-region load balancing (in preview) to improve their applications’ performance and availability.

New Linux tools and cost-effective licensing models help manage Linux infrastructure and workloads on Azure

Azure Hybrid Benefit now includes Linux and, together with Azure Image Builder, provides new tools and cost-effective licensing models for you to migrate, operate, and manage your Linux infrastructure and workloads on Azure.

Azure Hybrid Benefit now enables simpler and more cost-effective Linux subscription portability. This new capability is available in preview and gives customers the ability to convert existing pay-as-you-go instances running Red Hat Enterprise Linux and SUSE Linux Enterprise Server on Azure to bring-your-own-subscription (BYOS) billing, using existing Red Hat Enterprise Linux and SUSE Linux Enterprise Server subscriptions. Customers can capitalize on existing investments by bringing their Red Hat and SUSE subscriptions to Azure, preserving any special pricing discounts, and avoiding double-billing for any on-premises subscription that has transitioned to the cloud.

Azure Image Builder, generally available by Q4 2020, is a free image-building service that streamlines the creation, update, patching, management, and operation of Linux and Windows images. Azure Image Builder deploys resources into your subscription when used; you pay only for the VMs and the associated storage and networking resources consumed while running your image-building pipeline.
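As an illustration of what such a pipeline definition looks like, here is an abbreviated Azure Image Builder template sketch. The resource names, the subscription placeholder, and the inline customization step are invented for the example, and required fields such as the template’s managed identity are omitted for brevity:

```json
{
  "type": "Microsoft.VirtualMachineImages/imageTemplates",
  "apiVersion": "2020-02-14",
  "location": "westus2",
  "properties": {
    "source": {
      "type": "PlatformImage",
      "publisher": "Canonical",
      "offer": "UbuntuServer",
      "sku": "18.04-LTS",
      "version": "latest"
    },
    "customize": [
      {
        "type": "Shell",
        "name": "installNginx",
        "inline": [ "sudo apt-get update", "sudo apt-get install -y nginx" ]
      }
    ],
    "distribute": [
      {
        "type": "ManagedImage",
        "imageId": "/subscriptions/<subscription-id>/resourceGroups/myImageRG/providers/Microsoft.Compute/images/myBaseImage",
        "runOutputName": "myImageOutput",
        "location": "westus2"
      }
    ]
  }
}
```

The template captures the three stages of the pipeline: where the base image comes from (source), what is applied to it (customize), and where the result lands (distribute).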

Next steps and additional resources

As companies increase their tech intensity, the range of business-critical applications continues to expand. Our ecosystem-based approach gives us the opportunity to build a broad range of solutions with and for our customers and partners alike. We are energized by the possibilities ahead of us and what we can accomplish together.

To learn more about our offerings, visit our business-critical applications site to access reference architectures, ISV solutions, and product capabilities. You can also experience the latest capabilities on Azure by watching our training videos.

Learn more about scheduled maintenance for host updates.
Learn more about scheduled maintenance with scale sets.
Read the Azure Community post regarding the most recent Disk Storage updates.
Read the maintenance control feature documentation.
Request access to the Azure Virtual Machine Scale Sets on Dedicated Hosts preview.
Review your options with Azure Hybrid Benefit – Linux subscription portability (preview).

Azure. Invent with purpose.

Source: Azure

Unleash the full potential of your developer teams and increase Developer Velocity

In today’s environment, software development excellence is becoming even more critical to business success. Over the past few months, we’ve seen organizations realize that their future success depends on taking advantage of technology to rethink business models, innovate, and improve processes to better serve employees and customers. The reality is that many companies across different industries are becoming software companies. According to a recent study by McKinsey & Company, Developer Velocity: How software excellence fuels business performance, there are currently over 20 million software engineers worldwide, and over 50 percent of them work at organizations outside the tech industry.

The most successful organizations understand that transforming into software companies cannot be achieved solely through the introduction of new technologies; rather, it requires a deep focus on supporting the people who will catalyze change and create the new value they seek.

In this same study, McKinsey concluded that business leaders need to empower the people behind software development – the developers – to unlock their productivity and innovation, in what the industry has started referring to as Developer Velocity.

What is Developer Velocity?

Developer Velocity means driving business performance through software development by empowering developers, creating the right environment for them to innovate, and removing points of friction.

Developer Velocity isn’t just about increasing the speed of software delivery; it’s also about unleashing developer ingenuity, turning developer teams’ ideas into software that supports the needs of your customers and the goals of your business.

Software success is driven by unleashing the potential and talent of developer teams, improving the day-to-day developer experience, and keeping software talent happy and motivated.

We often hear from software leaders that the set of potential levers for improving performance is so large and diverse that it is often unclear how to prioritize or where to start.

To build a precise understanding of what it takes for a company to increase Developer Velocity, McKinsey conducted a comprehensive review of software development practices, spanning technology, working practices, and organizational enablement, and converged on a single holistic metric: the Developer Velocity Index (DVI) score. The DVI score takes into account 46 different drivers across 13 capability areas.
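McKinsey has not published the exact weighting behind the DVI, so the following is a purely illustrative sketch of how driver scores might roll up into capability areas and then into a single index; all names and numbers are invented for the example, and a simple unweighted average stands in for the proprietary weighting:

```python
# Illustrative DVI-style roll-up. The real index spans 46 drivers across
# 13 capability areas with a proprietary weighting; this sketch simply
# averages driver scores per capability area, then averages the areas.
from statistics import mean

def dvi_score(capability_areas):
    """Map of capability-area name -> list of driver scores (0-100)."""
    area_scores = [mean(drivers) for drivers in capability_areas.values()]
    return mean(area_scores)

# Invented example input (three areas instead of the real thirteen):
scores = {
    "Best-in-class tools": [80, 90, 70],   # area average: 80
    "Culture": [60, 80],                   # area average: 70
    "Talent management": [75],             # area average: 75
}
print(dvi_score(scores))  # (80 + 70 + 75) / 3 = 75
```

Averaging within areas first keeps an area with many drivers from dominating the index, which is the usual rationale for this kind of two-level roll-up.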

Among the study’s most striking findings: companies with a higher Developer Velocity Index (DVI) score achieve revenue growth up to five times that of their competitors and score 55 percent higher on innovation. Those organizations also surpass their peers on other key business performance indicators such as customer satisfaction, brand perception, and talent.

The study also concluded that, across all dimensions, the top four drivers of Developer Velocity are best-in-class tools, product management capabilities, culture, and talent management. Beyond these foundations, public cloud adoption is a key driver of business performance for non-software companies, open-source adoption is the biggest differentiator for top performers, and companies adopting low-code platforms score 33 percent higher on innovation.

Developers have always been true catalysts of digital transformation, innovation, and business performance. This data validates how crucial it is for companies to invest in their development teams. To learn more about the Developer Velocity study and get detailed findings and insights for the retail, manufacturing, and finance industries, watch this five-episode webinar series: Improve Business Performance Through Developer Velocity.

How do I calculate the Developer Velocity Index Score (DVI) for my organization?

Last May we launched our Developer Velocity Assessment Tool. You can use this tool to discover where your organization is on the Developer Velocity maturity scale and benchmark your Developer Velocity Index (DVI) relative to your peers. Then, get actionable guidance for how to drive better business outcomes for your organization.

This online assessment takes 15 to 30 minutes to complete and was built to help you and your organization identify gaps across three key domains: technology, working practices, and organizational enablement. The assessment provides a personalized report with a detailed outline of how your organization can improve over time to meet the growing needs of your developers and software strategy.

This week we shipped additional features into our assessment tool to allow our customers to get an even clearer view of their Developer Velocity Index (DVI) score. These additions include new charts, reports and a detailed breakdown of their DVI scores by category, sub-category, and individual metric level.

We are excited to see how organizations take advantage of this assessment to identify the specific drivers of better business outcomes. Get started with the Developer Velocity Assessment tool to calculate the DVI score for your organization and learn how you can boost your business performance.

Scale your DevOps practices to increase Developer Velocity

Although many organizations are adopting DevOps, implementing effective practices at enterprise scale can be difficult. At Microsoft Ignite, we are also excited to announce the new Enterprise DevOps Report 2020–2021, a Microsoft and Sogeti research study of more than 250 cloud and DevOps implementations. The report provides a comprehensive review of how to scale your DevOps practices to improve business metrics and customer satisfaction and increase Developer Velocity, creating the right environment for developers to innovate. You can also use the study’s recommendations as a blueprint for a successful adoption of enterprise DevOps.

In this report, you’ll learn about:

• Six key areas of enterprise IT, including governance, that face significant challenges as part of an enterprise DevOps transformation.

• Common ideologies and patterns followed by top performing adopters to address the challenges of enterprise DevOps.

• Strategies implemented by successful enterprises to build continuous governance, security, quality, and compliance into their engineering processes.

• DevOps best practices to enable your organization to support distributed teams and remote work.


Empower your developer teams and boost your business performance

Developer Velocity helps you unleash the full potential of your developer teams, drive innovation, and boost business performance. Today more than ever investing in software excellence and building a culture that empowers developer teams will continue to be critical for every organization’s success.

Over the past few months, we’ve seen many developers around the world building amazing customer applications and internal back-to-work solutions while working remotely. As things get back to normal, Microsoft is pleased to play a small part in supporting developers around the world and making remote development possible. To help developers build productively, collaborate securely, and scale innovation, no matter where they are, Microsoft offers the world’s most comprehensive developer toolkit with Visual Studio, GitHub, Microsoft Azure, and Power Apps. It’s this set of capabilities that enables Developer Velocity within every organization. You can learn more about our latest innovations in Visual Studio, GitHub, Azure, and Power Apps shared at Microsoft Ignite. Also, don’t miss our Microsoft Ignite keynotes, where you can learn more about Developer Velocity and Microsoft’s most comprehensive developer toolkit and platform:

• Satya Nadella: Building digital resilience

• Julia White: Invent with Purpose on Azure

• Scott Hanselman: App Development in Azure

Azure. Invent with purpose.

Source: Azure

Mixed Reality Momentum: HoloLens 2 expands globally and new Azure service launches

Mixed reality blends our physical and digital worlds to extend computing beyond the screen and fundamentally change how we work, learn, and play. Mixed reality has evolved from a promising technology into a thriving ecosystem of solutions that are having a significant and quantifiable impact today. It plays a significant role in how we manufacture goods, how we learn and retain information, and how we care for ourselves and others.

Our mixed reality services, which leverage the scalability, reliability, and security of Azure, coupled with our industry-leading HoloLens 2 headset, serve as the backbone of our comprehensive mixed reality platform, which is growing and expanding in several ways at Microsoft Ignite:

HoloLens 2 is now shipping to new markets

Today, we are announcing that HoloLens 2 is now available in Italy, Netherlands, Switzerland, Spain, Austria, Sweden, Finland, Norway, Denmark, Belgium, Portugal, Poland, Singapore, Hong Kong, and Taiwan. HoloLens 2 will be available in South Korea later this fall. 

Proven return on investment (ROI) with Microsoft Azure and HoloLens 2

Thousands of leading companies in industries such as manufacturing, construction, healthcare, retail, and education are using HoloLens 2 and our Azure mixed reality services to save significant touch labor, reduce errors, improve learning and retention, and boost employee and customer satisfaction:

Lockheed Martin/NASA, United States

Leveraging a solution from mixed reality partner Scope AR, Lockheed Martin is using HoloLens 2 to build the Orion spacecraft, which will take astronauts back to the moon. The benefits they have realized using mixed reality are significant:

Mixed reality solutions with HoloLens 2 have dramatically reduced touch labor: work that used to require an eight-hour shift can now be completed in just 45 minutes.
With HoloLens, Lockheed Martin has reported zero errors in more than two years of work on the Orion spacecraft.
The Orion spacecraft has over 57,000 fasteners; Lockheed Martin is saving $38 per fastener installation.
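A back-of-the-envelope check of the figures above; the fastener count, per-fastener saving, and shift times come from the bullets, and the arithmetic is purely illustrative:

```python
# Rough arithmetic on the reported Orion figures.
fasteners = 57_000
saving_per_fastener = 38          # US dollars per fastener installation

total_saving = fasteners * saving_per_fastener
print(f"${total_saving:,}")       # $2,166,000 across the spacecraft

# Time saving implied by the shift comparison above:
old_minutes = 8 * 60              # former eight-hour shift
new_minutes = 45
reduction = 1 - new_minutes / old_minutes
print(f"{reduction:.0%} less time")  # 91% less time
```

In other words, the per-fastener saving alone adds up to more than two million dollars over the full assembly.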

Watch the following video to see how Lockheed Martin is using HoloLens 2 to send astronauts to the moon and beyond:


Medivis, United States

Medivis, a Microsoft mixed reality partner, uses its SurgicalAR solution to provide 3D holographic visualizations (versus CT scans), enabling surgeons to perform routine procedures in an inherently superior way. To date, Medivis has:

Successfully completed more than 200 surgeries with the HoloLens.
Decreased radiation exposure to patients: SurgicalAR patients averaged one CT scan, compared to 10 CT scans with traditional 2D surgical solutions.
Demonstrated the potential to place catheters with 1 mm accuracy (versus the 2.2 cm accuracy that is often typical today).

Watch how Medivis is using HoloLens 2 for holographic guided surgery. 

Case Western Reserve University, United States

Case Western Reserve University is running a remote learning program using their HoloAnatomy solution and HoloLens 2 to help students more effectively learn and retain knowledge:  

Students who used HoloAnatomy and HoloLens 2 experienced a 50 percent improvement in grades versus students who used a textbook.
Students who used HoloAnatomy and HoloLens 2 retained 120 percent more knowledge over 12 months of learning versus students who did not have access to HoloAnatomy and HoloLens 2.

Watch how Case Western Reserve is teaching remote anatomy classes and improving learning outcomes.   

Azure Mixed Reality Services expands to add Azure Objects Anchors

Our Azure mixed reality services are a core pillar of our mixed reality platform. These mixed reality services overcome many of the technical challenges in building mixed reality applications and simplify cross platform mixed reality development. Today, we are announcing a preview of our latest mixed reality service, Azure Object Anchors:

In many of today’s mixed reality scenarios, there is a need to place physical markers such as QR codes to identify an object and then manually align 3D and holographic content to that object. Azure Object Anchors enables mixed reality developers to build applications that automatically align 3D content to real-world objects, saving significant touch labor, reducing alignment errors, and improving the user experience.

Toyota Motor Corporation is an early preview customer using Azure Object Anchors to build a task guidance application for their technicians: 

“Azure Object Anchors enables our technicians to service vehicles more quickly and accurately thanks to markerless and dynamic 3D model alignment. It has removed our need for QR codes and eliminated the risk of error from manual model alignment, thus making our maintenance procedures more efficient.”—Koichi Kayano, Program Manager Technical Service Division at Toyota

To learn more about Azure Object Anchors, see our technical blog post or sign up if you are interested in participating in the private preview.  

Microsoft Azure Kinect is going commercial, scales 3D time-of-flight technology through partners

Azure Kinect DK is our cutting-edge spatial computing developer kit that contains sophisticated computer vision and speech models, advanced AI sensors, and a range of powerful SDKs that can connect to Azure Cognitive Services. Today, we are announcing collaborations with two leading companies, Analog Devices and SICK AG, who will build devices enabled with the 3D Time of Flight depth technology that is currently only available via Azure Kinect:

Analog Devices will incorporate Microsoft’s Time of Flight technology in the design, manufacture, and sale of depth-sensor silicon, alongside a commercial depth camera module aimed at consumer electronics, automotive cabins, and industrial logistics use cases. For more information, visit Analog’s website.
SICK AG, one of the world’s leading manufacturers of intelligent sensors and sensor solutions for industrial applications, will incorporate Microsoft’s Time of Flight technology to bring state-of-the-art technologies to SICK’s 3D Time of Flight Visionary-T camera product line, and make it even smarter. For more information, contact your local SICK subsidiary.

We are excited about the momentum we are seeing in mixed reality and the strong ROI that our customers are achieving with HoloLens 2 and our Azure mixed reality services. Today’s announcements will help scale the breadth and impact of our mixed reality platform, which continues to increase the productivity of the firstline workforce and transform how we live, work, and play.
Source: Azure