Three things to know about Azure Machine Learning Notebook VM

Data scientists have a dynamic role. They need environments that are fast and flexible while upholding their organization’s security and compliance policies. Notebook Virtual Machine (VM), announced in May 2019, resolves these conflicting requirements while simplifying the overall experience for data scientists.
Source: Azure

Update IoT devices connected to Azure with Mender update manager

With many IoT solutions connecting thousands of hardware endpoints, fixing security issues or upgrading functionality becomes a challenging and expensive task. The ability to update devices is critical for any IoT solution, since it ensures that your organization can respond rapidly to security vulnerabilities by deploying fixes. Azure IoT Hub provides many capabilities to enable developers to build device management processes into their solutions, such as device twins for synchronizing device configuration and automatic device management to deploy configuration changes across large device fleets. We have previously blogged about how these features have been used to implement IoT device firmware updates.

Some customers have told us they need a turn-key IoT device update manager, so we are pleased to share a collaboration with Mender to showcase how IoT devices connected to Azure can be remotely updated and monitored using the Mender open source update manager. Mender provides robust over-the-air (OTA) update management via full image updates and dual A/B partitioning with roll-back, managed and monitored through a web-based management UI. Customers can use Mender for updating Linux images that are built with Yocto. By integrating with Azure IoT Hub Device Provisioning Service, device identity credentials can be shared between Mender and IoT Hub; this is accomplished using a custom allocation policy and an Azure Function. As a result, operators can monitor IoT device states and analytics through their solution built with Azure IoT Hub, and then assign and deploy updates to those devices in Mender, because they share device identities.
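
To make the custom allocation piece more concrete, here is a minimal, hedged sketch of what such an HTTP-triggered Azure Function could look like in Python. The request and response field names follow the documented Device Provisioning Service custom allocation payload but should be verified against the current API version, and the Mender-related tag is purely a hypothetical placeholder, not part of the integration described above.

```python
import json
import logging

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    """Custom allocation webhook called by IoT Hub Device Provisioning Service.

    DPS posts the enrollment and device runtime context; the function returns
    which linked IoT hub the device should be assigned to and can seed the
    initial device twin.
    """
    body = req.get_json()

    # Assumed payload shape: verify these field names against the DPS docs.
    registration_id = body.get("deviceRuntimeContext", {}).get("registrationId", "")
    linked_hubs = body.get("linkedHubs", [])

    if not linked_hubs:
        return func.HttpResponse("No linked IoT hubs available", status_code=400)

    # Placeholder policy: assign every device to the first linked hub and tag
    # the twin so a fleet tool (for example, an update manager) can group it.
    response = {
        "iotHubHostName": linked_hubs[0],
        "initialTwin": {
            "tags": {"updateManager": "mender"},
            "properties": {"desired": {}},
        },
    }

    logging.info("Assigning %s to %s", registration_id, linked_hubs[0])
    return func.HttpResponse(
        json.dumps(response), status_code=200, mimetype="application/json"
    )
```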

Recently, Mender’s CTO Eystein Stenberg came on the IoT Show to show how it works:

Keeping devices updated and secure is important for any IoT solution, and Mender now provides a great new option for Azure customers to implement OTA updates.

Additional resources

•    See Mender’s blog post on how to integrate IoT Hub Device Provisioning Service with Mender
•    Learn more about automatic device management in IoT Hub
Source: Azure

What are IBM Cloud Paks?

It’s been more than a decade since commercial cloud first transformed business, but even now only about 20 percent of workloads have moved to the cloud. Why? Factors such as skills gaps, integration issues, difficulties with established codebases, and vendor lock-in may be preventing most teams from fully modernizing their IT operations.
Business leaders have the difficult task of keeping pace with innovation without sacrificing security, compliance or the value of existing investments. Organizations must move past the basic cloud model and open the next chapter of cloud transformation to help successfully balance these needs.
Containers and the path to enterprise-grade and modular cloud solutions
Organizations focused on transformation can modernize traditional software to help improve operational efficiency, integrate clouds from multiple vendors and build a more unified cloud strategy.
As a major catalyst driving this transformation, containers make integration and modernization far easier by isolating pieces of software so they can run independently. Additionally, Kubernetes provides a powerful solution for orchestrating and managing containers.
That is why IBM has embraced containers and built its multicloud solutions around the Kubernetes open source project. Teams may need more than Kubernetes alone. Enterprises typically need to transform at scale, which includes orchestrating their production topology, offering a ready-to-go development model based on open standards and providing management, security and governance of applications.

Moving beyond Kubernetes with IBM Cloud Paks
IBM is addressing transformation needs by introducing IBM Cloud Paks, enterprise-grade container software that is designed to offer a faster, more reliable way to build, move and manage on the cloud. IBM Cloud Paks are lightweight, enterprise-grade, modular cloud solutions, integrating a container platform, containerized IBM middleware and open source components, and common software services for development and management. These solutions have reduced development time by up to 84 percent and operational expenses by up to 75 percent.
IBM Cloud Paks help enterprises do more. Below are some key advantages of the new set of offerings.

Run anywhere. IBM Cloud Paks are portable. They can run on-premises, on public clouds or in an integrated system.
Open and secure. IBM Cloud Paks have been certified by IBM with up-to-date software to provide full stack support, from hardware to applications.
Consumable. IBM Cloud Paks are pre-integrated to deliver use cases (such as application deployment and process automation). They are priced so that companies pay for what they use.

Introducing the five IBM Cloud Paks
IBM Cloud Paks are designed to accelerate transformation projects. The five Cloud Paks are as follows:

IBM Cloud Pak for Applications. Helps accelerate the modernization and building of applications by using built-in developer tools and processes. This includes support for analyzing existing applications and guiding the application owner through the modernization journey. In addition, it supports cloud-native development, microservices functions, and serverless computing. Customers can quickly build cloud apps, while existing IBM middleware clients gain the most straightforward path to modernization.
IBM Cloud Pak for Automation. Helps deploy on clouds where Kubernetes is supported, with low-code tools for business users and near real-time performance visibility for business managers. Customers can migrate their automation runtimes without application changes or data migration, and automate at scale without vendor lock-in.
IBM Cloud Pak for Data. Helps unify and simplify the collection, organization and analysis of data. Enterprises can turn data into insights through an integrated cloud-native architecture. IBM Cloud Pak for Data is extensible and can be customized to a client’s unique data and AI landscapes through an integrated catalog of IBM, open source and third-party microservices add-ons.
IBM Cloud Pak for Integration. Helps support the speed, flexibility, security and scale required for integration and digital transformation initiatives. It also comes with a pre-integrated set of capabilities which include API lifecycle, application and data integration, messaging and events, high speed transfer and integration security.
IBM Cloud Pak for Multicloud Management. Helps provide consistent visibility, automation and governance across a range of hybrid, multicloud management capabilities such as event management, infrastructure management, multicluster management, edge management and integration with existing tools and processes.

Two new deployment options, IBM Cloud Pak System for application workloads and IBM Cloud Pak System for Data for data and AI workloads, enable a pay-as-you-go capacity model and dynamic scaling for computing, storage, and network resources.
Each of the IBM Cloud Paks will harness the combined power of container technology and IBM enterprise expertise to help organizations solve their most pressing challenges.
The move to cloud is a journey. IBM Cloud Paks help to meet companies wherever they are in that journey and help drive business innovation through cloud adoption.
Learn more about IBM Cloud Paks by visiting www.ibm.com/cloud/paks.
Source: Thoughts on Cloud

AI Platform Notebooks now supports R in beta

At Next ‘19, we announced the beta availability of AI Platform Notebooks, our managed service that offers an integrated environment to create JupyterLab instances that come pre-installed with the latest data science and machine learning frameworks. Today, we’re excited to introduce support for R on AI Platform Notebooks. You can now spin up a web-based development environment with JupyterLab, IRkernel, xgboost, ggplot2, caret, rpy2, and other key R libraries pre-installed.

The R language is a powerful tool for data science and has been popular with data engineers, data scientists, and statisticians since it first appeared in the early 1990s. It offers a sprawling collection of open source libraries that contain implementations of a huge variety of statistical techniques. For example, the Bioconductor library contains state-of-the-art tools for analyzing genomic data. Likewise, with the forecast package you can carry out very sophisticated time series analysis using models like ARIMA, ARMA, AR, and exponential smoothing. Or, if you prefer building deep learning models, you could use TensorFlow for R.

Users of R can now leverage AI Platform Notebooks to create instances that can be accessed via the web or via SSH. This means you can install the libraries you care about, and you can easily scale your notebook instances up or down.

Getting started is easy

You can get started by navigating to the AI Platform and clicking on Notebooks. Then:

1. Click on “New Instance” and select R 3.5.3 (the first option).
2. Give your instance a name and hit “Create”.

In a few seconds your notebook instance will show up in the list of instances available to you. You can access the instance by clicking on “Open JupyterLab”.

This brings up the JupyterLab Launcher. From here you can do these three things:

1. Create a new Jupyter notebook using IRkernel by clicking on the R button under Notebook.
2. Bring up an IPython-style console for R by clicking on the R button under Console.
3. Open up a terminal by clicking on the terminal button under Other.

For fun, let’s create a new R notebook and visualize the famous Iris dataset, which consists of measurements of various parts of iris flowers, labeled by species. It’s a good dataset for trying out simple clustering algorithms.

1. Create a new R notebook by clicking on the R button under Notebook.
2. In the first cell, type in:

data(iris)
head(iris)

This will let you see the first six rows of the Iris dataset.

3. Next, let’s plot Petal.Length against Sepal.Length:

library('ggplot2')
ggplot(iris, aes(x = Petal.Length, y = Sepal.Length, colour = Species)) +
  geom_point() +
  ggtitle('Iris Species by Petal and Sepal Length')

Install additional R packages

As mentioned earlier, one of the reasons for R’s popularity is the sheer number of open source libraries available. One popular package hosting service is the Comprehensive R Archive Network (CRAN), with over 10,000 published libraries. You can easily install any of these libraries from the R console. For example, if you wanted to install the widely popular igraph, a package for network analysis, you could do so by opening up the R console and running the install.packages command:

install.packages("igraph")

Scale up and down as you need

AI Platform Notebooks lets you easily scale your notebook instances up or down. To change the amount of memory and the number of CPUs available to your instance:

1. Stop your instance by clicking the check box next to the instance and clicking the Stop button.
2. Click on the Machine Type column and change the number of CPUs and amount of RAM available.
3. Review your changes and hit confirm.

AI Platform Notebooks is just one of the many ways that Google Cloud supports R users. (For example, check out this blog post and learn about SparkR support on Cloud Dataproc.) To learn more, and get started with AI Platform Notebooks, check out the documentation here, or just dive in.
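
As a small follow-on (not part of the original walkthrough): the post notes that iris is a good dataset for trying simple clustering algorithms but stops at plotting, so a k-means run in the same notebook could look roughly like this:

```r
# Cluster the four numeric measurements into three groups with k-means,
# then cross-tabulate the clusters against the actual species labels.
set.seed(42)
fit <- kmeans(iris[, 1:4], centers = 3, nstart = 20)
table(iris$Species, fit$cluster)
```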
Source: Google Cloud Platform

7 best practices for running Cloud Dataproc in production

Data processing operations can happen a lot faster in the cloud, whether you’re migrating Hadoop-based ETL pipelines from on-premises data centers or building net-new cloud-native approaches for ingesting, processing, and analyzing large volumes of data.

Cloud Dataproc, our managed cloud service for running Apache Spark and Apache Hadoop clusters, is a trusted open-source engine for running big data jobs in production. We know that troubleshooting quickly and accurately is important when you’re using Cloud Dataproc in production, so Google Cloud Platform (GCP) supports the Cloud Dataproc APIs, services, and images, and they’re covered by GCP support.

Cloud Dataproc is one of the data analytics offerings Gartner named a Leader in the 2019 Gartner Magic Quadrant for Data Management Solutions for Analytics. We hear great things from our customers using Cloud Dataproc to run their production processes, whether it’s brand protection with 3PM, enhancing online retail experiences with zulily, or migrating a massive Hadoop environment at Pandora.

We’ve put together the top seven best practices to help you develop highly reliable and stable production processes that use Cloud Dataproc. These will help you process data faster to get better insights and outcomes.

Cloud Dataproc best practices

1. Specify cluster image versions.

Cloud Dataproc uses image versions to bundle operating system and big data components (including core and optional components) and GCP connectors into a single package that is deployed on a cluster. If you don’t specify an image version when creating a new cluster, Cloud Dataproc will default to the most recent stable image version. For production environments, we recommend that you always associate your cluster creation step with a specific minor Cloud Dataproc version, as shown in this example gcloud command:

gcloud dataproc clusters create my-pinned-cluster --image-version 1.4-debian9

This ensures you know the exact OSS software versions that your production jobs use. While Cloud Dataproc also lets you specify a sub-minor version (i.e., 1.4.xx rather than 1.4), in most environments it’s preferable to reference Cloud Dataproc minor versions only (as shown in the gcloud command). Sub-minor versions are updated periodically for patches or fixes, which lets new clusters automatically get security updates without breaking compatibility.

New minor versions of Cloud Dataproc are made available in a preview, non-default mode before they become the default. This lets you test and validate your production jobs against new versions of Cloud Dataproc before making the version substitution. Learn more about Cloud Dataproc versioning.

2. Know when to use custom images.

If you have dependencies that must be shipped with the cluster, like native Python libraries that must be installed on all nodes, or specific security hardening or virus protection software requirements for the image, you should create a custom image from the latest image in your target minor track. This allows those dependencies to be met each time. You should update the sub-minor version within your track each time you rebuild the image.

3. Use the Jobs API for submissions.

The Cloud Dataproc Jobs API makes it possible to submit a job to an existing Cloud Dataproc cluster with a jobs.submit call over HTTP, using the gcloud command-line tool or the GCP Console itself. It also makes it easy to separate the permissions of who has access to submit jobs on a cluster from who has permissions to reach the cluster itself, without setting up gateway nodes or having to use something like Apache Livy. The Jobs API makes it easy to develop custom tooling to run production jobs. In production, you should strive for jobs that only depend on cluster-level dependencies at a fixed minor version (i.e., 1.3). Bundle dependencies with jobs as they are submitted; an uber jar submitted to Spark or MapReduce is one common way to do this.

4. Control the location of your initialization actions.

Initialization actions let you provide your own customizations to Cloud Dataproc. We’ve taken some of the most commonly installed OSS components and made example installation scripts available in the dataproc-initialization-actions GitHub repository. While these scripts provide an easy way to get started, when you’re running in a production environment you should always run these initialization actions from a location that you control. Typically, a first step is to copy the Google-provided script into your own Cloud Storage location. As of now, the actions are not snapshotted, and updates are often made to the public repositories. If your production code simply references the Google version of the initialization actions, unexpected changes may leak into your production clusters.
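
Practices 3 and 4 lend themselves to concrete commands. The sketch below uses placeholder names (my-pinned-cluster, my-org-init-actions) and picks one public initialization-action script purely as an illustration of staging your own copy; adapt the paths to whichever scripts and buckets you actually use.

```bash
# Practice 3: submit a Spark job through the Dataproc Jobs API rather than SSH.
# The SparkPi example jar ships on Dataproc images.
gcloud dataproc jobs submit spark \
  --cluster=my-pinned-cluster \
  --region=us-central1 \
  --class=org.apache.spark.examples.SparkPi \
  --jars=file:///usr/lib/spark/examples/jars/spark-examples.jar \
  -- 1000

# Practice 4: copy an initialization action into a bucket you control and
# reference your copy (not the public one) at cluster creation time.
gsutil cp gs://dataproc-initialization-actions/cloud-sql-proxy/cloud-sql-proxy.sh \
  gs://my-org-init-actions/cloud-sql-proxy.sh
gcloud dataproc clusters create my-pinned-cluster \
  --image-version 1.4-debian9 \
  --region=us-central1 \
  --initialization-actions=gs://my-org-init-actions/cloud-sql-proxy.sh
```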

5. Keep an eye on Dataproc release notes.

Cloud Dataproc releases new sub-minor image versions each week. To stay on top of all the latest changes, review the release notes that accompany each change to Cloud Dataproc. You can also add this URL into your favorite feed reader.

6. Know how to investigate failures.

Even with these practices in place, an error may still occur. When an error occurs because of something that happens within the cluster itself, and not simply in a Cloud Dataproc API call, the first place to look is your cluster’s staging bucket. Typically, you will be able to find the Cloud Storage location of the staging bucket in the error message itself. It may look something like this:

ERROR: (gcloud.dataproc.clusters.create) Operation [projects/YOUR_PROJECT_NAME/regions/YOUR_REGION/operations/ID] failed:
Multiple Errors:
- Initialization action failed. Failed action 'gs://your_failed_action.sh', see output in: gs://dataproc-BUCKETID-us-central1/google-cloud-dataproc-metainfo/CLUSTERID/cluster-d135-m/dataproc-initialization-script-0_output

With this error message, you can often diagnose the problem with a simple cat on the file to identify its cause. For example, this:

gsutil cat gs://dataproc-BUCKETID-us-central1/google-cloud-dataproc-metainfo/CLUSTERID/cluster-d135-m/dataproc-initialization-script-0_output

returns this:

+ readonly RANGER_VERSION=1.2.0
+ err 'Ranger admin password not set. Please use metadata flag - default-password'
++ date +%Y-%m-%dT%H:%M:%S%z
+ echo '[2019-05-13T22:05:27+0000]: Ranger admin password not set. Please use metadata flag - default-password'
[2019-05-13T22:05:27+0000]: Ranger admin password not set. Please use metadata flag - default-password
+ return 1

...which shows that we had forgotten to set a metadata password property for our Apache Ranger initialization action.

7. Research your support options.

Google Cloud is here to support your production OSS workloads and help meet your business SLAs, with various tiers of support available.
In addition, Google Cloud Consulting Services can help educate your team on best practices and provide guiding principles for your specific production deployments.

To hear more tips about running Cloud Dataproc in a production environment, check out this presentation from Next ’19 with Cloud Dataproc user Dunnhumby.
Source: Google Cloud Platform

Compute and stream IoT insights with data-driven applications

There is a lot more data in the world than can possibly be captured with even the most robust, cutting-edge technology. Edge computing and the Internet of Things (IoT) are just two examples of technologies increasing the volume of useful data. There is so much data being created that the current telecom infrastructure will struggle to transport it and even the cloud may become strained to store it. Despite the advent of 5G in telecom, and the rapid growth of cloud storage, data growth will continue to outpace the capacities of both infrastructures. One solution is to build stateful, data-driven applications with technology from SWIM.AI.

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how one Microsoft partner uses Azure to solve a unique problem.

Shared awareness and communications

The increase in volume has other consequences, especially when IoT devices must be aware of each other and communicate shared information. Peer-to-peer (P2P) communications between IoT assets can overwhelm a network and impair performance. Smart grids are an example of how sensors or electric meters are networked across a distribution grid to improve the overall reliability and cost of delivering electricity. Using meters to determine the locality of issues can help improve service to a residence, neighborhood, municipality, sector, or region. The notion of shared awareness extends to vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. As networked AI spreads to more cars and devices, so do the benefits of knowing the performance or status of other assets. Other use cases include:

Traffic lights that react to the flow of vehicles across a neighborhood.
Process manufacturing equipment that can determine the impact from previous process steps.
Upstream oil/gas equipment performance that reacts to downstream oil/gas sensor validation.

Problem: Excess data means data loss

When dealing with large volumes of data, enterprises often struggle to determine which data to retain, how much to retain, and for how long they must retain it. By default, they may not retain any of it. Or, they may sub-sample data and retain an incomplete data set. That lost data may potentially contain high value insights. For example, consider traffic information that could be used for efficient vehicle routing, commuter safety, insurance analysis, and government infrastructure reviews. The city of Las Vegas maintains over 1,100 traffic light intersections that can generate more than 45TB of data every day. As stated before, IoT data will challenge our ability to transport and store data at these volumes.

Data may also become excessive when it’s aggregated. For example, telecom and network equipment typically create snapshots of data and send them every 15 minutes. By normalizing this data into a summary over time, you lose granularity. This means the nature or pattern of the data over time, along with any unique events, would be missed. The same applies to any equipment capturing fixed time-window summary data. The loss of data is detrimental to networks where devices share data, either for awareness or communication. The problem is compounded because only snapshots are captured and aggregated for an entire network of thousands or millions of devices.

Real-time is the goal

Near real-time is the current standard for stateless application architectures, but “near” real-time is not fast enough anymore. Real-time processing, or processing within milliseconds, is the new standard for V2V or V2I communications and requires a much more performant architecture. Swim does this by leveraging stateful APIs. With stateful connections, it’s possible to have a rapid response between peers in a network. Speed has enormous effects on efficiency and reliability, and it’s essential for systems where safety is paramount, such as preventing crashes. Autonomous systems will rely on real-time performance for safety purposes.

An intelligent edge data strategy

SWIM.AI delivers a solution for building scalable streaming applications. According to their site Meet Swim:

“Instead of configuring a separate message broker, app server and database, Swim provides for its own persistence, messaging, scheduling, clustering, replication, introspection, and security. Because everything is integrated, Swim seamlessly scales across edge, cloud, and client, for a fraction of the infrastructure and development cost of traditional cloud application architectures.”

The figure below shows an abstract view of how Swim can simplify IoT architectures:

Harvest data in mid-stream

SWIM.AI uses the lightweight Swim platform, which has only a 2MB footprint, to compute and stream IoT insights, building what they call “data-driven applications.” These applications sit in the data stream and generate a unique, intelligent web agent for each data source they see. These intelligent web agents then process the raw data as it streams, publishing only state changes from the data stream. This streamed data can be used by other web agents or stored in a data lake on Azure.
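
Swim’s own agent APIs are not shown in this post, but the pattern described above, a stateful agent per data source that emits only state changes, can be sketched generically. The Python below is purely illustrative and is not the Swim platform’s actual interface.

```python
from typing import Any, Dict, Iterator, Tuple


class StatefulAgent:
    """Illustrative stand-in for a per-source 'web agent': it keeps the last
    observed state for its data source and emits an event only when that
    state changes, rather than forwarding every raw reading."""

    def __init__(self, source_id: str) -> None:
        self.source_id = source_id
        self.state: Dict[str, Any] = {}

    def observe(self, reading: Dict[str, Any]) -> Dict[str, Any]:
        # Keep only the fields that differ from the current state.
        changes = {k: v for k, v in reading.items() if self.state.get(k) != v}
        self.state.update(changes)
        return changes


def stream_state_changes(
    readings: Iterator[Tuple[str, Dict[str, Any]]]
) -> Iterator[Tuple[str, Dict[str, Any]]]:
    """Route raw (source_id, reading) pairs to one agent per source and yield
    only the state changes, which is what gets published downstream."""
    agents: Dict[str, StatefulAgent] = {}
    for source_id, reading in readings:
        agent = agents.setdefault(source_id, StatefulAgent(source_id))
        changes = agent.observe(reading)
        if changes:
            yield source_id, changes


if __name__ == "__main__":
    raw = [
        ("light-42", {"phase": "green", "queue": 3}),
        ("light-42", {"phase": "green", "queue": 3}),  # duplicate: dropped
        ("light-42", {"phase": "red", "queue": 5}),    # change: published
    ]
    for event in stream_state_changes(iter(raw)):
        print(event)
```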

Swim uses the “needle in a haystack” metaphor to explain this unique advantage. Swim lets you apply a metal detector while harvesting the grain to find the needle, without having to bale, transport, or store the grain before searching for the needle. The advantage is in continuously processing data, where intelligent web agents can learn over time or be influenced by domain experts who set thresholds.

Because of the stateful architecture of Swim, only the minimum data necessary is transmitted over the network. Furthermore, application services need not wait for the cloud to establish application context. This results in extremely low latencies, as the stateful connections don’t incur the latency cost of reading and writing to a database or updating based on poll requests.

On SWIM.AI’s website, a Smart City application shows the real-time status of lights and traffic across a hundred intersections with thousands of sensors. The client using the app could be a connected or an autonomous car approaching the intersection. It could be a handheld device next to the intersection, or a browser a thousand miles away in the contiguous US. The latency to real-time is 75-150ms, less than the blink of an eye across the internet.

Benefits

The immediate benefit is saving costs for transporting and storing data.
Through Swim’s technology, you can retain granularity. For example, take the case of tens of terabytes per day generated from every 1,000 traffic light intersections. Winnow that data down to hundreds of gigabytes per day; the harvested dataset still fully describes the original raw dataset.
Create efficient networked apps for various data sources. For example, achieve peer-to-peer awareness and communications between assets such as vehicles, devices, sensors, and other data sources across the internet.
Achieve ultra-low latencies in the 75-150 millisecond range. This is the key to creating apps that depend on data for awareness and communications.

Azure services used in the solution

The demonstration of DataFabric from SWIM.AI relies on core Azure services for security, provisioning, management, and storage. DataFabric also uses the Common Data Model to simplify sharing information with other systems, such as Power BI or PowerApps, in Azure. Azure technology enables the customer’s analytics to be integrated with events and native ML and cognitive services.

DataFabric is based on the Microsoft IoT reference architecture and uses the following core components:

IoT Hub: Provides a central point in the cloud to manage devices and their data.
IoT Edge Field gateway: An on-premises solution for delivering cloud intelligence.
Azure Event Hubs: Ingests millions of events per second.
Azure Blob storage: Efficient object storage with hot, cool, and archive access tiers.
Azure Data Lake Storage: A highly scalable and cost-effective data lake solution for big data analytics.
Azure Stream Analytics: For transforming data into actionable insights and predictions in near real-time.

Next steps

To learn more about other industry solutions, go to the Azure for Manufacturing page.

To find out more about this solution, go to DataFabric for Azure IoT and select Get it now.
Source: Azure

Azure.Source – Volume 86

News and updates

Microsoft hosts HL7 FHIR DevDays

One of the largest gatherings of healthcare IT developers will come together on the Microsoft campus June 10-12 for HL7 FHIR DevDays, with the goal of advancing the open standard for interoperable health data, called HL7® FHIR® (Fast Healthcare Interoperability Resources, pronounced “fire”). Microsoft is thrilled to host this important conference and to engage with the developer community on everything from identifying immediate use cases to finding ways for all of us to hack together to help advance the FHIR specification.

Announcing self-serve experience for Azure Event Hubs Clusters

For businesses today, data is indispensable. Innovative ideas in manufacturing, health care, transportation, and financial industries are often the result of capturing and correlating data from multiple sources. Now more than ever, the ability to reliably ingest and respond to large volumes of data in real time is the key to gaining competitive advantage for consumer and commercial businesses alike. To meet these big data challenges, Azure Event Hubs offers a fully managed and massively scalable distributed streaming platform designed for a plethora of use cases from telemetry processing to fraud detection.

A look at Azure's automated machine learning capabilities

The automated machine learning capability in Azure Machine Learning service allows data scientists, analysts, and developers to build machine learning models with high scalability, efficiency, and productivity, all while sustaining model quality. With the announcement last December that automated machine learning in Azure Machine Learning service is generally available, we started the journey to simplify artificial intelligence (AI). We are furthering our investment in accelerating productivity with a new release that includes exciting capabilities and features in the areas of model quality, improved model transparency, the latest integrations, ONNX support, a code-free user interface, time series forecasting, and product integrations.

Technical content

Securing the hybrid cloud with Azure Security Center and Azure Sentinel

Infrastructure security is top of mind for organizations managing workloads on-premises, in the cloud, or hybrid. Keeping on top of an ever-changing security landscape presents a major challenge. Fortunately, the power and scale of the public cloud has unlocked powerful new capabilities for helping security operations stay ahead of the changing threat landscape. Microsoft has developed a number of popular cloud-based security technologies that continue to evolve as we gather input from customers. This post breaks down a few key Azure security capabilities and explains how they work together to provide layers of protection.

Customize your automatic update settings for Azure Virtual Machine disaster recovery

In today’s cloud-driven world, employees are only allowed access to data that is absolutely necessary for them to effectively perform their job. The ability to control access in this way while still allowing infrastructure administrators to perform their job duties is becoming more relevant and is frequently requested by customers. When we released the automatic update of agents used in disaster recovery (DR) of Azure Virtual Machines (VMs), the most frequent feedback we received was related to access control. The request we heard from you was to allow customers to provide an existing automation account, approved and created by a person who is entrusted with the right access in the subscription. You asked, and we listened!

Azure Stack IaaS – part nine

Before we built Azure Stack, our program manager team called a lot of customers who were struggling to create a private cloud out of their virtualization infrastructure. We were surprised to learn that the few that managed to overcome the technical and political challenges of getting one set up had trouble getting their business units and developers to use it. It turns out they created what we now call a snowflake cloud, a cloud unique to just their organization. This is one of the main problems we were looking to solve with Azure Stack. A local cloud that has not only automated deployment and operations, but also is consistent with Azure so that developers and business units can tap into the ecosystem. In this blog we cover the different ways you can tap into the Azure ecosystem to get the most value out of IaaS.

What is the difference between Azure Application Gateway, Load Balancer, Front Door and Firewall?

Last week at a conference in Toronto, an attendee came to the Microsoft booth and asked something that has been asked many times in the past. So, this blog post covers all of it here for everyone’s benefit. What are the differences between Azure Firewall, Azure Application Gateway, Azure Load Balancer, Network Security Groups, Azure Traffic Manager, and Azure Front Door? This blog offers a high-level consolidation of what they each do.

Azure shows

Five tools for building APIs with GraphQL | Five Things

Burke and Chris are back, and this week they're bringing you five tools for building APIs with GraphQL. True story: they shot this at the end of about a twelve-hour day, and you can see the pain in Burke's eyes. It's not GraphQL he doesn't like, it's filming for six straight hours. Also, Chris picks whistles over bells (because of course he does) and Burke fights to stay awake for four minutes.

Microservices and more in .NET Core 3.0 | On .NET

Enabling developers to build resilient microservices is an important goal for .NET Core 3.0. In this episode, Shayne Boyer is joined by Glenn Condron and Ryan Nowak from the ASP.NET team who discuss some of the exciting work that's happening in the microservice space for .NET Core 3.0.

Interknowlogy mixes Azure IoT and mixed reality | The Internet of Things Show

When mixed reality meets the Internet of Things through Azure Digital Twins, a new way of accessing data materializes. See how Interknowlogy mixes Azure IoT and mixed reality to deliver not only stunning experiences but also added efficiency and productivity for the workforce.

Bring DevOps to your open-source projects: Top three tips for maintainers | The Open Source Show

Baruch Sadogursky, Head of Developer Relations at JFrog, and Aaron Schlesinger, Cloud Advocate at Microsoft and Project Athens Maintainer, talk about the art of DevOps for open source: balancing contributor needs with the core DevOps principles of people, process, and tools. You'll learn how to future-proof your projects, avoid the dreaded Bus Factor, and get Aaron and Baruch's advice for evaluating and selecting tools, soliciting contributor input and voting, documenting processes, and so much more.

Episode 282 – Azure Front Door Service | The Azure Podcast

Cynthia talks with Sharad Agrawal on what Azure Front Door Service is, how to choose between Azure Front Door Service, CDN, Azure Traffic Manager and App Gateway, and how to get started.

Atley Hunter on the Business of App Development | Azure DevOps Podcast

In this episode, Jeffrey and Atley are discussing the business of app development. Atley describes some of the first apps he’s ever developed, some of the most successful and popular apps he’s ever created, how he’s gone about creating these apps, and gives his tips for other developers in the space.

Industries and partners

Empowering clinicians with mobile health data: Right information, right place, right time

Improving patient outcomes and reducing healthcare costs depends on the ability of healthcare providers, such as doctors, nurses, and specialized clinicians, to access a wide range of data at the point of patient care in the form of health records, lab results, and protocols. Tactuum, a Microsoft partner, provides the Quris solution, which empowers clinicians with access to the right information, in the right place, at the right time, enabling them to do their jobs efficiently and with less room for error.

Building a better asset and risk management platform with elastic Azure services

Elasticity means services can expand and contract on demand. This means Azure customers who are on a pay-as-you-go plan will reap the most benefit out of Azure services. Their service is always available, but the cost is kept to a minimum. Together with elasticity, Azure lets modern enterprises migrate and evolve more easily. For financial service providers, the modular approach lets customers benefit from best-of-breed analytics in three key areas. Read the post to learn what they are.

Symantec’s zero-downtime migration to Azure Cosmos DB

How do you migrate live, mission-critical data for a flagship product that must manage billions of requests with low latency and no downtime? The consumer business unit at Symantec faced this exact challenge when deciding to shift from their costly and complex self-managed database infrastructure, to a geographically dispersed and low latency managed database solution on Azure. The Symantec team shared their business requirements and decision to adopt Azure Cosmos DB in a recent case study.
Source: Azure