Adobe: The Shockwave Player will soon be discontinued

Adobe's Shockwave Player was one of the many annoying plugins installed in many a browser over the past decade. Most people probably know Shockwave from the endless reports of security vulnerabilities. But back in the day, Golem.de also used it to play networked 3D shooters. By Andreas Sebayang (Adobe, graphics software)
Source: Golem

Return of the Smesh (Spinnaker Shpinnaker and Istio Shmistio to make a Smesh! Part 2)

The post Return of the Smesh (Spinnaker Shpinnaker and Istio Shmistio to make a Smesh! Part 2) appeared first on Mirantis | Pure Play Open Cloud.
One of the first things I learned on my sojourn through the open source world is that there are always new and different approaches to building a better mousetrap when it comes to component design within a given architecture, and that a single project rarely contains all the answers to the questions raised when developing new application architectures.
Yes, the Kubernetes framework does satisfy a host of application needs in an acceptable manner for most applications. But what happens when your needs become more and more dependent on the flow of data between components, and the distances between the providing resources become greater? Issues such as Quality of Service (QoS) become very important, for one thing. What if there’s a greater need for secured access to the individual services? These issues point to needs not addressed within the Kubernetes framework itself. This is where the concept of the Smesh (Service Mesh) comes in to fill the need.
Before we go right to the heart of the Smesh, let’s take a closer look at the Microservices architecture and the needs it is designed to address.
The Microservices Architecture
Martin Fowler, renowned British author and software developer, described the microservice architectural style as “an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms,” often via an HTTP resource or API.
Providing a native microservice-capable platform such as Kubernetes is essential to supporting the Microservices Architecture properly.
Below is an example of how the Microservices Architecture is laid out, and a rudimentary diagram of how the services interact:

Istio is a service mesh layered on top of the K8s framework to support the definition of authority, enhance bandwidth performance, and control the flow of data between microservices.
What is a Smesh (Service Mesh)? Regarding Istio and other tools…
A service mesh is a configurable infrastructure layer for a microservices application. It makes communication between service instances flexible, reliable, and fast. The mesh provides service discovery, load balancing, encryption, authentication and authorization, support for the circuit breaker pattern, and other capabilities.
William Morgan described the service mesh as “a dedicated infrastructure layer for handling service-to-service communication. It’s responsible for the reliable delivery of requests through the complex topology of services that comprise a modern, cloud native application.”
Service mesh technology comes with its own lexicon of new terms for old features and capabilities. Some of the more important terms and concepts are listed below for reference:

Container orchestration framework – Kubernetes is the most common framework filling this need, but there are others.
Services vs. service instances – The terms service and service instance are distinct: a service is the definition, while a service instance is a single running copy of that service.
Sidecar proxy – A sidecar proxy attaches itself to a specific service instance. It is managed by the orchestration framework and handles intercommunication between all the other proxies, reducing demand on the instances themselves.
Service discovery – This capability enables the different services to “discover” each other when needed. The Kubernetes framework keeps a list of instances that are healthy and ready to receive requests.
Load balancing – In a service mesh, load balancing routes new requests to the least busy instances first, spreading work evenly so that no instance is overwhelmed while others sit idle.
Encryption – Instead of having each of the services provide their own encryption/decryption, the service mesh can encrypt and decrypt requests and responses instead.
Authentication and authorization – The service mesh can validate requests BEFORE they are sent to the service instances.
Support for the circuit breaker pattern – The service mesh can support the circuit breaker pattern, which stops requests from ever being sent to an unhealthy instance. We will discuss this specific feature later.
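The least-busy routing described in the load balancing entry above can be sketched in a few lines of Python. The instance names and request counts below are illustrative assumptions, not drawn from any real mesh:

```python
# Minimal sketch of least-busy load balancing, as described above.
# Instance names and active-request counts are made up for illustration.

def pick_instance(instances):
    """Return the healthy instance with the fewest active requests."""
    healthy = [i for i in instances if i["healthy"]]
    return min(healthy, key=lambda i: i["active_requests"])

instances = [
    {"name": "svc-a-1", "healthy": True,  "active_requests": 12},
    {"name": "svc-a-2", "healthy": True,  "active_requests": 3},
    {"name": "svc-a-3", "healthy": False, "active_requests": 0},  # excluded
]

print(pick_instance(instances)["name"])  # prints "svc-a-2"
```

Note that the unhealthy instance is filtered out before selection, which is exactly where load balancing and service discovery intersect in a mesh.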

The combined use of these features and capabilities provides the means for traffic shaping, or QoS. Traffic shaping, also known as packet shaping, is a type of network bandwidth management that manipulates and prioritizes network traffic to reduce the impact of heavy use cases on other users. QoS, another means of traffic shaping, recognizes the various types of traffic moving over your network and prioritizes them accordingly. Istio, for example, provides a uniform way to connect, secure, manage, and monitor microservices, shaping traffic between them while capturing the telemetry of the traffic flow to prioritize network traffic.
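One standard way to implement the traffic shaping described above is a token bucket. The sketch below is a generic illustration of the technique, not Istio's actual mechanism, and the rates are made up:

```python
# Sketch of a token-bucket shaper, one common traffic-shaping technique.
# Generic illustration only; not how Istio or Envoy implement shaping.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now, cost=1):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)  # 5 req/s sustained, bursts of 10
results = [bucket.allow(now=0.0) for _ in range(12)]
print(results.count(True))  # prints 10: the burst passes, 2 are shaped away
```

A mesh applies the same idea per route or per client, so a heavy consumer drains only its own bucket instead of starving everyone else.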
Istio also adds circuit-breaking capability to the application development process. Circuit-breaking helps guard against partial or total cascading network communication failures by maintaining the health and viability status of each service instance. The circuit-breaker feature determines whether traffic should continue to be routed to a given service instance. As a design consideration, the application developer must decide what to do when a service instance has been marked as not accepting requests.
Envoy, which is integrated as the backend proxy for Istio, treats its circuit-breaking functionality as a subset of load balancing and health checking. Envoy separates its routing methods from the communication to the actual backend clusters, eliminating routes to those service instances which are unhealthy or unable to accept requests. This method allows for the creation of many different potential routes to map traffic to healthy, request-accepting backends.
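The circuit-breaker behavior described above can be sketched in plain Python. The failure threshold and recovery timeout below are illustrative assumptions, not Envoy's real configuration:

```python
# Sketch of the circuit breaker pattern described above. Threshold and
# timeout values are illustrative, not Envoy defaults.

class CircuitBreaker:
    def __init__(self, failure_threshold=3, recovery_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.recovery_timeout = recovery_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def allow_request(self, now):
        if self.opened_at is None:
            return True
        # After the timeout, let a trial request through ("half-open").
        return now - self.opened_at >= self.recovery_timeout

    def record_success(self):
        self.failures = 0
        self.opened_at = None

    def record_failure(self, now):
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = now  # open the circuit: stop routing here

breaker = CircuitBreaker()
for t in (1.0, 2.0, 3.0):               # three consecutive failures...
    breaker.record_failure(now=t)
print(breaker.allow_request(now=4.0))   # False: instance marked unhealthy
print(breaker.allow_request(now=40.0))  # True: half-open trial after timeout
```

The "what to do when the instance is marked as not accepting requests" decision mentioned above corresponds to what the caller does when `allow_request` returns False: fail fast, fall back, or queue.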
Below is a diagram of the Istio architecture for reference:

The Istio components and their functions are listed below:
Control plane:

Istio-Manager: provides routing rules and service discovery information to the Envoy proxies.
Mixer: collects telemetry from each Envoy proxy and enforces access control policies.
Istio-Auth: provides “service to service” and “user to service” authentication. This component also converts unencrypted traffic to TLS based traffic between services, as needed.

Data plane:

Envoy: a feature rich proxy managed by control plane components. Envoy intercepts traffic to and from the service, applying routing and access policies following the rules set in the control plane.

So those are the basics. Next time, we’ll go ahead and install Istio and some sample apps and take it for a spin.
Source: Mirantis

Kubeflow on OpenShift

(Image source: opensource.com) Kubeflow is an open source project that provides Machine Learning (ML) resources on Kubernetes clusters. Kubernetes is evolving to be the hybrid solution for deploying complex workloads on private and public clouds. A fast-growing use case is using Kubernetes as the deployment platform of choice for machine learning. Infrastructure engineers […]
The post Kubeflow on OpenShift appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Azure Marketplace new offers – Volume 33

We continue to expand the Azure Marketplace ecosystem. From February 1 to February 15, 2019, 50 new offers successfully met the onboarding criteria and went live. See details of the new offers below:

Virtual machines

Attunity Replicate: Attunity Replicate integrates data in real time to Azure targets, including Azure SQL Data Warehouse, Azure SQL Database, and Azure Event Hubs, and it helps load, ingest, migrate, distribute, consolidate, and synchronize data.

Cyber Security Assessment Tool (CSAT): The Cyber Security Assessment Tool (CSAT) from QS solutions provides insight into security vulnerabilities through automated scans and analyses.

Fortinet FortiSandbox Advanced Threat Protection: FortiSandbox for Azure enables organizations to defend against advanced threats natively in the cloud, alongside third-party security solutions, or as an extension to their on-premises security architectures.

InterSystems IRIS for Health Community Edition: InterSystems IRIS for Health provides the capabilities for building complex, data-intensive applications. It’s a comprehensive platform spanning data management, interoperability, transaction processing, and analytics.

KNIME Server: KNIME Server offers shared repositories, advanced access management, flexible execution, web enablement, and commercial support. Share data, nodes, metanodes, and workflows across your team and throughout your company.

ME PasswordManagerPro 20 admins, 25 keys: ManageEngine Password Manager Pro is a web-based, privileged identity management solution that lets you manage privileged identity passwords, SSH keys, and SSL certificates.

MODX on Windows Server 2016: MODX is an agile and user-friendly content management system that offers unlimited scalability and high flexibility.

MODX on Windows Server 2019: MODX is an agile and user-friendly content management system that offers unlimited scalability and high flexibility.

Panzura Freedom NAS 7.1.8.0: Panzura Freedom Filer is a hybrid cloud data management solution that enables global enterprise customers to consolidate their data islands into a single source of truth in Azure.

Puppet Enterprise 2018.1.7: Puppet Enterprise lets you automate the entire lifecycle of your Azure infrastructure, simply and securely, from initial provisioning through application deployment.

Vemn Digital Folder: Digital Folder is a web application that facilitates the management of your organization’s digital documents, creating sustainable digital transformation.

WALLIX Bastion: With an unobtrusive architecture, full multitenancy, and virtual appliance packaging, WALLIX Bastion (WAB Suite) provides an effective route to security and compliance.

Web applications

Discovery Hub and Azure SQL DB: Discovery Hub supports core analytics, the modern data warehouse, IoT, and AI. Developed with a cloud-first mindset, Discovery Hub provides a cohesive data fabric across Microsoft on-premises technology and Azure Data Services.

Discovery Hub and Azure SQL DB and AAS: Discovery Hub Application Server for Azure is a high-performance data management platform that accelerates your time to data insights.

Forcepoint Email Security V8.5.3: Forcepoint Email Security is an enterprise email and data loss prevention solution offering inbound and outbound protection against malware, blended threats, and spam.

MultiChain on Azure: Save time configuring servers and installing MultiChain, a leading enterprise blockchain platform, using these templates.

NetGovern: NetGovern enables legal supervisors, attorneys, paralegals, and case administrators to perform e-discovery on file systems, email archives, SharePoint, and file-sharing solutions such as Box.com and Citrix ShareFile.

NetGovern Multitenant: Enable anyone to rapidly respond to e-discovery requests. This application will deploy a shared infrastructure layer with one tenant preloaded for cloud service providers to deliver service to their customers.

Radware Alteon VA Application Cluster: This Alteon virtual appliance on Azure provides a simple and agile way to consume and deploy all standard ADC functionality as well as advanced services like WAF, acceleration, and application performance monitoring.

S2IX – Secure Search and Information Exchange: Secure by design, S2IX provides a protected business process automation solution. This gives users the ability to collaborate and manage documents in an environment they can depend on, even in remote or challenging locations.

Starburst Presto (v 0.213-e) for Azure HDInsight: Architected for the separation of storage and compute, Presto is perfect for querying data in Azure Blob storage, Azure Data Lake storage, relational databases, Cassandra, MongoDB, Kafka, and many others.

Veritas Resiliency Platform Repository Server: Veritas Resiliency Platform (VRP) provides single-click disaster recovery and migration for any source workload into Azure. This version of Veritas Resiliency Platform Repository Server will upgrade your installation.

Container solutions

Drupal with NGINX Container Image: Drupal with NGINX enhances the popular open-source content management system with the performance and security of NGINX. Drupal's modular architecture lets you create many different types of websites and applications.

Consulting services

4-Week Azure Assessment: This cloud migration assessment by Quisitive is designed to help organizations assess cloud readiness, evaluate best paths for their application environment, and build a clear road map and ROI view for potential Azure migration.

AgileAscend-M365 Migration: 3 week Implementation: Working alongside your project manager and IT staff, Agile IT's award-winning professional team will assist in planning, preparing, and migrating on-premises infrastructure to Microsoft Azure.

AgileProtect: Azure Data Backup – 3-Week Imp.: AgileProtect Standard is designed for business-critical systems. AgileProtect Standard backs up everything on the system, enabling a complete replica to be spun up on suitable hardware at will.

AgileSecurity: Intune – 3 week implementation: With Agile IT, define a mobile device management strategy that fits the needs of your organization. Set granular app policies to containerize data access while preserving the familiar Office 365 user experience.

Azure 5-Day Proof of Concept (POC): Chorus IT's proof of concept allows you to evaluate Azure with a small-scale partial implementation or focus on a particular area you want to evaluate.

Azure Accelerate: 2-week Proof of Concept: iLink Systems Inc.'s proof of concept is designed to streamline an organization’s journey to the cloud through a combination of training, workshops, comparative analysis, and rapid prototyping.

Azure Analytics services – 2 Hour Briefing: This briefing by Incremental Group will take you through the capabilities available from Azure Analytics and discuss how these could help your organization.

Azure Cost Optimization: 3 Week Assessment: This assessment by DXC focuses on analyzing current Azure consumption to identify opportunities to right-size the environment, inclusive of storage, networking, and virtual machines.

Azure Managed CI/CD Pipeline: 8-Wk Implementation: 2nd Watch will identify your environment requirements and implement CI/CD pipeline tools to your specifications, accelerating your adoption of agile methodologies.

Azure Migration Quickstart: 4 Week POC: The Azure Migration Quickstart by DXC works to test an initial workload of O/S, application, and/or database to migrate into Azure as a proof of concept.

Azure Performance Optimization: 3 Week Assessment: DXC's Azure Performance assessment provides a data-driven review of your existing Azure environment to help identify and resolve performance challenges.

Azure Security Managed Services: 2 Wk Assessment: In this assessment, DXC consultants will implement one or more tools to assist in the security review of your Azure configuration and architecture.

Cloud Cost Optimisation – 10 day implementation: risual's implementation allows organizations to track and monitor their costs on Azure to ensure they are getting the most value possible.

Cloud Readiness Assessment: 1-Day: Incremental Group’s Cloud Readiness Assessment reviews an organization's IT infrastructure and supports the organization's future business plans while assessing the impact they could have on the IT infrastructure.

DevOps Quality Services: 8 Wk Implementation: Sogeti USA offers assistance in designing and implementing DevOps quality programs, including establishing an automation testing framework, building a quality pipeline, and providing recommendations on enterprise metrics.

Identity and Access Management – 2 Hour Briefing: One of Incremental Group’s expert consultants will talk you through the wide range of services available from Microsoft Azure to help you manage your identity and access management.

Microsoft Azure AI Chatbot: 1-Hr Assessment: In this assessment, Cynoteck Technology Solutions will discuss AI chatbot development. Learn how chatbots and Azure Bot Service can benefit your business.

Microsoft Azure Health Check: 1 Week Assessment: This health check by DXC will involve a review of cloud architecture, cost optimization, Azure security best practices, and configuration best practices.

Migrate Dynamics GP to Azure: 1 Hr Assessment: Syvantis Technologies will walk you through the process of migrating your Microsoft Dynamics GP system to Microsoft Azure.

Modernize Your Apps – 2 Week Assessment: Modernize your legacy systems and build new applications in Azure with SPR Consulting's two-week assessment to help you take advantage of the flexibility of Azure.

PCI Azure Implementation Services: 4 Week POC: DXC will build a detailed proof of concept to offload the bulk of deploying and managing PCI-compliant workloads in Microsoft Azure.

SAP to Azure Migration: 1-Day Workshop: This SAP on Azure workshop by Infopulse will cover best practices for architecting, developing, and managing SAP services and apps in Azure. Customers should have good architectural knowledge in SAP Basis and Azure services.

SAP to Azure Migration: 1-Hour Briefing: Infopulse's briefing will help you understand the key benefits and challenges of a SAP-to-Azure migration and choose the best cloud solution for your business.

SAP to Azure Migration: 2-Week PoC: Infopulse will help you develop a proof of concept to validate the feasibility of your ideas and identify the benefits of migrating your on-premises SAP solution to Microsoft Azure.

SAP to Azure Migration Readiness: 1-Day Assessment: Infopulse will help you identify business drivers and the potential challenges of a SAP migration to Azure, then gather all requirements and create a suitable migration strategy.

Small Systems Mainframe Migration: 3-Wk Assessment: This assessment by Asysco Inc. will investigate smaller mainframe systems in order to develop a plan to migrate to Azure.

TCO & Cloud Readiness Assessment – 6 Wk Assessment: Ensono's assessment will involve installing a console server (built by the customer), gathering data, creating an HCP tenant, ingesting the initial server list, and conducting analysis.

Source: Azure

Create a transit VNet using VNet peering

Azure Virtual Network (VNet) is the fundamental building block for any customer network. VNet lets you create your own private space in Azure, or, as I call it, your own network bubble. VNets are crucial to your cloud network, as they offer isolation, segmentation, and other key benefits. Read more about VNet’s key benefits in our documentation, “What is Azure Virtual Network?”

With VNets, you can connect your network in multiple ways. You can connect to on-premises networks using Point-to-Site (P2S), Site-to-Site (S2S), or ExpressRoute gateways. You can also connect to other VNets directly using VNet peering.

A customer network can be expanded by peering virtual networks to one another. Traffic sent over VNet peering is completely private and stays on the Microsoft backbone; no extra hops or public Internet are involved. Customers typically leverage VNet peering in the hub-and-spoke topology. The hub consists of shared services and gateways, and the spokes comprise business units or applications.

Gateway transit

Today I’d like to do a refresh of a unique and powerful functionality we’ve supported from day one with VNet peering. Gateway transit enables you to use a peered VNet’s gateway for connecting to on-premises instead of creating a new gateway for connectivity. As you increase your workloads in Azure, you need to scale your networks across regions and VNets to keep up with the growth. Gateway transit allows you to share an ExpressRoute or VPN gateway with all peered VNets and lets you manage the connectivity in one place. Sharing enables cost-savings and reduction in management overhead.

With Gateway transit enabled on VNet peering, you can create a transit VNet that contains your VPN gateway, Network Virtual Appliance, and other shared services. As your organization grows with new applications or business units and as you spin up new VNets, you can connect to your transit VNet with VNet peering. This prevents adding complexity to your network and reduces management overhead of managing multiple gateways and other appliances.

VNet peering works across regions, across deployment models (classic to ARM), and across subscriptions, including subscriptions belonging to different Azure Active Directory tenants.

You can create a transit VNet like the one shown below.

Easy to set up – just check a box!

To use this powerful capability, simply check a box.

Create or update the virtual network peering from Hub-RM to Spoke-RM inside the Azure portal. Navigate to the Hub-RM VNet or the VNet with the gateway you’d like to use for gateway transit, and select Peerings, then Add:

Set the Allow gateway transit option
Select OK

Create or update the virtual network peering from Spoke-RM to Hub-RM from the Azure portal. Navigate to the Spoke-RM VNet, select Peerings, then Add:

Select the Hub-RM virtual network in the corresponding subscription
Set the Use remote gateways option
Select OK

You can learn more in this detailed step-by-step guide on how to configure VPN gateway transit for virtual network peering.
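Under the hood, the two portal steps above set a pair of flags on opposite sides of the peering. The sketch below shows the corresponding ARM-style request bodies; the property names follow the Microsoft.Network virtualNetworkPeerings resource, while the subscription and resource group IDs are placeholders:

```python
# Sketch of the two peering bodies behind the portal steps above.
# Property names follow the Microsoft.Network virtualNetworkPeerings ARM
# resource; subscription ID and resource group are placeholders.

NET = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network"

# Hub side: "Allow gateway transit" checked.
hub_to_spoke = {
    "properties": {
        "remoteVirtualNetwork": {"id": f"{NET}/virtualNetworks/Spoke-RM"},
        "allowVirtualNetworkAccess": True,
        "allowGatewayTransit": True,
        "useRemoteGateways": False,
    }
}

# Spoke side: "Use remote gateways" checked.
spoke_to_hub = {
    "properties": {
        "remoteVirtualNetwork": {"id": f"{NET}/virtualNetworks/Hub-RM"},
        "allowVirtualNetworkAccess": True,
        "allowGatewayTransit": False,
        "useRemoteGateways": True,
    }
}

# The flags sit on opposite sides: the VNet that owns the gateway allows
# transit, and the VNet that borrows it uses the remote gateway.
print(hub_to_spoke["properties"]["allowGatewayTransit"],
      spoke_to_hub["properties"]["useRemoteGateways"])
```

The same bodies can be submitted via ARM templates or the Azure CLI rather than the portal checkboxes.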

Security

You can use Network Security Groups and security rules to control the traffic between on-premises and your Azure VNets. Security policies can be centralized in the hub or transit VNet. A network virtual appliance (NVA) can inspect all traffic going on-premises as well as into Azure. Since the policy is set in a central VNet, you only need to set it up once.

Routing

We plumb the routes, so you don’t have to. Each Azure Virtual Machine (VM) you deploy benefits from routes being plumbed automatically. To confirm a virtual network peering, you can check the effective routes for a network interface in any subnet of a virtual network. If a virtual network peering exists, all subnets within the virtual network have routes with next hop type VNet peering for each address space in each peered virtual network.
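The rule above can be illustrated with a small sketch: every subnet in a VNet receives one route, with next hop type VNet peering, for each address space of each peered VNet. The VNet names and address ranges below are made up:

```python
# Illustrative sketch of the automatic route plumbing described above.
# Names and address ranges are invented for the example.

def peering_routes(peered_vnets):
    """One route per address space of each peered VNet."""
    return [
        {"address_prefix": prefix, "next_hop_type": "VNetPeering"}
        for peer in peered_vnets
        for prefix in peer["address_spaces"]
    ]

hub = {"name": "Hub-RM", "address_spaces": ["10.0.0.0/16"]}
spoke = {"name": "Spoke-RM", "address_spaces": ["10.1.0.0/16", "10.2.0.0/24"]}

# Routes seen on any NIC inside Hub-RM once it is peered with Spoke-RM:
for route in peering_routes([spoke]):
    print(route["address_prefix"], "->", route["next_hop_type"])
```

Checking effective routes on a NIC in the portal should show exactly this shape: one VNet-peering entry per remote address space.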

Monitoring

You can check the health status of your VNet peering connection in the Azure portal. Connected means you are all peered and good to go. Initiated means the second link still needs to be created. Disconnected means the peering has been deleted from one side.

You can also troubleshoot connectivity to a virtual machine in a peered virtual network using Network Watcher's connectivity check. Connectivity check lets you see how traffic is routed from a source virtual machine's network interface to a destination virtual machine's network interface as seen below.

Limits

You can peer with 100 other VNets. We’ve increased this limit fourfold, and as our customers scale in Azure we will continue to raise it. Stay updated on limits by visiting our documentation, “Azure subscription and service limits, quotas, and constraints.”

Pricing

You pay only for traffic that goes through the gateway; there is no double charge. Traffic passing through a remote gateway in this scenario is subject to VPN gateway charges and does not incur VNet peering charges. For example, if VNetA has a VPN gateway for on-premises connectivity and VNetB is peered to VNetA with the appropriate properties configured, traffic from VNetB to on-premises is charged only as egress per VPN gateway pricing. VNet peering charges do not apply.
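The example above can be worked through with hypothetical rates; the figures below are invented for illustration, and the real prices are on the Azure pricing pages:

```python
# Worked example of the pricing rule above, using made-up illustrative
# rates. Check the Azure pricing pages for real numbers.

VPN_GATEWAY_EGRESS_PER_GB = 0.035  # hypothetical $/GB
VNET_PEERING_PER_GB = 0.01         # hypothetical $/GB; NOT applied here

gb_to_on_premises = 500  # traffic from VNetB through VNetA's gateway

# Traffic through a remote gateway is charged only as VPN gateway egress;
# no VNet peering charge is stacked on top.
cost = gb_to_on_premises * VPN_GATEWAY_EGRESS_PER_GB
print(f"${cost:.2f}")  # prints "$17.50", with no separate peering line item
```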

Availability

VNet peering with gateway transit works across classic Azure Service Management (ASM) and Azure Resource Manager (ARM) deployment models. Your gateway should be in your ARM VNet. It also works across subscriptions, including subscriptions belonging to different Azure Active Directory tenants.

Gateway transit has been available since September 2016 for VNet peering in all regions and will be available for global VNet peering shortly.

Gateway transit with VNet peering enables you to create a transit VNet to keep your shared services in a central location. To keep up with your growing scale, you can scale your VNets and use your existing VPN gateway, saving management overhead, cost, and time. We developed a template you can use to get started. Try it out today!
Source: Azure

Cloud AI helps you train and serve TensorFlow TFX pipelines seamlessly and at scale

Last week, at the TensorFlow Dev Summit, the TensorFlow team released new and updated components that integrate into the open source TFX Platform (TensorFlow eXtended). TFX components are a subset of the tools used inside Google to power hundreds of teams’ wide-ranging machine learning applications. They address critical challenges to successful deployment of machine learning (ML) applications in production, such as:

The prevention of training-versus-serving skew
Input data validation and quality checks
Visualization of model performance on multiple slices of data

A TFX pipeline is a sequence of components that implements an ML pipeline specifically designed for scalable, high-performance machine learning tasks. TFX pipelines support modeling, training, serving/inference, and managing deployments to online, native mobile, and even JavaScript targets.

In this post, we’ll explain how Google Cloud customers can use the TFX platform for their own ML applications, and deploy them at scale.

Cloud Dataflow as a serverless autoscaling execution engine for (Apache Beam-based) TFX components

The TensorFlow team authored TFX components using Apache Beam for distributed processing. You can run Beam natively on Google Cloud with Cloud Dataflow, a seamless autoscaling runtime that gives you access to large amounts of compute capability on demand. Beam can also run in many other execution environments, including Apache Flink, both on-premises and in multi-cloud mode. When you run Beam pipelines on Cloud Dataflow, the execution environment they were designed for, you can access advanced optimization features such as Dataflow Shuffle, which groups and joins datasets larger than 200 terabytes.

The same team that designed and built MapReduce and Google Flume also created third-generation data runtime innovations like dynamic work rebalancing, batch and streaming unification, and runner-agnostic abstractions that exist today in Apache Beam.

Kubeflow Pipelines makes it easy to author, deploy, and manage TFX workflows

Kubeflow Pipelines, part of the popular Kubeflow open source project, helps you author, deploy, and manage TFX workflows on Google Cloud. You can easily deploy Kubeflow on Google Kubernetes Engine (GKE) via the 1-click deploy process. It automatically configures and runs essential backend services, such as the orchestration service for workflows and, optionally, the metadata backend that tracks information relevant to workflow runs and the corresponding artifacts that are consumed and produced. GKE provides essential enterprise capabilities for access control and security, as well as tooling for monitoring and metering.

Thus, Google Cloud makes it easy for you to execute TFX workflows at considerable scale using:

Distributed model training and scalable model serving on Cloud ML Engine
TFX component execution at scale on Cloud Dataflow
Workflow and metadata orchestration and management with Kubeflow Pipelines on GKE

Figure 1: TFX workflow running in Kubeflow Pipelines

The Kubeflow Pipelines UI shown in the above diagram makes it easy to visualize and track all executions. For deeper analysis of the metadata about component runs and artifacts, you can host a Jupyter notebook in the Kubeflow cluster and query the metadata backend directly. You can refer to this sample notebook for more details.

At Google Cloud, we work to empower our customers with the same set of tools and technologies that we use internally across many Google businesses to build sophisticated ML workflows.

To learn more about using TFX, please check out the TFX user guide, or learn how to integrate TFX pipelines into your existing Apache Beam workflows in this video.

Acknowledgments: Sam McVeety, Clemens Mewald, and Ajay Gopinathan also contributed to this post.
Source: Google Cloud Platform

Stay informed about service issues with Azure Service Health

When your Azure resources go down, one of your first questions is probably, “Is it me or is it Azure?” Azure Service Health helps you stay informed and take action when Azure service issues like incidents and planned maintenance affect you by providing a personalized health dashboard, customizable alerts, and expert guidance.

In this blog, we’ll cover how you can use Azure Service Health’s personalized dashboard to stay informed about issues that could affect you now or in the future.

Monitor Azure service issues and take action to mitigate downtime

You may already be familiar with the Azure status page, a global view of the health of all Azure services across all Azure regions. It’s a good reference for major incidents with widespread impact, but we recommend using Azure Service Health to stay informed about Azure incidents and maintenance. Azure Service Health only shows issues that affect you, provides information about all incidents and maintenance, and has richer capabilities like alerting, shareable updates and RCAs, and other guidance and support.

Azure Service Health tracks three types of health events that may impact you:

Service issues: Problems in Azure services that affect you right now.
Planned maintenance: Upcoming maintenance that can affect the availability of your services in the future. Typically communicated at least seven days prior to the event.
Health advisories: Health-related issues that may require you to act to avoid service disruption. Examples include service retirements, misconfigurations, exceeding a usage quota, and more. Usually communicated at least 90 days prior, with notable exceptions including service retirements, which are announced at least 12 months in advance, and misconfigurations, which are immediately surfaced.

Learn more about your personalized health dashboard.

Get started with Azure Service Health

Azure Service Health’s dashboard provides a large amount of information about incidents, planned maintenance, and other health advisories that could affect you. While you can always visit the dashboard in the portal, the best way to stay informed and take action is to set up Azure Service Health alerts. With alerts, as soon as we publish any health-related information, you’ll get notified on whichever channels you prefer, including email, SMS, push notification, webhook into ServiceNow, and more.
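A webhook receiver for these alerts might route each of the three event types to a different action. The payload shape below is deliberately simplified and hypothetical; see the Azure Monitor alert schema documentation for the real structure:

```python
# Sketch of a webhook receiver routing Service Health alerts by event
# type. The payload shape (and the "eventType" field name) is a
# simplified assumption, not the real Azure Monitor schema.

ACTIONS = {
    "Service issue": "page the on-call engineer",
    "Planned maintenance": "schedule a maintenance window",
    "Health advisory": "create a tracking ticket",
}

def route_alert(payload):
    event_type = payload["eventType"]  # hypothetical field name
    return ACTIONS.get(event_type, "log and ignore")

alert = {"eventType": "Planned maintenance", "service": "Virtual Machines"}
print(route_alert(alert))  # prints "schedule a maintenance window"
```

In practice the webhook target would be ServiceNow, PagerDuty, or a custom endpoint wired up through an Azure Monitor action group.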

Below are a few resources to help you get started:

Review your Azure Service Health dashboard and set up alerts.
If you need help getting started, check our Azure Service Health documentation.
We always welcome feedback. Submit your ideas or email us with any questions or comments at servicehealth@microsoft.com.

Source: Azure