New GCP region in Zurich: Growing our support for Swiss and European businesses

Our Google Cloud Platform (GCP) region in Zurich is now live and ready for business. Our sixth European region and nineteenth overall, this new region gives companies doing business in Switzerland more opportunities with lower latency access to their data and workloads.

A cloud made for Switzerland

Designed to support Swiss and European customers, the Zurich GCP region (europe-west6) comes with three availability zones, enabling high availability workloads. Hybrid cloud customers can seamlessly integrate new and existing deployments with help from our regional partner ecosystem, and via two dedicated interconnect points of presence.

The launch of the Zurich region brings lower latency access to GCP products and services for organizations doing business in Switzerland. Hosting applications in the new region can improve latency for end users in Switzerland by up to 10ms. Visit GCPing.com to see latency to the Zurich region from wherever you happen to be.

The Zurich region launches with our standard set of products, including Compute Engine, Google Kubernetes Engine, Cloud Bigtable, Cloud Spanner, and BigQuery.

To take advantage of many GCP services, you first have to get your data into the cloud. Transfer Appliance is a high-capacity server that lets you transfer large amounts of data to GCP quickly and securely, and it’s coming to the Swiss market. We recommend Transfer Appliance if you’re moving large quantities of data that would take more than a week to upload. You can request a Transfer Appliance here.

This region comes with Cloud Interconnect, our private, software-defined network that provides a fast and reliable link between each region around the world. You can use services that aren’t presently available within the Zurich region via the Google network, and combine them with other GCP services deployed around the world. 
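The "more than a week to upload" rule of thumb for choosing Transfer Appliance can be made concrete with a quick estimate. The sketch below is illustrative only: the function names and the 80% sustained-link-utilization assumption are ours, not part of any Google tooling.

```python
def upload_days(data_gb: float, bandwidth_mbps: float, utilization: float = 0.8) -> float:
    """Estimate the days needed to upload `data_gb` over a link of
    `bandwidth_mbps`, assuming a sustained fraction `utilization` of
    the nominal bandwidth (0.8 is an illustrative default)."""
    # 1 GB = 8000 megabits (decimal units, as network links are rated)
    seconds = (data_gb * 8 * 1000) / (bandwidth_mbps * utilization)
    return seconds / 86400


def recommend_transfer_appliance(data_gb: float, bandwidth_mbps: float) -> bool:
    """Apply the rule of thumb: offline transfer is worth it when the
    online upload would take more than a week."""
    return upload_days(data_gb, bandwidth_mbps) > 7
```

For example, 100 TB over a 1 Gbps link at 80% utilization works out to roughly 11.5 days, so the rule of thumb points to Transfer Appliance; a single terabyte on the same link uploads in a few hours.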
That lets you quickly deploy and scale across multiple regions with products designed for organizations with a global footprint.

Celebrating with Swiss customers

We kicked off the new region with a special event in Zurich with over 800 business leaders and developers in attendance. SVP of Technical Infrastructure Urs Hölzle officially opened the region. Customers from pharmaceutical, manufacturing, and financial businesses all over Switzerland and Europe learned about GCP and how the local region can benefit their cloud operations.

What customers are saying

“Swiss-AS dedicates its business exclusively to the support of AMOS, a leading aviation maintenance and engineering software suite. Today, Google Cloud Platform enables us to deliver our AMOS Cloud Service as a fully dedicated cloud environment worldwide. Now with GCP’s local presence in Zurich, we can bring our service even closer to our AMOS customers based in German-speaking countries.”
– Alexis Rapior, Hosting Team, Swiss AviationSoftware ltd.

“The new Swiss cloud region opens up exciting opportunities for the health sector. It will enable Balgrist University Hospital to introduce new real-time processing technologies. Collaborations in medical research and development will also be easier and more effective.”
– Thomas Huggler, Executive Director, University Hospital Balgrist

“We are very excited about the arrival of Google Cloud Platform in Switzerland. With Google Cloud, we can focus our efforts on developing innovative new software features for our customers. It gives us the opportunity to have new environments ready within seconds.”
– Marc Loosli, Chief Innovation, NeXora AG (part of the Quickline Group)

“Belimo is the leading global manufacturer of actuators, valves, and sensors used in heating, ventilation, and air conditioning (HVAC) systems. 
Recently, IoT technologies have allowed us to offer HVAC systems controlled by cloud-connected devices, which deliver additional comfort, energy efficiency, safety, and ease of installation and maintenance. Belimo chose GCP because we depend on high availability, reliable performance, and scalability for our global cloud services. The cutting-edge technology and tools from GCP help our teams focus on the essential.”
– Peter Schmidlin, Chief Innovation Officer, Belimo Automation AG

What partners are saying

“Wabion is more than just excited to see Google Cloud coming to Switzerland. Frankly, I believe this is the best thing that could happen to the Swiss cloud market. We have customers who are very interested in Google’s innovation but haven’t migrated because of the lack of a Swiss hub. The new Zurich region closes this gap, unlocking huge opportunities for Wabion to help customers on their Google Cloud journey.”
– Michael Gomez, Co-Manager, Wabion

What’s next

For more details about this region, please visit our Zurich region page, where you’ll get access to free resources, whitepapers, the “Cloud On-Air” on-demand video series, and more. If you’re new to GCP, check out Best Practices for Compute Engine Region Selection and contact sales to get started on GCP today.

We are launching more GCP zones and regions later this year, starting with Osaka. Our locations page provides updates on the availability of additional services and regions.
Quelle: Google Cloud Platform


Azure Stack IaaS – part four

Protect your stuff

In this post, we’ll cover the concepts and best practices to protect your IaaS virtual machines (VMs) on Azure Stack. This post is part of the Azure Stack Considerations for Business Continuity and Disaster Recovery white paper.

Protecting your IaaS virtual machine based applications

Azure Stack is an extension of Azure that lets you deliver IaaS Azure services from your organization’s datacenter. Consuming IaaS services from Azure Stack requires a modern approach to business continuity and disaster recovery (BC/DR). If you’re just starting your journey with Azure and Azure Stack, make sure to work through a comprehensive BC/DR strategy so your organization understands the immediate and long-term impact of modernizing applications in the context of cloud. If you already have Azure Stack, keep in mind that each application must have a well-articulated BC/DR plan calling out the resiliency, reliability, and availability requirements that meet the business needs of your organization.

What Azure Stack is and what it isn’t

Since launching Azure Stack at Ignite 2017, we’ve received feedback from many customers on the challenges they face within their organization evangelizing Azure Stack to their end customers. The main concerns are the stark differences from traditional virtualization. In the context of modernizing BC/DR practices, three misconceptions stand out:

Azure Stack is just another virtualization platform

Azure Stack is delivered as an appliance on prescriptive hardware co-engineered with our integrated system partners. Your focus must be on the services delivered by Azure Stack and the applications your customers will deploy on the system. You are responsible for working with your application teams to define how they will achieve high availability, backup and recovery, disaster recovery, and monitoring in the context of modern IaaS, separate from the infrastructure running the services.

I should be able to use the same virtualization protection schemes with Azure Stack

Azure Stack is delivered as a sealed system with multiple layers of security to protect the infrastructure. Constraints include:

Azure Stack operators only have constrained administrative access to the system. Elevated access to the system is only possible through Microsoft support.
Scale unit nodes and infrastructure services have code integrity enabled.
At the networking layer, the traffic flow defined in the switches is locked down at deployment time using access control lists.

Given these constraints, there is no opportunity to install backup/replication agents on the scale-unit nodes, grant access to the nodes from an external device for replication and snapshotting, or physically attach external storage devices for storage level replication to another site.

Another ask from customers is the possibility of deploying one Azure Stack scale-unit across multiple datacenters or sites. Azure Stack doesn’t support a stretched or multi-site topology for scale-units. In a stretched deployment, the expectation is that nodes in one site can go offline with the remaining nodes in the secondary site available to continue running applications. From an availability perspective, Azure Stack only supports N-1 fault tolerance, so losing half of the node count will take the system offline. In addition, based on how scale-units are configured, Azure Stack only supports fault domains at a node level. There is no concept of a site within the scale-unit.

I am not deploying modern applications in Azure, none of this applies to me

Azure Stack is designed to offer cloud services in your datacenter. There is a clear separation between the operation of the infrastructure and how IaaS VM-based applications are delivered. Even if you’re not planning to deploy any applications to Azure, deploying to Azure Stack is not “business as usual” and will require thinking through the BC/DR implications throughout the entire lifecycle of your application.

Define your level of risk tolerance

With the understanding that Azure Stack requires a different approach to BC/DR for your IaaS VM-based applications, let’s look at the implications of having one or more Azure Stack systems, the physical and logical constructs in Azure Stack, and the recovery objectives you and your application owners need to focus on.

How far apart will you deploy Azure Stack systems

Let’s start by defining the impact radius you want to protect against in the event of a disaster. This can be as small as a rack in a co-location facility or as large as an entire region of a country or continent. Within the impact radius, you can choose to deploy one or more Azure Stack systems. If the region is large enough, you may even have multiple datacenters close together, each with Azure Stack systems. The key takeaway is that if the site goes offline due to a disaster or catastrophic event, no amount of redundancy will keep the Azure Stack systems online. If your intent is to survive the loss of an entire site, as the diagram below shows, then you must consider deploying Azure Stack systems into multiple geographic locations separated by enough distance that a disaster in one location does not impact the others.

Help your application owners understand the physical and logical layers of Azure Stack

Next it’s important to understand the physical and logical layers that come together in an Azure Stack environment. The Azure Stack system running all the foundational services and your applications physically reside within a rack in a datacenter. Each deployment of Azure Stack is a separate instance or cloud with its own portal. The diagram below shows the physical and logical layering that’s common for all Azure Stack systems deployed today and for the foreseeable future.


Define the recovery time objectives for each application with your application owners

Now that you have a clear understanding of your risk tolerance if a system goes offline, you need to decide the protection schemes for your applications. You need to make sure you can quickly recover applications and data on a healthy system. We’re talking about making sure your applications are designed to be highly available within a scale-unit using availability sets to protect against hardware failures. In addition, you should also consider the possibility of an application going offline due to corruption or accidental deletion. Recovery can be as simple as scaling-out an application or restoring from a backup.

To survive an outage of the entire system, you’ll need to identify the availability requirements of each application, where the application can run in the event of an outage, and what tools you need to introduce to enable recovery. If your application can run temporarily in Azure, you can use services like Azure Site Recovery and Azure Backup to protect your application. Another option is to have additional Azure Stack systems fully deployed, operational, and ready to run applications. The time required to get the application running on a secondary system is the recovery time objective (RTO). This objective is established between you and the application owners. Some application owners will only tolerate minimal downtime while others are ok with multiple days of downtime if the data is protected in a separate location. Achieving this RTO will differ from one application to another. The diagram below summarizes the common protection schemes used at the VM or application level.
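The RTO conversation above can be sketched as a simple matching exercise. The scheme names and recovery-time estimates below are made up for illustration; real figures depend on data volume, tooling, and whether a secondary system is already deployed.

```python
# Hypothetical recovery-time estimates (in hours) for the protection
# schemes discussed above; these numbers are illustrative only.
RECOVERY_HOURS = {
    "restore-from-backup": 48,   # rebuild the app and restore data
    "failover-to-azure": 4,      # e.g., Azure Site Recovery failover
    "warm-standby-stack": 1,     # second Azure Stack, app pre-deployed
}


def feasible_schemes(rto_hours: float) -> list[str]:
    """Return the protection schemes whose estimated recovery time
    meets the application's RTO, sorted by name."""
    return sorted(s for s, h in RECOVERY_HOURS.items() if h <= rto_hours)
```

An application owner who can tolerate six hours of downtime would be limited to the failover and warm-standby options in this example, while an RTO measured in days also allows plain backup restore.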


In the event of a disaster, there will be no time to request an on-demand deployment of Azure Stack to a secondary location. If you don’t have a deployed system in a secondary location, you will need to order one from your hardware partner. The time required to deliver, install, and deploy the system is measured in weeks.

Establish the offerings for application and data protection

Now that you know what you need to protect on Azure Stack and your risk tolerance for each application, let’s review some specific patterns used with IaaS VMs.

Data protection

Applications deployed into IaaS VMs can be protected at the guest OS level using backup agents. Data can be restored to the same IaaS VM, to a new VM on the same system, or a different system in the event of a disaster. Backup agents support multiple data sources in an IaaS VM such as:

Disk: This requires block-level backup of one, some, or all disks exposed to the guest OS. It protects the entire disk and captures any changes at the block level.
File or folder: This requires file system-level backup of specific files and folders on one, some, or all volumes attached to the guest OS.
OS state: This requires backup targeted at the OS state.
Application: This requires a backup coordinated with the application installed in the guest OS. Application-aware backups typically include quiescing input and output in the guest for application consistency (for example, Volume Shadow Copy Service (VSS) in the Windows OS).

Application data replication

Another option is to use replication at the guest OS level or at the application level to make data available in a different system. The replication isn’t offloaded to the underlying infrastructure; it’s handled at the guest OS level or above. For example, applications like SQL Server support asynchronous replication in a distributed availability group.

High availability

For high availability, you need to start by understanding the data persistence model of your applications:

Stateful workloads write data to one or more repositories. It’s necessary to understand which parts of the architecture need point-in-time data protection and high availability to recover from a catastrophic event.
Stateless workloads on the other hand don’t contain data that needs to be protected. These workloads typically support on-demand scale-up and scale-down and can be deployed in multiple locations in a scale-out topology behind a load balancer.

To support application level high availability within an Azure Stack system, multiple virtual machines are grouped into an availability set. Applications deployed in an availability set sit behind a load balancer that distributes incoming traffic randomly among multiple virtual machines.
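The platform handles fault-domain placement for the VMs in an availability set automatically. As a rough illustration of the idea (this is not an Azure API, just a sketch of the placement principle), spreading instances round-robin across fault domains bounds how many a single node failure can take down:

```python
def assign_fault_domains(vms: list[str], fault_domains: int) -> dict[str, int]:
    """Round-robin VM instances across fault domains, so losing one
    domain takes down at most ceil(len(vms) / fault_domains) instances.
    Illustrative sketch of availability-set placement, not an Azure API."""
    if fault_domains < 1:
        raise ValueError("need at least one fault domain")
    return {vm: i % fault_domains for i, vm in enumerate(vms)}
```

With four web VMs over three fault domains, for example, no single domain holds more than two instances, so the load balancer always has healthy backends after a node failure.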

Across Azure Stack systems, a similar approach is possible, with two differences: the load balancer must be external to both systems or in Azure (for example, Traffic Manager), and availability sets do not span independent Azure Stack systems.

Conclusion

Deploying your IaaS VM-based applications to Azure and Azure Stack requires a comprehensive evaluation of your BC/DR strategy. “Business as usual” is not enough in the context of cloud. For Azure Stack, you need to evaluate the resiliency, availability, and recoverability requirements of the applications separate from the protection schemes for the underlying infrastructure.

You must also reset end-user expectations, starting with the agreed-upon SLAs. Customers onboarding their VMs to Azure Stack will need to agree to the SLAs that are possible on Azure Stack. For example, Azure Stack will not meet the zero-data-loss requirements of some mission-critical applications that rely on storage-level synchronous replication between sites. Take the time to identify these requirements early on and build a successful track record of onboarding new applications to Azure Stack with the appropriate level of protection and disaster recovery.

Learn more

Azure Stack Considerations for Business Continuity and Disaster Recovery white paper
Backup and data recovery for Azure Stack with the Infrastructure Backup Service
List of all the BC/DR partners with validated offers for Azure Stack
Azure Backup support for Azure Stack
Azure Site Recovery support for Azure Stack
Understanding architectural patterns and practices for business continuity and disaster recovery on Microsoft Azure Stack
Configure multiple virtual machines in an availability set for redundancy
Availability Sets in Azure Stack
Tutorial: Create and deploy highly available virtual machines

In this blog series

We hope you come back to read future posts in this series. Here are some of our planned upcoming topics:

Azure Stack at its core is an Infrastructure-as-a-Service (IaaS) platform
Start with what you already have
Foundation of Azure Stack IaaS
Do it yourself
Pay for what you use
It takes a team
If you do it often, automate it
Build on the success of others
Journey to PaaS

Quelle: Azure

Cloud Commercial Communities webinar and podcast newsletter–March 2019

Welcome to the Cloud Commercial Communities monthly webinar and podcast update. Each month the team focuses on core programs, updates, trends, and technologies that Microsoft partners and customers need to know to succeed with Azure and Dynamics. Make sure you catch a live webinar and participate in the live Q&A. If you miss a session, you can review it on demand. Also consider subscribing to the industry podcasts to keep up to date with industry news.

Upcoming in March 2019

Webinars

Getting Started with High Performance Computing (HPC)
Tuesday, March 19, 2019 10AM Pacific
Commandments of Outstanding Presentation Slides
Thursday, March 21, 2019 9AM Pacific
Launching the New H Series of High-Performance Computing (HPC) Clusters
Tuesday, March 26, 2019 10AM Pacific
Securely Migrating to Azure with F5
Wednesday, March 27, 2019 10AM Pacific

Podcasts

Changing Everything for Retailers
Thursday, March 7, 2019
Growing a culture of innovation at FIS with InnovateIN48
Thursday, March 21, 2019

Recap for February 2019

Webinars

Optimize Your Marketplace Listing with Featured Apps and Services
Tuesday, February 5, 2019 11:00 AM PST
Do you have an application or service listed on Azure Marketplace or AppSource? Looking to make your listing more discoverable by customers? Discoverability in Azure Marketplace and AppSource can be optimized in a variety of ways. Join this session to learn how you can gain more visibility for your listings by optimizing content, using keywords, and adding trials, and what Microsoft looks for in Featured Apps and Featured Services on Azure Marketplace and AppSource.
Leveraging Free Azure Sponsorship to Grow Your Business on Azure
Tuesday, February 12, 2019 10:00 AM PST
Microsoft has made significant investments in our partners and customers to help them meet today’s complex business challenges and drive business growth. Through Microsoft Azure Sponsorship, partners and customers can get access to free Azure based on their deployment and technical needs. Azure Sponsorship is available to new and existing Azure customers looking to try new partner solutions, and to partners working to build their solutions on Azure.
Get the Most Out of Azure with Azure Advisor
Tuesday, February 19, 2019 10:00 AM PST
Azure Advisor is a free Azure service that analyzes your configurations and usage and provides personalized recommendations to help you optimize your resources for high availability, security, performance, and cost. In this demo-heavy webinar, you’ll learn how to review and remediate Azure Advisor recommendations so you can stay on top of Azure best practices and get the most out of your Azure investment both for your own organization and your customers.
Incidents, Maintenance, and Health Advisories: Stay Informed with Azure Service Health
Tuesday, February 26, 2019 10:00 AM PST
Azure Service Health is a free Azure service that provides personalized alerts and guidance when Azure service issues affect you. It notifies you, helps you understand the impact to your resources, and keeps you updated as the issue is resolved. It can also help you prepare for planned maintenance and changes that could affect the availability of your resources. In this demo-heavy webinar, you’ll learn how to use Azure Service Health to keep both your organization and your customers informed about Azure service incidents.
Introducing a New Approach to Learning: Microsoft Learn
Wednesday, February 27, 2019 11:00 AM PST
At Microsoft Ignite 2018, Microsoft launched an exciting new learning platform called Microsoft Learn. During this session, we will provide a demo and overview of the platform, the inspiration and vision behind its design, and how we have adapted training to modern learning styles.

Podcasts

The full lifecycle of implementing IoT with PTC
Thursday, Feb 7, 2019
Running an eCommerce system in Azure (and more)
Friday, Feb 22, 2019

Check out recent podcast episodes at the Microsoft industry experiences team podcast page.
Quelle: Azure

Azure Data Box family now enables import to Managed Disks

The Azure Data Box offline family lets you transfer hundreds of terabytes of data to Microsoft Azure in a quick, inexpensive, and reliable manner. We are excited to share that support for managed disks is now available across the Azure Data Box family of devices, which includes Data Box, Data Box Disk, and Data Box Heavy.

With managed disks support on Data Box, you can now move your on-premises virtual hard disks (VHDs) as managed disks in Azure with one simple step. This allows you to save a significant amount of time in lift and shift migration scenarios.

How do managed disks work with the Data Box solution?

The Data Box family supports the following managed disk types: Premium SSD, Standard SSD, and Standard HDD. When you place your order for any of the Data Box data transfer solutions in the Azure portal, you can now select managed disks as your storage destination and specify the resource groups for ingestion. You will be asked to select a staging storage account, which is used to stage VHDs as page blobs and then convert them to managed disks.

When your Data Box device arrives, it will have shares or folders corresponding to the selected resource groups. These shares or folders are further broken down by managed disk storage type: Premium SSD, Standard SSD, and Standard HDD. Copying your data to the target managed disk type is as easy as copying the VHDs to the corresponding folders using a utility like Robocopy, or simply dragging and dropping.
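That copy step can be sketched in a few lines. The folder names below mirror the per-disk-type folders described above, but the helper itself is hypothetical, not part of any Data Box tooling; in practice you would point it at the mounted Data Box share.

```python
import shutil
from pathlib import Path

# Disk-type folder names assumed to match the shares the Data Box
# exposes per resource group (illustrative; check your device's shares).
DISK_TYPE_FOLDERS = ("PremiumSSD", "StandardSSD", "StandardHDD")


def copy_vhd(vhd: Path, share_root: Path, disk_type: str) -> Path:
    """Copy a VHD into the folder for the chosen managed disk type,
    creating the folder if needed, and return the destination path."""
    if disk_type not in DISK_TYPE_FOLDERS:
        raise ValueError(f"unknown disk type: {disk_type}")
    target_dir = share_root / disk_type
    target_dir.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy2(vhd, target_dir / vhd.name))
```

Placing a VHD in the Standard HDD folder, for instance, is what tells the service to convert it to a Standard HDD managed disk during ingestion.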

For more information on moving data to managed disks, please refer to the following:

Data Box documentation for managed disks, “Tutorial: Use Data Box to import data as managed disks in Azure.”
Data Box Disk documentation for managed disks, “Tutorial: Copy data to Azure Data Box Disk and verify.”

You can also place an order for a Data Box today and import your VHDs as managed disks. Please continue to provide your valuable thoughts and comments by posting on Azure Feedback.
Quelle: Azure