Welcome back to our blog post series on Service Mesh and Istio. In our previous posts, we talked about what the Istio service mesh is, and why it matters. Then, we dove into demos on how to bring Istio into production, from safe application rollouts and security, to SRE monitoring best practices. Today, leading up to Google Cloud NEXT ‘19, we’re talking all about using Istio across environments, and how Istio can help you unlock the power of hybrid cloud.

Why hybrid?

Hybrid cloud can take on many forms. Typically, hybrid cloud refers to operating across public cloud and private (on-premises) cloud, while multi-cloud means operating across multiple public cloud platforms.

Adopting a hybrid or multi-cloud architecture can provide many benefits for your organization. For instance, using multiple cloud providers helps you avoid vendor lock-in and lets you choose the best cloud services for your goals. Using both cloud and on-premises environments lets you simultaneously enjoy the benefits of the cloud (flexibility, scalability, reduced costs) and of on-prem (security, lower latency, hardware re-use). And if you’re looking to move to the cloud for the first time, adopting a hybrid setup lets you do so at your own pace, in the way that works best for your business.

Based on our experience at Google, and what we hear from our customers, we believe that adopting a hybrid service mesh is key to simplifying application management, security, and reliability across cloud and on-prem environments, whether your applications run in containers or in virtual machines. Let’s talk about how to use Istio to bring that hybrid service mesh into reality.

Hybrid Istio: a mesh across environments

One key feature of Istio is that it provides a services abstraction for your workloads (Pods, Jobs, VM-based applications).
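As a concrete sketch of what that abstraction enables, an Istio VirtualService can split traffic between versions of a logical service without callers ever knowing where the underlying workloads run. The service and subset names below are illustrative, not from the demos:

```yaml
# Hypothetical example: callers address "reviews" as one logical service;
# Istio routes 90% of traffic to subset v1 and 10% to subset v2.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

Because clients only see the abstract host name, the same policy keeps working whether the backing workloads are Pods in one cluster, Pods in another, or VM-based applications.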
When you move to a hybrid topology, this services abstraction becomes even more crucial, because now you have not just one, but many environments to worry about. When you adopt Istio, you get all the management benefits for your microservices on one Kubernetes cluster: visibility, granular traffic policies, unified telemetry, and security. But when you adopt Istio across multiple environments, you are effectively giving your applications new superpowers. Istio is not just a services abstraction on Kubernetes. It is also a way to standardize networking across your environments, to centralize API management, and to decouple JWT validation from your code. It’s a fast track to a secure, zero-trust network across cloud providers.

So how does all this magic happen? Hybrid Istio refers to a set of sidecar Istio proxies (Envoys) that sit next to all your services across your environments (every VM, every container) and know how to talk to each other across boundaries. These Envoy sidecars might be managed by one central Istio control plane, or by multiple control planes running in each environment.

Let’s dive into some examples.

Multicluster Istio, one control plane

One way to enable hybrid Istio is to configure a remote Kubernetes cluster that “calls home” to a centrally running Istio control plane. This setup is useful if you have multiple GKE clusters in the same GCP project, and Kubernetes pods in both clusters need to talk to each other. Use cases for this include: production and test clusters through which you canary new features, standby clusters ready to handle failover, or redundant clusters across zones or regions.

This demo spins up two GKE clusters in the same project, but across two different zones (us-central and us-east). We install the Istio control plane on one cluster, and Istio’s remote components (including the sidecar proxy injector) on the other cluster.
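The cluster layout for a setup like this can be sketched with gcloud. The cluster names, zones, and node counts here are illustrative, not the exact values from the demo:

```shell
# Two GKE clusters in the same project, in different zones.
gcloud container clusters create cluster-1 \
  --zone us-central1-b --num-nodes 4
gcloud container clusters create cluster-2 \
  --zone us-east1-b --num-nodes 4

# Fetch credentials so kubectl can target each cluster when
# installing the Istio control plane and the remote components.
gcloud container clusters get-credentials cluster-1 --zone us-central1-b
gcloud container clusters get-credentials cluster-2 --zone us-east1-b
```

Because both clusters live in the same project, their Pods can share a network, which is what makes the single-control-plane topology possible.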
From there, we can deploy a sample application spanning both Kubernetes clusters.

The exciting thing about this single control plane approach is that we didn’t have to change anything about how our microservices talk to each other. For instance, the Frontend can still call CartService with a local Kubernetes DNS name (cartservice:port). This DNS resolution works because GKE pods in the same GCP project belong to the same virtual network, allowing direct pod-to-pod communication across clusters.

Multicluster Istio, two control planes

Now that we have seen a basic multicluster Istio example, let’s take it a step further with another demo.

Say you’re running applications on-prem and in the cloud, or across cloud platforms. For Istio to span these different environments, pods inside both clusters must be able to cross network boundaries.

This demo uses two Istio control planes, one per cluster, to form a single, two-headed logical service mesh. Rather than having the sidecar proxies talk directly to each other, traffic moves across clusters through Istio’s ingress gateways. An Istio gateway is just another Envoy proxy, but it’s specifically dedicated to traffic in and out of a single-cluster Istio mesh.

For this setup to work across a network partition, each Istio control plane has a special Domain Name System (DNS) configuration. In this dual-control-plane topology, Istio installs a secondary DNS server (CoreDNS) which resolves domain names for services outside of the local cluster. For those outside services, traffic moves between the Istio ingress gateways, then onwards to the relevant service.

In the demo for this topology, we show how this installation works, then how to configure the microservices running across both clusters to talk to each other. We do this through the Istio ServiceEntry resource. For instance, we deploy a service entry for the Frontend (cluster 2) into cluster 1.
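For illustration, a ServiceEntry of that kind might look like the following. The `.global` host suffix, the virtual IP, and the cluster 2 gateway address are assumptions based on Istio’s gateway-connected multicluster pattern, not values taken from the demo:

```yaml
# Hypothetical ServiceEntry, deployed into cluster 1, that makes the
# Frontend service running in cluster 2 addressable from cluster 1.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: frontend-cluster-2
spec:
  hosts:
  - frontend.default.global    # resolved by the secondary DNS server (CoreDNS)
  location: MESH_INTERNAL      # treated as part of the mesh, not an external site
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
  addresses:
  - 240.0.0.2                  # arbitrary, non-routable VIP for the remote service
  endpoints:
  - address: 35.190.0.10       # example: cluster 2's Istio ingress gateway IP
    ports:
      http: 15443              # gateway port dedicated to cross-cluster traffic
```

Traffic to `frontend.default.global` then leaves cluster 1 through its egress path, arrives at cluster 2’s ingress gateway, and is forwarded to the Frontend pods.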
In this way, cluster 1 knows about services running in cluster 2.

Unlike the first demo, this dual-control-plane Istio setup does not require a flat network between clusters. This means you can have overlapping GKE pod CIDRs between your clusters. All this setup requires is that the Istio gateways are exposed to the Internet. In this way, the services inside each cluster can stay safe in their own respective environments.

Adding a virtual machine to the Istio mesh

Many organizations use virtual machines (VMs) to run their applications, instead of (or in addition to) containers. If you’re using VMs, you can still enjoy the benefits of an Istio mesh. This demo shows you how to integrate a Google Compute Engine instance with Istio running on GKE. We deploy the same application as before, but this time, one service (ProductCatalog) runs on an external VM, outside of the Kubernetes cluster.

This GCE VM runs a minimal set of Istio components so that it can communicate with the central Istio control plane. We then deploy an Istio ServiceEntry object to the GKE cluster, which logically adds the external ProductCatalog service to the mesh.

This Istio configuration model is useful because now all the other microservices can reference ProductCatalog as if it were running inside the Kubernetes cluster. From here, you could even add Istio policies and rules for ProductCatalog as if it were running in Kubernetes; for instance, you could enable mutual TLS for all inbound traffic to the VM.

Note that while this demo uses a Google Cloud VM for demo purposes, you could run this same example on bare metal, or with an on-prem VM. In this way, you can bring Istio’s modern, cloud-native principles to virtual machines running anywhere.

Building the hybrid future

We hope that one or more of these hybrid Istio demos resonates with the way your organization runs applications today.
But we also understand that adopting a service mesh like Istio means taking on complexity and installation overhead, in addition to any complexity associated with moving to microservices and Kubernetes. Adopting a hybrid service mesh is even more complex, because you’re dealing with different environments, each with its own technical specifications.

Here at Google Cloud, we are dedicated to helping you simplify your day-to-day cloud operations with a consistent, modern, cross-platform setup. That’s why we created Istio on GKE, which provides a one-click install of Istio on Google Kubernetes Engine (GKE). And it’s the driving force behind our work on Cloud Services Platform (CSP), a product to help your organization move to (and across) the cloud, at your own pace and in the way that works best for you. CSP relies on an open cloud stack, Kubernetes and Istio, to emphasize portability. We are excited to make CSP a reality this year.

Thank you for joining us in the service mesh series so far. Stay tuned for the Keynotes and Hybrid Cloud track at Google Cloud NEXT in April. After NEXT, we’ll continue the series with a few advanced posts on Istio operations.
Source: Google Cloud Platform