The service mesh era: Securing your environment with Istio

Welcome to the third installment of our series on the Istio service mesh. So far, we’ve discussed the benefits of using a service mesh like Istio and also demonstrated how to deploy applications and manage traffic. In this post, we’ll look at something that keeps IT professionals up at night: security.

Not only are threats increasing and becoming more diverse, but microservices-based environments also introduce some unique challenges. These challenges include many points of entry, multiple protocols, and the fact that security vulnerabilities in one service tend to get replicated as code is reused. One of Istio’s more important value propositions, then, is how it can help ease the burden of securing your microservices environments without sacrificing developer time.

Istio on Google Kubernetes Engine (GKE) helps with these security goals in a few ways. First, it provides defense in depth: it integrates with your existing security systems to provide multiple layers of protection. Second, it’s the foundation for a zero-trust network, where trust and access are determined by identity and other controls rather than by presence inside the network perimeter. Security experts agree that zero-trust environments are more secure than traditional “castle-and-moat” security models, so you can build secure applications on otherwise untrusted networks. Finally, it provides this security by default: you don’t need to change your application code or infrastructure to turn it on.

The best way to demonstrate the value of the Istio security layer is to show it in action. Specifically, let’s look at Istio on GKE and how you can use authentication (who a service or user is, and whether we can trust that they are who they say they are) and authorization (what specific permissions that user or service has). Together, these protect your environment from security threats like “man-in-the-middle” attacks and keep your sensitive data safe.
As you read, you can follow along with this hands-on demo.

Authentication with Mutual TLS

Man-in-the-middle attacks are an increasingly common way for bad actors to intercept, and potentially change, communications between two parties by rerouting those communications through their own system. The parties can be an end user, a device and a server, or almost any two systems. There are a couple of things we can do to combat this risk.

Mutual TLS (mTLS) authentication helps prevent man-in-the-middle attacks and other potential breaches by securing service communication from end to end. As the name suggests, mutual TLS means that both communicating parties authenticate each other simultaneously, and it can secure both service-to-service and end-user-to-service communication. It also ensures that all communication is encrypted in transit, and a service using mTLS detects and rejects any request that has been compromised.

While mTLS is an important security tool, it’s often difficult and time-consuming to manage. To start, you have to create, distribute, and rotate keys and certificates for each service. You then need to ensure you are properly implementing mTLS on all of your clients and servers. And when you adopt a microservices architecture, you have even more services, which means more and more keys and certificates to manage. Finally, rolling out your own public key infrastructure (PKI) can be time-consuming and risky.

Istio on GKE supports mTLS and can help ease many of these challenges. Istio uses the Envoy sidecar proxy to enforce mTLS and requires no code changes to implement. It automates key and certificate management, including generation, distribution, and rotation, while allowing interoperability across clusters and clouds by giving each service a strong identity. You can easily enable Istio mTLS on GKE today by choosing an mTLS option from a simple dropdown menu.

Permissive mode is the default. It allows services in your mesh to accept both encrypted and unencrypted traffic.
In this mode, all your services send unencrypted calls by default, but you can override these defaults for any specific services you choose. This makes permissive mode a great option if you still have services that must accept unencrypted traffic.

When you select strict mTLS mode, Istio on GKE enforces mTLS encryption between all the services and control plane components in your service mesh by default: all calls are encrypted, and services won’t accept unencrypted traffic. This means that if you have services that still send or receive unencrypted traffic, enabling strict mTLS may, in fact, break your application. As with permissive mode, you can override these defaults with destination-specific rules.

Many organizations choose to first enable permissive mTLS for the entire namespace, and then transition to strict mode on a service-by-service basis. This is one of the major benefits of Istio: it lets you adopt mTLS service by service, or turn it on and off for your whole mesh. This incremental adoption model lets you implement the security features of mTLS without breaking anything.

To enable mTLS incrementally, you first need a Policy for inbound traffic and a DestinationRule for outbound traffic. The YAML and instructions you need are here. Encrypting every service in your namespace is a very similar process: just set up another Policy and DestinationRule, this time for the full namespace, and apply them.

It’s also easy to then add another level of security through end-user authentication, a.k.a. origin authentication, using JSON Web Tokens in addition to mTLS. You can also see this in the demo.

Authorization tools to protect your data

In a world with increasing security threats, keeping your critical information, like private customer data, safe and secure is a mission-critical activity. A major step toward keeping this data secure is making sure that only the right people and services can access, change, delete, and add to it.
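To make the Policy/DestinationRule pairing described above concrete, here is a sketch using the Istio 1.x authentication and networking APIs. The `demo` namespace is hypothetical; the demo linked above contains the authoritative YAML.

```yaml
# Namespace-wide authentication Policy: services in "demo" accept
# only mTLS traffic (use mode PERMISSIVE under peers/mtls to accept
# both plaintext and mTLS during migration).
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: demo          # hypothetical namespace
spec:
  peers:
  - mtls: {}
---
# Matching DestinationRule: clients send mTLS when calling any
# service in the "demo" namespace.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: demo
spec:
  host: "*.demo.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```

Applying both with `kubectl apply -f` covers the two halves of the handshake: the Policy governs what servers accept, and the DestinationRule governs what clients send. For a single service, you would scope the Policy with a `targets` field and the DestinationRule `host` to that service instead.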
This is easier said than done, and gets into the “what” side of the equation: what are users and services allowed to do?

Istio authorization, which is based on Kubernetes role-based access control (RBAC), provides access control for the services in your mesh at multiple levels of granularity. At its most basic, Istio RBAC maps subjects to roles. An Istio authorization policy involves defining groups of permissions for accessing services (the ServiceRole specification), and then determining which users, groups, and services get those specific access permissions (the ServiceRoleBinding specification). A ServiceRole contains a list of permissions, while a ServiceRoleBinding assigns a specific ServiceRole to a list of subjects.

When you’re configuring Istio authorization policies, you can specify a wide range of permission groups and grant access to them at the level that makes sense, down to the individual user. The demo shows how this structure makes it easy to enable authorization on an entire namespace by applying the RBAC resources to the cluster. Having a strict access policy for each role in your system helps ensure that only those who are supposed to access your critical data can do so.

We hope this tour of Istio’s security features demonstrated how Istio makes it easier for you to implement and manage a comprehensive microservices security strategy that makes sense for your organization.

What’s next

To try out the Istio security features we discussed here, head over to the demo. In our next post, we’ll take a deep dive into observability, tracing, and SLOs using Istio and Stackdriver.

Learn more:

Istio security overview
Mutual TLS Deep Dive
Using Istio to Improve End-to-End Security
Micro-Segmentation with Istio Authorization
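As a concrete illustration of the ServiceRole/ServiceRoleBinding pairing described above, here is a sketch using the Istio 1.x RBAC API. The namespace, service, and service-account names are hypothetical; the demo contains the authoritative resources.

```yaml
# ServiceRole: a named bundle of permissions; here, read-only
# (GET) access to one service.
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRole
metadata:
  name: details-viewer
  namespace: demo        # hypothetical namespace and names
spec:
  rules:
  - services: ["details.demo.svc.cluster.local"]
    methods: ["GET"]
---
# ServiceRoleBinding: grants that role to a list of subjects,
# here a single workload identified by its service account.
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
  name: bind-details-viewer
  namespace: demo
spec:
  subjects:
  - user: "cluster.local/ns/demo/sa/frontend"  # service-account identity
  roleRef:
    kind: ServiceRole
    name: details-viewer
```

With these applied, only the `frontend` workload can issue GET requests to `details`; requests from any other identity, or using any other method, are denied once RBAC enforcement is enabled for the namespace.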
Quelle: Google Cloud Platform

Preview: Distributed tracing support for IoT Hub

Most IoT solutions, including our Azure IoT reference architecture, use several different services. An IoT message, starting from the device, could flow through a dozen or more services before it is stored or visualized. If something goes wrong in this flow, it can be very challenging to pinpoint the issue: how do you know where messages are being dropped? For example, say you have an IoT solution that uses five different Azure services and 1,500 active devices. Each device sends ten device-to-cloud messages per second (for a total of 15,000 messages per second), but you notice that your web app sees only 10,000 messages per second. Where is the issue? How do you find the culprit?

To completely understand the flow of messages through IoT Hub, you must trace each message's path using unique identifiers. This process is called distributed tracing. Today, we're announcing distributed tracing support for IoT Hub, in public preview.

Get started with distributed tracing support for IoT Hub

With this feature, you can:

Precisely monitor the flow of each message through IoT Hub using trace context. This trace context includes correlation IDs that let you correlate events from one component with events from another. It can be applied to all of your IoT device messages, or to just a subset of them, using the device twin.
Automatically log the trace context to Azure Monitor diagnostic logs.
Measure and understand message flow and latency from devices to IoT Hub and routing endpoints.
Start considering how you want to implement distributed tracing for the non-Azure services in your IoT solution.

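The per-device opt-in mentioned above happens through a desired property on the device twin. The sketch below is based on the preview documentation; the property name (an encoded form, since twin keys cannot contain certain characters) and the sampling fields may change as the preview evolves, so treat the linked documentation as authoritative.

```json
{
  "properties": {
    "desired": {
      "azureiot*com^dtracing^1": {
        "sampling_mode": 1,
        "sampling_rate": 100
      }
    }
  }
}
```

Here `sampling_mode: 1` turns tracing on for the device, and `sampling_rate` is the percentage of messages to sample (100 traces every message). Because this lives in the twin, you can enable tracing for a single device, a query-selected subset, or your whole fleet without redeploying device code.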
In the public preview, the feature will be available for IoT Hubs created in select regions.

To get started:

Follow our documentation, “Trace Azure IoT device-to-cloud messages with distributed tracing (preview).”
Check out the C sample code.
Give us feedback via UserVoice.

Quelle: Azure