Serverless load balancing with Terraform: The hard way

Earlier this year, we announced Cloud Load Balancer support for Cloud Run. You might wonder: aren't Cloud Run services already load-balanced? Yes, each *.run.app endpoint load-balances traffic across an autoscaling set of containers. However, with the Cloud Load Balancing integration for serverless platforms, you can now fine-tune lower levels of your networking stack. In this article, we will explain the use cases for this type of setup and build an HTTPS load balancer from the ground up for Cloud Run using Terraform.

Why use a load balancer for Cloud Run?

Every Cloud Run service comes with a load-balanced *.run.app endpoint that is secured with HTTPS, and Cloud Run also lets you map your custom domains to your services. However, if you want to customize other details of how your load balancing works, you need to provision a Cloud HTTP load balancer yourself. Here are a few reasons to run your Cloud Run service behind a Cloud Load Balancer:

- Serve static assets with a CDN, since Cloud CDN integrates with Cloud Load Balancing.
- Serve traffic from multiple regions: Cloud Run is a regional service, but you can provision a load balancer with a global anycast IP and route users to the closest available region.
- Serve content from mixed backends; for example, your /static path can be served from a storage bucket while /api goes to a Kubernetes cluster.
- Bring your own TLS certificates, such as wildcard certificates you might have purchased.
- Customize networking settings, such as the TLS versions and ciphers supported.
- Authenticate and enforce authorization for specific users or groups with Cloud IAP (this does not yet work with Cloud Run, however; stay tuned).
- Configure WAF or DDoS protection with Cloud Armor.

The list goes on; Cloud HTTP Load Balancing has quite a lot of features.

Why use Terraform for this?

The short answer is that a Cloud HTTP load balancer consists of many networking resources that you need to create and connect to one another; there is no single "load balancer" object in the GCP APIs. To understand the upcoming task, let's take a look at the resources involved:

- a global IP address for your load balancer
- a Google-managed SSL certificate (or bring your own)
- forwarding rules to associate the IP address with the target proxies
- a target HTTPS proxy to terminate your HTTPS traffic
- a target HTTP proxy to receive HTTP traffic and redirect it to HTTPS
- URL maps to specify routing rules for URL path patterns
- a backend service to keep track of eligible backends
- a network endpoint group allowing you to register serverless apps as backends

As you might imagine, it is very tedious to provision and connect these resources just to achieve a simple task like enabling CDN. You could write a bash script with the gcloud command-line tool to create them, but it would be cumbersome to handle corner cases, such as a resource that already exists or one that was modified manually later, and you would also need to write a cleanup script to delete what you provisioned. This is where Terraform shines: it lets you declaratively configure cloud resources and create or destroy your stack in different GCP projects efficiently with just a few commands.

Building a load balancer: The hard way

The goal of this article is to intentionally show you the hard way, walking through each resource involved in creating a load balancer for Cloud Run in the Terraform configuration language. We'll start with a few Terraform variables:

- var.name: used for naming the load balancer resources
- var.project: the GCP project ID
- var.region: the region to deploy the Cloud Run service in
- var.domain: a domain name for your managed SSL certificate

First, let's define these variables along with our Terraform provider.
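A minimal sketch of that scaffolding follows. The provider configuration is standard, but the default values here are illustrative assumptions rather than the article's exact choices:

```hcl
terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
    }
  }
}

provider "google" {
  project = var.project
  region  = var.region
}

variable "name" {
  description = "Prefix used for naming the load balancer resources"
  default     = "myservice" # illustrative default
}

variable "project" {
  description = "The GCP project ID"
}

variable "region" {
  description = "Region to deploy the Cloud Run service in"
  default     = "us-central1" # illustrative default
}

variable "domain" {
  description = "Domain name for the Google-managed SSL certificate"
}
```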
Then, let's deploy a new Cloud Run service named "hello" with the sample image, and allow unauthenticated access to it. (If you manage your Cloud Run deployments outside Terraform, that's perfectly fine: you can still use the equivalent data source to reference that service in your configuration.) Next, we'll reserve a global IPv4 address for our global load balancer, and create a managed SSL certificate that is issued and renewed for you by Google. If you want to bring your own SSL certificates, you can create your own google_compute_ssl_certificate resource instead.

Then, make a network endpoint group (NEG) out of your serverless service, and create a backend service that will keep track of these network endpoints. If you want to configure load-balancing features such as CDN, Cloud Armor, or custom headers, the google_compute_backend_service resource is the right place.

Then, create an empty URL map that has no routing rules and sends all traffic to the backend service we created earlier. Next, configure an HTTPS proxy to terminate the traffic with the Google-managed certificate and route it to the URL map. Finally, configure a global forwarding rule to route the HTTPS traffic on the IP address to the target HTTPS proxy. After writing this module, create an output variable that lists your IP address.

When you apply these resources and point your domain's DNS records at this IP address, a huge machinery starts rolling its wheels. Soon, Google Cloud will verify your domain-name ownership and start issuing a managed TLS certificate for your domain. After the certificate is issued, the load balancer configuration will propagate to all of Google's edge locations around the globe. This might take a while, but once it completes, your service is served from Google's global edge.

Astute readers will notice that so far this setup cannot handle unencrypted HTTP traffic: any requests that come over port 80 are dropped, which is not great for usability. To mitigate this, you need to create a new set of resources, a URL map that redirects to HTTPS, a target HTTP proxy, and one more forwarding rule, all of which appear alongside the rest of the configuration in the consolidated sketch below.
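Pulled together, a hedged sketch of the whole configuration might look like the following. The resource arguments track the google provider's documented schema, but the resource names, the sample image, and the redirect settings are illustrative rather than the article's verbatim gist:

```hcl
# Cloud Run service running the public "hello" sample image.
resource "google_cloud_run_service" "default" {
  name     = "hello"
  location = var.region

  template {
    spec {
      containers {
        image = "gcr.io/cloudrun/hello"
      }
    }
  }
}

# Allow unauthenticated access to the service.
resource "google_cloud_run_service_iam_member" "public" {
  service  = google_cloud_run_service.default.name
  location = google_cloud_run_service.default.location
  role     = "roles/run.invoker"
  member   = "allUsers"
}

# Reserved global IPv4 address for the load balancer.
resource "google_compute_global_address" "default" {
  name = "${var.name}-address"
}

# Google-managed certificate, issued and renewed automatically.
resource "google_compute_managed_ssl_certificate" "default" {
  name = "${var.name}-cert"
  managed {
    domains = [var.domain]
  }
}

# Serverless NEG registering the Cloud Run service as a backend.
resource "google_compute_region_network_endpoint_group" "default" {
  name                  = "${var.name}-neg"
  region                = var.region
  network_endpoint_type = "SERVERLESS"
  cloud_run {
    service = google_cloud_run_service.default.name
  }
}

# Backend service; CDN, Cloud Armor, and custom headers would be configured here.
resource "google_compute_backend_service" "default" {
  name = "${var.name}-backend"
  backend {
    group = google_compute_region_network_endpoint_group.default.id
  }
}

# Empty URL map: no routing rules, all traffic goes to the backend service.
resource "google_compute_url_map" "default" {
  name            = "${var.name}-urlmap"
  default_service = google_compute_backend_service.default.id
}

# HTTPS proxy terminating TLS with the managed certificate.
resource "google_compute_target_https_proxy" "default" {
  name             = "${var.name}-https-proxy"
  url_map          = google_compute_url_map.default.id
  ssl_certificates = [google_compute_managed_ssl_certificate.default.id]
}

# Forwarding rule tying the reserved IP and port 443 to the HTTPS proxy.
resource "google_compute_global_forwarding_rule" "https" {
  name       = "${var.name}-https"
  target     = google_compute_target_https_proxy.default.id
  ip_address = google_compute_global_address.default.address
  port_range = "443"
}

# Second URL map whose only job is redirecting HTTP to HTTPS.
resource "google_compute_url_map" "https_redirect" {
  name = "${var.name}-https-redirect"
  default_url_redirect {
    https_redirect = true
    strip_query    = false
  }
}

resource "google_compute_target_http_proxy" "default" {
  name    = "${var.name}-http-proxy"
  url_map = google_compute_url_map.https_redirect.id
}

resource "google_compute_global_forwarding_rule" "http" {
  name       = "${var.name}-http"
  target     = google_compute_target_http_proxy.default.id
  ip_address = google_compute_global_address.default.address
  port_range = "80"
}

output "load_balancer_ip" {
  value = google_compute_global_address.default.address
}
```

From here, terraform apply provisions the entire chain, and pointing your domain's A record at the load_balancer_ip output completes the setup.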
As we are nearing 150 lines of Terraform configuration, you have probably realized by now that this is indeed the hard way to get a load balancer for your serverless applications. If you would like to try out this example, feel free to obtain a copy of this Terraform configuration file from this gist and adapt it to your needs.

Building a load balancer: The easy way

To address the complexity of this experience, we have been designing a new Terraform module specifically to skip the hard parts of deploying serverless applications behind a Cloud HTTPS Load Balancer. Stay tuned for the next article, where we take a closer look at this new Terraform module and show you how much easier this can get.

Related article: Global HTTP(S) Load Balancing and CDN now support serverless compute. Our App Engine, Cloud Run, and Cloud Functions serverless compute offerings can now take advantage of global load balancing and Cloud CDN.
Source: Google Cloud Platform

Welcome Canonical to Docker Hub and the Docker Verified Publisher Program

Today, we are thrilled to announce that Canonical will distribute its free and commercial software through Docker Hub as a Docker Verified Publisher. Canonical and Docker will partner to ensure that hardened free and commercial Ubuntu images are available to all developer software supply chains for multi-cloud app development. 

Canonical is the publisher of the Ubuntu OS and a global provider of enterprise open source software for use cases ranging from cloud to IoT. Canonical's Ubuntu is one of the most popular Docker Official Images on Docker Hub, with over one billion images pulled. With Canonical as a Docker Verified Publisher, developers who pull Ubuntu images from Docker Hub can be confident they are getting the latest images, backed by both Canonical and Docker. 

The Ideal Container Registry for Multi-Cloud 

Canonical is the latest publisher to choose Docker Hub for globally sharing its container images. With millions of users, Docker Hub is the world's largest container registry, ensuring Canonical can reach its developers regardless of where they build and deploy their applications. 

This partnership covers both free and commercial Canonical LTS images, so developers can confidently pull the latest images straight from the source without concern for rate limits. 

Canonical chose Docker Hub as the primary distribution channel for its Ubuntu images for three key reasons: 

Canonical wanted a container registry with developer ubiquity, simple integrations with developer automation tooling, and an independent, un-opinionated registry provider. Canonical wants to enable developers to build new apps on top of Ubuntu with maximum flexibility and optionality for where their apps will run, both today and tomorrow. These qualities of Docker Hub fit well with Canonical's focus on delivering secure, trusted images to customers through image provenance and ongoing maintenance and updates. 

As a Docker Verified Publisher, Canonical joins a list of over 200 ISVs using Docker Hub to distribute their software to developers where they get their work done. When Docker Hub users see the Docker “Verified Publisher” mark, they know that the containers they are pulling come straight from, and are supported by, the ISV publisher.

With 13 billion container image pulls per month from nearly 8 million repositories by over 11 million developers, Docker Hub is the industry's leading container registry. Docker Hub delivers developers the largest breadth and depth of container images and plays a central role in building and sharing cloud-native applications. Docker Verified Publishers like Canonical ensure that the millions of Docker developers can easily and confidently find images and get to the business of app innovation. 

As part of this agreement, Docker and Canonical will also collaborate in the coming months on the Ubuntu versions of Docker Official Images to extend the quality of these already trusted and widely used images.

You can get more information about Canonical’s announcement here, or browse the Canonical LTS offerings on Docker Hub. Software publishers and ISVs interested in joining the Docker Verified Publisher program can get more information by filling out this form.
Source: https://blog.docker.com/feed/

Announcing context management on Amazon Lex

We are excited to announce the availability of context management on Amazon Lex. Conversations involve managing context across multiple turns as the interaction evolves, and bots need to understand how a conversation's context progresses in order to respond appropriately. Previously, you had to write code to carry context via session attributes, and depending on the intent that was fulfilled, that code had to orchestrate invoking the next intent. Starting today, Lex supports context management natively, so you can manage context directly from the console. With the context management capability, you can easily control when an intent should be activated. For example, consider a user who asks "What were my charges this month?" and then stays in the same billing context in the following turn: "And what about last month?" Using context natively allows you to build a sophisticated multi-turn conversational experience without writing any code. In addition, you can now set default slot values; a default can be set to a constant, an active context attribute, or a session attribute.
Source: aws.amazon.com

AWS Managed Microsoft AD adds automated multi-Region replication

AWS Directory Service for Microsoft Active Directory (also known as AWS Managed Microsoft AD) now supports automated multi-Region replication of your directory. You can now deploy a single AWS Managed Microsoft AD (Enterprise Edition) directory across multiple AWS Regions, which makes it easier and more cost-effective for you to deploy your Microsoft Windows and Linux workloads globally. With the automated multi-Region replication capability, you get higher resilience while your applications use a local directory for optimal performance. 
Source: aws.amazon.com

Amazon Kinesis Data Analytics now supports the Apache Flink Dashboard

Amazon Kinesis Data Analytics for Apache Flink now provides access to the Apache Flink Dashboard, giving you better visibility into your applications and advanced monitoring capabilities. You can now view your Apache Flink application's environment variables, more than 120 metrics, logs, and the application's directed acyclic graph (DAG) in a simple, contextualized user interface.
Source: aws.amazon.com

Amazon Kinesis Data Analytics now supports Apache Flink v1.11

You can now build and run streaming applications using Apache Flink version 1.11 in Amazon Kinesis Data Analytics for Apache Flink. Apache Flink v1.11 offers improvements to the Table and SQL API, a unified, relational API for stream and batch processing that acts as a superset of the SQL language and is purpose-built for working with Apache Flink. Apache Flink v1.11 also includes a new memory model and RocksDB optimizations for increased application stability, as well as support for Task Manager stack traces in the Apache Flink Dashboard.
Source: aws.amazon.com