Run your fault-tolerant workloads cost-effectively with Google Cloud Spot VMs

Modern applications such as microservices, containerized workloads, and horizontally scalable applications are engineered to persist even when the underlying machine does not. This architecture lets customers leverage Spot VMs to access Google’s idle capacity and run applications at the lowest price possible: Spot VMs are priced 60% to 91% below our on-demand VMs. Maximize cost optimization by integrating with Google Kubernetes Engine (GKE) Standard, and your scalable applications will seamlessly switch to on-demand resources when Google needs the Spot VM capacity back.

Available in Preview today, customers can begin deploying Spot VMs in their Google Cloud projects for:

Improved TCO: With a maximum discount of 91% over on-demand VMs, applications that can take advantage of Spot VMs will quickly see these savings add up. By combining Spot VMs with our Custom Machine Types and adding discounted Spot GPUs and Spot local SSDs, customers can maximize their TCO without sacrificing performance.

Better automation: Let GKE handle your deployments to seamlessly mix Spot VMs into your current infrastructure. Automatically scale up when Spot VMs are available, and gracefully terminate when a preemption occurs, ensuring that work gets done with minimal interruptions.

Ease of use and integration: Spot VMs are available globally and are a simple, one-line change to start using. The resources are yours until they need to be reclaimed, with no specific duration limits. Take advantage of the new Termination Action property to delete on preemption and clean up after use.

Spot VMs are available in every region and most Compute Engine VM families, with the same performance characteristics as on-demand VMs. The only difference is that Spot VMs offer a 60-91% discount because the resources may be reclaimed at any time with 30 seconds’ notice.

Spot VMs are a great fit for a variety of workloads in a broad spectrum of industries, including financial modeling, visual effects, genomics, forecasting, and simulations. Any workload that is fault tolerant or stateless should consider Spot VMs as a way to save up to 91% on VM costs! Getting started is as simple as adding --provisioning-model=SPOT to your instance request, and savings will begin immediately (see the sketch at the end of this post).

New dynamic pricing model for Spot VMs

To maximize savings for customers, Google is introducing a new dynamic pricing model that offers discounts from 60% to 91% off our on-demand VMs. This new pricing model ensures that everyone gets the best preemptible experience possible by applying a discount to each region based on rolling historical usage for the region. The discount amount may change up to once a month; it will always be at least 60% but can move up and down between 60% and 91% off our on-demand VMs. Customers will also be able to preview the pricing forecast for visibility into the next pricing change before it goes live. Starting today, we are announcing price drops for these VM families and locations, and dynamic prices will begin in 2022.

Preemptible VM instances created through --preemptible will continue to be supported, and preemptible VM customers will not need to make any changes to begin receiving the new pricing. However, preemptible VMs will continue to have a 24-hour limit; customers who want no maximum duration should switch to Spot VMs to avoid any limits.
To keep pricing as simple as possible, preemptible VMs will follow the same pricing as Spot VMs.

Building on our ecosystem, Google Kubernetes Engine (GKE), the leading platform for organizations looking for advanced container orchestration, will also leverage Spot VMs. GKE nodes using Spot VMs are a popular way for users to get the most out of their containerized workloads in a cost-effective way. In GKE, Spot nodes can be created by using --spot; preemptible nodes created using --preemptible will continue to be supported. Moreover, starting in GKE v1.21, enable_graceful_node_shutdown is enabled by default to ensure a smooth experience with Spot on GKE. When combined with custom machine types and GKE cost optimization best practices, customers using GKE Spot nodes can achieve even greater savings.

Spot by NetApp partnership

As part of our ongoing investment in Spot, we are also strengthening how the GCP ecosystem supports and builds on top of Spot VMs. We are pleased to announce our partnership with Spot.IO to ensure that our joint customers can take advantage of our best pricing ever.

“Spot.IO is excited about the market-leading combination of savings and predictability of Google Cloud’s new Spot VMs. Google’s Spot VMs will offer our joint customers more flexibility and versatility in automating cloud infrastructure workloads and create more opportunities to optimize cloud spend while accelerating cloud adoption across microservices, containers, and VM-based stateless and stateful applications.” —Amiram Shachar, VP and GM, Spot.IO

Get started

Spot VMs are available in Preview now. To get started, check out our Spot VM documentation for a deeper overview and instructions on creating Spot VMs in your project.
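For example, here is a minimal sketch of launching a Spot VM and a GKE Spot node pool from the gcloud CLI. The instance, cluster, and pool names are hypothetical, and exact flags may vary by gcloud release:

# Create a Spot VM that is deleted (not stopped) on preemption
gcloud compute instances create my-spot-vm \
    --zone=us-central1-a \
    --provisioning-model=SPOT \
    --instance-termination-action=DELETE

# Add a Spot node pool to an existing GKE Standard cluster
gcloud container node-pools create spot-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --spot

The --instance-termination-action flag corresponds to the Termination Action property described above.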
Source: Google Cloud Platform

Open data lakehouse on Google Cloud

For more than a decade, the technology industry has been searching for optimal ways to store and analyze vast amounts of data, in ways that can handle the variety, volume, latency, resilience, and varying data access requirements demanded by organizations.

Historically, organizations have implemented siloed and separate architectures: data warehouses used to store structured, aggregated data, primarily for BI and reporting, and data lakes used to store unstructured and semi-structured data in large volumes, primarily for ML workloads. This approach often resulted in extensive data movement, processing, and duplication, requiring complex ETL pipelines. Operationalizing and governing this architecture was challenging and costly, and it reduced agility. As organizations move to the cloud, they want to break these silos.

To address some of these issues, a new architecture choice has emerged: the data lakehouse, which combines key benefits of data lakes and data warehouses. This architecture offers low-cost storage in an open format accessible by a variety of processing engines like Spark, while also providing powerful management and optimization features.

At Google Cloud we believe in providing choice to our customers. Organizations that want to build their data lakehouse using only open source technologies can easily do so by using low-cost object storage provided by Google Cloud Storage, storing data in open formats like Parquet, with processing engines like Spark, and using frameworks like Delta, Iceberg, or Hudi through Dataproc to enable transactions. This open source based solution is still evolving, however, and requires significant effort in configuration, tuning, and scaling.

At Google Cloud, we provide a cloud-native, highly scalable, and secure data lakehouse solution that delivers choice and interoperability to customers. Our cloud-native architecture reduces cost and improves efficiency for organizations. Our solution is based on:

Storage: A choice of storage across low-cost object storage in Google Cloud Storage or highly optimized analytical storage in BigQuery.

Compute: Serverless compute that provides different engines for different workloads. BigQuery, our serverless cloud data warehouse, provides an ANSI SQL compatible engine that enables analytics on petabytes of data. Dataproc, our managed Hadoop and Spark service, enables using various open source frameworks. Serverless Spark allows customers to submit their workloads to a managed service that takes care of job execution. Vertex AI, our unified MLOps platform, enables building large-scale ML models with very limited coding. Additionally, you can use many of our partner products, like Databricks, Starburst, or Elastic, for various workloads.

Management: Dataplex enables a metadata-led data management fabric across data in Google Cloud Storage (object storage) and BigQuery (highly optimized analytical storage). Organizations can create, manage, secure, organize, and analyze data in the lakehouse using Dataplex.

Let’s take a closer look at some key characteristics of a data lakehouse architecture and how customers have been building this on GCP at scale.

Storage Optionality

At Google Cloud, our core principle is delivering an open platform. We want to provide customers with a choice of storing their data in low-cost object storage in Google Cloud Storage, in highly optimized analytical storage, or in other storage options available on GCP. We recommend organizations store their structured data in BigQuery Storage.
BigQuery Storage also provides a streaming API that enables organizations to ingest large amounts of data in real time and analyze it. We recommend storing unstructured data in Google Cloud Storage; in cases where organizations need to access their structured data in OSS formats like Parquet or ORC, they can store it on Google Cloud Storage as well. At Google Cloud we have invested in building a data lake storage API, also known as the BigQuery Storage API, to provide consistent capabilities for structured data across both the BigQuery and GCS storage tiers. This API enables users to access BigQuery Storage and GCS through any open source engine, like Spark or Flink. The Storage API also enables users to apply fine-grained access control on data in BigQuery storage and, coming soon, on GCS storage.

Serverless Compute

The data lakehouse enables organizations to break data silos and centralize data, which facilitates many different types of use cases across organizations. To get maximum value from data, Google Cloud allows organizations to use different execution engines, optimized for different workloads and personas, on top of the same data tiers. This is made possible by the complete separation of compute and storage on Google Cloud. Meeting users at their level of data access, whether SQL, Python, or more GUI-based methods, means that technological skills do not limit their ability to use data for any job. Data scientists may be working outside traditional SQL-based or BI tools. Because BigQuery has the Storage API, tools such as AI notebooks, Spark running on Dataproc, or Serverless Spark can easily be integrated into the workflow. The paradigm shift here is that the data lakehouse architecture supports bringing the compute to the data rather than moving the data around. With serverless Spark and BigQuery, data engineers can spend all their time on code and logic; they do not need to manage clusters or tune infrastructure. They submit SQL or PySpark jobs from their interface of choice, and processing is auto-scaled to match the needs of the job.

BigQuery leverages serverless architecture to enable organizations to run large-scale analytics using a familiar SQL interface. Organizations can use BigQuery SQL to run analytics on petabyte-scale datasets. In addition, BigQuery ML democratizes machine learning by letting SQL practitioners build models using existing SQL tools and skills, increasing development speed through familiar dialects and eliminating the need to move data.

Dataproc, Google Cloud’s managed Hadoop, can read data directly from lakehouse storage, BigQuery or GCS, run its computations, and write results back. In effect, users are free to choose where and how to store the data and how to process it, depending on their needs and skills. Dataproc enables organizations to leverage all major OSS engines like Spark, Flink, Presto, and Hive.

Vertex AI is a managed machine learning (ML) platform that allows companies to accelerate the deployment and maintenance of artificial intelligence (AI) models. Vertex AI natively integrates with BigQuery Storage and GCS to process both structured and unstructured data. It enables data scientists and ML engineers across all levels of expertise to implement machine learning operations (MLOps) and thus efficiently build and manage ML projects throughout the entire development lifecycle.
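To make the storage and compute options above concrete, here is a minimal, hedged sketch using the bq and gcloud CLIs. The bucket, dataset, and job names are hypothetical, and Serverless Spark flags may vary by release:

# Define an external table over Parquet files in Cloud Storage
bq mkdef --source_format=PARQUET "gs://my-lakehouse-bucket/events/*.parquet" > events_def.json
bq mk --external_table_definition=events_def.json mydataset.events_ext

# Query it with the same SQL engine used for native BigQuery tables
bq query --use_legacy_sql=false 'SELECT COUNT(*) FROM mydataset.events_ext'

# Submit a PySpark job to Serverless Spark over the same data
gcloud dataproc batches submit pyspark gs://my-lakehouse-bucket/jobs/transform.py --region=us-central1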
Intelligent data management and governance

The data lakehouse works to store data as a single source of truth, making minimal copies of the data. Consistent security and governance are key to any lakehouse. Dataplex, our intelligent data fabric service, provides data governance and security capabilities across the lakehouse storage tiers built on GCS and BigQuery. Dataplex uses metadata associated with the underlying data to enable organizations to logically organize their data assets into lakes and data zones. This logical organization can span data stored in BigQuery and GCS.

Dataplex sits on top of the entire data stack to unify governance and data management. It provides a unified data fabric that enables enterprises to intelligently curate, secure, and govern data at scale, with an integrated analytics experience. It provides automatic data discovery and schema inference across different systems, and complements this with automatic registration of metadata as tables and filesets into metastores. With built-in data classification and data quality checks in Dataplex, customers have access to data they can trust.

Data sharing: One of the key promises of evolved data lakes is that different teams and different personas can share data across the organization in a timely manner. To make this a reality and break organizational barriers, Google offers a layer on top of BigQuery called Analytics Hub. Analytics Hub provides the ability to create private data exchanges, in which exchange administrators (data curators) grant permissions to publish and subscribe to data in the exchange to specific individuals or groups, both inside the company and externally to business partners or buyers.

Open and flexible

In the ever-evolving world of data architectures and ecosystems, a growing suite of tools is being offered to enable data management, governance, scalability, and even machine learning. With promises of digital transformation and evolution, organizations often find themselves with sophisticated solutions that have a significant amount of bolted-on functionality. However, the ultimate goal should be to simplify the underlying infrastructure and enable teams to focus on their core responsibilities: data engineers make raw data more useful to the organization, and data scientists explore the data and produce predictive models so business users can make the right decisions for their domains.

Google Cloud has taken an approach anchored on openness, choice, and simplicity, and offers a planet-scale analytics platform that brings together two of the core tenets of enterprise data operations, data lakes and data warehouses, into a unified data ecosystem. The data lakehouse is a culmination of this architectural effort, and we look forward to working with you to enable it at your organization. For more insights on the lakehouse, you can read the full whitepaper here.
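As a closing illustration of the management layer described above, here is a hedged sketch of organizing a lake and zone with the gcloud CLI. The names are hypothetical, and depending on your gcloud release these commands may live under an alpha or beta surface:

# Create a logical lake, then a raw zone over single-region data
gcloud dataplex lakes create my-lake --location=us-central1
gcloud dataplex zones create raw-zone \
    --lake=my-lake \
    --location=us-central1 \
    --type=RAW \
    --resource-location-type=SINGLE_REGION

Once assets (GCS buckets or BigQuery datasets) are attached to the zone, Dataplex discovers and registers their metadata automatically.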
Source: Google Cloud Platform

Google Cloud Networking overview

How is the Google Cloud physical network organized? Google Cloud is divided into regions, which are further subdivided into zones. A region is a geographic area where the round trip time (RTT) from one VM to another is typically under 1 ms. A zone is a deployment area within a region that has its own fully isolated and independent failure domain. This means that no two machines in different zones or in different regions share the same fate in the event of a single failure. At the time of this writing, Google has more than 27 regions and more than 82 zones, serving users in 200+ countries, along with 146 network edge locations and CDN presence to deliver content. This is the same network that powers Google Search, Maps, Gmail, and YouTube.

Google network infrastructure

Google’s network infrastructure consists of three main types of networks:

A data center network, which connects all the machines in the network together. This includes hundreds of thousands of miles of fiber optic cable, including more than a dozen subsea cables.

A software-based private WAN that connects all data centers together.

A software-defined public WAN for user-facing traffic entering the Google network.

A machine is reached from the internet via the public WAN and connects to other machines on the network via the private WAN. For example, when you send a packet from your virtual machine running in the cloud in one region to a GCS bucket in another, the packet does not leave the Google network backbone. In addition, network load balancers and layer 7 reverse proxies are deployed at the network edge, which terminate the TCP/SSL connection at a location closest to the user, eliminating the two network round trips otherwise needed to establish an HTTPS connection.

Cloud networking services

Google’s physical network infrastructure powers the global virtual network that you need to run your applications in the cloud. It offers the virtual networking and tools needed to lift-and-shift, expand, and/or modernize your applications.

Connect

The first thing you need is to provision the virtual network, connect to it from other clouds or on-premises environments, and isolate your resources so other projects and resources cannot inadvertently access the network.

Hybrid Connectivity: Consider company X, which has an on-premises environment with a prod and a dev network. They would like to connect their on-premises environment with Google Cloud so resources and services can easily connect between the two environments. They can either use Cloud Interconnect for a dedicated connection or Cloud VPN for a connection via an IPsec secure tunnel. Both work, but the choice depends on how much bandwidth they need; for higher bandwidth and more data, Dedicated Interconnect is recommended. Cloud Router helps enable dynamic routes between the on-premises environment and the Google Cloud VPC. If they have multiple networks and locations, they could also use Network Connectivity Center to connect their different enterprise sites outside of Google Cloud by using the Google network as a wide area network (WAN).

Virtual Private Cloud (VPC): They deploy all their resources in a VPC, but one of the requirements is to keep the prod and dev environments separate. For this, the team needs Shared VPC, which allows them to connect resources from multiple projects to a common Virtual Private Cloud (VPC) network, so that they can communicate with each other securely and efficiently using internal IPs from that network.
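As a rough sketch of what this looks like with the gcloud CLI (project and network names are hypothetical, and flags may vary by release):

# Create a custom-mode VPC and a subnet for the prod environment
gcloud compute networks create prod-vpc --subnet-mode=custom
gcloud compute networks subnets create prod-subnet \
    --network=prod-vpc --region=us-central1 --range=10.0.0.0/20

# Enable Shared VPC on the host project and attach a service project
gcloud compute shared-vpc enable host-project-id
gcloud compute shared-vpc associated-projects add service-project-id \
    --host-project=host-project-id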
Cloud DNS: They use Cloud DNS to manage:

Public and private DNS zones

Public/private IPs within the VPC and over the internet

DNS peering

Forwarding

Split horizons

DNSSEC for DNS security

Scale

Scaling includes not only quickly scaling applications, but also enabling real-time distribution of load across resources in single or multiple regions, and accelerating content delivery to optimize last-mile performance.

Cloud Load Balancing: Quickly scale applications on Compute Engine with no pre-warming needed. Distribute load-balanced compute resources in single or multiple regions (and near users) while meeting high-availability requirements. Cloud Load Balancing can put resources behind a single anycast IP, scale up or down with intelligent autoscaling, and integrate with Cloud CDN.

Cloud CDN: Accelerate content delivery for websites and applications served out of Compute Engine with Google’s globally distributed edge caches. Cloud CDN lowers network latency, offloads origin traffic, and reduces serving costs. Once you’ve set up HTTP(S) load balancing, you can enable Cloud CDN with a single checkbox.

Secure

Networking security tools provide defense against infrastructure DDoS attacks, mitigate data exfiltration risks when connecting with services within Google Cloud, and offer network address translation to enable controlled internet access for resources without public IP addresses.

Firewall Rules: Let you allow or deny connections to or from your virtual machine (VM) instances based on a configuration that you specify. Every VPC network functions as a distributed firewall: while firewall rules are defined at the network level, connections are allowed or denied on a per-instance basis. You can think of the VPC firewall rules as existing not only between your instances and other networks, but also between individual instances within the same network.

Cloud Armor: Works alongside an HTTP(S) load balancer to provide built-in defenses against infrastructure DDoS attacks, along with IP-based and geo-based access control, support for hybrid and multi-cloud deployments, preconfigured WAF rules, and Named IP Lists.

Packet Mirroring: Useful when you need to monitor and analyze your security status. VPC Packet Mirroring clones the traffic of specific instances in your Virtual Private Cloud (VPC) network and forwards it for examination. It captures all traffic (ingress and egress) and packet data, including payloads and headers. The mirroring happens on the virtual machine (VM) instances, not on the network, which means it consumes additional bandwidth only on the VMs.

Cloud NAT: Lets certain resources without external IP addresses create outbound connections to the internet.

Cloud IAP: Helps teams work from untrusted networks without the use of a VPN. It verifies user identity and uses context to determine if a user should be granted access, guarding access to your on-premises and cloud-based applications.
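To ground a couple of these Secure building blocks, here is a minimal, hedged sketch of a firewall rule and a Cloud NAT setup with the gcloud CLI (network and router names are hypothetical):

# Allow internal traffic between instances on the prod network
gcloud compute firewall-rules create allow-internal \
    --network=prod-vpc --allow=tcp,udp,icmp --source-ranges=10.0.0.0/8

# Give VMs without external IPs outbound internet access via Cloud NAT
gcloud compute routers create nat-router --network=prod-vpc --region=us-central1
gcloud compute routers nats create nat-config \
    --router=nat-router --region=us-central1 \
    --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges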
Optimize

It’s important to keep a watchful eye on network performance to make sure the infrastructure is meeting your performance needs. This includes visualizing and monitoring network topology, performing diagnostic tests, and assessing real-time performance metrics.

Network Service Tiers: Premium Tier delivers traffic from external systems to Google Cloud resources by using Google’s low-latency, highly reliable global network, while Standard Tier routes traffic over the internet. Choose Premium Tier for performance and Standard Tier as a low-cost alternative.

Network Intelligence Center: Provides a single console for Google Cloud network observability, monitoring, and troubleshooting.

Modernize

As you modernize your infrastructure, adopt microservices-based architectures, and expand your use of containerization, you will need tools that can help manage the inventory of your heterogeneous services and route traffic among them.

GKE networking (plus on-premises in Anthos): When you use GKE, Kubernetes and Google Cloud dynamically configure IP filtering rules, routing tables, and firewall rules on each node, depending on the declarative model of your Kubernetes deployments and your cluster configuration on Google Cloud.

Traffic Director: Helps you run microservices in a global service mesh (outside of your cluster). This separation of application logic from networking logic helps you improve your development velocity, increase service availability, and introduce modern DevOps practices in your organization.

Service Directory: A platform for discovering, publishing, and connecting services, regardless of the environment. It provides real-time information about all your services in a single place, enabling you to perform service inventory management at scale, whether you have a few service endpoints or thousands.

For a more in-depth look into Google Cloud networking products, check out this overview. For more #GCPSketchnote, follow the GitHub repo. For similar cloud content follow me on Twitter @pvergadia and keep an eye out on thecloudgirl.dev.
Source: Google Cloud Platform

Beta IPv6 Support on Docker Hub Registry

At Docker we’re all about our community, so we listened to your excitement about Docker Hub support for IPv6 on the public roadmap, and now we are pleased to introduce beta IPv6 support for the Docker Hub Registry! This means that if you’re on an IPv6-only network, you can now opt in to using the registry directly, with no NAT64 gateway.

Internet Protocol version 4 (IPv4), in use since the 1980s, can no longer meet the world’s growing demand for globally unique address space, and the IPv4 address pool will eventually be depleted. IPv6 was created as a replacement for IPv4, and it is anticipated that it will become the new internet protocol standard. This move not only increases access to Docker Hub, but also positions Hub to remain easily accessible as the world transitions to IPv6.

Figure: IPv6 adoption of Google users

Docker will now be one of the few container registries that supports IPv6. This update enables more of our community to use the world’s most popular container registry, while making sure Docker Hub is positioned to support our users in the next stage of the internet.

What does this mean for you?

IPv4 Users: Your access to Hub does not change

Dualstack Users: Can choose between IPv4 or IPv6 endpoints

Dualstack users will now be able to use the new IPv6-only endpoints while in beta. At a future point in time, the primary endpoints will also support IPv6.

IPv6-Only Users: Able to access the new IPv6-only domain

IPv6-only users will now be able to use the beta IPv6 endpoint without the need for a NAT64 gateway!

How to use the beta IPv6-only endpoint

If you are on a network with IPv6 support, you can begin using the IPv6-only endpoint registry.ipv6.docker.com! To log in to this new endpoint, simply run the following command (using your regular Docker Hub credentials):

docker login registry.ipv6.docker.com

Once logged in, prefix the image you wish to push or pull with the IPv6-only endpoint. For example, if you wish to pull the official ubuntu image, instead of running the following:

docker pull ubuntu:latest

you will run:

docker pull registry.ipv6.docker.com/library/ubuntu:latest

Note: the library namespace is only used for official images; replace it with the appropriate namespace when applicable. For example, pulling docker/dockerfile becomes:

docker pull registry.ipv6.docker.com/docker/dockerfile:latest
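Pushing works the same way: tag the image with the IPv6-only endpoint and push it. Assuming a hypothetical repository myuser/myapp:

docker tag myapp:latest registry.ipv6.docker.com/myuser/myapp:latest

docker push registry.ipv6.docker.com/myuser/myapp:latest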

This endpoint is only supported for pushes and pulls from the Docker Hub Registry with the Docker CLI; Docker Desktop is not supported. The Docker Hub website and other systems will see updates for IPv6 in the future based on what we learn here.

Please note that this new endpoint is only a beta: there is no guarantee of functionality or uptime, and it will be removed in the future. Do not use this endpoint for anything other than testing.

Implementation

Updating networking infrastructure correctly and in an automated fashion on a high-traffic network such as Docker Hub requires precision, delicacy, and rigorous testing. A significant number of changes were made across our Amazon Web Services (AWS) network resources and routing stack in order to support IPv6. To give an idea of the process involved, here are some notable highlights:

Rate Limiting

In order to prevent abuse and to enforce our Docker Hub rate limiting, we limit requests based on a user’s IP address. Previously, we rate-limited based on the full 32-bit IPv4 address. To keep this consistent, we now limit based on full IPv4 addresses and the first 64 bits (the /64 prefix) of IPv6 addresses.
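To illustrate what “the first 64 bits” means in practice (this is just an illustration, not Docker’s actual implementation), the /64 network prefix of an IPv6 address can be derived like this:

# Normalize an example IPv6 address to its /64 prefix
python3 -c "import ipaddress; print(ipaddress.ip_interface('2001:db8:85a3:8d3:1319:8a2e:370:7348/64').network)"
# prints: 2001:db8:85a3:8d3::/64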

We also updated our allowlist systems, which provide our large organization customers and cloud partners with unlimited access to Hub downloads. Similarly, our regulatory blocklist system was updated to include IPv6 addresses.

Load Balancing

For IPv6 connections, we’ve provisioned brand new Network Load Balancers (NLBs), which handle all AAAA (IPv6) traffic. These give us more performance and better scalability.

Likewise, our application load balancer configurations were updated to understand IPv6 addresses, pass those along properly to the backend applications, and correctly create logs and metrics based on those.

Software Compatibility

Docker Hub receives billions of requests per day, all of which are logged so that we can ensure access compliance and security, and to give us better debugging capabilities. Because of this, our tooling and configuration required updates to ensure our logs are consistent across both IPv4 and IPv6.

Alongside logging, some applications needed an update to support dualstack endpoints, in particular our distribution service, which is now providing IPv6 access to our blob storage! Code changes were made to the registry middleware and authentication services to make sure we could serve IPv6 requests across the whole registry push/pull flow.

The Future

We’re thrilled that more users (specifically on IPv6-only networks) will now have better access to Docker Hub! We’re also happy to be supporting the internet and our industry as we step into this new IP space.

If you have feedback on this beta release, please let us know here: https://github.com/docker/hub-feedback/issues/2165
Source: https://blog.docker.com/feed/

Sharing AWS Audit Manager custom frameworks is now generally available

AWS Audit Manager now offers the ability to share custom frameworks securely and easily across AWS accounts and Regions. This gives you immediate access to your custom frameworks across multiple AWS accounts, without having to manually copy or move the underlying custom controls. Sharing a custom framework provides fast access to the shared framework, so your users always see the most current and consistent information you provide. You can use the custom framework sharing feature in your AWS Audit Manager account at no additional cost.
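As a hedged sketch of what this might look like from the AWS CLI (the framework ID and account are placeholders; check your CLI version for the exact operation and flags):

# Share a custom framework with another account and Region
aws auditmanager start-assessment-framework-share \
    --framework-id a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 \
    --destination-account 123456789012 \
    --destination-region us-west-2 \
    --comment "Sharing our custom compliance framework"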
Source: aws.amazon.com

Amazon Connect introduces API to programmatically configure hours of operation

Amazon Connect now provides an API to programmatically create and manage hours of operation. With this API, you can configure hours of operation that can be used in contact flows to decide which queue contacts should be routed to. You can also now delete hours of operation that are no longer needed via the delete API. For more information, see the API documentation.
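For example, a minimal, hedged sketch with the AWS CLI (the instance ID and values are placeholders):

# Create hours of operation covering Monday 9:00-17:00 Eastern
aws connect create-hours-of-operation \
    --instance-id 12345678-aaaa-bbbb-cccc-EXAMPLE00000 \
    --name "Weekday Support" \
    --time-zone "America/New_York" \
    --config '[{"Day":"MONDAY","StartTime":{"Hours":9,"Minutes":0},"EndTime":{"Hours":17,"Minutes":0}}]'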
Source: aws.amazon.com

Amazon Connect launches AWS CloudFormation support for users, user hierarchy groups, and hours of operation

Amazon Connect now supports AWS CloudFormation for three new resource types: users, user hierarchy groups, and hours of operation. You can now use AWS CloudFormation templates to deploy these Amazon Connect resources, along with the rest of your AWS infrastructure, in a secure, efficient, and repeatable way. You can also use these templates to ensure consistency across different Amazon Connect instances. For more information, see the Amazon Connect resource type reference in the AWS CloudFormation User Guide.
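As a hedged sketch, a minimal template for one of these resource types might look like the following (the instance ARN is a placeholder; check the resource type reference for exact property names):

# Write a template defining hours of operation, then deploy it
cat > connect-hours.yaml <<'EOF'
Resources:
  WeekdayHours:
    Type: AWS::Connect::HoursOfOperation
    Properties:
      InstanceArn: arn:aws:connect:us-east-1:123456789012:instance/EXAMPLE
      Name: Weekday Support
      TimeZone: America/New_York
      Config:
        - Day: MONDAY
          StartTime: { Hours: 9, Minutes: 0 }
          EndTime: { Hours: 17, Minutes: 0 }
EOF
aws cloudformation deploy --stack-name connect-hours --template-file connect-hours.yaml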
Source: aws.amazon.com

AWS Load Balancer Controller version 2.3 now available with support for ALB IPv6 targets

The AWS Load Balancer Controller provides a Kubernetes-native way to configure and manage Elastic Load Balancers that route traffic to applications running in Kubernetes clusters. Elastic Load Balancing offers several load balancers, all of which deliver the high availability, automatic scaling, and robust security necessary to make your applications fault tolerant.
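To sketch what IPv6 (dualstack) targets look like in practice, a hedged example Ingress for the controller might be annotated as follows (the service name is a placeholder; consult the controller documentation for your version):

# Define a dualstack ALB Ingress, then apply it
cat > demo-ingress.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/ip-address-type: dualstack
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-svc
                port:
                  number: 80
EOF
kubectl apply -f demo-ingress.yaml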
Source: aws.amazon.com