Gateway Load Balancer now generally available in all regions

Previously, we announced the public preview release of Gateway Load Balancer (GWLB), a new SKU of Azure Load Balancer designed for transparent insertion of network virtual appliances (NVAs) and supported by a growing list of NVA providers. Placing NVAs in the path of traffic is an increasingly common need for customers as their workloads scale. Common NVA use cases we’ve seen include:

Allowing or blocking specific IPs using virtual firewalls.
Protecting applications from DDoS attacks.
Analyzing or visualizing traffic patterns.

And GWLB now offers the following benefits for NVA scenarios:

Source IP preservation.
Flow symmetry.
Lightweight NVA management at scale.
Auto-scaling with Azure Virtual Machines Scale Sets (VMSS).

With GWLB, bump-in-the-wire service chaining becomes easy to add on to new or existing architectures in Azure. This means customers can easily “chain” a new GWLB resource to both Standard Public Load Balancers and individual virtual machines with Standard Public IPs, covering scenarios involving both highly available, zonally resilient deployments and simpler workloads.

Figure 1: GWLB can be associated to multiple consumer resources, including both Standard Public Load Balancers and Virtual Machines with Standard Public IPs. When GWLB is chained to the front-end configuration or VM NIC IP configuration, unfiltered traffic from the internet will first be directed to the GWLB and then reach the configured NVAs. The NVAs will then inspect the traffic and send the filtered traffic to the final destination, the consumer application hosted on either the load balancer or virtual machine.

What’s new with Gateway Load Balancer

GWLB shares most of its concepts with the Standard Load Balancers that customers are familiar with today. You’ll find most of the same components, such as frontend IPs, load balancing rules, backend pools, health probes, and metrics, but you’ll also see a new component unique to GWLB: VXLAN tunnel interfaces.

VXLAN is an encapsulation protocol utilized by GWLB. This allows traffic packets to be encapsulated and decapsulated with VXLAN headers as they traverse the appropriate data path, all while maintaining their original source IP and flow symmetry without requiring Source Network Address Translation (SNAT) or other complex configurations like user-defined routes (UDRs).

The VXLAN tunnel interfaces are configured as part of the GWLB’s back-end pool and enable the NVAs to isolate “untrusted” traffic from “trusted” traffic. Tunnel interfaces can either be internal or external and each backend pool can have up to two tunnel interfaces. Typically, the external interface is used for “untrusted” traffic—traffic coming from the internet and headed to the appliance. Correspondingly, the internal interface is used for “trusted” traffic—traffic going from your appliances to your application.
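
As a rough sketch of how these pieces are configured, the Azure CLI can create a Gateway SKU load balancer and add tunnel interfaces to its backend pool. All resource names below are illustrative, and the VXLAN identifier and port values shown are the conventional defaults rather than requirements:

$ az network lb create --resource-group my-rg --name my-gwlb \
    --sku Gateway --vnet-name my-vnet --subnet my-subnet \
    --backend-pool-name my-nva-pool --frontend-ip-name my-gwlb-frontend

# Add an external tunnel interface for "untrusted" traffic arriving
# from the internet (an internal interface handles "trusted" traffic)
$ az network lb address-pool tunnel-interface add \
    --resource-group my-rg --lb-name my-gwlb \
    --address-pool my-nva-pool --type External \
    --protocol VXLAN --identifier 801 --port 10801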

Contoso case study

To better understand the use case for GWLB, let’s dive deeper into the example of retail company Contoso.

Who is Contoso?

Contoso is a retail company that uses Azure Load Balancer today to make the web servers supporting their retail platform regionally resilient. In the past few years, they’ve experienced exponential growth and now serve over 20 million visitors per month. When faced with the need to scale their retail platform, they chose Azure Load Balancer for its high performance coupled with ultra-low latency. As a result of their success, they’ve begun to adopt stricter security practices to protect customer transactions and reduce the risk of harmful traffic reaching their platforms.

What does Contoso’s architecture look like today?

One of their load balancers supporting the eastus region is called contoso-eastus and has a frontend IP configuration with the public IP 101.22.46.2. Today, traffic headed to 101.22.46.2 on port 80 is distributed to the backend instances on port 80 as well.
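
As a point of reference, a rule like that could be expressed with the Azure CLI roughly as follows (the resource group and rule name here are illustrative):

$ az network lb rule create --resource-group contoso-rg \
    --lb-name contoso-eastus --name http-rule \
    --protocol Tcp --frontend-port 80 --backend-port 80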

What’s the problem?

The security team recently identified some potentially malicious IP addresses that have been attempting to access their retail platform. As a result, they’re looking to place a network-layer virtual firewall to protect their applications from IP addresses with poor reputations.

What’s the plan?

Contoso has decided to go with a third-party NVA vendor whose appliances the team has used in other contexts, such as smaller-scale applications and internal-facing tools. The security team wants to keep the creation of additional resources to a minimum to simplify their NVA management architecture, so they decide to map one GWLB, with an auto-scaling backend pool of NVAs running on Azure VMSS, to each group of load balancers deployed in the same region.

Deploying Gateway Load Balancer

The cloud infrastructure team at Contoso creates a GWLB with their NVAs deployed using Azure VMSS. Then, they chain this GWLB to their 5 Standard Public LBs for the eastus region. After verifying that their Data Path Availability and Health Probe Status metrics are 100 percent on both their GWLB and on each chained Standard Public LB, they run a quick packet capture to ensure everything is working as expected.
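
A hedged sketch of that chaining step with the Azure CLI, for one of the five LBs (resource names are illustrative):

# Capture the GWLB frontend IP configuration ID
$ GWLB_FE_ID=$(az network lb frontend-ip show \
    --resource-group contoso-rg --lb-name contoso-gwlb \
    --name gwlb-frontend --query id -o tsv)

# Chain the GWLB to an existing Standard Public LB frontend
$ az network lb frontend-ip update \
    --resource-group contoso-rg --lb-name contoso-eastus \
    --name contoso-frontend --gateway-lb $GWLB_FE_ID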

What happens now?

Now, traffic packets whose destination is any of the frontend IPs of the Standard Public LBs for eastus will be encapsulated using VXLAN and sent to the GWLB first. At this point, the firewall NVAs will decapsulate the traffic, inspect the source IP, and determine whether the traffic is safe to continue on toward the end application. The NVAs will then re-encapsulate the packets that meet the firewall’s criteria and send them back to the Standard LB. When the traffic reaches the Standard LB, the packets will be decapsulated, meaning that the traffic will appear as if it came directly from the internet, with its original source IP intact. This is what we mean by transparent NVA insertion: Contoso’s retail platform applications behave exactly as they did before, without ever knowing that the packets were inspected or filtered by a firewall appliance before reaching the application server.

Gateway Load Balancer partners

Gateway Load Balancer supports a variety of NVA providers; you can learn more about each of our partners on our partners page.

Virtual firewalls

Check Point
Cisco
F5
Fortinet
Palo Alto Networks

Traffic observability

cPacket Networks
Glasnostic

Network security

Citrix
Trend Micro
Valtix

DDoS protection

A10 Networks

Learn more

Try out Gateway Load Balancer today with the help of our quickstart tutorials, or read more about Gateway Load Balancer on our public documentation.
Source: Azure

Use Cases and Tips for Using the BusyBox Docker Official Image

While developing applications, using the slimmest possible images can help reduce build times and shrink your app’s overall footprint. Similarly, successfully deploying such compact, Linux-friendly applications means packaging them into a cross-platform unit. That’s where containers and the BusyBox Docker Official Image come in handy.

Maintaining the BusyBox image has also been an ongoing priority at Docker. In fact, our very first container demo used BusyBox back in 2013! Users have downloaded it over one billion times, making BusyBox one of our most popular images.
Not exceeding 2.71 MB in size — with most tags under 900 KB, depending on architecture — the BusyBox container image is incredibly lightweight. It’s even much smaller than our Alpine image, which developers gravitate towards given its slimness. BusyBox’s compact size enables quicker sharing, by greatly reducing initial upload and download times. Smaller base images, depending on changes and optimizations to their subsequent layers, can also reduce your application’s attack surface.
In this guide, we’ll introduce you to BusyBox, cover some potential use cases, explore best practices, and briefly show you how to use its container image.
What’s BusyBox?
Dubbed the “Swiss Army Knife of Embedded Linux,” BusyBox packages together multiple, common UNIX utilities (or applets) into one executable binary. It helps you create your own Linux distribution, and our associated container image helps you deploy it across different devices.
This is possible thanks to BusyBox’s ability to run in numerous POSIX environments, including FreeBSD and Android. It works in concert with the Linux kernel.
Miniature but mighty, BusyBox’s full distribution contains nearly 400 of UNIX’s leading commands, replacing many GNU utilities (shellutils, fileutils, and more) with comparable implementations. Although some of these may not be fully featured, their core functionality remains intact without forcing developers to make concessions.
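Because all of these commands ship in a single multi-call binary, each applet can be invoked by its usual name or passed as the first argument to busybox itself. A quick, illustrative check (assuming Docker is installed locally):

$ docker run --rm busybox busybox --list | head    # show some of the bundled applets
$ docker run --rm busybox date                     # invoke an applet by its usual name
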
Which BusyBox version should I use?
BusyBox helps replicate the experience of using common shell commands. Some Linux distributions use GNU’s coreutils package to ship these commands, while others have instead opted for BusyBox. Though BusyBox isn’t the most complete environment available, it checks most boxes for developers who need something approachable and lightweight.
BusyBox comes in a variety of pre-built binary versions. As a result, we support over 30 image tags on Docker Hub. Each includes its own Linux binary variant per CPU and sets of dependencies — impacting both image size and functionality.
Each is also built against various libc variants. To understand how each image’s relation to musl, uClibc, dietlibc, and glibc impacts your build, check out this comparison chart. This will help you choose the correct image for your specific use case.
That said, which use cases pair best with the BusyBox image? Let’s jump in.
BusyBox Use Cases
The Linux landscape is vast, and developer use cases vary widely. However, we’ll tackle a few interesting examples and explain why they matter.
Building Distros for Embedded Systems
Known for having very limited available resources, embedded systems require distros with minute sizes that only include essential functionality. There’s very little extra room for frills or extra dependencies. Consequently, embedded Linux versions must be streamlined and purpose-built, which is where BusyBox excels.
BusyBox’s maintainers highlight its modularity. You can choose any BusyBox image that suits your build, and you can also pick and choose commands or features during compilation, so you don’t have to package anything you don’t need. And while BusyBox can run atop the Linux kernel, containerizing your BusyBox implementation removes the need to include a kernel within the container itself; BusyBox will instead leverage your embedded system’s kernel by default, saving space.
Each applet’s behavior within your given image will determine how it works within a given embedded environment. BusyBox lets you modify configuration files, directories, and infrastructure to best fit your embedded system of choice.
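To make that modularity concrete, here’s what a typical from-source build looks like; the applet selection itself is entirely up to you:

git clone https://git.busybox.net/busybox
cd busybox
make defconfig          # start from a default applet selection
make menuconfig         # interactively enable or disable individual applets
make && make install    # installs into ./_install by default
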
Leveraging Kubernetes Init Containers
While the BusyBox Docker Official Image is a great base for other projects, BusyBox works well with the Kubernetes initContainer feature. These specialized Docker containers (for our example) run before app containers in a Pod. Init containers can contain scripts or other utilities that reside outside of the application image, and properly initializing these “regular” containers may depend on k8s spinning up these components first. Init containers always run until their tasks finish, and they run synchronously.
These containers also adhere to strictly-configured resource limits, support volumes, and respect your security settings. Why would you use an initContainer? According to the k8s documentation, you can do the following:

Wait for a Service to be created as Pods spin up
Register a Pod with a remote server from an API
Wait for an allotted period of time before finally starting an app container
Clone a Git repo into a volume
Generate configuration files automatically from value inputs

Kubernetes uses its configuration files to specify how these processes occur — alongside any shell commands. You can specify your BusyBox Docker image in this file with your chosen tag. Kubernetes will pull your BusyBox image, then create and start Docker containers from it while assigning them unique IDs.
By using init containers with BusyBox and Docker, you can better prepare your app containers to run vital workflows before they spin up.
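As a concrete illustration, here’s a minimal, hypothetical Pod spec in which a BusyBox init container blocks until a Service’s DNS name resolves before the app container starts (the Service and app image names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  initContainers:
    - name: wait-for-service
      image: busybox:1.36
      # Loop until the hypothetical Service "my-service" resolves
      command: ['sh', '-c', 'until nslookup my-service; do echo waiting; sleep 2; done']
  containers:
    - name: myapp
      image: myapp:latest  # your application image
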
Running an HTTP Web Server
Since the BusyBox container image helps you create a basic Linux environment, we can use that environment to run compiled Linux applications.
As a result, we can use our BusyBox base image to create custom executables, in this case supporting a web app powered by a Go server. We’d like to give a shout-out to developer Soham Kamani for highlighting this example!
How is this possible? To simplify the process, Soham accomplished this by:

Creating a BusyBox container using the Docker CLI (enabling us to run common commands).
Running custom executables after creating a custom Golang “hello world” program, and creating a companion Dockerfile to support it.
Building and running a Docker image using BusyBox as the base.
Creating a server.go file, compiling it, and running it as a web server using Docker components.

BusyBox lets you tackle this workflow while creating a final image that’s very slim. It gives developers an environment where their applications can run, thrive, scale, and deploy effectively. You can even manage your images and containers easily with Docker Desktop, if you prefer a visual interface.
You can read through the entire tutorial here, and view the sample code on GitHub. Want to explore more Go-based server deployments? Check out our Caddy 2 image guide.
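For a feel of the moving parts without leaving this page, here’s a hypothetical, stripped-down version of the idea: a tiny Go web server that can be compiled statically and dropped onto the BusyBox base image.

// server.go: a minimal "hello world" web server (illustrative only)
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Respond to every request with a short greeting
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello from BusyBox!")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}

Compiling with CGO disabled yields a fully static binary that needs no libc from the base image:

$ CGO_ENABLED=0 go build -o my-static-binary server.go
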
This is not an exhaustive list of BusyBox use cases. However, these examples do showcase how creative you can get, even with a simple Linux base image.
Getting Started with the BusyBox Image
Hopefully you’ve discovered how the BusyBox image punches above its weight in terms of functionality. Luckily, using the BusyBox image is equally simple. Here’s how to get started in a Docker context.
First, run BusyBox as a shell with the following command:

$ docker run -it --rm busybox

This lets you execute commands within your BusyBox system, since you’re now effectively sh-ing into your environment. The -it flag combines -i and -t, which keeps STDIN open and allocates a pseudo-TTY; that virtual terminal session is what makes the container interactive. The --rm flag tells Docker to tidy up your container and remove the filesystem when it exits.
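Once inside, you can exercise the bundled applets directly; a brief, illustrative session might look like this:

/ # busybox | head -3   # print the version banner and usage
/ # ls /bin | wc -l     # count the available applets
/ # exit
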
Next, you’ll create a Dockerfile for your statically-compiled BusyBox binary. Here’s how that basic Dockerfile could look:

FROM busybox
COPY ./my-static-binary /my-static-binary
CMD ["/my-static-binary"]

Note that you’ll have to complete this compilation in another location, like a Docker container. This is possible with another Linux image like Alpine, but BusyBox is perfect for situations where heavy extensibility isn’t needed.
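One way to handle that compilation step, sketched here with the Go server from earlier (the golang tag is an illustrative choice), is a multi-stage build that compiles in a fuller image and copies only the static binary into BusyBox:

FROM golang:1.21-alpine AS build
WORKDIR /src
COPY server.go .
# Static compilation so the binary has no libc dependency
RUN CGO_ENABLED=0 go build -o /my-static-binary server.go

FROM busybox
COPY --from=build /my-static-binary /my-static-binary
CMD ["/my-static-binary"]
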
Lastly, always choose the variant which best fits your needs. You can use either busybox:uclibc, busybox:glibc, or busybox:musl as required. Options one and three are statically compiled, while glibc stems from Debian.
Docker and BusyBox Equal Simplicity
BusyBox is an essential tool for developers who love simplistic Linux. It lets you create powerful, customized Linux executables within a stripped-down (yet accommodating) Linux environment. Use cases are diverse, and the BusyBox image helps reduce bloat.
Both Docker and BusyBox work well together, while being inclusive of popular, related technologies like Kubernetes. Despite the BusyBox image’s small size, it unlocks many exciting development possibilities that are continually evolving. Visit Docker Hub to learn more and quickly pull your first BusyBox image.
Source: https://blog.docker.com/feed/