Flock Freight builds a more efficient, resilient and environmentally sustainable shipping supply chain on Google Cloud

Commercial trucks often travel partially empty because many shippers don’t have enough cargo to fill an entire container or trailer. Although offering available space to other shippers helps minimize carbon emissions and reduce operating costs, most trucking companies can’t efficiently schedule, track, or deliver multiple freight loads.

Companies have always struggled to ship over-the-road freight efficiently. However, recent economic events have created an unprecedented logistics and transportation crisis that continues to disrupt supply chains, delay deliveries, and significantly raise the price of basic goods. Since some stores can’t keep their shelves fully stocked, many people across the country are finding it more difficult than ever to buy the things they need at an affordable price. Although exacerbated by the pandemic, many of these supply chain issues have existed for decades.

That’s why, in 2015, Flock Freight was started with the mission of reducing waste and inefficiency in the supply chain by reimagining the way freight moves. First to market with advanced algorithms that enable pooling shipments at scale, we create a new standard of service for shippers, increase revenue for carriers, and reduce the impact of carbon emissions through shared truckload (STL) service. Our technology helps lower prices compared to full truckload (FTL) by enabling shippers to pay only for the space they need while maintaining full control over pickup and delivery dates. Flock Freight also optimizes travel routes to speed up deliveries compared to traditional less than truckload (LTL), while eliminating unnecessary shipping hub transfers to minimize damage to cargo.

Today, thousands of shippers and trucking companies across the U.S. use Flock Freight to schedule shared truckloads, lower shipping costs, quickly deliver and track goods, and reduce their carbon footprint by up to 40%. Flock Freight further offsets carbon emissions by buying carbon credits for every FlockDirect™ guaranteed shared truckload shipment, at no extra cost to shippers.

Moving Flock Freight to Google Cloud

We founded Flock Freight with a small team based in southern California. We soon realized we needed a more scalable and affordable technology stack to support our rapidly growing platform and team. After joining the Google for Startups Cloud Program and consulting with dedicated Google startup experts, we decided to move all our data and applications to Google Cloud.

The highly secure-by-design infrastructure of Google Cloud now enables thousands of Flock Freight customers to move their freight faster, cheaper, and with less damage than traditional shipping methods. Specifically, we rely on Google Kubernetes Engine (GKE) to support the combinatorial optimization and machine learning (ML) algorithms and services that identify, pool, and schedule shared truckloads. We also leverage GKE to rapidly develop, deploy, and manage new applications and services. In addition, we leverage Cloud SQL to automate database provisioning, storage capacity management, and other time-consuming tasks. Cloud SQL easily integrates with existing apps and Google Cloud services such as GKE and Pub/Sub. Lastly, we use Compute Engine to create and run virtual machines, optimize resource utilization, and lower computing costs by up to 91%. These cost savings allow us to shift more resources to R&D and rapidly develop new solutions and services for our customers.

Building a greener, more resilient, and responsive supply chain

The Google for Startups Cloud Program and dedicated Google startup experts were instrumental in helping us manage cloud infrastructure costs and maintain very high SLAs, allowing Flock Freight to focus on developing a comprehensive shipping platform that powers shared truckloads and drives positive industry change. We especially want to highlight the Google Cloud research credits we relied on to launch Flock Freight and make rapid progress toward transforming the shipping industry. To this day, we continue to work with Google Cloud Managed Services partner DoiT International to further scale and optimize operations on Google Cloud.

We’re proud of the results we’re delivering for our customers. For example, a home improvement importer now enjoys faster, safer, and easier shipping with 99.9% damage-free service and a 97.5% on-time delivery rate. A packaging supplier continues to maintain a 99% on-time delivery streak and decrease carbon emissions by 37%, while a mineral water company consistently reduces delivery expenses upwards of 50%. Nationwide demand for shared truckloads continues to increase as the shipping industry works to lower costs and alleviate supply chain disruptions. With the Flock Freight platform, companies are building a more sustainable and resilient supply chain by efficiently combining multiple shipments into shared truckloads.

If you want to learn more about how Google Cloud can help your startup, visit our page here to get more information about our program, and sign up for our communications to get a look at our community activities, digital events, special offers, and more.
Quelle: Google Cloud Platform

Azure Premium SSD v2 Disk Storage in preview

We are excited to announce the preview of Premium SSD v2, the next generation of Microsoft Azure Premium SSD Disk Storage. This new disk offering provides the most advanced block storage solution designed for a broad range of input/output (IO)-intensive enterprise production workloads that require sub-millisecond disk latencies as well as high input/output operations per second (IOPS) and throughput, at a low cost. With Premium SSD v2, you can now provision up to 64 TiB of storage capacity, 80,000 IOPS, and 1,200 MBPS throughput on a single disk. With best-in-class IOPS and bandwidth, Premium SSD v2 provides the most flexible and scalable general-purpose block storage in the cloud, enabling you to meet the ever-growing demands of production workloads such as SQL Server, Oracle, MariaDB, SAP, Cassandra, MongoDB, big data, analytics, and gaming, running on virtual machines or stateful containers. Moreover, with Premium SSD v2, you can provision granular disk sizes, IOPS, and throughput independently based on your workload needs, giving you more flexibility in managing performance and costs.

With the launch of Premium SSD v2, our Azure Disk Storage portfolio now includes one of the most comprehensive sets of disk storage offerings, satisfying workloads ranging from Tier-1 IOPS-intensive workloads such as SAP HANA, to general-purpose workloads such as RDBMS and NoSQL databases, to cost-sensitive dev/test workloads.

Benefits of Premium SSD v2

As customers transition their production workloads to the cloud or deploy new cloud-native applications, balancing performance and cost is top of mind. For example, transaction-intensive database workloads may require high IOPS on a small disk size or a gaming application may need very high IOPS during peak hours. Similarly, big data applications like Cloudera/Hadoop may require very high throughput at a low cost. Hence, customers need the flexibility to scale their IOPS and throughput independent of the disk size. With Premium SSD v2, you can customize disk performance to precisely meet your workload requirements or seasonal demands, without the need to provision additional storage capacity.

Premium SSD v2 also enables you to provision storage capacity ranging from 1 GiB up to 64 TiB with GiB increments. All Premium SSD v2 disks provide a baseline performance of 3,000 IOPS and 125 MB/sec. If your disk requires higher performance, you can provision the required IOPS and throughput at a low cost, up to the max limits shown below. You can dynamically scale up or scale down the IOPS and throughput as needed without downtime, allowing you to manage disk performance cost-effectively while avoiding the maintenance overhead of striping multiple disks to achieve more performance. Summarizing the key benefits:

Granular disk size in 1 GiB increments.
Independent provisioning of IOPS, throughput, and GiB.
Consistent sub-millisecond latency.
Easier maintenance with scaling performance up and down without downtime.

Premium SSD v2, like all other Azure Disk Storage offerings, will provide our industry-leading data durability and high availability at general availability.

Following is a summary comparing Premium SSD v2 with the current Premium SSD and Ultra Disk.

 

                      Ultra Disk            Premium SSD v2        Premium SSD
Disk Size             4 GiB – 64 TiB        1 GiB – 64 TiB        4 GiB – 32 TiB
Baseline IOPS         Varies by disk size   3,000 IOPS free       Varies by disk size
Baseline throughput   Varies by disk size   125 MBPS free         Varies by disk size
Peak IOPS             160,000 IOPS          80,000 IOPS           20,000 IOPS
Peak Throughput       4,000 MBPS            1,200 MBPS            900 MBPS
Durability            99.999999999%         99.999999999%         99.999999999%

All three offerings provide 99.999999999% durability with an ~0% annual failure rate.

Supported Azure Virtual Machines

Premium SSD v2 disks can be used with any premium storage-enabled virtual machine size, giving you a diverse set of virtual machine sizes to choose from. Currently, Premium SSD v2 disks can only be used as data disks; Premium SSDs and Standard SSDs can be used as OS disks for virtual machines that use Premium SSD v2 data disks.

Pricing

Premium SSD v2 disks are billed hourly based on the provisioned capacity, IOPS, and MBPS. Let’s take an example of a disk that you provision with 100 GiB capacity, 5000 IOPS, and 150 MB/sec throughput.

The disks are billed per GiB of the provisioned capacity. Hence, you will be charged for 100 GiB of the provisioned capacity.
The disks are billed for any additional IOPS provisioned over the free baseline of 3,000 IOPS. In this case, since you provisioned 5000 IOPS, you will be billed for the additional 2,000 IOPS.
The disks are billed for any additional throughput over the free baseline throughput of 125 MB/s. In this case, since you provisioned 150 MB/sec throughput, you will be billed for the additional 25 MB/s throughput.

You can learn more on the Azure Managed Disks pricing page.

Getting Started

Premium SSD v2 is currently available in preview in select regions. If you are interested in participating in the preview, you can request access to get started. Once enrolled in the preview program, you will be able to create and manage Premium SSD v2 disks via the Azure portal, PowerShell, CLI, and SDKs. You can refer to the Premium SSD v2 documentation to learn more.
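
For reference, here is a minimal Azure CLI sketch that creates a Premium SSD v2 data disk sized like the pricing example above, attaches it to a VM, and then scales its performance without downtime. This is a sketch, not official guidance: it assumes your subscription is enrolled in the preview and that the PremiumV2_LRS SKU and the IOPS/throughput flags shown are available in your region, and the resource names (myResourceGroup, myPremiumV2Disk, myVM) are placeholders.

# Create a 100 GiB Premium SSD v2 disk with 5,000 IOPS and 150 MB/s provisioned
az disk create \
  --resource-group myResourceGroup \
  --name myPremiumV2Disk \
  --size-gb 100 \
  --sku PremiumV2_LRS \
  --zone 1 \
  --disk-iops-read-write 5000 \
  --disk-mbps-read-write 150

# Attach it to an existing VM as a data disk
az vm disk attach \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --name myPremiumV2Disk

# Scale IOPS and throughput later without downtime
az disk update \
  --resource-group myResourceGroup \
  --name myPremiumV2Disk \
  --disk-iops-read-write 8000 \
  --disk-mbps-read-write 300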

We look forward to hearing your feedback. Please email us at AzureDisks@microsoft.com with any questions.
Quelle: Azure

9 Tips for Containerizing Your .NET Application

Over the last five years, .NET has maintained its position as a top framework among professional developers. In Stack Overflow’s 2022 Developer Survey, .NET ranked first in the “other framework and libraries” category. Stack Overflow reserves this for developers who’ve done extensive development work with key technologies in the past year, and want to continue using them the next.
 
Data courtesy of Stack Overflow.
 
Over 60,000 developers and 3,700 companies have contributed to the .NET platform. Since its 2002 debut, .NET has supported multiple languages (C#, F#, Visual Basic), platforms (.NET Core, .NET Framework, Mono), editors, and libraries for building diverse applications. .NET provides standard sets of base class libraries and APIs common to all .NET applications.
Why is containerizing a .NET application important?
.NET was originally designed for Windows. Meanwhile, we originally based Docker around Linux. .NET has the application virtual machine (called Common Language Runtime) and other components aimed at solving build problems common to large enterprise applications from 10 to 20 years ago. The two weren’t inherently compatible on day one.
Both have since evolved to become cross-platform, open-source developer platforms. When building tiny containers with a single process running inside, using a directly compiled language is typically faster. That said, .NET has come a long way and is now container-friendly. Microsoft has made a concerted effort to support containers since Windows Server 2016, with the goal of keeping up with the growing container ecosystem. Today, Windows hosts can run containers based not only on the Linux kernel but also on the Windows kernel.
Running your .NET application in a Docker container has numerous benefits. First, Docker containers can act as isolated test environments. .NET developers can code and test locally while ensuring consistency between development and production. Second, it eliminates deployment issues caused by missing dependencies while moving to a production environment. Third, containers let developers of all skill levels build, share, and run containerized .NET applications. Containers are immutable infrastructure, provide portability, and help improve scalability. Likewise, the modularity and lightweight nature of .NET 6 make it perfect for containers. 
Containerizing a .NET application is easy. You can do this by copying source code files and building a Docker image. We’ll also cover common concerns like image bloat, missing image tags, and poor build performance with these nine tips for containerizing your .NET application code.
Containerizing a Student Record Management Application
To better understand those concerns, let’s look at a simple student record management application. In our last blog post, you saw how easy building and deploying a student database application is via a Dockerfile and Docker Compose.
Running your application is simple. You’ll clone the GitHub project repository and use the Docker Compose CLI to bring up the complete application with the following commands:

git clone https://github.com/dockersamples/student-record-management

 
Change your directory to student-record-management to see the following Docker Compose file:

services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - postgres-data:/var/lib/postgresql/data
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
  app:
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - 5000:80
    depends_on:
      - db
volumes:
  postgres-data:

 
We’ve defined three services in this Compose file: db, adminer, and app. The Adminer (formerly phpMinAdmin) Docker image is a fully featured database management tool written in PHP. We’ve set up port forwarding via the ports attribute. The depends_on attribute lets us express dependencies between services; in this case, we’ll start Postgres before our core application.
Run the following command to bring up our student record management application:

docker-compose up -d

 
Once it’s up and running, you can view the Docker Dashboard and click on the “arrow” key (shown in app-1) to quickly access the application:
 

 
Typically, developers use the following Dockerfile template to build a Docker image. A Dockerfile is a list of sequential instructions that build your container image. This image is composed of a stack of layers, and each represents an instruction in our Dockerfile. Each layer contains changes to its underlying layer.

FROM mcr.microsoft.com/dotnet/sdk:6.0

WORKDIR /src
COPY . ./

RUN dotnet build -o /app
RUN dotnet publish -o /publish

WORKDIR /publish
ENV ASPNETCORE_URLS=http://+:80/
EXPOSE 80
CMD ["./myWebApp"]

 
The first line defines our base image, which is around 754 MB in size (or, alternatively, 994 MB for Nano Server and 6.34 GB for Windows Server). The COPY instruction copies the project files from the host system into the image’s working directory (/src). The EXPOSE instruction tells Docker that the container listens on network port 80 at runtime. Lastly, our CMD instruction configures the container to run our application as an executable.
To build a Docker image, we’ll use the docker build command:

docker build -t student-app .

 
Let’s check the size of our new Docker image:

docker images
REPOSITORY    TAG      IMAGE ID       CREATED         SIZE
student-app   latest   d3caa8643c2c   4 minutes ago   827MB

 
One key drawback of this example is that our Docker image isn’t optimized. Crucially, optimization lets teams share smaller images, boost performance, and debug more easily. It’s essential at every CI/CD stage, including production. If you’re using Windows base images, you can expect your images to be much larger than Linux-based images. There must be a better build approach that lets us discard unneeded files after compilation, since these aren’t required in our final image.
1) Choosing the Right .NET Docker Images
The official .NET Docker images are publicly available in the Microsoft repositories on Docker Hub. Identifying and picking the right container base image while building applications can be confusing. To simplify the selection process, most image repositories provide extensive tagging to help you select both a specific framework version and a specific operating system, like a particular Linux distribution or Windows version.
Microsoft offers two categories of images. The first encompasses images used to develop and build .NET apps, while the second houses those used to run .NET apps. For example, mcr.microsoft.com/dotnet/sdk:6.0 is used during the development and build process; this image includes the compiler and any other .NET dependencies. Meanwhile, mcr.microsoft.com/dotnet/aspnet:6.0 is ideal for production environments; this image includes only the ASP.NET Core runtime and its optimizations, and is available for Linux and Windows (multi-arch).
You can visit GitHub to browse available Docker images.
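
To see the size difference for yourself, you can pull both images and compare them locally; exact sizes vary by tag, platform, and release, so treat the numbers you see as indicative only:

docker pull mcr.microsoft.com/dotnet/sdk:6.0
docker pull mcr.microsoft.com/dotnet/aspnet:6.0
docker images mcr.microsoft.com/dotnet/sdk
docker images mcr.microsoft.com/dotnet/aspnet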
2) Optimize your Dockerfile for dotnet Restore
When building .NET Core apps with Docker, it’s important to consider how Docker caches layers while building your app.
A common way to leverage the build cache is to copy only the .csproj, .sln, and nuget.config files for your app before performing a dotnet restore, instead of copying the full source code. The NuGet package restore can be one of the slowest parts of the build, and it only depends on these files. By copying them first, Docker can cache the restore result; for example, it won’t need to run again if you only change a .cs file.

FROM mcr.microsoft.com/dotnet/sdk:6.0
WORKDIR /src

COPY *.csproj ./
RUN dotnet restore

COPY . ./
RUN dotnet build -o /app
RUN dotnet publish -o /publish
WORKDIR /publish
ENV ASPNETCORE_URLS=http://+:80/
EXPOSE 80
CMD ["./myWebApp"]

 
💁  The dotnet restore command uses NuGet to restore dependencies and project-specific tools that are specified in the project file.
3) Use a Multi-Stage Build
With multi-stage builds, Docker can use one base image for compilation, packaging, and unit tests. Another image then holds the application runtime. This makes the final image more secure and smaller in size (as it does not contain any development or debugging tools). Multi-stage Docker builds are a great way to ensure your builds are 100% reproducible and as lean as possible. You can create multiple stages within a Dockerfile and control how you build that image.
The .NET SDK includes the .NET runtimes and tooling to develop, build, and package .NET applications. One best practice while creating Docker images is keeping the image compact. You can containerize your .NET applications using a multi-layer approach, where each layer contains different parts of the application like dependencies, source code, and resources. Alternatively, you can build the application in a separate image from the final image that contains the runnable application. To better understand this, let’s analyze the following Dockerfile.
The build stage uses the SDK image to build the application and create the final artifacts in the publish folder. The final stage copies those artifacts from the build stage into the app folder, exposes port 80 to incoming requests, and specifies the command to run the application (myWebApp). In other words, the first stage restores and builds the application, and the second stage copies only the published output into the final image. Here’s a sample multi-stage Dockerfile for the student database example:

FROM mcr.microsoft.com/dotnet/sdk:6.0 as build

WORKDIR /src
COPY *.csproj ./
RUN dotnet restore

COPY . ./
RUN dotnet build -o /app
RUN dotnet publish -o /publish

FROM mcr.microsoft.com/dotnet/aspnet:6.0 as base
COPY --from=build /publish /app
WORKDIR /app
EXPOSE 80
CMD ["./myWebApp"]

The first stage is labeled build, where mcr.microsoft.com/dotnet/sdk is the base image.

docker images
REPOSITORY     TAG      IMAGE ID       CREATED       SIZE
mywebapp_app   latest   1d4d9778ce14   3 hours ago   229MB

 
Our final image size shrinks dramatically to 229 MB, compared to the single-stage Dockerfile’s 827 MB!
4) Use Specific Base Image tags, Instead of “Latest”
While building Docker images, we always recommend tagging them with useful tags that codify version information, intended destination (prod or test, for instance), stability, or other useful information for deploying the application in different environments. Conversely, we don’t recommend relying on the :latest tag. The :latest tag is often updated frequently, and new versions can cause breaking changes. If you want to protect yourself against breaking changes, it’s best to pin to a specific version and then update to newer versions when you’re ready.
For example, we’d avoid using mcr.microsoft.com/dotnet/sdk:latest as a base image. Instead, you should use specific tags like mcr.microsoft.com/dotnet/sdk:6.0, mcr.microsoft.com/dotnet/sdk:6.0-windowsservercore-ltsc2019, or others.
5) Run as a Non-root User for Security Purposes
By default, an application running inside a Docker container has root access on Linux or administrator privileges on Windows, which can undermine application security. You can solve this problem by adding the USER instruction to your Dockerfile. The USER instruction sets the preferred user name (or UID) and optionally the user group (or GID) to use when running the image and for any subsequent RUN, CMD, or ENTRYPOINT instructions.
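
As a minimal sketch, the final stage of the multi-stage build example above could run as a dedicated non-root user. This assumes the Debian-based aspnet:6.0 image (where adduser is available); the user name appuser and port 8080 are illustrative choices, since unprivileged users generally can’t bind to ports below 1024:

FROM mcr.microsoft.com/dotnet/aspnet:6.0 as base
COPY --from=build /publish /app
WORKDIR /app
# Create an unprivileged user and hand it ownership of the app folder
RUN adduser --disabled-password --gecos "" appuser && chown -R appuser:appuser /app
USER appuser
# Listen on an unprivileged port instead of port 80
ENV ASPNETCORE_URLS=http://+:8080/
EXPOSE 8080
CMD ["./myWebApp"]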
Windows networks commonly use Active Directory (AD) to enable authentication and authorization between users, computers, and other network resources. Windows application developers often use Integrated Windows Authentication. This makes it easy for users and other services to automatically, transparently sign into the application using their credentials. Although Windows containers cannot be domain joined, they can still use Active Directory domain identities to support various authentication scenarios.
To achieve this, you can configure a Windows container to run with a group Managed Service Account (gMSA), which is a special type of service account introduced in Windows Server 2012. It’s designed to let multiple computers share an identity without requiring a password.
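
As a hedged illustration, once a gMSA has been created and a credential spec file has been generated on the container host (for example, with Microsoft’s CredentialSpec PowerShell module), you can start a Windows container with that identity by passing the credential spec to docker run. The file name webapp01.json and the hostname are placeholders:

docker run -d --security-opt "credentialspec=file://webapp01.json" --hostname webapp01 mcr.microsoft.com/windows/servercore:ltsc2019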
6) Use .dockerignore
To increase the build performance (and as a general best practice) we recommend creating a .dockerignore file in the same directory as your Dockerfile. For this tutorial, your .dockerignore file should contain the following lines:

Dockerfile*
**/[bB]in/
**/[oO]bj/

 
These lines exclude the bin and obj directories from the Docker build context. There are many good reasons to carefully structure a .dockerignore file, but this simple version works for now. It’s also helpful to understand how the docker build command works and what the build context means.
The build context is the directory that the docker build command sends to the Docker daemon (a folder on Windows or a directory on Linux). In this directory, you’ll find every necessary app component, like source code, configuration files, libraries, and plugins. You’ll determine which of these components to include while constructing a new image.
With the .dockerignore file, we can determine which components are vital. They’ll ultimately belong to the new image that we’re building.
For example, if we don’t want to include the bin and conf directory in our image build, we just need to indicate that within our .dockerignore file.
7) Add Health Checks to Your Containers
The HEALTHCHECK instruction tells Docker how to test a container and confirm that it’s still working. This can detect (for example) when a web server is stuck in an infinite loop and unable to handle new connections — even though the server process is still running.
When an application is deployed in production, an orchestrator like Kubernetes or Service Fabric will most likely manage it. By providing the health check, you’re sharing the status of your containers with the orchestrator to permit management tasks based on your configurations. Let’s look at the following example:

FROM mcr.microsoft.com/dotnet/sdk:6.0 as build

WORKDIR /src
COPY *.csproj ./
RUN dotnet restore

COPY . ./
RUN dotnet build -o /app
RUN dotnet publish -o /publish

FROM mcr.microsoft.com/dotnet/aspnet:6.0 as base
COPY --from=build /publish /app
WORKDIR /app
EXPOSE 80
#If you’re using the Linux Container
HEALTHCHECK CMD curl --fail http://localhost || exit 1
#If you’re using Windows Container with Powershell
#HEALTHCHECK CMD powershell -command `
# try { `
# $response = iwr http://localhost; `
# if ($response.StatusCode -eq 200) { return 0} `
# else {return 1}; `
# } catch { return 1 }

CMD ["./myWebApp"]

 
When HEALTHCHECK is present in a Dockerfile, you’ll see the container’s health in the STATUS column while running docker ps. A container that passes this check displays as healthy. An unhealthy container displays as unhealthy.

docker ps
CONTAINER ID   IMAGE         COMMAND        CREATED         STATUS                           PORTS                  NAMES
7bee4d6a652a   student-app   "./myWebApp"   2 seconds ago   Up 1 second (health: starting)   0.0.0.0:5000->80/tcp   modest_murdock
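
If you need more detail than the STATUS column provides, docker inspect can report the current health state and the log of recent probe results; the container name modest_murdock comes from the output above:

docker inspect --format "{{.State.Health.Status}}" modest_murdock
docker inspect --format "{{json .State.Health}}" modest_murdock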

 
8) Optimize for Startup Performance
You can improve .NET app startup times and reduce latency by compiling your assemblies with Ready to Run (R2R) compilation. However, this will increase your build time as a compromise. You can do this by setting the PublishReadyToRun property, which takes effect when you publish an application.
You can add the PublishReadyToRun property in two ways:
1) Set it within your project file:

<PropertyGroup>
  <PublishReadyToRun>true</PublishReadyToRun>
</PropertyGroup>

 
2) Set it using the command line:

/p:PublishReadyToRun=true

 
The default Dockerfile that comes with the sample doesn’t use R2R compilation, since the application is too small to warrant it. The bulk of the IL code executed in this sample application is within .NET’s libraries, which are already R2R-compiled. The example below enables R2R in the Dockerfile by passing /p:PublishReadyToRun=true to the dotnet build and dotnet publish commands.

FROM mcr.microsoft.com/dotnet/sdk:6.0 as build

WORKDIR /src
COPY *.csproj ./
RUN dotnet restore

COPY . ./
RUN dotnet build -o /app -r linux-x64 /p:PublishReadyToRun=true
RUN dotnet publish -o /publish -r linux-x64 --self-contained true --no-restore /p:PublishTrimmed=true /p:PublishReadyToRun=true /p:PublishSingleFile=true

FROM mcr.microsoft.com/dotnet/aspnet:6.0 as base
COPY --from=build /publish /app
WORKDIR /app
EXPOSE 80
HEALTHCHECK CMD curl --fail http://localhost || exit 1

CMD ["./myWebApp"]

9) Choose the Appropriate Isolation Mode For Windows Containers
There are two distinct modes of runtime isolation for Windows containers:  

Process Isolation – In this mode, multiple container instances can run concurrently in the same host with isolation on the file system, registry, network ports, process, thread ID space, and Object Manager namespace. It’s almost identical to how Linux containers run.
Hyper-V Isolation – In this mode, containers run inside a highly-optimized virtual machine, which provides hardware-level isolation between containers and hosts.

 
Most developers prefer process isolation when developing locally, since it typically consumes fewer hardware resources than Hyper-V isolation; developers must account for the additional hardware needed to run containers in Hyper-V mode. However, the primary reason to choose Hyper-V isolation is security, since it provides added hardware-level isolation. While Windows Server supports both options (process isolation is the default), Windows 10+ only supports Hyper-V isolation.
To specify the isolation level, use the --isolation flag: 

docker run -it --isolation=process mcr.microsoft.com/windows/servercore:ltsc2019 cmd

Conclusion
You’ve now seen some of the many methods for optimizing your Docker images. In any case, carefully crafting your Dockerfile is essential. If you’d like to go further, check out these bonus resources that cover recommendations and best practices for building secure, production-grade Docker images:

Docker Development Best Practices
Dockerfile Best Practices
Build Images with BuildKit
Best Practices for Scanning Images
Getting Started with Docker Extensions

 
At Docker, we’re incredibly proud of our vibrant, diverse and creative community. From time to time, we feature cool contributions from the community on our blog to highlight some of the great work our community does. Are you working on something awesome with Docker? Send your contributions to Ajeet Singh Raina (@ajeetraina) on our Docker Community Slack channel, and we might feature your work!
 

Quelle: https://blog.docker.com/feed/