Whalecome, dear reader, to our second installment of Dear Moby. In this developer-centric advice column, our Docker subject matter experts (SMEs) answer real questions from you — the Docker community. Think Dear Abby, but better, because it’s just for developers!
Since we announced this column, we’ve received a tidal wave of questions. And you can submit your own questions too!
In this edition, we’ll be talking about the best way to develop when your production environment runs Kubernetes (spoiler alert: there’s more than one way!).
Without further ado, let’s dive into today’s top question.
The question
What is the best way to develop if my prod environment runs Kubernetes? – Amos
The answer
SME: Engineering Manager and Docker Captain, Michael Irwin.
First and foremost, there isn’t one “best way” to develop, as there are quite a few options, each with its own tradeoffs.
Option #1 is to simply run Kubernetes locally!
Docker Desktop allows you to spin up a Kubernetes cluster with just a few clicks. If you need more flexibility in versioning, you can look into minikube or KinD (Kubernetes-in-Docker), both of which let you choose the Kubernetes version you run. Other fantastic tools like Tilt can also do wonders for your development experience by watching for file changes and rebuilding and redeploying container images (among other things).
Note: Docker Desktop currently only ships the latest version of Kubernetes.
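For example, if you want your local cluster to match the Kubernetes version running in prod, KinD lets you pin the node image. Here’s a minimal sketch of a cluster config; the version tag is only an example, so swap in whatever matches your environment:

```yaml
# kind-config.yaml: a single-node KinD cluster pinned to a specific Kubernetes
# version (the tag below is an example; match it to your prod cluster)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    image: kindest/node:v1.27.3
```

Creating the cluster with kind create cluster --config kind-config.yaml gives you a local cluster running that exact version.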
The biggest advantage of this option is that you can use manifests very similar to those in your prod environment. If you mount source code into your containers for development (dev), your manifests will need to be flexible enough to support different configurations for prod versus dev. That said, you can still test most of the system the same way it runs in prod.
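One way to get that flexibility (a sketch of one approach, not the only one) is a Kustomize overlay that patches the prod Deployment for dev only. The web container name, src volume, and paths below are hypothetical:

```yaml
# overlays/dev/patch-src-mount.yaml: hypothetical strategic merge patch that
# mounts local source code into the app container for development only
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  template:
    spec:
      containers:
        - name: web
          volumeMounts:
            - name: src
              mountPath: /app/src
      volumes:
        - name: src
          hostPath:
            path: /home/me/project/src   # local checkout; the prod base has no such volume
```

You’d reference the patch from a dev kustomization.yaml and apply it with kubectl apply -k, leaving the base manifests identical to what prod uses.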
However, there are a few considerations to think about:
- Docker Desktop needs more resources (CPU/memory) to run Kubernetes.
- There’s a good chance you’ll need to learn more about Kubernetes if you need to debug your application. This can add a bit of a learning curve.
- Even if you sync the capabilities of your prod cluster locally, there’s still a chance things will differ. This typically comes from custom controllers and resources, access or security policies, service meshes, ingress and certificate management, and/or other factors that can be hard to replicate locally.
Option #2 is to simply use Docker Compose.
While Kubernetes can be used to run containers, so can many other tools. Docker Compose provides the ability to spin up an entire development environment using a much smaller and more manageable configuration. It leverages the Compose specification, “a developer-focused standard for defining cloud and platform agnostic container-based applications.”
There are a couple of advantages to using Compose. It has a more gradual learning curve and a lighter footprint. You can simply run docker compose up and have everything running! Instead of having to set up Kubernetes, apply manifests, potentially configure Helm, and more, Compose is ready to go out of the box. This saves us from running a full orchestration system on our machines (which we wouldn’t wish on anyone).
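As a rough sketch of what that can look like (the service names, ports, image tags, and variables here are made up for illustration):

```yaml
# compose.yaml: hypothetical two-service development environment
services:
  web:
    build: .                                 # build the app image from the local Dockerfile
    ports:
      - "8080:8080"
    volumes:
      - ./src:/app/src                       # mount source for quick iteration in dev
    environment:
      - DATABASE_URL=postgres://db:5432/app  # app-specific variable (hypothetical)
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=dev-only-password  # fine for local dev, never for prod
```

A single docker compose up from the project root starts both services together, and docker compose down tears them back down.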
However, using Compose does come with conditions:
- It’s another tool in your arsenal, which means another set of manifests to maintain and update.
- If you need to define a new environment variable, you’ll need to add it to both your Compose file and your Kubernetes manifests (see the sketch below).
- You’ll have to vet changes against either prod or a staging environment, since you’re not running Kubernetes locally.
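To make that duplication concrete, here’s a hedged sketch of the same variable declared in both places. The service name, image, and FEATURE_FLAGS_URL variable are hypothetical; only the shape of the two files matters:

```yaml
# compose.yaml (dev): the variable is defined once here...
services:
  web:
    environment:
      - FEATURE_FLAGS_URL=http://flags:8080
---
# deployment.yaml (prod): ...and has to be kept in sync here as well
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:latest
          env:
            - name: FEATURE_FLAGS_URL
              value: http://flags.internal:8080
```

Forget to update one of them and dev and prod will quietly drift apart.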
To recap, it depends!
There are great teams building amazing apps with each approach. We’re super excited to explore how we can make this space better for all developers, so stay tuned for more!
Whale, that does it for this week’s issue. Have another question you’d like the Docker team to tackle? Submit it here!