Introducing Virtlet: VMs and Containers on one OpenContrail network in Kubernetes — a new direction for NFV?
Some time ago I had a meeting with one of our potential enterprise (non-telco) customers. The company had just announced an RFP to replace their existing OpenStack distribution. As we began discussing how to identify the #1 contributor to OpenStack in Stackalytics, we found the root cause of the problem and realized that they didn't need to pick an OpenStack distribution based on top vendor commits at all.
They needed to run a single application workload in large-scale production.
In other words, they didn't need multi-tenancy, self-service, Murano, Trove, and so on. In fact, they didn't even want OpenStack, because it's too complex a way to ship an immutable VM image with their app.
On the other hand, running Kubernetes instead of OpenStack wasn't the right answer either, because their app wasn't ready to take its place in the microservices world, and it would take at least six months to rewrite, re-test, and certify all the tooling around it.
That was the day I realized how powerful it would be to enable standard VMs in Kubernetes, alongside the same SDN we have today in OpenStack. By combining the best of both platforms, imagine how much we could simplify the control plane stack for use cases such as edge computing, video streaming, and so on, where functions are currently deployed as virtual machines. It might even give us a new direction for NFV.
That’s the idea behind Virtlet.
What is Virtlet? An overview
The real-world example above demonstrates that our customers are not yet ready for the pure microservices world, as I described in my previous blog post. To solve this problem, we're adding a new feature to Mirantis Cloud Platform called Virtlet. Virtlet is a Kubernetes runtime server that enables you to run VM workloads based on QCOW2 images.
Virtlet was started by the Mirantis k8s folks almost a year ago, with the first implementation done on top of Flannel. In other words, Virtlet is a Kubernetes CRI (Container Runtime Interface) implementation for running VM-based pods on Kubernetes clusters. (CRI is what enables Kubernetes to run non-Docker flavors of containers, such as rkt.)
For the sake of simplicity of deployment, Virtlet itself runs as a DaemonSet, essentially acting as a hypervisor and making the CRI proxy available to run the actual VMs. This way, it's possible to have both Docker and non-Docker pods running on the same node.
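To give a feel for what that looks like, here is a heavily simplified sketch of a DaemonSet for Virtlet. This is not the actual manifest Virtlet ships with; the image reference, the mounts, and the extraRuntime=virtlet node label (explained in the demo section below) are placeholders used only to illustrate the idea.

```yaml
# Illustrative only -- see the Virtlet repo for the real deployment manifest.
apiVersion: extensions/v1beta1   # DaemonSet API group on Kubernetes 1.6
kind: DaemonSet
metadata:
  name: virtlet
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: virtlet
    spec:
      # Land only on nodes that have been labeled for VM workloads.
      nodeSelector:
        extraRuntime: virtlet
      hostNetwork: true
      containers:
      - name: virtlet
        image: mirantis/virtlet        # placeholder image reference
        securityContext:
          privileged: true             # needs to drive libvirt/KVM on the host
        volumeMounts:
        - name: libvirt-sock
          mountPath: /var/run/libvirt
      volumes:
      - name: libvirt-sock
        hostPath:
          path: /var/run/libvirt
```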
The following figure shows the Virtlet architecture:
Virtlet consists of the following components:
Virtlet manager: Implements the CRI interface for virtualization and image handling.
Libvirt: The standard instance of libvirt for KVM.
vmwrapper: Responsible for preparing the environment for the emulator.
Emulator: Currently QEMU with KVM support (with the possibility of disabling KVM for nested virtualization tests).
CRI proxy: Makes it possible to mix docker-shim and VM-based workloads on the same k8s node.
You can find more detail in the GitHub docs, but in its latest release, Virtlet supports the following features:
Volumes: Virtlet uses a custom FlexVolume driver (virtlet/flexvolume_driver) to specify block devices for the VMs (see the sketch after this list). It supports:
qcow2 ephemeral volumes
raw devices
Ceph RBD
files stored in secrets or config maps
Environment variables: You can define environment variables for your pods; Virtlet then uses cloud-init to write those values into the /etc/cloud/environment file when the VM starts up.
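To make the two features above concrete, here is a hedged fragment of a VM pod spec. The driver name virtlet/flexvolume_driver comes from the list above; the option names (type, capacity), the image reference, and the APP_ROLE variable are illustrative assumptions, so check the Virtlet GitHub docs for the exact syntax in your release.

```yaml
# Fragment of a VM pod spec -- option names and values are illustrative.
spec:
  containers:
  - name: ubuntu-vm
    image: example.com/images/ubuntu-xenial.qcow2   # hypothetical QCOW2 image reference
    env:
    # Written by cloud-init into /etc/cloud/environment inside the VM.
    - name: APP_ROLE
      value: "frontend"
  volumes:
  # An ephemeral qcow2 volume handled by the custom FlexVolume driver.
  - name: scratch
    flexVolume:
      driver: virtlet/flexvolume_driver
      options:
        type: qcow2
        capacity: 1024MB
```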
Demo Lab Architecture
To demonstrate how all of this works, we created a lab with:
3 OpenContrail 3.1.1.x controllers running in HA
3 Kubernetes master/minion nodes
2 Kubernetes minion nodes
The k8s nodes are running Kubernetes 1.6 with the OpenContrail Container Network Interface (CNI) plugin, and we spun up an Ubuntu VM pod via Virtlet alongside a standard Deployment of Nginx container pods.
What we wind up with is an installation where containers and virtual machines run on the same Kubernetes cluster, on the same OpenContrail virtual network.
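The container side of the demo needs nothing Virtlet-specific; it's just a plain Kubernetes Deployment. A minimal sketch follows (the replica count and image tag are arbitrary):

```yaml
apiVersion: extensions/v1beta1   # Deployment API group on Kubernetes 1.6; newer clusters use apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
```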
In general, the process looks like this:
Set up the general infrastructure, including the k8s masters and minions, as well as the OpenContrail controllers. Nodes that will run the Virtlet DaemonSet should have a label key set to a specific value; in our case, we're using extraRuntime=virtlet. (We'll need this later.)
Create a pod for the VM, specifying the extraRuntime key in the nodeAffinity parameter so that it runs on a node that has the Virtlet DaemonSet. For the volume, specify the VM image. (A sketch of such a pod definition follows below.)
That’s it; there is no number 3.
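As a rough sketch of step 2, a VM pod definition might look something like the following. The extraRuntime affinity key and the FlexVolume driver come from the sections above; the kubernetes.io/target-runtime annotation and the image URL are assumptions based on the Virtlet examples, so check the GitHub docs for the exact form your release expects.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-vm
  annotations:
    # Tells the CRI proxy to hand this pod to Virtlet instead of docker-shim
    # (the annotation name/value may differ between Virtlet releases).
    kubernetes.io/target-runtime: virtlet
spec:
  # Schedule only onto nodes labeled extraRuntime=virtlet (step 1 above).
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: extraRuntime
            operator: In
            values:
            - virtlet
  containers:
  - name: ubuntu-vm
    # The image points at a QCOW2 cloud image; the URL below is illustrative.
    image: cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
```

Once applied with kubectl create -f, the pod shows up like any other Kubernetes pod, but the workload inside it is a full VM booted from the QCOW2 image.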
Of course there’s much more to see than just those two steps, as you can see in this video:
Conclusion
So now that we've covered the basics, here are a few of the things we'd like to do in the future regarding Virtlet and OpenContrail Kubernetes integration:
Performance validation of VMs in Kubernetes, such as comparing containerized VMs with standard VMs on OpenStack
iSCSI Support for storage volumes
Enabling OpenContrail vRouter DPDK and SR-IOV, extending the OpenContrail CNI to make it possible to create advanced NFV integrations
CPU pinning and NUMA for Virtlet
Resource handling improvements, such as hard limits for memory, and qemu thread limits
Calico Support
As you can see, rather than pushing random commits, Mirantis is focusing on solving real problems and pushing only those solutions back to the community. I would also like to give special thanks to Ivan Shvedunov, Dmitry Shulyak, and the whole Mirantis Kubernetes team, who did an amazing job on this integration. If you want to reach us, you can find us in the Kubernetes Slack channel #virtlet, or for network-related issues, on the OpenContrail Slack.