One of Mirantis’ most popular webinars in 2018 was one we presented with Cloudify as part of our launch of MCP Edge, a version of Mirantis Cloud Platform software tuned specifically for edge cloud infrastructure. In case you missed the webinar, you can watch the recording and view the Q&A below.
Is the low latency characteristic of the edge cloud mainly a function of the cloud being close to the user?
Satish Salagame (Mirantis): The user’s proximity to the edge and avoiding multiple network hops is certainly a key component. However, the edge infrastructure design should also ensure that unnecessary delays are not introduced, especially in the datapath. This is where EPA (Enhanced Platform Awareness) features such as NUMA-aware scheduling, CPU pinning, and huge pages help, along with data plane acceleration techniques such as SR-IOV and DPDK. This is why the edge cloud infrastructure has a lot in common with NFVI.
Shay Naeh (Cloudify): There are many use cases that require low latency, and the central cloud as we see it today is going to be broken into smaller edge clouds for use cases like connected cars and augmented reality, which require latency of less than 20ms. But latency is only one reason for the edge. The second reason is that you don’t want to transfer all the enormous data points to the central clouds; I call it a data tsunami of information from IoT, connected cars, and so on.
Satish: So you want to process everything locally, aggregate it, and send it to the central cloud just for learning, and then propagate what was learned from edge to edge. Say one edge encounters a special use case; you can teach the other edges about it, and they will be informed even though that use case was learned at another edge. So the two main reasons are the new application use cases that require low latency and the enormous number of data points that will be available now with 5G, IoT, and new scenarios.
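To make the EPA features Satish mentions more concrete, here is a minimal sketch of how a Kubernetes pod might request pinned CPUs and huge pages for a data plane workload. It assumes the kubelet runs with the static CPU Manager policy and that 2Mi huge pages are pre-allocated on the node; the pod and image names are purely illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-vnf-example                 # hypothetical workload name
spec:
  containers:
  - name: vnf
    image: example.org/dpdk-vnf:latest   # placeholder image
    resources:
      requests:
        cpu: "4"                         # integer CPUs + Guaranteed QoS -> CPU pinning
        memory: 2Gi
        hugepages-2Mi: 1Gi               # huge pages for the DPDK data plane
      limits:
        cpu: "4"
        memory: 2Gi
        hugepages-2Mi: 1Gi
    volumeMounts:
    - name: hugepage
      mountPath: /dev/hugepages
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
```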
Does Virtlet used in Mirantis Cloud Platform Edge solve all the problems associated with VNFs?
Satish: Virtlet is certainly one critical building block in solving some of the VNF problems we talked about. It allows a VM-based VNF to run unmodified in a k8s environment. However, it doesn’t solve all the problems. For example, if we have a complex VNF with multiple components, each running as a separate VM, and a proprietary VNFM designed for OpenStack or some other VIM, it takes some effort to adapt this VNF to the k8s/Virtlet environment. However, there will be many use cases where Virtlet can be used to design a very efficient, small-footprint k8s edge cloud. It also provides a great transition path as more and more VNFs become containerized and cloud-native.
How does Virtlet compare with Kubevirt?
Satish: See our blog on the topic.
How does the MCP Master of Masters work with Cloudify?
Satish: The MCP Master of Masters is focused on the deployment and lifecycle management of infrastructure. The key differentiation here is that the MCP Master of Masters is focused on infrastructure orchestration and infrastructure management, whereas Cloudify is more focused on workload orchestration. In the edge cloud case, that includes edge applications and VNFs. That’s the fundamental difference between the two, and working together, they complement each other and make a powerful edge stack.
Shay: It’s not only VNFs, it can be any distributed application that you would like to run, and you can deploy it on multiple edges and manage it using Cloudify. The MCP Master of Masters will provide the infrastructure, and Cloudify will run on top of it and provision the workloads on the edges.
Satish: Obviously, the MCP Master of Masters will have to communicate with Cloudify, providing inventory information to the orchestrator and profile information for each edge cloud being managed by MCP, so that the orchestrator has all the information it needs to launch edge applications and VNFs appropriately in the correct edge environment.
What is the business use case for abstracting away environments with Cloudify?
Ilan Adler (Cloudify): The use cases are reducing transformation cost, reusing existing investments and components (software and hardware) to enable a cloud native edge, and using a hybrid stack to allow a smoother transition to the cloud native edge by integrating existing network services with new cloud native edge management based on Kubernetes.
How is this solution different from an access/core cloud management solution for a telco?
Satish: The traditional access/aggregation telco networks focused on carrying the traffic to the core for processing. However, with Edge computing, there are two important aspects:
Edge clouds close to the user process data at the edge itself
Backhauling the data to the core cloud is avoided
Both are critical as we move to 5G.
Have you considered using a lightweight (small footprint) fast containerized VM approach like Kata Containers? The benefits are VMs with the speed of containers, that act and look like a container in K8S.
Satish: We briefly looked at Kata Containers. Our focus was on key networking capabilities and the ability to handle VNF workloads that need to run as VMs. Based on our research we found Virtlet to be the best candidate for our needs.
What’s the procedure to import a VM into a Virtlet?
Nick Chase (Mirantis): Virtlet creates VM pods that run regular qcow2 images, so the first step is to create a qcow2 image for your VM. Next, host it at an HTTPS URL, then create a pod manifest just as you would for a Docker container, specifying that the pod should run on a machine that has Virtlet installed. Also, the image URI has a virtlet.cloud prefix indicating that it’s a VM pod. Watch a demo of MCP Edge with Virtlet.
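For illustration, a minimal VM pod manifest looks roughly like the sketch below, following the pattern of Virtlet’s published examples. The image URL is a placeholder, and the node label used for scheduling may differ in your deployment.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-vm                                      # hypothetical VM pod name
  annotations:
    kubernetes.io/target-runtime: virtlet.cloud    # route this pod to the Virtlet runtime
spec:
  affinity:
    nodeAffinity:                                  # schedule only on nodes that run Virtlet
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: extraRuntime
            operator: In
            values:
            - virtlet
  containers:
  - name: my-vm
    # virtlet.cloud prefix + the HTTPS location (minus the scheme) of the qcow2 image
    image: virtlet.cloud/example.com/images/my-vm.qcow2
    imagePullPolicy: IfNotPresent
    tty: true
    stdin: true
```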
Regarding the networking part, do you still use OvS or proceed with the SR-IOV since it supports interworking with Calico (as of the new version of MCP)?
Satish: In the architecture we showed today, we are not using OvS. It’s a pure Kubernetes cluster with CNI-Genie, which allows us to use multiple CNIs: SR-IOV for data plane acceleration, plus Calico or Flannel for standard pod networking. Our default is Calico.
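As a rough sketch of how this looks in practice, CNI-Genie lets a pod select its network plugin(s) through an annotation. The example below assumes Calico and an SR-IOV CNI are installed under the names shown; the pod and image names are illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vnf-dataplane-example        # hypothetical pod name
  annotations:
    cni: "calico,sriov"              # control traffic via Calico, data plane via SR-IOV
spec:
  containers:
  - name: vnf
    image: example.org/vnf:latest    # placeholder image
```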
From your experience in real-world scenarios, is the placement, migration (based on agreed-on SLA and user mobility), and replication of VNFs a challenging task? If yes, Why? Which VNF type is more challenging?
Satish: Yes, these are all challenging tasks, especially with complex VNFs that:
Contain multiple VNF components (VMs)
Require multiple tenant networks (Control, Management, Data planes)
Have proprietary VNF managers
Require complex on-boarding mechanisms.
Does the Cloudify entity behave as an NFVO or an OSS/BSS?
Shay: Cloudify can work as an NFVO, VNFM, and Service Orchestrator. In essence it’s all a function of which blueprints you choose to use. Cloudify is not an OSS/BSS system.
Does the Service Orchestrator include NFVO?
Shay: Yes
In “Edge Computing Orchestration” slide, there is a red arrow pointing to the public cloud. What type of things is it orchestrating in a public cloud?
Satish: It could orchestrate pretty much everything in the public cloud as well: applications, networking, managed services, infrastructure, and so on.
Are the SO and the end-to-end orchestrator the same?
Satish: Yes
In the ETSI model, is Mirantis operating as the NFVI and VIM, and Cloudify acting as the VNFM and NFVO?
Shay: Yes. Mirantis provides the infrastructure and the capability to run workloads on top of it. Cloudify manages the lifecycle operations of each one of the VNFs (this is the role of the VNFM, or VNF Manager), and it also creates the workloads and the service chaining between the VNFs. This translates into a service, and that is the responsibility of the NFVO: stitching together multiple capabilities to provide a service. This service can be complex, span multiple edges and multiple domains, and if needed connect to some core backends, etc.
Satish: As we move to 5G and start dealing with network slicing and complex applications, this becomes even more critical: having an intelligent orchestrator like Cloudify orchestrate the required VNFs and do the service function chaining in a very dynamic fashion. Combined with MCP, that will be extremely powerful.
What is your view on other open source orchestration platforms like ONAP, OSM?
Satish: See our blog comparing different NFV orchestration platforms. Also see SWOT analyses and scorecards in our Open NFV Executive Briefing Center.
What is the function of the end to end orchestrator?
Shay: When you’re going to have multiple edges and different types of edges, you’d like to have one easy, centralized way to manage all those edges. In addition to that, you need to run different operations on different edges, and there are different models to do this. You can have a master orchestrator that can talk to a local orchestrator, and just send commands, and the local orchestrator is a control point for the master orchestrator, but still you need the master orchestrator.
Another more advanced way to do it is to have an autonomous orchestrator, that the master only delegates work to, but when there is no connection to a master orchestrator, it will work on its own, and manage the lifecycle operations of the edge, including healing, scaling, etc., autonomously and independently. When there is no connection, it will run as a local orchestrator, and when the connection resumes, it can aggregate all the information and send it to the master orchestrator.
So you need to handle many edges, possibly hundreds or thousands of edges, and you need to do it in a very efficient way that is acceptable for the use case you are trying to orchestrate.
For the OpenStack edge deployment, what is the minimal footprint? A single node?
Satish: A single node is possible, but it is still a work in progress. Our initial goal for MCP Edge is to support a minimum of 3 – 6 nodes.
With respect to service design (say using TOSCA model), can we define a service having a mix of k8s pods and VM pods?
Nick: I would assume yes because the VM pods are treated as first-class containers, right?
Shay: Yes, definitely. Moreover, Cloudify can actually be the glue that can create a service chain between Kubernetes workloads, pods and VMs, as well as external services like databases and others. We implement the service broker interface, which provides a way for cloud-native Kubernetes services and pods to access external services as if they were internal native services. This is using the service broker API, and tomorrow you can bring the service into Kubernetes, and it will be transparent, because you implemented it in a cloud-native way. The service provider exposes a catalog, which can access an external service, for example one on Amazon that can run a database. That should be very easy.
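To give a feel for how such a mixed service might be modeled, here is a rough TOSCA blueprint sketch that places one ordinary container pod and one Virtlet VM pod in the same deployment. The plugin import, node type names, and workload names are illustrative and depend on the Cloudify Kubernetes plugin version in use; the node templates that configure the target cluster are omitted for brevity.

```yaml
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - plugin:cloudify-kubernetes-plugin        # assumed plugin alias

node_templates:
  container_pod:
    type: cloudify.kubernetes.resources.Pod  # illustrative type name
    properties:
      definition:
        apiVersion: v1
        kind: Pod
        metadata:
          name: web-frontend                 # hypothetical container workload
        spec:
          containers:
          - name: web
            image: nginx:stable

  vm_pod:
    type: cloudify.kubernetes.resources.Pod
    properties:
      definition:
        apiVersion: v1
        kind: Pod
        metadata:
          name: legacy-vnf                   # hypothetical VM-based VNF
          annotations:
            kubernetes.io/target-runtime: virtlet.cloud
        spec:
          containers:
          - name: vnf
            image: virtlet.cloud/example.com/images/vnf.qcow2   # placeholder image URL
```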
How is a new edge site provisioned/introduced? Is some automation possible by the Master of Masters?
Satish: Yes, provisioning of a new edge cloud and subsequent LCM will be handled by the Master of Masters in an automated way. The Master of Masters will have multiple edge cloud configurations and using those configurations (blueprints), it will be able to provision multiple edge clouds.
Would this become an alternative to OpenStack, which manages VMs today? If not, how would OpenStack be used with Edge cloud?
Satish: Depending on the use cases, an edge cloud may consist of any of the following:
Pure k8s cluster with Virtlet
Pure OpenStack cluster
A combination of OpenStack and k8s clusters
The overall architecture will depend on the use cases and edge applications to be supported.
Can the NFVO be part of the end-to-end orchestrator?
Satish: Yes
Is the application orchestration dynamic?
Satish: Yes, you can have it be dynamic based on inputs in the TOSCA blueprint.
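As a small sketch of what “dynamic based on inputs” can look like, a blueprint can expose deploy-time inputs and resolve them with get_input, so the same blueprint can target different edges or scale differently per deployment. The node type and property names below are defined only for this illustration.

```yaml
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://www.getcloudify.org/spec/cloudify/4.3/types.yaml

inputs:
  edge_site:
    description: Which edge cloud to target
    default: edge-west-1                # hypothetical site name
  instance_count:
    description: How many VNF instances to launch
    default: 2

node_types:
  example.nodes.EdgeVNF:                # hypothetical type defined for this sketch
    derived_from: cloudify.nodes.Root
    properties:
      site:
        type: string
      instances:
        type: integer

node_templates:
  vnf:
    type: example.nodes.EdgeVNF
    properties:
      site: { get_input: edge_site }    # resolved at deployment creation time
      instances: { get_input: instance_count }
```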
How do you ensure an End-to-end SLA for a critical application connecting between Edge clouds?
Satish: One way to do this is by creating a network slice with the required end-to-end SLA characteristics and launching the critical edge application in that slice.