PyTorch on Azure: Deep learning in the oil and gas industry

This blog post was co-authored by Jürgen Weichenberger, Chief Data Scientist, Accenture, and Mathew Salvaris, Senior Data Scientist, Microsoft.

Drilling for oil and gas is one of the most dangerous jobs on Earth. Workers are exposed to risks ranging from small equipment malfunctions to entire offshore rigs catching fire. Fortunately, the application of deep learning to predictive asset maintenance can help prevent natural and human-made catastrophes.

We have more information than ever on our equipment thanks to sensors and IoT devices, but we are still working on ways to process the data so it is valuable for preventing these catastrophic events. That’s where deep learning comes in. Data from multiple sources can be used to train a predictive model that helps oil and gas companies predict imminent disasters, enabling them to follow a proactive approach.

Using the PyTorch deep learning framework on Microsoft Azure, Accenture helped a major oil and gas company implement such a predictive asset maintenance solution. This solution will go a long way in protecting their staff and the environment.

What is predictive asset maintenance?

Predictive asset maintenance is a core element of the digital transformation of chemical plants. It is enabled by an abundance of cost-effective sensors, increased data processing and automation capabilities, and advances in predictive analytics. It involves converting information from both real-time and historical data into simple, accessible, and actionable insights, enabling the early detection and elimination of defects that would otherwise lead to malfunction. For example, by simply detecting an early defect in a seal that connects the pipes, we can prevent a potential failure that could result in a catastrophic collapse of the whole gas turbine.

Under the hood, predictive asset maintenance combines condition-based monitoring technologies, statistical process control, and equipment performance analysis so that data from disparate sources across the plant can be visualized clearly and intuitively. This allows operations and equipment to be monitored more closely, processes to be optimized and better controlled, and energy management to be improved.

It is worth noting that the predictive analytics at the heart of this process do not tell plant operators what will happen in the future with complete certainty. Instead, they forecast what is likely to happen with an acceptable level of reliability. They can also provide “what-if” scenarios and an assessment of risks and opportunities.

Figure 1 – Asset maintenance maturity matrix (Source: Accenture)

The challenge with oil and gas

Event prediction is one of the key elements of predictive asset maintenance. For most prediction problems there are enough examples of each pattern to create a model that identifies them. Unfortunately, in certain industries like oil and gas, where everything is geared towards avoiding failure, the sought-after examples of failure patterns are rare. This means that most standard modeling approaches either perform no better than experienced humans or fail to work at all.

Accenture’s solution with PyTorch and Azure

Although only a small number of failure examples exist, there is a wealth of time series and inspection data that can be leveraged.

Figure 2 – Approach for Predictive Maintenance (Source: Accenture)

After preparing the data in stage one, a two-phase deep learning solution was built with PyTorch in stage two. In phase one, a recurrent neural network (RNN) with a long short-term memory (LSTM) architecture was trained; the network architecture was inspired by Koprinkova-Hristova et al. 2011 and Aydin and Guldamlasioglu 2017. This RNN time series model forecasts important variables, such as the temperature of a critical seal. In phase two, these forecasts are fed into a classifier (a random forest) that identifies whether the variable is outside of the safe range; if so, the algorithm produces a ranking of potential causes that experts can examine and address. This effectively enables experts to address the root causes of potential disasters before they occur.
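To make the two phases concrete, here is a minimal, illustrative sketch of this pattern in PyTorch and scikit-learn. It is not the production code: the layer sizes, sensor count, window length, and the “unsafe” threshold are all made-up placeholders.

import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class SealTemperatureForecaster(nn.Module):
    """Illustrative LSTM forecaster for a sensor variable (e.g., seal temperature)."""
    def __init__(self, n_features, hidden_size=64, horizon=1):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden_size, horizon)

    def forward(self, x):             # x: (batch, time, n_features)
        out, _ = self.lstm(x)         # out: (batch, time, hidden)
        return self.head(out[:, -1])  # forecast from the last time step

# Phase one: train the forecaster on historical time series windows (sketch).
model = SealTemperatureForecaster(n_features=8)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.randn(32, 120, 8)           # 32 windows of 120 time steps, 8 sensors (synthetic)
y = torch.randn(32, 1)                # next-step target variable (synthetic)
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Phase two: a random forest flags forecasts outside the safe range and
# ranks potential causes via feature importances (sketch).
forecasts = model(x).detach().numpy()
features = x[:, -1, :].numpy()        # latest sensor readings as classifier features
labels = (forecasts.squeeze() > 0.5).astype(int)  # hypothetical "unsafe" threshold
clf = RandomForestClassifier(n_estimators=100).fit(features, labels)
ranked_causes = clf.feature_importances_.argsort()[::-1]

In the real solution the classifier would of course be trained on labeled historical events rather than a synthetic threshold.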

The following is a diagram of the system that was used for training and execution of the solution:  

Figure 3 – System Architecture

The architecture above was chosen to meet the customer’s requirement of maximum flexibility in modeling, training, and execution of complex machine learning workflows on Microsoft Azure. At the time of implementation, the services that fit these requirements were HDInsight and the Data Science Virtual Machine (DSVM). If the project were implemented today, Azure Machine Learning service would be used for training and inferencing, with HDInsight or Azure Databricks for data processing.

PyTorch was used because of its extreme flexibility in designing computational execution graphs: it is not bound to a static computation graph like other deep learning frameworks. Another important benefit of PyTorch is that standard Python control flow can be used, so the model can be different for every sample; for example, tree-shaped RNNs can be created without much effort. PyTorch also enables the use of Python debugging tools, so programs can be stopped at any point for inspection of variables, gradients, and more. This flexibility was very beneficial during training and tuning cycles.
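As a small illustration of what that flexibility looks like in practice, the toy model below changes its computation per sample using ordinary Python control flow. It is a hypothetical example, not code from the project.

import torch
import torch.nn as nn

class AdaptiveDepthRNN(nn.Module):
    """Toy model whose computation graph differs per sample:
    noisy sequences get an extra recurrent pass, decided with plain Python."""
    def __init__(self, n_features, hidden_size=32):
        super().__init__()
        self.cell = nn.GRUCell(n_features, hidden_size)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, seq):                       # seq: (time, n_features)
        h = torch.zeros(1, self.out.in_features)
        # Standard Python control flow: the graph is built on the fly,
        # so the number of cell applications can vary per sample.
        passes = 2 if seq.std() > 1.0 else 1
        for _ in range(passes):
            for t in range(seq.shape[0]):
                h = self.cell(seq[t].unsqueeze(0), h)
        # A plain Python breakpoint() here would let you inspect h and its gradients.
        return self.out(h)

model = AdaptiveDepthRNN(n_features=4)
print(model(0.1 * torch.randn(10, 4)))  # quiet sequence: one pass
print(model(3.0 * torch.randn(10, 4)))  # noisy sequence: two passes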

The optimized PyTorch solution reduced training time by over 20 percent compared to other deep learning frameworks, along with 12 percent faster inferencing. These improvements were crucial in the time-critical environment the team was working in. Please note that the version tested was PyTorch 0.3.

Overview of benefits of using PyTorch in this project:

Training time

Reduction in average training time by 22 percent using PyTorch on the outlined Azure architecture.

Debugging/bug fixing

The dynamic computational execution graph in combination with standard Python features reduced the overall development time by 10 percent.

Visualization

The direct integration with Power BI enabled high end-user acceptance from day one.

Experience using distributed training

The dynamic computational execution graph in combination with flow control allowed us to create a simple distributed training model and gain significant improvements in overall training time.
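For readers who want to see what such a distributed training loop can look like, below is a minimal sketch using today’s torch.distributed and DistributedDataParallel APIs. It is illustrative only and does not reproduce the team’s original PyTorch 0.3 setup; the model, data, and backend are placeholders, and it assumes the process group environment (MASTER_ADDR/MASTER_PORT, one process per worker) is provided by a launcher such as torchrun.

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def train(rank, world_size):
    # 'gloo' is a CPU-friendly backend; use 'nccl' for GPUs.
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = DDP(nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1)))
    dataset = TensorDataset(torch.randn(1024, 8), torch.randn(1024, 1))  # placeholder data
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for epoch in range(5):
        sampler.set_epoch(epoch)             # reshuffle the data shards each epoch
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()  # gradients are averaged across workers
            optimizer.step()
    dist.destroy_process_group()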

How did Accenture operationalize the final model?

Scalability and operationalization were key design considerations from day one of the project, as the customer wanted to scale out the prototype to several other assets across the fleet. As a result, all components within the system architecture were chosen with these criteria in mind. In addition, the customer wanted the ability to add more data sources using Azure Data Factory. Azure Machine Learning service and its model management capability were used to operationalize the final model. The following diagram illustrates the deployment workflow used.

Figure 4 – Deployment workflow

The deployment model was also integrated into a Continuous Integration/Continuous Delivery (CI/CD) workflow as depicted below.

Figure 5 – CI/CD workflow

PyTorch on Azure: Better together

The combination of Azure AI offerings with the capabilities of PyTorch proved to be a very efficient way to train and rapidly iterate on the deep learning architectures used for the project. These choices yielded a significant reduction in training time and increased productivity for data scientists.

Azure is committed to bringing enterprise-grade AI advances to developers using any language, any framework, and any development tool. Customers can easily integrate Azure AI offerings into any part of their machine learning lifecycles to productionize their projects at scale, without getting locked into any one tool or platform.
Source: Azure

Azure Stack IaaS – part one

This blog post was co-authored by Daniel Savage, Principal Program Manager, Azure Stack and Tiberiu Radu, Senior Program Manager, Azure Stack.

Azure Stack at its core is an Infrastructure-as-a-Service (IaaS) platform

When we discuss Azure Stack with our customers, they see the value in Azure Stack providing cloud-native capabilities to their datacenters. They see the opportunity to modernize their apps and address the unique solutions Azure Stack can deliver, but they often pause as they ponder where to begin. They wonder how to get value from the investments they have in apps currently running on virtual machines (VMs). They wonder, “Does Azure Stack help me here? What if I am not quite ready for Platform-as-a-Service?” These questions are difficult, but the answers become clearer when they understand that Azure Stack at its core is an IaaS platform.

Azure Stack allows customers to run their own instance of Azure in their datacenter. Organizations pick Azure Stack as part of their cloud strategy because it helps them handle situations where the public cloud won’t work for them. The three most common reasons to use Azure Stack are poor network connectivity to the public cloud, regulatory or contractual requirements, and backend systems that cannot be exposed to the Internet.

Azure Stack has created a lot of excitement around new hybrid application patterns, consistent Azure APIs to simplify DevOps practices and processes, the extensive Azure ecosystem available through the Marketplace, and the option to run Azure PaaS services locally, such as App Services and IoT Hub. Underlying all of these are some exciting IaaS capabilities, and we are excited to be kicking off a new blog series to show them off.

Welcome to the Azure Stack IaaS blog series!

To learn more, please see the below resources:

Azure Stack use cases
Azure IaaS overview

IaaS is more than virtual machines

People often think of IaaS as simply virtual machines, but IaaS is more. When you deploy a VM in Azure or Azure Stack, the machine comes with a software defined network including DNS, public IPs, firewall rules (also called network security groups), and many other capabilities. The VM deployment also creates disks for your VMs on software defined storage running in Blob Storage. In the Azure Stack portal image, you can see how this full software defined infrastructure is displayed after you have deployed a VM:

To learn more, please see below for product overviews:

Azure Virtual Machines
Azure Virtual Networks
Azure Managed Disks
Azure Storage
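If you prefer code to the portal, the following sketch lists everything a single VM deployment leaves behind in its resource group using the Azure Python SDK. The subscription ID and resource group name are placeholders, and the snippet targets public Azure; against Azure Stack you would point the client at your Azure Stack ARM endpoint with a compatible API profile.

# pip install azure-identity azure-mgmt-resource
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, "<subscription-id>")  # placeholder

# List what a VM deployment created in its resource group: typically the VM
# itself plus a NIC, virtual network, public IP, network security group,
# and managed disks.
for resource in client.resources.list_by_resource_group("my-vm-rg"):  # placeholder name
    print(resource.type, resource.name)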

IaaS is the foundation for PaaS Services

Did you know that Azure PaaS services are powered by IaaS VMs behind the scenes? As a user you don’t see these VMs, but they deliver capabilities like Event Hubs or Azure Kubernetes Service (AKS). This same Azure IaaS is the foundation of PaaS in Azure Stack: not only can you use it to deliver your applications, but Azure PaaS services will also use IaaS VMs to deliver solutions on Azure Stack.

Take Event Hubs, currently in private preview, as an example. An Azure Stack administrator downloads the Event Hubs resource provider from the Marketplace and installs it. Installation creates a new admin subscription and a set of IaaS resources. The administrator sees things like virtual networks, DNS zones, and virtual machine scale sets in the administration portal:

However, when one of your developers deploys their Event Hub in Azure Stack, they don’t see the behind-the-scenes IaaS VMs and resources in their subscription; they just see the Event Hub:

Modernize your apps through operations

Often people think that application modernization involves writing or changing application code, or that modernization means rearchitecting the entire application. In most cases, the journey starts with small steps. When you run your VMs in Azure or Azure Stack, you can modernize your operations.

In addition to the underlying infrastructure, Azure and Azure Stack offer a full set of integrated and intelligent services. These services support management of your VMs, provide self-service capabilities, enhance deployment, and enable infrastructure-as-code. With Azure Stack, you empower your teams.

Over the next couple of blog posts we will go into more detail about these areas. Here is a chart of the cloud capabilities you can utilize to modernize your IaaS VM operations:

What’s next in this blog series

We hope you come back to read future posts in this blog series. Here are some of our planned upcoming topics:

Fundamentals of IaaS
Start with what you already have
Do it yourself
Pay for what you use
It takes a team
If you do it often, automate it
Protect your stuff
Build on the success of others
Journey to PaaS

Source: Azure

Azure IoT Edge runtime available for Ubuntu virtual machines

Azure IoT Edge is a fully managed service that allows you to deploy Azure and third-party services—edge modules—to run directly on IoT devices, whether they are cloud-connected or offline. These edge modules are container-based and offer functionality ranging from connectivity to analytics to storage—allowing you to deploy modules entirely from the Azure portal without writing any code. You can browse existing edge modules in the Azure Marketplace.

Today, we’re excited to offer the open-source Azure IoT Edge runtime preinstalled on Ubuntu virtual machines to make it even easier to get started, simulate an edge device, and scale out your automated testing.

Why use virtual machines?

Azure IoT Edge deployments are built to scale, so you can deploy globally to any number of devices and simulate the workload with virtual devices, an important step in verifying that your solution is ready for mass deployment. The easiest way to do this is to create simulated devices with Azure virtual machines (VMs) running the Azure IoT Edge runtime, scaling your testing from the earliest stages of development, even before you have production hardware.

Azure VMs are:
• Scalable/automatable: deploy as many as you need
• Persistent: cloud-managed, rather than locally managed
• Flexible: any operating system and elastic resources
• Easy to use: deploy with simple command line instructions or a template

Azure IoT Edge on Ubuntu VM

On first boot, the Azure IoT Edge on Ubuntu VM preinstalls the latest version of the Azure IoT Edge runtime, so you will always have the newest features and fixes. It also includes a script that sets the connection string and then restarts the runtime; the script can be triggered remotely through the Azure VM portal or the Azure command line, allowing you to configure and connect the IoT Edge device without starting a secure shell (SSH) or remote desktop session. The script waits to set the connection string until the IoT Edge client is fully installed, so you don’t have to build that into your automation.
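If you want to fold that step into your own automation, here is one way it could be scripted from Python by shelling out to the Azure CLI’s run-command feature. The resource group, VM name, and connection string are placeholders, and the sketch assumes the Azure CLI is installed and logged in.

import subprocess

def set_edge_connection_string(resource_group, vm_name, connection_string):
    """Remotely run the preinstalled configedge.sh script on the VM via the
    Azure CLI 'run command' feature (no SSH session required)."""
    script = f'/etc/iotedge/configedge.sh "{connection_string}"'
    result = subprocess.run(
        [
            "az", "vm", "run-command", "invoke",
            "--resource-group", resource_group,
            "--name", vm_name,
            "--command-id", "RunShellScript",
            "--scripts", script,
        ],
        check=True, capture_output=True, text=True,
    )
    return result.stdout

# Example with placeholder names and a placeholder connection string:
# print(set_edge_connection_string("my-rg", "my-edge-vm",
#       "HostName=...;DeviceId=...;SharedAccessKey=..."))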

The initial offering is based on Ubuntu Server 16.04 LTS, but other operating systems and versions will be added based on user feedback. We’d love to hear your thoughts in the comments.

Getting started

You can deploy the Azure IoT Edge on Ubuntu VM through the Azure Marketplace, the Azure portal, or the Azure command line. Let me show you how to use the Azure Marketplace and the portal.

Azure Marketplace

The quickest way to set up a single instance is to use the Azure Marketplace:

1.  Navigate to the Marketplace with our short link or by searching “Azure IoT Edge on Ubuntu” on the Azure Marketplace.

2.  Select “GET IT NOW” and then “Continue” on the next dialog.

3.  Once in the Azure Portal, select “Create” and follow the wizard to deploy the VM.

If it’s your first time trying out a VM, it’s easiest to use a password and enable SSH in the public inbound ports menu.
If you have a resource-intensive workload, you should upgrade the virtual machine size by adding more CPUs and/or memory.

4.  Once the virtual machine is deployed, configure it to connect to your IoT Hub by:

Copy your device connection string from the IoT Edge device created in your IoT Hub (you can follow the “Register a new Azure IoT Edge device from the Azure portal” how-to guide if you aren’t familiar with this process)
Select your newly created virtual machine resource in the Azure portal and open the “Run command” option

Select the “RunShellScript” option

Execute the script below via the command window, substituting your device connection string:
/etc/iotedge/configedge.sh “{device_connection_string}”
Select “Run”
Wait a few moments; the screen should then display a success message indicating the connection string was set successfully.

5.  Voila! Your IoT Edge on Ubuntu VM is now connected to your IoT Hub.

Azure portal

If you’re already working in the Azure portal, you can search for “Azure IoT Edge” and select “Ubuntu Server 16.04 LTS + Azure IoT Edge runtime” to begin the VM creation workflow. From there, complete steps 3-5 in the Marketplace instructions above.

 

If you’d like to learn how you can deploy these virtual machines at scale, check out the “Deploy from Azure CLI” section in the Run Azure IoT Edge on Ubuntu Virtual Machines article.

Now that you have created an IoT Edge device with your virtual machine and connected it to your IoT Hub, you can deploy modules to it like any other IoT Edge device. For example, if you go to the IoT Edge Module Marketplace and select the “Simulated Temperature Sensor,” you can deploy this module to the new device and see data flowing in just a few clicks! Next, try deploying your own workloads to the virtual machine and let us know how we can further simplify your IoT Edge testing experience in the comments section below or on UserVoice.

Get started with Azure IoT Edge on Ubuntu virtual machines today!
Source: Azure

What’s the Difference Between OpenShift and Kubernetes?

Over on the Red Hat Blog, Brian “redbeard” Harrington has laid out an excellent new post explaining just how Kubernetes, Red Hat OpenShift and OKD all relate to one another. From his post: At CoreOS we considered Kubernetes to be the “kernel” of distributed systems. We recognized that a well designed job scheduler, operating across […]
The post What’s the Difference Between OpenShift and Kubernetes? appeared first on Red Hat OpenShift Blog.
Source: OpenShift

A little light reading: the latest on technology from around the Google-verse

As we collectively dive into 2019, we’ve already come across some great cloud-related reads from around the broader Google world. Here are a few stories to help you stay informed—and get inspired—about interesting technologies, projects and initiatives on everything from application development and Knative to renewable energy and AI research.

Brighten up your day with renewables news

A 40,000-strong solar panel farm in Taiwan is now part of our plan to meet our renewable energy goals. This is our first renewable energy project in Asia, and we’ll purchase 100% of the output of a 10-megawatt solar array. Located about 100 kilometers away from our data center on the west side of Taiwan, solar panels will be mounted several feet in the sky around commercial fishing ponds. Take a peek at the solar farm site here.

See Cloud Functions through Firebase eyes

Here’s a look, with plenty of visuals, at how one mobile developer uses Firebase for app dev, along with Cloud Functions as a back end. Since Firebase is a Google product, it integrates with other Google products, so you can access Cloud Functions from within Firebase’s console, or vice versa. You may often switch between the consoles during development, as well as writing and deploying code via either the Firebase CLI or Cloud CLI (known as gcloud). Read the nitty-gritty details here.

Go back to school at the library

Grow with Google is a digital skills training and education program for students, teachers, job seekers, startups and others. It’s spreading its wings with a big push to bring in-person Grow with Google workshops to libraries in every U.S. state in 2019. The workshops and tools are all free, in true library spirit, and there’s also room for creative new ideas through a grant to the American Library Association. Check out the workshop list.

See what researchers accomplished in 2018

The accomplishments of Google researchers last year are inspiring, from creating assistive techniques in email to earthquake aftershock prediction. It’s worth scrolling down for some great details about how quantum computing is developing as well as how computational photography powers Night Sight in Pixel phones. Do your own investigation into this post.

Try a little light listening: All about Knative

Though this podcast doesn’t exactly count as “reading,” it’s a great primer for understanding Knative, which simplifies Kubernetes for developing serverless apps. Knative came out of the idea that developers don’t necessarily need to see every detail of Kubernetes tools to use it effectively. Knative is a higher level of abstraction that’s focused on making the Kubernetes experience easier for developers, and the podcast covers features like eventing and scale-to-zero capabilities. Plus, you’ll learn how to pronounce “Knative.” Hear from the developers on Knative details. And join the Knative community on GitHub.

That’s a wrap for January! What have you been reading lately? Tell us your recommendations here.
Source: Google Cloud Platform

Unified Edge Cloud Infrastructure for PNFs, VNFs, Mobile Edge — Webinar Q&A

One of Mirantis’ most popular webinars in 2018 was one we presented with Cloudify as part of our launch of MCP Edge, a version of Mirantis Cloud Platform software tuned specifically for edge cloud infrastructure. In case you missed the webinar, you can watch the recording and view the Q&A below.
Is the low latency characteristic of the edge cloud mainly a function of the cloud being close to the user?
Satish Salagame (Mirantis): The user’s proximity to the edge and avoiding multiple network hops is certainly a key component. However, the edge infrastructure design should also ensure that unnecessary delays are not introduced, especially in the datapath. This is where EPA (Enhanced Platform Awareness) features like NUMA-aware scheduling, CPU pinning, and huge pages all help. Data plane acceleration techniques such as SR-IOV and DPDK also help accelerate the data plane. This is why the edge cloud infrastructure has a lot of commonality with NFVI.
Shay Naeh (Cloudify): There are many use cases that require low latency, and the central cloud as we see it today is going to be broken into smaller edge clouds for use cases like connected cars and augmented reality, which require latency of less than 20ms. But latency is only one reason for the edge. The second reason is that you don’t want to transfer all the enormous data points to the central cloud; I call it a data tsunami of information from IoT, connected cars, etc.
Satish: So you want to process everything locally, aggregate it, and send it to the central cloud just for learning, which then propagates the learned information from edge to edge. Let’s say one edge encounters a special use case; you can teach the other edges about it, and they will be informed, even though the use case was learned in another edge. So the two main reasons are the new application use cases that require low latency and the enormous number of data points that will be available with 5G, IoT, and new scenarios.
Does Virtlet used in Mirantis Cloud Platform Edge solve all the problems associated with VNFs?
Satish: Virtlet is certainly one critical building block in solving some of the VNF problems we talked about. It allows a VM-based VNF to run unmodified in a k8s environment. However, it doesn’t solve all the problems. For example, if we have a complex VNF with multiple components, each running as a separate VM, and a proprietary VNFM designed for OpenStack or some other VIM, it takes some effort to adapt this VNF to the k8s/Virtlet environment. However, there will be many use cases where Virtlet can be used to design a very efficient, small-footprint k8s edge cloud. Also, it provides a great transition path as more and more VNFs become containerized and cloud-native.
How does Virtlet compare with Kubevirt?
Satish: See our blog on the topic.
How does the MCP Master of Masters work with Cloudify?
Satish: The MCP Master of Masters is focused on the deployment and lifecycle management of infrastructure. The key differentiation here is that the MCP Master of Masters is focused on infrastructure orchestration and infrastructure management, whereas Cloudify is more focused on workload orchestration. In the edge cloud case, that includes edge applications and VNFs. That’s the fundamental difference between the two, and working together, they complement each other and make a powerful edge stack.
Shay: It’s not only VNFs, it can be any distributed application that you would like to run, and you can deploy it on multiple edges and manage it using Cloudify. The MCP Master of Masters will provide the infrastructure, and Cloudify will run on top of it and provision the workloads on the edges.
Satish: Obviously the MCP Master of Masters will have to communicate with Cloudify in terms of providing inventory information to the orchestrator and providing profile information for each edge cloud being managed by MCP, so that the orchestrator has all the required information to launch the edge applications and VNFs appropriately in the correct edge environment.
What is the business use case for abstracting away environments with Cloudify?
Ilan Adler (Cloudify): The use cases are reducing transformation cost, reusing existing investments and components (software and hardware) to enable native and Edge, and using a Hybrid Stack to allow a smoother transition to Cloud Native Edge by allowing integration of the existing network services with new cloud native edge management based on Kubernetes.
How is this solution different from an access/core cloud management solution for a telco?
Satish: The traditional access/aggregation telco networks focused on carrying the traffic to the core for processing. However, with Edge computing, there are two important aspects:

The edge clouds, which are close to the user, process data at the edge itself
Backhauling the data to the core cloud is avoided

Both are critical as we move to 5G.
Have you considered using a lightweight (small footprint) fast containerized VM approach like Kata Containers? The benefits are VMs with the speed of containers, that act and look like a container in K8S.
Satish: We briefly looked at Kata Containers. Our focus was on key networking capabilities and the ability to handle VNF workloads that need to run as VMs. Based on our research we found Virtlet to be the best candidate for our needs.
What’s the procedure to import a VM into a Virtlet?
Nick Chase (Mirantis): Virtlet creates VM pods that run regular qcow2 images, so the first step is to create a qcow2 image for your VM. Next, host it at an HTTPS URL, then create a pod manifest just as you would for a Docker container, specifying that the pod should run on a machine that has Virtlet installed. Also, the image URI has a virtlet.cloud prefix indicating that it’s a VM pod. Watch a demo of MCP Edge with Virtlet.
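As a rough illustration of that answer, the sketch below creates a VM pod with the Kubernetes Python client. The image URL is a placeholder, and the annotation and nodeSelector keys follow common Virtlet examples, so check them against your Virtlet version.

from kubernetes import client, config

config.load_kube_config()

# Illustrative VM-pod manifest: the qcow2 image is referenced through the
# virtlet.cloud prefix, and the pod is steered to a node running Virtlet.
# The annotation and nodeSelector keys below are assumptions based on
# typical Virtlet examples and may differ in your deployment.
vm_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "cirros-vm",
        "annotations": {"kubernetes.io/target-runtime": "virtlet.cloud"},
    },
    "spec": {
        "nodeSelector": {"extraRuntime": "virtlet"},
        "containers": [
            {
                "name": "cirros-vm",
                # Hypothetical HTTPS location of the qcow2 image, minus the scheme:
                "image": "virtlet.cloud/example.com/images/cirros.qcow2",
            }
        ],
    },
}

client.CoreV1Api().create_namespaced_pod(namespace="default", body=vm_pod)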
Regarding the networking part, do you still use OvS or proceed with the SR-IOV since it supports interworking with Calico (as of the new version of MCP)?
Satish: In the architecture we showed today, we are not using OvS. It’s a pure Kubernetes cluster with CNI-Genie, which allows us to use multiple CNIs; CNI-SRIOV for data plane acceleration; and Calico or Flannel. Our default is Calico for the networking.
From your experience in real-world scenarios, is the placement, migration (based on agreed-on SLA and user mobility), and replication of VNFs a challenging task? If yes, Why? Which VNF type is more challenging?
Satish: Yes, these are all challenging tasks, especially with complex VNFs that:

Contain multiple VNF components (VMs)
Require multiple tenant networks (Control, Management, Data planes)
Have proprietary VNF managers
Require complex on-boarding mechanisms.

Does the Cloudify entity behave as a NFVO? or an OSS/BSS?
Shay: Cloudify can also work as a NFVO, VNFm, and Service Orchestrator. In essence it’s all a function of what blueprints you choose to utilize. Cloudify is not an OSS/BSS system.
Does the Service Orchestrator include NFVO?
Shay: Yes
In “Edge Computing Orchestration” slide, there is a red arrow pointing to the public cloud. What type of things is it orchestrating in a public cloud?
Satish: It could orchestrate pretty much everything in the public cloud as well: applications, networking, managed services, infrastructure, etc.
SO and e2e orchestrator are the same?
Satish: Yes
In the ETSI model, is Mirantis operating as the NFVi and ViM? And Cloudify acting as the VNFM and NFVO?
Shay: Yes. Mirantis provides the infrastructure and the capability to run workloads on top of it. Cloudify manages the lifecycle operations of each one of the VNFs (this is the role of the VNFM, or VNF Manager), and it also creates the workloads and the service chaining between the VNFs. This translates into a service, and stitching together multiple capabilities to provide a service is the responsibility of the NFVO. This service can be complex, span multiple edges and multiple domains, and if needed connect to some core backends, etc.
Satish: As we move to 5G and we start dealing with network slicing and complex applications, this becomes even more critical, having an intelligent orchestrator like Cloudify orchestrating the required VNFs and doing the service function chaining and doing it in a very dynamic fashion. That will be an extremely powerful thing to combine with MCP.
What is your view on other open source orchestration platforms like ONAP, OSM?
Satish: See our blog comparing different NFV orchestration platforms. Also see SWOT analyses and scorecards in our Open NFV Executive Briefing Center.
What is the function of the end to end orchestrator?
Shay: When you’re going to have multiple edges and different types of edges, you’d like to have one easy, centralized way to manage all those edges. In addition to that, you need to run different operations on different edges, and there are different models to do this. You can have a master orchestrator that can talk to a local orchestrator, and just send commands, and the local orchestrator is a control point for the master orchestrator, but still you need the master orchestrator.
Another more advanced way to do it is to have an autonomous orchestrator, that the master only delegates work to, but when there is no connection to a master orchestrator, it will work on its own, and manage the lifecycle operations of the edge, including healing, scaling, etc., autonomously and independently. When there is no connection, it will run as a local orchestrator, and when the connection resumes, it can aggregate all the information and send it to the master orchestrator.
So you need to handle many edges, possibly hundreds or thousands of edges, and you need to do it in a very efficient way that is acceptable by the use case that you are trying to orchestrate.
For the OpenStack edge deployment, what is the minimal footprint? A single node?
Satish: A single node is possible, but it is still a work in progress. Our initial goal for MCP Edge is to support a minimum of 3 – 6 nodes.
With respect to service design (say using TOSCA model), can we define a service having a mix of k8s pods and VM pods?
Nick: I would assume yes because the VM pods are treated as first-class containers, right?
Shay: Yes, definitely. Moreover, Cloudify can actually be the glue that can create a service chain between Kubernetes workloads, pods and VMs, as well as external services like databases and others. We implement the service broker interface, which provides a way for cloud-native Kubernetes services and pods to access external services as if they were internal native services. This is using the service broker API, and tomorrow you can bring the service into Kubernetes, and it will be transparent, because you implemented it in a cloud-native way. The service provider exposes a catalog, which can access an external service, for example one on Amazon that can run a database. That should be very easy.
How is a new edge site provisioned/introduced? Is some automation possible by the Master of Masters?
Satish: Yes, provisioning of a new edge cloud and subsequent LCM will be handled by the Master of Masters in an automated way. The Master of Masters will have multiple edge cloud configurations and using those configurations (blueprints), it will be able to provision multiple edge clouds.
Would this become an alternative to OpenStack, which manages VMs today? If not, how would OpenStack be used with Edge cloud?
Satish: Depending on the use cases, an edge cloud may consist of any of the following:

Pure k8s cluster with Virtlet
Pure OpenStack cluster
Combination of OpenStack + k8s clusters

The overall architecture will depend on the use cases and edge applications to be supported.
NFVO can be part of the end to end orchestrator?
Satish: Yes
Is the application orchestration dynamic?
Satish: Yes, you can have it be dynamic based on inputs in the TOSCA blueprint.
How do you ensure an End-to-end SLA for a critical application connecting between Edge clouds?
Satish: One way to do this is by creating a network slice with the required end-to-end SLA characteristics and launch the critical edge application in that slice.
The post Unified Edge Cloud Infrastructure for PNFs, VNFs, Mobile Edge — Webinar Q&A appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis