There is a lot of talk around edge computing. What is it? What will it mean to the telco industry? Who else will benefit from it? There’s also a large amount of speculation about identifying the killer application that will spark massive scale deployment of edge computing resources.
In many ways, edge computing is just a logical extension of existing software-defined datacenter models. The primary goal is to provide access to compute, storage and networking resources in a standardised way, whilst abstracting the complexity of managing those resources away from applications. The key factor that is missing in many of these discussions, however, is a clear view of how we will be expected to deploy, manage and monitor these edge resources.
The key challenge here is that those resources need to be managed in a consistent and effective way in order to ensure that application developers and owners can rely on the infrastructure, and will be able to react to changes or issues in the infrastructure in a predictable way.
The value of cloud infrastructure software such as OpenStack is the provision of standardised APIs that developers can use to access resources, regardless of what those resources are or how they need to be managed.
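To make this concrete, here is a minimal sketch using the openstacksdk Python library. It assumes a cloud named edge-site-1 defined in clouds.yaml; the image, flavor and network names are purely illustrative.

```python
# A hedged sketch: provisioning a workload through the standardised
# OpenStack compute API. Cloud, image, flavor and network names are
# assumptions, not real values.
import openstack

conn = openstack.connect(cloud="edge-site-1")  # defined in clouds.yaml

image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("edge-net")

# The same call works whether this cloud is a core DC or a small edge site.
server = conn.compute.create_server(
    name="edge-workload",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```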
With the advent of technologies such as Kubernetes, the challenge of managing the infrastructure in no way lessens; we still need to be able to understand what resources we have available, control access to them, and manage their lifecycles.
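For instance, a short sketch with the official Kubernetes Python client can ask exactly those questions of a cluster; the kubeconfig context name here is a hypothetical edge cluster.

```python
# A hedged sketch: inspecting available resources and access controls with
# the kubernetes Python client. The context name is an assumption.
from kubernetes import client, config

config.load_kube_config(context="edge-site-1")
v1 = client.CoreV1Api()

# What resources do we have available on each node?
for node in v1.list_node().items:
    alloc = node.status.allocatable
    print(node.metadata.name, alloc["cpu"], alloc["memory"])

# Who is allowed to consume how much of them?
for quota in v1.list_resource_quota_for_all_namespaces().items:
    print(quota.metadata.namespace, quota.status.hard)
```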
In order to enable the future goal of providing distributed ubiquitous compute resources to all who need them, where they need them, and when they need them, we have to look deeper into what is required for an effective edge compute solution.
What is Edge Computing?
Finding a clear definition of edge computing can be challenging; there are many opinions on what constitutes the edge. Some definitions narrow the scope, claiming that the edge only includes devices that are required to support low-latency workloads, or that are the last computation point before the consumer, whilst others will include the consumer device or an IoT device, even if latency is not an issue.
As everyone appears to have a slightly different perspective on what edge computing entails, this discussion takes a broad interpretation of edge computing: it includes all compute devices that provide computing resources, are not located in core or regional data centers, and bring computing resources closer to the end user or to data collection devices.
For example, consider the following hierarchy:
There are a number of different levels, starting with core data centers, which are generally few in number but each contain a large number of nodes and workloads. These core data centers feed into (or are fed by, depending on the direction of traffic!) the regional data centers.
Regional data centers tend to be more numerous and more widely distributed than core data centers, but they are also smaller and consist of a smaller — though still significant — number of nodes and workloads.
From there we move down the line to edge compute locations; these locations are still clouds, consisting of a few to a few dozen nodes and hosting a few dozen workloads, and existing in potentially hundreds of thousands of locations, such as cell towers or branch offices.
These clouds serve the far edge layer, also known as “customer premise equipment”. These are single servers or routers that can exist in hundreds of thousands, or even millions, of locations, and serve a relatively small number of workloads. Those workloads are then accessed by individual consumer devices.
Finally, the consumer or deep edge layer is where the services provided by the other layers are consumed, and where data is collected and processed.
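To keep the terminology straight, here is a toy Python model of that hierarchy; the site names and node counts are illustrative orders of magnitude, not hard limits.

```python
# A toy model of the tiers described above; all names and counts are
# illustrative assumptions.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Tier(Enum):
    CORE = "core data center"                 # few locations, many nodes
    REGIONAL = "regional data center"         # more numerous, smaller
    EDGE = "edge compute location"            # cell towers, branch offices
    FAR_EDGE = "customer premise equipment"   # single servers or routers
    CONSUMER = "consumer / deep edge device"  # where services are consumed

@dataclass
class Site:
    name: str
    tier: Tier
    node_count: int
    upstream: Optional["Site"] = None  # traffic flows both ways on this link

# One illustrative chain from the deep edge back to the core.
core = Site("core-1", Tier.CORE, node_count=5000)
region = Site("region-west", Tier.REGIONAL, node_count=400, upstream=core)
tower = Site("cell-tower-0042", Tier.EDGE, node_count=6, upstream=region)
cpe = Site("branch-router-9", Tier.FAR_EDGE, node_count=1, upstream=tower)
```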
Edge Use Cases
There are a large number of potential use cases for edge computing, with more being identified all the time. Broadly speaking, we can divide edge use cases into third-party applications and telco operator applications.
Third-party applications are those that are more likely to be accessed by end users, such as providing wireless access points in a public stadium, connected cars, or, on the business end, connecting the enterprise to the RAN.
Operator applications, on the other hand, are more of an internal concern. They consist of applications such as geo-fencing of data, data reduction at the edge to enable more efficient analytics, or Mobile Core.
All of these applications, however, fall into the “low latency requirements” category. Other edge use cases that don’t involve latency might include a supermarket that hosts an edge cloud communicating with scanners customers can use to check out their groceries as they shop, or an Industrial IoT scenario in which hundreds or thousands of sensors feed information from different locations in a manufacturing plant to the plant’s local edge cloud, which then aggregates the data and sends it to the regional cloud.
Edge Essential Requirements
The delivery of any compute service has a number of requirements that need to be met. With edge computing, the same delivery of a massively distributed compute service takes all those requirements and compounds them, not only because of the scale, but also because access (both physically and via the network) may be restricted due to device/cloud location.
So taking this into account, what are the requirements for edge computing?
Security (isolation)
Effective isolation of workloads is critical to ensure not only that workloads will not interfere with each other’s resources, but also that they cannot access each other’s data in a multi-tenanted environment.
Clear access control and RBAC policies and systems are required to support appropriate separation of duties and to prevent unauthorised access by both good and bad actors.
Cryptographic identification and authentication of edge compute resources are also required.
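As a rough illustration of these points, here is a hedged Python sketch of an RBAC-style authorisation check combined with cryptographic identification of an edge node by certificate fingerprint; the roles, permissions and fingerprint table are placeholders.

```python
# A minimal sketch of tenant isolation checks: RBAC-style authorisation
# plus cryptographic identification of an edge node. The policy table and
# fingerprints are illustrative placeholders.
import hashlib

ROLE_PERMISSIONS = {
    "tenant-admin": {"create_workload", "read_own_telemetry"},
    "tenant-viewer": {"read_own_telemetry"},
}

# SHA-256 fingerprints of known node client certificates (values elided).
KNOWN_NODE_FINGERPRINTS = {
    "edge-node-17": "9f2c...",
}

def is_authorised(role: str, action: str) -> bool:
    """Separation of duties: a role may only perform its listed actions."""
    return action in ROLE_PERMISSIONS.get(role, set())

def identify_node(node_name: str, cert_der: bytes) -> bool:
    """Authenticate an edge node by comparing its certificate fingerprint."""
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    return KNOWN_NODE_FINGERPRINTS.get(node_name) == fingerprint
```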
Resource management
The system must be able to manage the physical and virtual resources delivered to consumers in a consistent way, with minimal input from administrators.
Operators must be able to manage all resources remotely, with no need for local hands.
Telemetry Data
The system must provide a clear understanding of resource availability and consumption, in a way that gives applications the data necessary to make programmatic decisions about application distribution and scaling. This requires:
Providing applications with data on inbound demand
Providing applications with geographic data that is relevant to application decisions
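One plausible shape for such telemetry, sketched with the prometheus_client Python library; the metric names and labels are assumptions, not an established schema.

```python
# A hedged sketch: exposing inbound demand and capacity, with geographic
# context attached as labels, for applications to scrape.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

# Inbound demand that applications can use to drive scaling decisions.
INBOUND_REQUESTS = Counter(
    "edge_inbound_requests", "Requests arriving at this edge site", ["site"])

# Capacity with geographic context attached as labels.
AVAILABLE_CPU = Gauge(
    "edge_available_cpu_cores", "Schedulable CPU cores at this edge site",
    ["site", "region"])

if __name__ == "__main__":
    start_http_server(9100)  # metrics served at http://localhost:9100/metrics
    while True:
        INBOUND_REQUESTS.labels(site="cell-tower-0042").inc()
        AVAILABLE_CPU.labels(site="cell-tower-0042", region="west").set(
            random.uniform(0, 8))  # stand-in for a real resource probe
        time.sleep(5)
```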
Operations
Low-impact infrastructure operations with zero application downtime are critical.
Low-touch or (preferably) zero-touch infrastructure operations tooling must be available.
An efficient edge system requires a very high degree of automation and self-healing capabilities.
The system must be self-contained in its operations, with minimal dependencies on remote systems that could be affected by low network bandwidth, latency or outages.
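One way to satisfy both the self-healing and the self-containment requirements is a reconciliation loop that caches desired state locally, so the site keeps converging even when the remote control plane is unreachable. A minimal sketch, with the fetch/observe/apply functions left as hypothetical stubs:

```python
# A hedged sketch of a self-contained reconciliation loop. The control
# plane calls are hypothetical stubs; the cache path is an assumption.
import json
import time

CACHE_PATH = "/var/lib/edge-agent/desired-state.json"

def fetch_desired_state() -> dict:
    """Would call the remote control plane; may raise on network failure."""
    raise NotImplementedError

def observe_actual_state() -> dict:
    """Would inspect local workloads; stubbed out for this sketch."""
    return {}

def apply(state: dict) -> None:
    """Would converge local workloads toward the desired state."""

def reconcile_forever(interval: float = 30.0) -> None:
    desired: dict = {}
    while True:
        try:
            desired = fetch_desired_state()
            with open(CACHE_PATH, "w") as f:
                json.dump(desired, f)        # refresh the local cache
        except Exception:
            try:
                with open(CACHE_PATH) as f:  # network down: fall back to cache
                    desired = json.load(f)
            except FileNotFoundError:
                pass                         # nothing cached yet; keep waiting
        actual = observe_actual_state()
        if desired and desired != actual:
            apply(desired)                   # self-heal toward desired state
        time.sleep(interval)
```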
Open Standards
A key feature of edge systems is the ability to rapidly deploy new and diverse workloads and integrate them with a number of different environments. Basing the solution on open standards allows for this flexibility and supports standardisation.
Open standards should be used in all areas that affect the deployment and management of workloads, enabling easy and rapid certification of workloads. Examples include:
a common standard for the abstraction of APIs, which simplifies development and deployment
standardised virtualisation or container engines
Stability and Predictability
Edge compute platforms need to behave predictably in different scenarios to ensure a consistent usage experience.
The stability of edge compute solutions is critical; this encompasses graceful recovery from errors, as well as being able to handle harsh environmental conditions with potentially unpredictable utilities and other external services.
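Graceful recovery often comes down to simple, disciplined patterns. For example, a minimal retry-with-backoff helper (the interface here is illustrative) keeps transient failures from cascading:

```python
# A hedged sketch of graceful recovery: retry a flaky external dependency
# (power and network at an edge site can be unpredictable) with exponential
# backoff and jitter rather than failing hard.
import random
import time

def call_with_backoff(fn, attempts: int = 5, base_delay: float = 0.5):
    """Call fn(); on failure, wait base_delay * 2**n plus jitter and retry."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error to the caller
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```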
Performance
Predictable and clearly advertised performance of Edge compute systems is critical for the effective and appropriate hosting of applications. For example, it should be clear whether the environment provides access to specialised hardware components such as SmartNICs and Network Accelerators.
The performance requirements for edge compute systems are driven by the application’s needs. For example, a gaming application may need lots of CPU and GPU power and very low latency network connections, but a data logger may be based on a low-power CPU and can trickle-feed the collected data over time.
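As one possible way to advertise such capabilities, Kubernetes node labels can expose hardware features for schedulers and operators to query. This sketch assumes labels in the node-feature-discovery style, which a given cluster may or may not use:

```python
# A hedged sketch: discovering specialised hardware (e.g. SmartNICs) via
# node labels. The label key follows the node-feature-discovery convention
# and is an assumption about the cluster's setup.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    labels = node.metadata.labels or {}
    if labels.get("feature.node.kubernetes.io/network-sriov.capable") == "true":
        print(f"{node.metadata.name}: SmartNIC/SR-IOV capable")
```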
Abstraction
Edge systems must provide a level of abstraction for infrastructure components in order to support effective application/workload portability over multiple platforms. Common standard APIs typically drive this portability.
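In code, such an abstraction might look like a common driver interface that workloads target instead of any one platform; the interface and drivers below are hypothetical.

```python
# A hedged sketch of an abstraction layer for workload portability. The
# interface and driver names are hypothetical, not an existing API.
from abc import ABC, abstractmethod

class ComputeDriver(ABC):
    """Common API that hides which platform runs underneath."""

    @abstractmethod
    def launch(self, image: str, cpu: int, memory_mb: int) -> str:
        """Start a workload and return its platform-specific ID."""

    @abstractmethod
    def terminate(self, workload_id: str) -> None:
        """Stop a workload by ID."""

class OpenStackDriver(ComputeDriver):
    def launch(self, image, cpu, memory_mb):
        ...  # would map to Nova server creation
    def terminate(self, workload_id):
        ...  # would map to Nova server deletion

class KubernetesDriver(ComputeDriver):
    def launch(self, image, cpu, memory_mb):
        ...  # would map to a Deployment with resource requests
    def terminate(self, workload_id):
        ...  # would delete the Deployment
```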
Sound familiar?
If you’re thinking that this sounds a lot like the theory behind cloud computing, you’re right. In many ways, “edge” is simply cloud computing taken a bit further out of the datacenter. The distinction certainly imposes new requirements, but the good news is that your cloud skills can be brought to bear to get you started.
If this seems overwhelming, don’t worry, we’re here for you! Please don’t hesitate to contact us and see how Mirantis can help you plan and execute your edge computing architecture.