
KubeEdge and Its Role in Multi-Access Edge Computing

KubeEdge, a Cloud Native Computing Foundation (CNCF) sandbox project, is designed to extend Kubernetes from cloud to edge.
Jun 19th, 2020 7:41am by Anni Lai

Futurewei sponsored this post.

Anni Lai
Anni leads Open Source Operations and Marketing for Futurewei’s Open Source projects — covering Kubernetes, Cloud Native, Cloud-Edge, OpenStack, Open Storage, and AI/Deep Learning. She previously served on the OpenStack Foundation board, and is currently serving on the CNCF and OCI boards.

KubeEdge, a Cloud Native Computing Foundation (CNCF) sandbox project, is designed to extend Kubernetes from cloud to edge. It provides fundamental infrastructure support for enabling the deployment and orchestration of cloud native services on edge nodes, and metadata synchronization with the cloud.

KubeEdge aims to address three major challenges in edge computing:

  • Unreliable network connectivity between cloud and edge.
  • Resource constraints on edge nodes.
  • Scalability challenges of highly distributed edge architectures.

KubeEdge has a control plane on the cloud side and worker nodes on the edge side, so native container applications can be orchestrated from cloud to edge. Cloud and edge nodes are loosely coupled: if the network connection between them fails, the agent at the edge can autonomously manage services and IoT devices locally. Once connectivity between cloud and edge is restored, metadata is re-synchronized so that no state is lost. KubeEdge 1.2 included additional network enhancements to ensure more reliable messaging and data transmission between cloud and edge.
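The loose coupling described above can be pictured as a small state machine: the edge agent applies changes locally, queues metadata while the cloud is unreachable, and flushes the queue on reconnect. A toy Go sketch of that pattern follows; the types and method names are illustrative and are not KubeEdge APIs:

```go
package main

import "fmt"

// EdgeAgent caches applied state locally and queues metadata
// updates while the cloud connection is down.
type EdgeAgent struct {
	Connected bool
	local     map[string]string // state applied at the edge
	pending   []string          // metadata waiting to sync to cloud
	synced    []string          // metadata the cloud has received
}

func NewEdgeAgent() *EdgeAgent {
	return &EdgeAgent{local: map[string]string{}}
}

// Apply updates local state immediately; syncing to the cloud is
// deferred if the agent is offline.
func (a *EdgeAgent) Apply(key, value string) {
	a.local[key] = value
	msg := key + "=" + value
	if a.Connected {
		a.synced = append(a.synced, msg)
	} else {
		a.pending = append(a.pending, msg)
	}
}

// Reconnect flushes queued metadata once the cloud is reachable again,
// returning how many updates were re-synchronized.
func (a *EdgeAgent) Reconnect() int {
	a.Connected = true
	n := len(a.pending)
	a.synced = append(a.synced, a.pending...)
	a.pending = nil
	return n
}

func main() {
	agent := NewEdgeAgent() // starts disconnected
	agent.Apply("sensor-app/replicas", "2")
	agent.Apply("camera-app/image", "v1.3")
	fmt.Println("queued while offline:", len(agent.pending)) // 2
	fmt.Println("resynced on reconnect:", agent.Reconnect()) // 2
}
```

The point of the sketch is only the ordering guarantee: local management continues uninterrupted, and metadata catches up when the link returns.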

The Same Kubernetes User Experience at Both Cloud and Edge

KubeEdge was built based on Kubernetes and follows the same open and extensible architecture:

  • Modularized computing platform at the edge. Beehive, a messaging framework based on Go channels, handles communication between KubeEdge modules. EdgeMesh provides a service mesh at the edge, enabling communication between services running on different pods, nodes, and locations.
  • KubeEdge has integrated with the Kubernetes CRI, CSI, and CNI interfaces, connecting to container runtime, storage, and network resources. In addition, KubeEdge is open to integration with other CNCF projects, such as Envoy, Prometheus, and etcd.
  • KubeEdge 1.3, released in May 2020, added features that improve pod logging, monitoring, and management, as well as the maintainability of edge nodes from the cloud.
  • One of the key challenges KubeEdge addresses is managing edge nodes that are geographically dispersed: it enables centralized management of remote edge nodes and the applications running on them.
  • Moving forward, the KubeEdge project team plans new features such as edge-to-edge communication and a data analytics framework at the edge.
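Beehive's module-to-module messaging is built on Go channels, as noted above. A minimal sketch of that pattern follows; the `Hub` type and its methods are invented for illustration and are not the Beehive API:

```go
package main

import "fmt"

// Message is routed between modules, with Group identifying the
// category of traffic (e.g. metadata, device twin updates).
type Message struct {
	Group   string
	Content string
}

// Hub is a toy stand-in for a Beehive-style messaging framework:
// each module registers a channel and receives messages sent to it.
type Hub struct {
	modules map[string]chan Message
}

func NewHub() *Hub { return &Hub{modules: map[string]chan Message{}} }

// Register creates a buffered channel for a named module.
func (h *Hub) Register(name string) chan Message {
	ch := make(chan Message, 8)
	h.modules[name] = ch
	return ch
}

// Send routes a message to the named module's channel.
func (h *Hub) Send(module string, m Message) {
	if ch, ok := h.modules[module]; ok {
		ch <- m
	}
}

func main() {
	hub := NewHub()
	// "edged" plays the role of the module managing pods at the edge.
	edged := hub.Register("edged")
	hub.Send("edged", Message{Group: "meta", Content: "pod/nginx added"})
	m := <-edged
	fmt.Println(m.Group, m.Content) // meta pod/nginx added
}
```

Channel-based in-process messaging like this is what keeps the edge agent's footprint small: modules exchange messages without any network stack between them.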

Extensible Infrastructure for IoT and Edge

Based on Kubernetes, KubeEdge is highly extensible. The current agent memory footprint on the edge is about 10MB at runtime. The edge hardware can be as small as a Raspberry Pi, or as large as a multicore server or a cluster. KubeEdge uses the Eclipse Mosquitto message broker, which implements the MQTT protocol, making it suitable for IoT messaging (such as with low-power sensors) or mobile devices such as phones, embedded computers, or microcontrollers.
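MQTT, the protocol Mosquitto implements, routes messages by hierarchical topic, and subscriptions may use the `+` (single-level) and `#` (multi-level) wildcards. A compact Go implementation of that matching rule, shown here only to illustrate the protocol's topic model (it glosses over corner cases such as `$SYS` topics):

```go
package main

import (
	"fmt"
	"strings"
)

// MatchTopic reports whether an MQTT topic matches a subscription
// filter, honoring the "+" single-level and "#" multi-level wildcards.
func MatchTopic(filter, topic string) bool {
	f := strings.Split(filter, "/")
	t := strings.Split(topic, "/")
	for i, part := range f {
		if part == "#" {
			return true // "#" matches this level and everything below
		}
		if i >= len(t) {
			return false // topic is shorter than the filter
		}
		if part != "+" && part != t[i] {
			return false // literal level mismatch
		}
	}
	return len(f) == len(t)
}

func main() {
	fmt.Println(MatchTopic("sensors/+/temperature", "sensors/rpi-42/temperature")) // true
	fmt.Println(MatchTopic("sensors/#", "sensors/rpi-42/humidity"))                // true
	fmt.Println(MatchTopic("sensors/+/temperature", "sensors/rpi-42/humidity"))    // false
}
```

Topic hierarchies like `sensors/<device>/<metric>` are what let a single broker fan out traffic from thousands of low-power sensors to exactly the subscribers that care.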

MEC

Multi-access Edge Computing (MEC), previously known as Mobile Edge Computing, is defined by the European Telecommunications Standards Institute (ETSI). It is a network architecture concept that enables cloud computing and IT capabilities to run at the edge of the network. Applications or services running on edge nodes, which are closer to end users than the cloud, enjoy lower latency and an enhanced end-user experience. There is a plethora of potential vertical and horizontal use cases for MEC: autonomous vehicles (AV), augmented reality (AR) and virtual reality (VR), gaming, and applications enabled by artificial intelligence (AI), machine learning (ML), and deep learning (DL), such as autonomous navigation, remote monitoring using natural language processing (NLP) or facial recognition, video analysis, and more.

MEC provides real-time, low-latency access to the Radio Access Network (RAN), which presents a great opportunity for telcos to open their network and regional data center resources to a new ecosystem and value chain. In the last couple of years, hyperscale cloud providers have been moving beyond their public clouds by extending their cloud capabilities and services to the edge (via gateways and on-prem servers/data center edge) or even the device edge (constrained device edge or smart device edge running IoT devices with some application logic). Examples include Microsoft Azure IoT Edge and AWS IoT Greengrass. More recently, we're seeing hyperscale cloud providers and telcos forming partnerships to address the opportunities coming from MEC: for example, Google and AT&T, Microsoft and AT&T, and AWS and Verizon.

While all these hyperscale, proprietary public clouds are taking advantage of the opportunities that MEC has to offer, KubeEdge offers a viable open source edge solution to enable MEC.

MEC is a Good Use Case for KubeEdge

Akraino, a Linux Foundation Edge project, was formed to create and publish well-tested open source telco edge stacks for a broad variety of use cases, such as 5G, AI, edge IaaS/PaaS, and IoT, addressing both provider and enterprise edge domains. This is accomplished via community-contributed and tested blueprints, followed by the release of API white papers with tested blueprints by the Akraino API sub-committee for industry adoption.

In April 2020, a working team of committers from Arm, China Mobile, Futurewei, and Signalogic proposed the “Akraino KubeEdge Edge Service Family (Type 1: ML Inference Offloading)” blueprint, which showcases an end-to-end ML inference offloading solution stack utilizing KubeEdge in a MEC environment (supporting both Intel x86 and Arm architectures). It was successfully accepted as an Akraino Incubation project.

The Use Case

The Akraino KubeEdge Edge Service can be deployed at enterprise edges or as a cloud edge extension interfacing to the core telco network. It offers support for the following use cases:

  • ML offloading for inference and training in image recognition for mobile phones
  • Automatic Speech Recognition (ASR) field operation
  • Manufacturing production-line defect inspection

The blueprint will propose an end-to-end edge stack solution and address the following MEC challenges:

  • Limited resource footprint on the edge, even as more business logic runs there and generates large volumes of data
  • Network reliability between the cloud and edge
  • Edge application mobility with context transfer requirement
  • Data privacy between the edge and the cloud
  • Overall efficiency and scalability

This blueprint project is still in the early days and welcomes everyone’s support and participation. Its goal is to create an open source MEC solution for everyone to use.

For more details regarding this blueprint, please refer to its web page.

For more details regarding KubeEdge, please refer to its website and GitHub profile.

Amazon Web Services, the Cloud Native Computing Foundation and Linux Foundation are sponsors of The New Stack.


