KubeEdge v1.15.0 released! Windows edge node support, object-model-based device management, DMI data plane support, and more

On October 13, 2023 (Beijing time), KubeEdge released v1.15.0. The new version introduces several enhancements that significantly improve edge node management, edge application management, and edge device management.

KubeEdge v1.15.0 new features:

  • Support for Windows edge nodes

  • New object-model-based device management API version, v1beta1

  • Mapper-Framework, a custom Mapper development framework that carries the DMI data plane

  • Support for running static Pods on edge nodes

  • Support for more Kubernetes native plugins on edge nodes

New Features Overview

Support Windows edge nodes

As edge computing scenarios continue to expand, more and more types of devices are involved, including many sensors, cameras, and industrial control devices that run on Windows. The new version of KubeEdge therefore supports running edge nodes on Windows, covering more usage scenarios.

In v1.15.0, KubeEdge supports edge nodes running on Windows Server 2019 and Windows containers running on those nodes, extending KubeEdge's usage scenarios to the Windows ecosystem.

The Windows version of the EdgeCore configuration adds a windowsPriorityClass field, which defaults to NORMAL_PRIORITY_CLASS. Users can download the Windows EdgeCore installation package [1] on the Windows edge host, decompress it, and execute the following commands to register the Windows edge node. Users can then confirm the node's status by executing kubectl get nodes on the cloud side and manage Windows edge applications.

edgecore.exe --defaultconfig > edgecore.yaml
edgecore.exe --config edgecore.yaml
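
For reference, the windowsPriorityClass field might appear in the generated edgecore.yaml roughly as sketched below. Its exact position in the configuration tree is an assumption here, so verify it against the file produced by edgecore.exe --defaultconfig:

```yaml
# Hypothetical excerpt from edgecore.yaml on a Windows edge node.
# Placement of windowsPriorityClass under modules.edged is assumed.
modules:
  edged:
    windowsPriorityClass: NORMAL_PRIORITY_CLASS  # default value
```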

For more information, please refer to:

https://github.com/kubeedge/kubeedge/pull/4914

https://github.com/kubeedge/kubeedge/pull/4967

New object-model-based device management API v1beta1 released

In v1.15.0, the object-model-based device management API, comprising Device Model and Device Instance, has been upgraded from v1alpha2 to v1beta1, and configuration related to edge device data processing has been added. The northbound device API, combined with the southbound DMI interface, implements device data handling. Major updates to the API include:

  • In Device Model, new fields such as device attribute description, device attribute type, device attribute value range, and device attribute unit are added according to the object model standard.

// ModelProperty describes an individual device property / attribute like temperature / humidity etc.
type ModelProperty struct {
   // Required: The device property name.
   Name string `json:"name,omitempty"`
   // The device property description.
   // +optional
   Description string `json:"description,omitempty"`
   // Required: Type of device property, ENUM: INT,FLOAT,DOUBLE,STRING,BOOLEAN,BYTES
   Type PropertyType `json:"type,omitempty"`
   // Required: Access mode of property, ReadWrite or ReadOnly.
   AccessMode PropertyAccessMode `json:"accessMode,omitempty"`
   // +optional
   Minimum string `json:"minimum,omitempty"`
   // +optional
   Maximum string `json:"maximum,omitempty"`
   // The unit of the property.
   // +optional
   Unit string `json:"unit,omitempty"`
}
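
For illustration, a v1beta1 Device Model using the new property fields could look like the sketch below. The resource name and values are hypothetical; only the per-property fields mirror the ModelProperty struct above:

```yaml
apiVersion: devices.kubeedge.io/v1beta1
kind: DeviceModel
metadata:
  name: temperature-sensor-model   # hypothetical model name
  namespace: default
spec:
  properties:
    - name: temperature
      description: Ambient temperature reading
      type: FLOAT            # one of INT, FLOAT, DOUBLE, STRING, BOOLEAN, BYTES
      accessMode: ReadOnly
      minimum: "-40"
      maximum: "125"
      unit: Celsius
```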
  • All protocol configurations built into Device Instance, including Modbus, OPC UA, and Bluetooth, have been removed. Users can instead define their own protocols through the extensible Protocol configuration, enabling device access over any protocol. Mappers for built-in protocols such as Modbus, OPC UA, and Bluetooth remain in the mappers-go repository and will continue to be updated and maintained.

type ProtocolConfig struct {
   // Unique protocol name
   // Required.
   ProtocolName string `json:"protocolName,omitempty"`
   // Any configuration data
   // +optional
   // +kubebuilder:validation:XPreserveUnknownFields
   ConfigData *CustomizedValue `json:"configData,omitempty"`
}

type CustomizedValue struct {
   Data map[string]interface{} `json:"-"`
}
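
A minimal sketch of how the extensible protocol section of a Device Instance might look, assuming a Modbus RTU device. The configData keys are free-form and entirely illustrative, since their interpretation is up to the user's Mapper; the surrounding field layout is an assumption as well:

```yaml
apiVersion: devices.kubeedge.io/v1beta1
kind: Device
metadata:
  name: temperature-sensor      # hypothetical device name
spec:
  deviceModelRef:
    name: temperature-sensor-model
  protocol:
    protocolName: modbus-rtu    # any user-defined protocol name
    configData:                 # free-form; interpreted only by the Mapper
      serialPort: /dev/ttyS0
      baudRate: 9600
      slaveID: 1
```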
  • Data-processing configuration has been added to the device properties of Device Instance, including fields for the device report cycle, the data collection cycle, whether a property is reported to the cloud, and whether it is pushed to an edge database. Data processing is performed in the Mapper.

type DeviceProperty struct {
   ...
   // Define how frequently the mapper reports the value.
   // +optional
   ReportCycle int64 `json:"reportCycle,omitempty"`
   // Define how frequently the mapper collects from the device.
   // +optional
   CollectCycle int64 `json:"collectCycle,omitempty"`
   // Whether the property is reported to the cloud.
   ReportToCloud bool `json:"reportToCloud,omitempty"`
   // PushMethod represents the protocol used to push data;
   // please ensure that the mapper can access the destination address.
   // +optional
   PushMethod *PushMethod `json:"pushMethod,omitempty"`
}
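
Putting the new data-processing fields together, a device property in a Device Instance could be configured roughly as follows. The cycle units and the pushMethod sub-fields are assumptions for illustration:

```yaml
spec:
  properties:
    - name: temperature
      collectCycle: 5000    # how often the Mapper polls the device (assumed ms)
      reportCycle: 10000    # how often the Mapper reports the value (assumed ms)
      reportToCloud: true   # also report through the cloud-edge channel
      pushMethod:           # sub-fields assumed; push samples to a user endpoint
        http:
          hostName: http://127.0.0.1
          port: 8080
          requestPath: /temperature
```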

For more information, please refer to:

https://github.com/kubeedge/kubeedge/pull/4999

https://github.com/kubeedge/kubeedge/pull/4983

Mapper-Framework, a custom Mapper development framework carrying the DMI data plane, released

In v1.15.0, support is added for the DMI data plane, carried mainly in the southbound Mapper development framework, Mapper-Framework. Mapper-Framework provides a new framework for automatically generating Mappers and integrates DMI device data management (data plane) capabilities, allowing devices to process data at the edge or in the cloud and improving the flexibility of device data management. Mapper-Framework can automatically generate a user's Mapper project, reducing the complexity of designing and implementing a Mapper and improving Mapper development efficiency.

  • DMI device data plane management capability support

DMI in v1.15.0 adds data plane support and strengthens the edge's ability to process device data. Depending on the configuration, device data at the edge can be pushed directly to a user database or user application, reported to the cloud through the cloud-edge channel, or actively pulled by users through an API. These more diversified data management paths address the cloud-edge communication congestion that can result from Mappers frequently reporting device data to the cloud, reducing both the volume of cloud-edge traffic and the risk of communication blocking. The DMI data plane system architecture is shown in the figure below:

[Figure: DMI data plane system architecture]

  • Mapper-Framework, a framework for automatically generating Mappers

v1.15.0 introduces Mapper-Framework, a new framework for automatically generating Mappers. The framework integrates functions such as Mapper registration with the cloud, delivery of Device Model and Device Instance configuration from the cloud to the Mapper, and device data transmission and reporting. It greatly simplifies the design and implementation of a Mapper and makes it easier for users to get the cloud-native device management experience that the KubeEdge edge computing platform provides.

For more information, please refer to: https://github.com/kubeedge/kubeedge/pull/5023

Support running Kubernetes static Pods on edge nodes

The new version of KubeEdge supports Kubernetes' native static Pod capability, consistent with how it works in Kubernetes: users place Pod manifest files, in JSON or YAML form, in a designated directory on the edge host, and Edged watches that directory to create or delete static Pods on the edge and create the corresponding mirror Pods in the cluster.

The default directory for static Pods is /etc/kubeedge/manifests. It can be changed via the staticPodPath field of the EdgeCore configuration.
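
As in upstream Kubernetes, a static Pod is just an ordinary Pod manifest placed in the watched directory; for example (assuming the default /etc/kubeedge/manifests path and an illustrative nginx workload):

```yaml
# /etc/kubeedge/manifests/static-nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-nginx
  labels:
    app: static-nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80
```

Dropping this file into the directory creates the Pod on the edge node; deleting the file removes it.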

For more information, please refer to: https://github.com/kubeedge/kubeedge/pull/4825

Support more Kubernetes native plugins to run on edge nodes

KubeEdge v1.15.0 supports running more native plugins on edge nodes. KubeEdge provides a highly extensible framework for transparently passing through Kubernetes native non-resource API requests, satisfying these plugins' dependence on such APIs. A plugin can obtain the cluster version and other information from the edge node's MetaServer, and MetaServer caches the requested data so the edge node can continue serving normally when the network is interrupted.

Under this framework, community developers can more easily expose additional non-resource APIs: they only need to focus on the APIs their plugins depend on, without considering how the requests reach the edge nodes.

For more information, please refer to: https://github.com/kubeedge/kubeedge/pull/4825

Upgrade Kubernetes dependencies to v1.26

The new version upgrades the Kubernetes dependency to v1.26.7, so the features of that version can be used on both the cloud and the edge.

For more information, please refer to: https://github.com/kubeedge/kubeedge/pull/4929

Version upgrade notes

  • The new v1beta1 Device API is not compatible with v1alpha2. To use device management features in KubeEdge v1.15.0, you need to update the YAML configuration of the Device API.

  • If you use containerd as the edge container runtime, you need to upgrade containerd to v1.6.0 or later. KubeEdge v1.15.0 no longer supports containerd 1.5 and earlier versions.

    Reference: https://kubernetes.io/blog/2022/11/18/upcoming-changes-in-kubernetes-1-26/#cri-api-removal

  • In KubeEdge v1.14, EdgeCore removed support for dockershim; the edge runtime supports only the remote type and uses containerd as the default runtime. If you want to continue using docker as the edge runtime, you need to install cri-dockerd and, when starting EdgeCore, set runtimeType=remote and remote-runtime-endpoint=unix:///var/run/cri-dockerd.sock.

    Reference: https://github.com/kubeedge/kubeedge/issues/4843
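
In EdgeCore configuration terms, the two settings above might be expressed as in the following sketch. The exact field names under modules.edged are assumptions derived from the flag names mentioned above, so verify them against your generated edgecore.yaml:

```yaml
# Hypothetical edgecore.yaml excerpt for a docker (cri-dockerd) edge runtime.
modules:
  edged:
    runtimeType: remote
    remoteRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
    remoteImageEndpoint: unix:///var/run/cri-dockerd.sock  # assumed same socket
```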

Acknowledgments

Thanks to the KubeEdge Community Technical Steering Committee (TSC) and all SIG members for their support of and contributions to v1.15.0. Going forward, KubeEdge will continue to develop and evolve in new scenario exploration and support, stability, security, scalability, and more!

Related links

[1] Windows version EdgeCore installation package:

https://github.com/kubeedge/kubeedge/releases/download/v1.15.0/kubeedge-v1.15.0-windows-amd64.tar.gz

[2] Release Notes: https://github.com/kubeedge/kubeedge/blob/master/CHANGELOG/CHANGELOG-1.15.md

Appendix: KubeEdge community contribution and communication channels

KubeEdge website: https://kubeedge.io

GitHub address: https://github.com/kubeedge/kubeedge

Slack address: https://kubeedge.slack.com

Mailing list: https://groups.google.com/forum/#!forum/kubeedge

Weekly community meeting: https://zoom.us/j/4167237304

Twitter: https://twitter.com/KubeEdge

Document address: https://docs.kubeedge.io/en/latest/