Unlocking the benefits of containers: why the right orchestration tools are critical for maintaining consistency within edge environments

By Valentin Viennot, Product Manager at Canonical.


Containers have become a core part of contemporary businesses, especially those that operate cloud services, helping them to orchestrate their workflows at speed and scale. With the rise of Kubernetes, a complete platform for deploying, monitoring and managing apps and services across cloud environments, life has become easier. According to 451 Research, 90 percent of companies, across various organisational types, plan to standardise on Kubernetes within three to five years, highlighting the demand for managing containers, microservices and distributed platforms in one place.

There is still a long way for companies to go, however, in order to reach full Kubernetes adoption. According to Canonical’s global Kubernetes and Cloud Native Operations Report, 85 percent of enterprises have yet to hit this milestone. Showing the multi-dimensional nature of today’s cloud native technology landscape, the survey found that, while 46 percent of respondents report using Kubernetes in production, only 16 percent use Kubernetes exclusively.

It is the same story for containers at the edge: in a recent industry poll, just 10 percent of respondents confirmed they deploy containers this way. With organisations facing many complexities in implementing containers and managing a multitude of edge clouds, enterprise uptake has been slow to say the least, with reluctance linked to compatibility issues, scalability concerns and a limited set of proven use cases.

Choosing the right orchestration tools is critical for enterprises to unlock the long-term benefits of containers and maintain consistency within edge environments. By connecting the edge to the central cloud, businesses can open the door to smarter infrastructure, dynamic orchestration and automation, leading to reduced costs, improved efficiency and greater consistency. In this way, companies large and small can bring the edge closer to their central clouds and reap the benefits of stable systems for real-time and latency-sensitive applications.

The need for containers near the edge

A small operating system footprint is vital. Whether in an IoT or a micro cloud context, the majority of devices used at the edge have limited real estate. Given the continuous need for software patches, and the need to benefit from iterative updates before security vulnerabilities accumulate, cloud-native software has become ever more important. Containerisation and container orchestration allow developers to deploy atomic security updates or new features without affecting the day-to-day operation of IoT and edge systems.
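
To make that concrete, the sketch below uses the official Kubernetes Python client to roll out a patched image as an atomic, rolling update; the deployment name, namespace and image reference are hypothetical placeholders.

```python
# A minimal sketch of an atomic, rolling security update using the official
# Kubernetes Python client. Deployment name, namespace and image tag are
# hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config; use load_incluster_config() inside a pod
apps = client.AppsV1Api()

# Patch only the container image; Kubernetes then replaces pods gradually,
# so the service keeps running while the patched version rolls out.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {
                        "name": "sensor-gateway",
                        "image": "registry.example.com/sensor-gateway:1.4.2",
                    }
                ]
            }
        }
    }
}
apps.patch_namespaced_deployment(name="sensor-gateway", namespace="edge", body=patch)
```

Because Kubernetes replaces pods one at a time and can roll back on failure, the running service never has to be taken down for the patch.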

Individual IoT projects can include millions of nodes and sensors, requiring real-time computing power. With various applications requiring cloud-like elasticity alongside this high availability of compute resources, containers and Kubernetes offer a ready framework for IoT solutions. However, as the infrastructure grows, the need to manage the physical devices, the messaging and the massive volumes of data scales up just as quickly. To bring cloud-native support for microservices applications closer to the consumer, and to accommodate the data- and messaging-intensive characteristics of IoT, micro clouds (e.g. a combination of LXD + MicroK8s) can help boost flexibility, resulting in a technological approach that encourages innovation and reliability throughout the cyber-physical journey of an IoT device.
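
As a rough illustration of that combination, the following sketch assumes the pylxd Python client and a locally available Ubuntu image alias; it launches an LXD instance and installs MicroK8s inside it. In practice, running MicroK8s inside an LXD container also needs some LXD profile adjustments, omitted here for brevity.

```python
# A rough sketch of standing up a single micro cloud node with pylxd,
# the Python client for LXD. Instance name and image alias are
# illustrative; the alias must exist in the local LXD image store.
from pylxd import Client

lxd = Client()

instance = lxd.instances.create(
    {
        "name": "microk8s-node-0",
        "source": {"type": "image", "alias": "ubuntu/22.04"},
    },
    wait=True,
)
instance.start(wait=True)

# Install MicroK8s inside the LXD instance; snapd handles the
# transactional install and any later atomic refreshes.
exit_code, stdout, stderr = instance.execute(
    ["snap", "install", "microk8s", "--classic"]
)
print(exit_code, stdout)
```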

Barriers to uptake

The uptake of Kubernetes at the edge has been slow for several reasons. One is that it has not been optimised for all use cases. Let's split them into two classes of compute: IoT, with EdgeX applications, and micro clouds, serving computing services near consumers. IoT applications often see Docker containers used in a non-ideal way. OCI containers were designed to enable cloud elasticity with the rise of microservices, not to make the most of physical devices while still isolating an application and its updates, which is something you would find in snaps.
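
To illustrate the difference, here is a hedged sketch of the update model snaps give an edge device: `snap refresh` applies an update atomically, and a failed health check can be answered with an equally atomic `snap revert`. The snap name, its service name and the health check are hypothetical placeholders.

```python
# A sketch of the transactional update model snaps offer an edge device:
# refresh is atomic, and failure can be answered with an atomic revert.
# "my-edge-app" and its daemon service name are hypothetical.
import subprocess

SNAP = "my-edge-app"


def healthy() -> bool:
    # Placeholder health check; a real device would probe the service itself.
    result = subprocess.run(["systemctl", "is-active", f"snap.{SNAP}.daemon"])
    return result.returncode == 0


subprocess.run(["snap", "refresh", SNAP], check=True)  # atomic update
if not healthy():
    subprocess.run(["snap", "revert", SNAP], check=True)  # atomic rollback
```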

Another reason is the lack of trusted provenance. Edge is everywhere and at the centre of everything, operating across applications and industries, which is why software provenance is critical. The rise of containers coincided with a rise of open-source projects with a wide range of dependencies, and there needs to be a trusted provider that can commit to being the interface between that open-source software and the enterprises using it. Containers are an easy and adaptable way to package and distribute this software through trusted channels, assuming you can trust the provenance.

The third factor relates to the move from development to the strict constraints of field production. Docker is still popular with developers and technical audiences: it is a brilliant tool for accelerating, standardising and improving the quality of software projects. Containers are also having great success in cloud production environments, mainly thanks to Kubernetes and platform adoption.

In edge environments, the production constraints are much stricter than anywhere else, and the business models are not those of software-as-a-service (SaaS). There is a need for minimal container images designed for the edge, with the proper support and security commitments to maintain safety. Historically, containers were designed for horizontal scaling of (largely) single-function, stateless work units deployed on clouds. The edge, by contrast, makes sense where there is sensitivity to bandwidth, latency or jitter. In short, Canonical’s approach to edge computing is open-source micro clouds. They provide the same capabilities and APIs as cloud computing, trading near-unlimited elasticity for the low latency, resiliency, privacy and governance that real-world applications need. While containers don’t necessarily need ‘edge’ elements, they do need to mature and come from a trusted provider with matching security and support guarantees. For the other half of the edge, IoT, we recommend snaps.

Key benefits of the technology

The case for bringing containers to the edge rests on three main benefits. The first is compatibility: containers provide a layer between the hosting platform and the applications, allowing those applications to live on many platforms and for longer.

The second is security. Although running services in a container is not enough to establish security on its own, workload isolation is a security improvement in many respects. The third is transactional updates: software is delivered in smaller chunks, without having to manage the dependencies of the entire platform.
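
As one illustration of that isolation, the sketch below uses the Kubernetes Python client to create a pod that refuses to run as root, mounts a read-only root filesystem and drops all Linux capabilities; the pod and image names are hypothetical.

```python
# A minimal sketch of the extra isolation a container spec can enforce,
# using the Kubernetes Python client; pod and image names are hypothetical.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="edge-telemetry"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="telemetry",
                image="registry.example.com/telemetry:2.0",
                security_context=client.V1SecurityContext(
                    run_as_non_root=True,            # refuse to start as root
                    read_only_root_filesystem=True,  # immutable root filesystem
                    allow_privilege_escalation=False,
                    capabilities=client.V1Capabilities(drop=["ALL"]),
                ),
            )
        ]
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```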

Kubernetes also brings innate benefits to the system. One example is elasticity: in the case of micro clouds, some elasticity is needed as demand may vary, and access to cloud-like APIs is one of the main goals in most use cases. Flexibility is another: being able to dynamically change which applications are available, and at what scale, is a typical micro cloud requirement that Kubernetes handles well.
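
A short sketch of that elasticity, again with the Kubernetes Python client: a HorizontalPodAutoscaler that scales a hypothetical "sensor-gateway" deployment between one and five replicas on CPU load, keeping growth bounded on constrained edge hardware.

```python
# A sketch of bounded elasticity at the edge: a HorizontalPodAutoscaler
# created with the Kubernetes Python client. Names and limits are
# hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="sensor-gateway"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="sensor-gateway"
        ),
        min_replicas=1,                        # shrink when the edge site is quiet
        max_replicas=5,                        # bounded growth on constrained hardware
        target_cpu_utilization_percentage=70,  # scale out past 70% average CPU
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="edge", body=hpa
)
```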

The future of Kubernetes

Kubernetes is developing to become more efficient and robust. Thanks to the rise of more lightweight, purpose-built distributions, Kubernetes’ support for scalability and portability will become even more closely associated with edge use cases and the huge numbers of nodes, devices and sensors they involve.

Looking towards the future of cloud-native software, Kubernetes is driving innovation in IoT and edge hardware, and its lightweight, scalable nature is enabling improvements on devices such as the Raspberry Pi or the NVIDIA Jetson Nano. For any enterprise with the right specifications in mind, containers at the edge are a solution well positioned to offer many benefits, and one soon set to become common practice.
