THE CLOUD-NATIVE REFERENCE MODEL

The previous chapters elaborated on the evolution of virtualization and the impact created by various cloud-based service offerings in different business verticals. To exploit the benefits promised by cloud-based service offerings, the application must support infrastructure-agnostic instantiation and automated lifecycle management. To be more precise, the service or application must be

  • Infrastructure agnostic: Capable of being instantiated on the cloud
  • SDN ready: Capable of automated lifecycle management

Migrating an application from a standalone server or virtual machine to the cloud is not a straightforward task; it requires that the application stack be developed according to cloud-native principles.

Application Development Framework

One of the critical principles of the cloud-native reference model is to follow microservice architecture when developing cloud-native applications. Many benefits are gained by decomposing the full-stack complex application into a loosely coupled stack of small and autonomous functions or microservices:

  • Software agility: Allows developers to modify an existing microservice or rapidly develop a new microservice with minimal dependency on other processes

  • Environment-agnostic portability: Allows users to deploy the service with ease in testing and production environments, whether in public or private clouds

  • Reliability: Allows users to run multiple instances of the relevant process for resiliency

By following the microservice architecture, each microservice is developed as a self-contained, lightweight function with service-level granularity. The computing resources required to run these microservices are satisfied by a container, so it is common to see a stack of microservices being run as containers. Communication between the different functions within the application is facilitated by APIs using a relatively new concept called a service mesh: a configurable infrastructure layer that provides network-based communication between the functions. Each function also uses northbound and southbound APIs to communicate with external services, such as an orchestrator, an operations support system (OSS), a business support system (BSS), and the like.
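To make the API-driven communication concrete, the following minimal sketch implements one such self-contained microservice in Python using only the standard library. The service name, port, and /inventory endpoint are hypothetical; in a real service mesh deployment, a sidecar proxy would sit in front of this process to handle routing, retries, and mutual TLS.

  import json
  from http.server import BaseHTTPRequestHandler, HTTPServer

  # A tiny, self-contained "inventory" microservice: one narrowly scoped
  # function exposed over an HTTP API, independently deployable as a container.
  class InventoryHandler(BaseHTTPRequestHandler):
      def do_GET(self):
          if self.path == "/inventory":
              body = json.dumps({"routers": 12, "switches": 48}).encode()
              self.send_response(200)
              self.send_header("Content-Type", "application/json")
              self.end_headers()
              self.wfile.write(body)
          else:
              self.send_error(404)

  if __name__ == "__main__":
      # Peer microservices (or a mesh sidecar) reach this service at
      # http://<host>:8080/inventory.
      HTTPServer(("0.0.0.0", 8080), InventoryHandler).serve_forever()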

This ability to decompose the application into loosely coupled independent containers enables the users to deploy the workloads in an infrastructure-agnostic manner.

Automated Orchestration and Management

“Software defined” is a buzzword that finds its way into any new technology portfolio. It describes the ability to automate service orchestration and management based on business intent. Running a microservice as a container inherently makes automated lifecycle management of that container achievable. The use of orchestrators to manage the lifecycle of each container allows developers to create, update, or decommission services independently without affecting other services within the same application stack. Kubernetes and Docker Swarm are two well-known orchestration engines used in the industry; these container orchestration tools are discussed in more detail later in this chapter.
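As a hedged illustration of orchestrator-driven lifecycle management, the following sketch uses the official Kubernetes Python client to scale a deployment, the same declarative operation kubectl performs. It assumes the client library is installed and a kubeconfig is available; the deployment name "inventory" and the "default" namespace are placeholders.

  from kubernetes import client, config

  # Load credentials from the local kubeconfig (e.g., ~/.kube/config).
  config.load_kube_config()
  apps = client.AppsV1Api()

  # Declare the desired replica count; Kubernetes converges the running
  # containers toward this state, creating or removing pods as needed.
  apps.patch_namespaced_deployment_scale(
      name="inventory",      # hypothetical deployment
      namespace="default",
      body={"spec": {"replicas": 3}},
  )

  # Observe the pods the orchestrator is managing on our behalf.
  for pod in client.CoreV1Api().list_namespaced_pod("default").items:
      print(pod.metadata.name, pod.status.phase)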

Container Runtime and Provisioning

Container Runtime is the layer responsible for managing resources such as container images and compute resources. It is a collection of API-driven scripts and tools that play a crucial role in executing the container image by requesting that the underlying kernel allocate the required resources for the deployed container.

Container Runtime initializes the container image and sets up the initial configuration and other operational primitives before enabling the container task. runc is a lightweight and commonly used container runtime that implements the Open Container Initiative (OCI) runtime specification.

The Container Runtime Interface (CRI) is the interface between the orchestrator and the container runtime; for example, the Kubernetes kubelet uses CRI, a gRPC-based API, to direct runtimes such as containerd or CRI-O.
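To show what driving a runtime looks like in practice, the following sketch shells out to the runc CLI to generate a default OCI config.json and launch a container from a bundle directory. The bundle path and container name are made up, a root filesystem is assumed to have been unpacked into rootfs/ beforehand, and root privileges are required; this is a minimal sketch, not a production workflow.

  import subprocess
  from pathlib import Path

  bundle = Path("/tmp/demo-bundle")            # hypothetical bundle location
  (bundle / "rootfs").mkdir(parents=True, exist_ok=True)

  # A root filesystem (e.g., exported from an existing image) is assumed
  # to already be unpacked under rootfs/.

  # 'runc spec' writes a default OCI config.json into the bundle.
  subprocess.run(["runc", "spec"], cwd=bundle, check=True)

  # 'runc run' creates and starts the container; runc asks the kernel to
  # allocate the namespaces and cgroups the spec requires.
  subprocess.run(["runc", "run", "demo-container"], cwd=bundle, check=True)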

CONTAINER DEPLOYMENT AND ORCHESTRATION OVERVIEW

The industry is witnessing an evolution in which different network services are decomposed into microservices. Soon, there will be a plethora of services running as containers in an infrastructure-agnostic manner. Cloud-native network functions (CNFs) are built as self-contained container images: collections of relevant files packed together as a filesystem bundle. Depending on how the container image is packaged, various deployment tools are available for container lifecycle management.

Depending on numerous factors, such as business intent, resource availability, and deployment model, the number of applications hosted as containers may vary. On one end of the spectrum, a Cisco Unified Computing System (UCS) platform in a data center may host several thousand containers; on the other end, a couple of open-source agent applications, such as Puppet and Chef, may be hosted as containers on Cisco edge routers.

Running a couple of containers on one edge router is manageable, but running a couple of containers on 1,000 such edge routers leads to operational challenges. Soon, medium to large networks may see thousands of containers running, and identifying a failed container in such a network is like looking for a needle in a haystack. Manually deploying and operating that many containers is simply not feasible. The industry has learned from experience that automation significantly improves the efficiency of network and service deployment, and this applies to container lifecycle management, too. Automated container orchestration is an essential part of the success of this new CNF architecture.

By definition, orchestration is an automated process of workload lifecycle management that includes scheduling workloads and scaling resources based on demand. Many orchestration tools have been developed to automate the lifecycle management of containers, and they primarily rely on certain basic, yet critical, components. The following overview highlights these essential components and how they interact to execute a container image and spawn up containers.

A container image is composed of a base image, executable source code, binaries, libraries, and a manifest file. The binaries and libraries are dependent files required to run the source code, and the manifest file defines the configuration and properties for the containers.
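As a small, hedged illustration of this composition, the following Python sketch lists the contents of an image archive produced by docker save; the archive name image.tar is a placeholder. In this archive format, manifest.json names each image's configuration file and its layer tarballs.

  import json
  import tarfile

  # Assumes an archive created beforehand with: docker save -o image.tar <image>
  with tarfile.open("image.tar") as tar:
      manifest = json.load(tar.extractfile("manifest.json"))

  for image in manifest:
      print("Tags:  ", image.get("RepoTags"))  # e.g., ['alpine:latest']
      print("Config:", image["Config"])        # image configuration JSON
      print("Layers:", image["Layers"])        # layer filesystem tarballs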

The container runtime manager (also known as a “container platform”) is the component responsible for fetching the relevant container image from a centralized registry; it then leverages the runtime component to spawn up and manage the container. Upon receiving the instruction from the manager, the container runtime reads the manifest file, which is part of the container image package, and makes the relevant resource allocations, such as namespace and cgroup creation, by collaborating with the underlying kernel.
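To make the namespace and cgroup step concrete, here is a hedged, Linux-only sketch (root privileges required; the hostname and cgroup name are made up, and a cgroup v2 unified hierarchy mounted at /sys/fs/cgroup is assumed). It calls libc's unshare() to enter a new UTS namespace and creates a cgroup by writing under the cgroup filesystem, which is roughly what a runtime does, at far greater scale, on a container's behalf.

  import ctypes
  import os
  import socket

  CLONE_NEWUTS = 0x04000000  # new UTS (hostname) namespace, from <sched.h>

  libc = ctypes.CDLL("libc.so.6", use_errno=True)
  if libc.unshare(CLONE_NEWUTS) != 0:
      raise OSError(ctypes.get_errno(), "unshare failed (root required)")

  # Hostname changes are now isolated to this process's UTS namespace.
  socket.sethostname("demo-container")

  # Create a cgroup and move this process into it; the kernel then
  # accounts for and can limit the resources used under this group.
  cgroup = "/sys/fs/cgroup/demo-container"
  os.makedirs(cgroup, exist_ok=True)
  with open(os.path.join(cgroup, "cgroup.procs"), "w") as f:
      f.write(str(os.getpid()))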

Different types of container runtime managers are evolving, but this book will focus on some of the commonly used runtime managers supported in various Cisco product portfolios.

Linux Containers (LXC)

A Linux container, also known as LXC, is a userspace interface for the Linux kernel that provides a method to run multiple isolated Linux containers on a single Linux host. Containers deployed using LXC are usually full-stack Linux servers. LXC supports a range of Linux distributions and a variety of 32-bit and 64-bit processor architectures.
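As a hedged taste of the LXC tooling, the sketch below drives the classic lxc-* command-line utilities from Python to create, start, and inspect a system container. The container name "demo" and the download-template arguments are illustrative (available templates and releases depend on the distribution), and the lxc-* tools generally require root privileges.

  import subprocess

  def lxc(*args):
      # Thin wrapper around the lxc-* command-line tools.
      subprocess.run(list(args), check=True)

  # Create a container from the 'download' template (fetches a prebuilt
  # root filesystem), then start it and print its state.
  lxc("lxc-create", "-n", "demo", "-t", "download", "--",
      "-d", "ubuntu", "-r", "focal", "-a", "amd64")
  lxc("lxc-start", "-n", "demo")
  lxc("lxc-info", "-n", "demo")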

LXC is not primarily used as an enterprise-grade container platform because its applicability is limited to Linux distributions; it does not work with other host operating systems. Other container platforms were developed in a host-agnostic manner and have overtaken LXC. Although LXC is not widely deployed in commercial environments, it plays a crucial role in hosting applications on Cisco platforms. The Cisco IOS-XE, IOS-XR, and NX-OS architectures are built on top of a Cisco-customized Linux distribution (MontaVista or Wind River Linux), an architecture choice that makes these platforms natively capable of instantiating LXC containers.

Extending the native Linux bash shell on Cisco platforms to host service applications is not a viable option because it raises numerous challenges. The service application must be customized or built for the specific Linux distribution, and customizing open-source applications to fit such specific distributions is always a challenge. Providing complete root access to the host kernel is not an option either, because it raises serious security concerns. Cisco created application hosting support using KVM/LXC by addressing the preceding challenges without compromising security.

 
