Trilogix Cloud

Kubernetes and the Micro-services ‘way of thinking’

Kubernetes (K8s) use cases centre on scaling.  Containerised architectures with hundreds, or even thousands, of containers demand reliable, repeatable scalability.  K8s is best placed to handle the management and orchestration complexity of such systems and is often seen as the target model for ‘cloud native’ deployments.


Kubernetes is well known for automating the configuration, deployment, and scaling of microservice-based applications implemented using containers. Because the deployment, management, and maintenance of these applications is so highly automated, it is possible to create applications that respond and adapt quickly to spikes in network traffic and demand for other resources.


Kubernetes and cloud-native applications embody a design pattern whose principles apply to successful cloud-native businesses and processes.  Three key principles stand out:  scalability, flexibility, and differentiation.


Scalability: Cloud-native architecture uses a different design philosophy than traditional on-premises IT.  In the cloud, everything is controlled by APIs. It is possible to create and configure a compute platform and application cluster using APIs and configuration files in a highly automated fashion. Once such a platform or application cluster is running, it can scale up or down as needed. If you need a hundred more instances, you can spin them up on demand and shut them down the same way.
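As a sketch of this declarative, API-driven model, the Deployment below asks Kubernetes for three replicas of an application (the name `web` and the image are illustrative placeholders, not part of any real system):

```yaml
# Illustrative Deployment manifest; the name and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3          # desired state; Kubernetes reconciles actual state to match
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # any container image
        ports:
        - containerPort: 80
```

Applying this file with `kubectl apply -f web.yaml` and later changing `replicas` (or running `kubectl scale deployment/web --replicas=100`) is the entire scaling workflow; no servers are provisioned by hand.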


Flexibility: Containers support this cloud-native architecture (and the system of multiple small applications called microservices) by enabling an instance of an application to be run anywhere — on premises, in the cloud, or at the edge — based on just a small human-readable text file that describes the application and its minimum software requirements.
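A minimal Pod manifest is one sketch of such a file (the name, image, and resource figures here are assumptions for illustration): it names the application image and states its minimum resource requirements, and the same file runs unchanged on premises, in the cloud, or at the edge.

```yaml
# Minimal, illustrative Pod definition; image and resource figures are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: demo-app
    image: registry.example.com/demo-app:1.0   # the application and its software stack
    resources:
      requests:
        cpu: "100m"      # minimum CPU the application needs
        memory: "128Mi"  # minimum memory the application needs
```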


Containers are defined with just enough information to run an application. Unlike virtual machines and bare-metal servers, they do not require their own operating system and full software stack: containers share a single copy of the host operating system and common infrastructure wherever they are invoked.  Containers are therefore lighter weight and faster to instantiate, because they don’t need to instantiate that common infrastructure themselves. This gives enterprises flexibility in how they deploy software across their business.


Microservices are small pieces of code, which makes them less complex and easier to debug than components with large amounts of code. Microservices are also stateless and independently scalable. This means that an instance of a microservice can be restarted, or scaled, without notifying other software that uses the functionality the microservice provides. If your computer freezes, you may lose whatever you were working on; if a microservice goes down, it can come right back up with no loss of state or continuity.
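This restart-without-notice behaviour can be sketched with a liveness probe (the service name, health endpoint, and timings below are illustrative assumptions): Kubernetes probes the container and restarts it automatically when it stops responding.

```yaml
# Illustrative liveness probe; service name, endpoint, and timings are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: orders-service
spec:
  containers:
  - name: orders
    image: registry.example.com/orders:1.0
    livenessProbe:
      httpGet:
        path: /healthz   # assumed health-check endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    # If the probe fails, the kubelet restarts the container; because the
    # microservice is stateless, it comes back with no loss of state or continuity.
```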


Differentiation: The container and microservices paradigms allow companies to get past common infrastructure concerns and focus on the product and service differentiation that customers will pay for – and that leads to higher rates of satisfaction.  The key point of differentiation is the application itself – its container code and its portability.  Enterprises can now take containerized application definitions and deploy the application anywhere, from Amazon to Microsoft Azure to Google to Red Hat OpenShift to HPE Ezmeral.



Essentially, Kubernetes allows companies to manage their microservices and cloud-native architectures seamlessly. With Kubernetes, enterprises create containers that access system resources via APIs. The Container Storage Interface (CSI) and Container Network Interface (CNI) are the two most common such APIs. Through the CNI API and its module implementations, a container asks for a network connection and the CNI module communicates with the underlying software and hardware to do what is necessary, regardless of the hardware in use. The CSI API behaves the same way for storage.
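On the storage side, this abstraction can be sketched with a PersistentVolumeClaim (the claim name and storage class below are assumptions for the example): the application asks for storage in the abstract, and whichever CSI driver backs the named class does what is necessary on the actual hardware.

```yaml
# Illustrative PVC; the storage class name depends on the cluster's CSI driver.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi              # how much, not where or on what device
  storageClassName: standard     # assumed class; the cluster maps it to a CSI driver
```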


The abstraction of the application from the underlying infrastructure demonstrates the power of Kubernetes: application developers can focus on what is important and ignore what is not, which lets the business build differentiation and competitive advantage.

