Trilogix Cloud

Kubernetes and Storage


Kubernetes is the infrastructure needed to manage decomposed applications, each providing discrete functionality in a lightweight container.  This pattern was once called SOA, or service-oriented architecture; it has since been renamed ‘micro-services’, but it is essentially the same concept.  Given the scale of containerisation within larger firms, and the operational issues surrounding immutable deployments of micro-services, Kubernetes provides the essential foundation for implementing them.


Kubernetes is derived from Google’s ‘Borg’, a relatively rudimentary Linux container system with cluster and container controllers.  Some years ago it was unclear what the future of managed containers might look like: OpenStack, which came out of NASA and Rackspace Hosting, and Apache Mesos, which was popularised by Twitter, were both being adopted.  But it was Kubernetes, inspired by Borg and adopting a universal container format derived from Docker, that won.


Kubernetes is now quite complex.  It must be built into the same security regimen and detailed design as other software in the enterprise.  Kubernetes needs even more resiliency than existing platforms, ranging from backup, to high-availability clustering within the data centre, to disaster recovery across data centres.  These layers of control, collectively called data governance, must apply to Kubernetes and the applications and data it controls.  Moreover, all of the data embodied in the Kubernetes stack has to be discoverable in the same fashion as it is for other platforms.


There are no shortcuts in the enterprise.  There are many ways to build, buy, or rent a Kubernetes stack, but how a distribution interfaces with the existing data resiliency, security, governance, and discovery frameworks determines what can be deployed in production, and what will remain a test and development platform at best, or a science project at worst.



Kubernetes is a fast-changing open source project, and someone has to make sure that its interfaces to the various established data services in the enterprise are not broken, and do not have their performance hindered, by this rapid change.  These data services – container storage, backup, recovery, disaster recovery, security (including encryption and key management), data governance (making sure people can only see the data they are supposed to), and data discovery (making massive sets of data searchable so people can make use of them) – have to always work, all the time – no exceptions, no excuses.


Storage is different with Kubernetes, not just because the community wants it to be easier, but because storage for containers tends to be more ephemeral than what enterprises are used to when they buy and configure storage for systems.
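As an illustrative sketch of how Kubernetes bridges ephemeral containers and durable storage: a pod does not own its disk; instead it references a PersistentVolumeClaim, which survives pod restarts and is bound to underlying storage by the cluster.  The names below (`app-data`, `fast-ssd`, `demo-pod`) are hypothetical examples, not from the original article.

```yaml
# A claim for 10 GiB of durable storage; the hypothetical
# StorageClass "fast-ssd" would be defined by the cluster operator.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce        # mountable read-write by a single node
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 10Gi
---
# A pod that mounts the claim; if the pod is rescheduled,
# the data in /var/lib/app persists with the claim.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /var/lib/app
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```

The design point is the indirection: the claim describes what the application needs, while the StorageClass decides how it is provisioned, which is where the enterprise storage, backup, and encryption concerns discussed above attach.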


The point is that organisations moving from proofs of concept with containerised applications need to think through all of these data services, and how they will be fulfilled, before going into production with containers under a Kubernetes controller.


Edited from Source
