
How to get truly portable Kubernetes storage

By admin

May 31, 2022




In this podcast, we look at the challenges for storage inherent in the very fluid requirements of containerised applications in a hybrid cloud setting.

We talk about the drive towards microservices and the need for agility and portability in applications and the storage they require. That means persistent container-native storage and the kinds of advanced storage services – snapshots, replication, and so on – that enterprise applications require.
Grant Caley talks about the need for container-native storage and building storage in infrastructure-as-code. 

Adshead: What are the challenges presented to storage by the rise of the microservices world?
Caley: It’s interesting because microservices bring application simplicity in terms of development and scale on demand. They also potentially open up portability of applications across a hybrid cloud.
It really is a different way of developing, and I think when you look at those from a storage perspective, scale is obviously a challenge, and in the past we’ve always deployed the likes of a virtual machine, attached storage, maybe a shared storage resource – very traditional.
With microservices, this needs to be driven by the developer, because they’re the person attaching the storage and consequently it [should be] simple to do, but it also needs to be scalable so you can launch new pods, nodes, etc and have the storage available as you scale or as a shared resource to that microservice.
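As an illustration of the developer-driven provisioning described here (not part of the podcast itself), a developer can attach persistent storage by declaring a claim; Kubernetes and the cluster’s CSI driver do the actual provisioning. All names below, such as `fast-ssd`, are illustrative placeholders:

```yaml
# A developer-side claim for persistent storage; the platform's
# CSI driver provisions the underlying volume dynamically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-data        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce           # single-node attach; ReadWriteMany for a shared resource
  storageClassName: fast-ssd  # illustrative class defined by the platform team
  resources:
    requests:
      storage: 20Gi
```

A pod then references the claim by name in its `volumes` section, so the same simple declaration works as pods and nodes scale.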
That’s one challenge. I think the second challenge is around portability of storage. Microservices are designed to be simple and portable across distributions. They really are infrastructure-as-code. You deploy the same application on OpenShift, AKS, GKE or upstream Kubernetes, and that really makes for the portability of applications across distributions.
And the portability works OK when your microservice is stateless, but when you want to build mission-critical apps, you need persistent data storage underneath that. So, how do you deliver this on-premise or as a shared resource?  Equally, how do you make that data portable to the microservice, from pod to pod, cluster to cluster, but also across hybrid cloud distributions as well?
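For the stateful, mission-critical case mentioned here, Kubernetes expresses “persistent data that follows the workload” through a StatefulSet with per-pod volume claim templates. This is a minimal sketch, not from the podcast; the image and class names are illustrative:

```yaml
# Each replica gets its own PersistentVolumeClaim, tied to its pod
# identity, so data survives pod rescheduling.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:              # one claim per pod
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: fast-ssd   # illustrative
        resources:
          requests:
            storage: 50Gi
```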
And then I think the last thing I mentioned there was simplicity. How do you standardise storage operations across microservices? How do you leverage advanced storage features such as snapshots, cloning, replication, etc? And how do you integrate data protection into what can be a complex application based on tens or even hundreds of microservices?
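The advanced features listed here – snapshots and clones – are exposed natively in Kubernetes when the CSI driver supports them. A hedged sketch (the class and claim names are illustrative):

```yaml
# Point-in-time snapshot of an existing claim. Requires a CSI driver
# with snapshot support and an installed snapshot controller.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: orders-db-snap
spec:
  volumeSnapshotClassName: csi-snapclass     # illustrative
  source:
    persistentVolumeClaimName: orders-db-data
---
# A new claim restored from that snapshot, usable by another pod.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-restore
spec:
  storageClassName: fast-ssd                 # illustrative
  dataSource:
    name: orders-db-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```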
Coordinating the provisioning, the backup, the DR [disaster recovery] across applications and across on-prem and cloud and even across distribution types – these are the real challenges storage has to meet for the microservices world and how we build applications.
Adshead: What solutions are there to delivering storage for microservices-based operations?
Caley: The first level is, people are maybe familiar with the term CSI, or Container Storage Interface. Basically, what that does for a microservice is bring storage provisioning natively into the container environment.
That CSI interface should offer the ability to provision persistent storage, different storage classes, by performance levels, data protection levels, [and] it should also offer data protection, backup, replication options, etc. And the key thing is, that should then enable you to standardise your storage provisioning across different Kubernetes distributions. It should be easy to drive and driveable to the actual developers for themselves.
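The “storage classes by performance and protection level” idea maps directly onto Kubernetes StorageClass objects. The sketch below assumes a hypothetical vendor CSI driver (`csi.example.com`); real driver names and `parameters` keys are vendor-specific:

```yaml
# A performance tier: volumes are deleted with their claims and can
# bind lazily to the node where the pod lands.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: csi.example.com          # placeholder for the vendor's CSI driver
parameters:
  tier: performance                   # driver-specific key, illustrative
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
---
# A capacity tier with stronger data retention: the volume is kept
# even after the claim is deleted.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: capacity-hdd
provisioner: csi.example.com
parameters:
  tier: capacity
reclaimPolicy: Retain
allowVolumeExpansion: true
```

Because developers only reference a class name in their claims, the same class names can be offered consistently across distributions even when the backing driver differs.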
So, that’s the first thing I think storage has to deliver. It has to deliver that interface into the microservices world to standardise that.
The second thing I think you really need to think about is that the storage offerings that are available for microservices should be deliverable consistently, irrespective of where you build that microservice.
So, you should be able to offer standardised data management on-premise in your datacentre, across OpenShift and Kubernetes for example, but equally you should have the ability to offer a standardised persistent storage offering across AWS, Azure and Google as well, so really a hybrid cloud offering.
That’s really important because it means you’re not having to reinvent the wheel, even though you’re building microservices on-premise maybe for production or in the cloud for development, or vice versa.
That should be a standard offering that storage should be available consistently across those different environments.
Really, you also need to be able to have the tools to orchestrate storage across hybrid cloud, on-premise and in the public cloud. You should be able to orchestrate the provisioning, your backup, your DR, irrespective of the Kubernetes distribution you choose to build in.
And, importantly, those tools should be resource-aware.
So, microservices is all about the resources that are pulled together to build an application. The tooling that should be available to connect the storage to the application should be fully resource-aware and should ultimately enable application portability across distributions, so you have this ability to not just attach persistent storage, but to take that storage and data wherever you decide to deploy that microservices application.
The future – and delivering it today is absolutely key – is this: how can microservices and Kubernetes abstract application delivery across any platform and make it truly infrastructure-as-code?

Having the ability, for example, to build an application in AWS and then drag it into Google, that’s really key and you can do that with microservices, but what’s difficult and what needs to be attached in is the ability to also pull back the persistent data with that infrastructure-as-code as you drag it from one distribution to another.
We’ve been able to abstract the compute, and that’s what microservices and Kubernetes do already. You can run the same applications on whichever distribution you want – OpenShift, EKS, GKE, etc – and you can run it on top of whatever sits underneath that, bare metal, virtualised or a platform-as-a-service, it doesn’t really matter, but the storage component has always been the harder piece to do.
How do you build the same abstraction for storage so that it can effectively be dragged and used as infrastructure-as-code across any distribution that you desire to plug into?
The answer to that is to have Kubernetes do that storage virtualisation or abstraction itself.
So, to be able to deploy software-defined storage natively inside Kubernetes, so that if you need more storage, Kubernetes scales that storage for you, so that Kubernetes becomes the auto-scaling, provisioning engine but the storage itself is actually software-defined as a container and a service running inside Kubernetes.
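The “Kubernetes scales the storage for you” point has a concrete expression: with `allowVolumeExpansion` enabled on the class, growing a volume is just an edit to the claim, and the in-cluster, software-defined provisioner does the rest. A sketch, reusing illustrative names:

```yaml
# Growing a volume declaratively: raise the request on the existing
# claim and the CSI driver expands the underlying volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd   # illustrative class with allowVolumeExpansion: true
  resources:
    requests:
      storage: 40Gi            # raised from the original request
```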
And that scalability is absolutely key, I think, to making microservices of the future truly portable across a hybrid cloud world, so that it’s not just the compute that is portable as infrastructure-as-code, but also the storage is deployed as software-defined storage within that Kubernetes stack, and gains that portability as well.
So, I think what’s really key for the future is being able to do that – and actually being able to do it today. We don’t want to wait for the future to deploy some of these great things, but really what it will open up is a portability around microservices that includes not just the compute but also the persistent data storage that is wrapped underneath that compute.


