Using Kubernetes is a great experience; operating it in production is far less simple. And building a managed Kubernetes platform is even harder…
In November 2018 we released the beta version of our Managed Kubernetes service. It was the outcome of a journey that took us from being Kubernetes users to building a fully managed Kubernetes service, becoming a certified Kubernetes platform and learning a lot about building, operating and taming Kubernetes at scale…
Now that the beta is running, and the last issues are being worked out for the final release, we are taking some time to share some of the lessons we have learnt, the technological choices we have made and the tooling we have built in the process.
In today’s post we will introduce our Managed Kubernetes, explaining why we built it. In the next posts we will look deeper at some aspects of the architecture, like scaling etcd or how we run our customers’ Kubernetes masters inside the worker nodes of our own master Kubernetes cluster…
And of course, if you want to know more about our Managed Kubernetes, or would like to see a post about a particular topic, don’t hesitate to leave a comment!
The Kubernetes journey
The first time you play with Minikube is often astonishing. No more worrying about managing the instances, no need to monitor whether the containers are running; you stop an instance and Kubernetes re-creates the containers on another instance… It’s a kind of magic!
Then, as a new believer, you tell yourself that you should try to build a real cluster and deploy some bigger apps on it. You create some VMs, you learn to use kubeadm, and some time later you have spawned a fresh Kubernetes cluster to deploy your apps on. The magic is still there, but you begin to feel that, like in most tales, magic comes with a price…
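If you have never gone through that exercise, the flow looks roughly like this (a minimal sketch; the exact flags, the CIDR and the CNI manifest are illustrative assumptions, so check the kubeadm documentation for your own setup):

```bash
# On the first VM: bootstrap the control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Make kubectl usable for your regular user
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a pod network add-on (replace with the manifest of your chosen CNI plugin)
kubectl apply -f <your-cni-manifest.yaml>

# On each worker VM: join the cluster using the command printed by "kubeadm init"
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```

A few commands and you have a working cluster; the operational price only shows up later.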
Putting Kubernetes in production?
And when you try to deploy your first production Kubernetes cluster on-premises, on a hypervisor or on a bare-metal platform, you discover that the price can be rather steep…
Deploying the Kubernetes cluster is only the beginning; in order to consider it production ready, you also need to ensure that:
- The installation process is automatable and repeatable
- The upgrade/rollback process is safe
- A recovery procedure exists, is documented and tested
- Performance is predictable and consistent, especially when using persistent volumes
- The cluster is operable, with enough traces, metrics and logs to detect and debug failures and problems
- The service is secure and highly available
Our answer to this operational complexity
Well, if you thought that deploying your new Kubernetes cluster was going to get you this whole NoOps thing, it seems you were wrong. To keep the magic metaphor, learning to master magic takes a long time, and it’s not without risk…
So, as with many powerful technologies, Kubernetes’ apparent simplicity and versatility on the Dev side comes with high complexity on the Ops side. No wonder most users look to managed Kubernetes offerings when they need to move from proof of concept to production.
At OVH, as a user-focused company, we wanted to answer that demand by creating our managed Kubernetes solution: fully based on open source, free of vendor lock-in, and fully compatible with any pure Kubernetes solution. Our objective was to give our users a fully managed, turnkey Kubernetes cluster, ready to use, without the hassle of installation or operation.
On the shoulders of giants…
So we wanted to build a managed Kubernetes solution, but how? The first step was simple: we needed to be sure that the underlying infrastructure was rock solid, so we decided to base it on our own OpenStack-based Public Cloud offering.
Building our platform on a mature, highly available, standards-based product like OVH Public Cloud allowed us to concentrate our efforts on the real problem at hand: creating a highly scalable, easy-to-operate, CNCF-certified, managed Kubernetes service.
What’s next?
In the next posts in the series, we are going to dive into the architecture of the OVH Managed Kubernetes service, detailing some of our technological choices, explaining why we made them and how we made them work.
We will begin with one of our boldest decisions: running Kubernetes over Kubernetes, or, as we like to call it, Kubinception.