<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>OVHcloud Platform Archives - OVHcloud Blog</title>
	<atom:link href="https://blog.ovhcloud.com/tag/ovhcloud-platform/feed/" rel="self" type="application/rss+xml" />
	<link>https://blog.ovhcloud.com/tag/ovhcloud-platform/</link>
	<description>Innovation for Freedom</description>
	<lastBuildDate>Wed, 31 Jul 2019 10:08:14 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://blog.ovhcloud.com/wp-content/uploads/2019/07/cropped-cropped-nouveau-logo-ovh-rebranding-32x32.gif</url>
	<title>OVHcloud Platform Archives - OVHcloud Blog</title>
	<link>https://blog.ovhcloud.com/tag/ovhcloud-platform/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Deploying a FaaS platform on OVH Managed Kubernetes using OpenFaaS</title>
		<link>https://blog.ovhcloud.com/deploying-a-faas-platform-on-ovh-managed-kubernetes-using-openfaas/</link>
		
		<dc:creator><![CDATA[Horacio Gonzalez]]></dc:creator>
		<pubDate>Fri, 24 May 2019 16:40:47 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[FaaS]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[OpenFaaS]]></category>
		<category><![CDATA[OVHcloud Managed Kubernetes]]></category>
		<category><![CDATA[OVHcloud Platform]]></category>
		<guid isPermaLink="false">https://blog.ovh.com/fr/blog/?p=15487</guid>

					<description><![CDATA[Several weeks ago, I was taking part in a meetup about Kubernetes, when one of the attendees made a remark that resonated deeply with me&#8230; Hey, Horacio, that Kubernetes thing is rather cool, but what I would have loved to see is a Functions-as-a-Service platform. Most of my apps could be easily done with a [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Several weeks ago, I was taking part in a meetup about Kubernetes, when one of the attendees made a remark that resonated deeply with me&#8230;</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p style="text-align: left;"><em>Hey, Horacio, that Kubernetes thing is rather cool, but what I would have loved to see is a Functions-as-a-Service platform. Most of my apps could be easily done with a database and several serverless functions!</em></p></blockquote>



<p>It wasn&#8217;t the first time I&#8217;d got that question&#8230;</p>



<p>Being, above all, a web developer, I can definitely relate. Kubernetes is a wonderful product – you can install complicated web architectures with a click –&nbsp;but what about the <em>database + some functions</em> model?</p>



<p>Well, you can also do it with Kubernetes!</p>



<p>That&#8217;s the beauty of the rich Kubernetes ecosystem: you can find projects to address many different use cases, from <a href="https://www.ovh.com/fr/blog/deploying-game-servers-with-agones-on-ovh-managed-kubernetes/" data-wpel-link="exclude">game servers with Agones</a> to FaaS platforms&#8230;</p>



<h3 class="wp-block-heading">There is a Helm chart for that!</h3>



<p>Saying <em>&#8220;You can do it with Kubernetes!&#8221;</em> is almost the new &#8220;<em>There is an app for that!&#8221;</em>, but on its own it doesn&#8217;t help people who are looking for solutions. As the question had come up several times, we decided to prepare a small tutorial on how to deploy and use a FaaS platform on OVH Managed Kubernetes.</p>



<p>We began by testing several FaaS platforms on our Kubernetes clusters. Our objective was to find a solution with the following qualities:</p>



<ul class="wp-block-list"><li>Easy to deploy (ideally with a simple <a href="https://github.com/helm/helm" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Helm chart</a>)</li><li>Manageable with both a UI and a CLI, because different customers have different needs</li><li>Auto-scalable, in terms of both upscaling and downscaling</li><li>Supported by comprehensive documentation</li></ul>



<p>We tested lots of platforms, like <a href="https://kubeless.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Kubeless</a>, <a href="https://github.com/apache/incubator-openwhisk" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OpenWhisk</a>, <a href="https://github.com/openfaas/faas" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OpenFaaS</a> and <a href="https://github.com/fission/fission" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Fission</a>, and I must say that all of them performed quite well.&nbsp;In the end though, the one that scored the best in terms of our objectives was OpenFaaS, so we decided to use it as the reference for this blog post.</p>



<h3 class="wp-block-heading">OpenFaaS –&nbsp;a Kubernetes-native FaaS platform</h3>



<div class="wp-block-image"><figure class="aligncenter"><img fetchpriority="high" decoding="async" width="745" height="167" src="https://www.ovh.com/blog/wp-content/uploads/2019/05/CAA4B336-0797-4587-B92D-6F83A5C7197B.png" alt="OpenFaaS" class="wp-image-15505" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/05/CAA4B336-0797-4587-B92D-6F83A5C7197B.png 745w, https://blog.ovhcloud.com/wp-content/uploads/2019/05/CAA4B336-0797-4587-B92D-6F83A5C7197B-300x67.png 300w" sizes="(max-width: 745px) 100vw, 745px" /></figure></div>



<p><a href="https://github.com/openfaas/faas" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OpenFaaS</a> is an open-source framework for building serverless functions with Docker and Kubernetes. The project is already mature, popular and active, with more than 14k stars on GitHub, hundreds of contributors, and lots of users (both corporate and private).</p>



<p>OpenFaaS is very simple to deploy, using a Helm chart (including an operator for CRDs, i.e. <code>kubectl get functions</code>). It has both a CLI and a UI, manages auto-scaling effectively, and its documentation is really comprehensive (with a Slack channel to discuss it, as a nice bonus!).</p>



<p>Technically, OpenFaaS is composed of several functional blocks:</p>



<ul class="wp-block-list"><li>The <em>Function Watchdog.</em>&nbsp;A tiny golang HTTP server that transforms any Docker image into a serverless function</li><li>The <em>API Gateway</em>, which provides&nbsp;an external route into functions and collects metrics</li><li>The <em>UI Portal</em>, from which you can create and invoke functions</li><li>The <em>CLI</em> (essentially a REST client for the <em>API Gateway</em>), which can deploy any container as a function</li></ul>



<p>Functions can be written in many languages (although I mainly used JavaScript, Go and Python for testing purposes), using handy templates or a simple Dockerfile.</p>



<div class="wp-block-image"><figure class="aligncenter"><img decoding="async" width="1024" height="665" src="https://www.ovh.com/blog/wp-content/uploads/2019/05/F39BD4F4-2C54-4F5F-B6F4-2D59E634B798-1024x665.png" alt="OpenFaaS Architecture" class="wp-image-15508" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/05/F39BD4F4-2C54-4F5F-B6F4-2D59E634B798-1024x665.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2019/05/F39BD4F4-2C54-4F5F-B6F4-2D59E634B798-300x195.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/05/F39BD4F4-2C54-4F5F-B6F4-2D59E634B798-768x499.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/05/F39BD4F4-2C54-4F5F-B6F4-2D59E634B798.png 1118w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure></div>



<h3 class="wp-block-heading">Deploying OpenFaaS on OVH Managed Kubernetes</h3>



<p>There are several ways to install OpenFaaS on a Kubernetes cluster. In this post we&#8217;re looking at the easiest one: installing with <a href="https://helm.sh/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Helm</a>.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p style="text-align: left;">If you need information on how to install and use Helm on your OVH Managed Kubernetes cluster, you can follow <a href="https://docs.ovh.com/gb/en/kubernetes/installing-helm/" data-wpel-link="exclude">our tutorial</a>.</p></blockquote>



<p>The official Helm chart for OpenFaaS is <a href="https://github.com/openfaas/faas-netes/blob/master/chart/openfaas" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">available on the faas-netes repository</a>.</p>



<h3 class="wp-block-heading">Adding the OpenFaaS Helm chart</h3>



<p>The OpenFaaS Helm chart isn&#8217;t available in Helm&#8217;s standard <code>stable</code> repository, so you&#8217;ll need to add their repository to your Helm installation:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">helm repo add openfaas https://openfaas.github.io/faas-netes/
helm repo update</code></pre>



<h3 class="wp-block-heading">Creating the namespaces</h3>



<p>OpenFaaS guidelines recommend creating two <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">namespaces</a>, one for OpenFaaS core services and one for the functions:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml</code></pre>



<h3 class="wp-block-heading">Generating secrets</h3>



<p>A FaaS platform that&#8217;s open to the internet seems like a bad idea. That&#8217;s why we are generating secrets, to enable authentication on the gateway:</p>



<pre class="wp-block-code language-bash"><code lang="bash" class="language-bash"># generate a random password
PASSWORD=$(head -c 12 /dev/urandom | shasum | cut -d' ' -f1)

kubectl -n openfaas create secret generic basic-auth \
    --from-literal=basic-auth-user=admin \
    --from-literal=basic-auth-password="$PASSWORD"</code></pre>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p style="text-align: left;"><strong>Note:</strong> you will need this password later in the tutorial (to access the UI portal, for example). You can view it at any point in the terminal session with&nbsp;<code>echo $PASSWORD</code>.</p></blockquote>



<h3 class="wp-block-heading">Deploying the Helm chart</h3>



<p>The Helm chart can be deployed in three modes: <code>LoadBalancer</code>, <code>NodePort</code> and <code>Ingress</code>. For our purposes, the simplest way is to use our external Load Balancer, so we will deploy the chart in <code>LoadBalancer</code> mode, with the <code>--set serviceType=LoadBalancer</code> option.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p style="text-align: left;">If you want to better understand the difference between these three modes, you can read our <a href="https://www.ovh.com/fr/blog/getting-external-traffic-into-kubernetes-clusterip-nodeport-loadbalancer-and-ingress/" data-wpel-link="exclude">Getting external traffic into Kubernetes – ClusterIp, NodePort, LoadBalancer, and Ingress</a> blog post.</p></blockquote>



<p>Deploy the Helm chart as follows:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">helm upgrade openfaas --install openfaas/openfaas \
    --namespace openfaas  \
    --set basic_auth=true \
    --set functionNamespace=openfaas-fn \
    --set serviceType=LoadBalancer</code></pre>



<p>As suggested in the install message, you can verify that OpenFaaS has started by running:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">kubectl --namespace=openfaas get deployments -l "release=openfaas, app=openfaas"</code></pre>



<p>If it&#8217;s working, you should see a list of the available OpenFaaS <code>deployment</code> objects:</p>



<pre class="wp-block-code console"><code class="">$ kubectl --namespace=openfaas get deployments -l "release=openfaas, app=openfaas"
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
alertmanager   1         1         1            1           33s
faas-idler     1         1         1            1           33s
gateway        1         1         1            1           33s
nats           1         1         1            1           33s
prometheus     1         1         1            1           33s
queue-worker   1         1         1            1           33s
</code></pre>



<h3 class="wp-block-heading">Install the FaaS CLI and log in to the API Gateway</h3>



<p>The easiest way to interact with your new OpenFaaS platform is by installing <code>faas-cli</code>, the command-line client for OpenFaaS, on Linux or Mac (or in a WSL Linux terminal on Windows):</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">curl -sL https://cli.openfaas.com | sh</code></pre>



<p>You can now use the CLI to log in to the gateway. The CLI will need the public URL of the OpenFaaS <code>LoadBalancer</code>, which you can get via <code>kubectl</code>:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">kubectl get svc -n openfaas gateway-external -o wide</code></pre>



<p>Export the URL to an&nbsp;<code>OPENFAAS_URL</code> variable:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">export OPENFAAS_URL=[THE_URL_OF_YOUR_LOADBALANCER]:[THE_EXTERNAL_PORT]</code></pre>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p style="text-align: left;"><strong>Note:</strong> you will need this URL later in the tutorial (to access the UI portal, for example). You can view it at any point in the terminal session with <code>echo $OPENFAAS_URL</code>.</p></blockquote>



<p>And connect to the gateway:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">echo -n $PASSWORD | ./faas-cli login -g $OPENFAAS_URL -u admin --password-stdin</code></pre>



<p>Now you&#8217;re connected to the gateway, and you can send commands to the OpenFaaS platform.</p>



<p>By default, there is no function installed on your OpenFaaS platform, as you can verify with the <code>faas-cli list</code> command.</p>



<p>In my own deployment (URLs and IP changed for this example), the preceding operations gave:</p>



<pre class="wp-block-code console"><code class="">$ kubectl get svc -n openfaas gateway-external -o wide
 NAME               TYPE           CLUSTER-IP    EXTERNAL-IP                        PORT(S)          AGE     SELECTOR
 gateway-external   LoadBalancer   10.3.xxx.yyy   xxxrt657xx.lb.c1.gra.k8s.ovh.net   8080:30012/TCP   9m10s   app=gateway

 $ export OPENFAAS_URL=xxxrt657xx.lb.c1.gra.k8s.ovh.net:8080

 $ echo -n $PASSWORD | ./faas-cli login -g $OPENFAAS_URL -u admin --password-stdin
 Calling the OpenFaaS server to validate the credentials...
 WARNING! Communication is not secure, please consider using HTTPS. Letsencrypt.org offers free SSL/TLS certificates.
 credentials saved for admin http://xxxrt657xx.lb.c1.gra.k8s.ovh.net:8080

$ ./faas-cli version
  ___                   _____           ____
 / _ \ _ __   ___ _ __ |  ___|_ _  __ _/ ___|
| | | | '_ \ / _ \ '_ \| |_ / _` |/ _` \___ \
| |_| | |_) |  __/ | | |  _| (_| | (_| |___) |
 \___/| .__/ \___|_| |_|_|  \__,_|\__,_|____/
      |_|
CLI:
 commit:  b42d0703b6136cac7b0d06fa2b212c468b0cff92
 version: 0.8.11
Gateway
 uri:     http://xxxrt657xx.lb.c1.gra.k8s.ovh.net:8080
 version: 0.13.0
 sha:     fa93655d90d1518b04e7cfca7d7548d7d133a34e
 commit:  Update test for metrics server
Provider
 name:          faas-netes
 orchestration: kubernetes
 version:       0.7.5 
 sha:           4d3671bae8993cf3fde2da9845818a668a009617

$ ./faas-cli list
Function                          Invocations     Replicas
</code></pre>



<h3 class="wp-block-heading">Deploying and invoking functions</h3>



<p>You can easily deploy functions on your OpenFaaS platform using the CLI, with this command:&nbsp;<code>faas-cli up</code>.</p>



<p>Let&#8217;s try out&nbsp;<a href="https://raw.githubusercontent.com/openfaas/faas/master/stack.yml" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">some sample functions</a> from the OpenFaaS repository:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">./faas-cli deploy -f https://raw.githubusercontent.com/openfaas/faas/master/stack.yml</code></pre>



<p>Running a <code>faas-cli list</code> command now will show the deployed functions:</p>



<pre class="wp-block-code console"><code class="">$ ./faas-cli list
Function                          Invocations     Replicas
base64                            0               1    
echoit                            0               1    
hubstats                          0               1    
markdown                          0               1    
nodeinfo                          0               1    
wordcount                         0               1    
</code></pre>



<p>As an example, let&#8217;s invoke&nbsp;<code>wordcount</code> (a function that takes the syntax of the unix <a href="https://en.wikipedia.org/wiki/Wc_(Unix)" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer"><code>wc</code></a> command, giving us the number of lines, words and characters of the input data):</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">echo 'I love when a plan comes together' | ./faas-cli invoke wordcount</code></pre>



<pre class="wp-block-code console"><code class="">
$ echo 'I love when a plan comes together' | ./faas-cli invoke wordcount
       1         7        34
</code></pre>



<h3 class="wp-block-heading">Invoking a function without the CLI</h3>



<p>You can use the <code>faas-cli describe</code> command to get the public URL of your function, and then call it directly with your favorite HTTP library (or the good old <code>curl</code>):</p>



<pre class="wp-block-code console"><code class="">$ ./faas-cli describe wordcount
Name:                wordcount
Status:              Ready
Replicas:            1
Available replicas:  1
Invocations:         1
Image:               functions/alpine:latest
Function process:    
URL:                 http://xxxxx657xx.lb.c1.gra.k8s.ovh.net:8080/function/wordcount
Async URL:           http://xxxxx657xx.lb.c1.gra.k8s.ovh.net:8080/async-function/wordcount
Labels:              faas_function : wordcount
Annotations:         prometheus.io.scrape : false

$ curl -X POST --data-binary "I love when a plan comes together" "http://xxxxx657xx.lb.c1.gra.k8s.ovh.net:8080/function/wordcount"
       0         7        33
</code></pre>
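

<p>Note that the counters differ slightly from the CLI invocation (<code>0 7 33</code> instead of <code>1 7 34</code>): <code>echo</code> appends a trailing newline to its output, whereas <code>curl --data-binary</code> sends the string exactly as given. You can reproduce the difference locally with the real <code>wc</code> command, which <code>wordcount</code> mimics:</p>



```shell
# echo adds a trailing newline: 1 line, 7 words, 34 bytes
echo 'I love when a plan comes together' | wc
# printf sends the bytes as-is, like curl --data-binary: 0 lines, 7 words, 33 bytes
printf 'I love when a plan comes together' | wc
```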



<h3 class="wp-block-heading">Containers everywhere&#8230;</h3>



<p>The most attractive part of a FaaS platform is being able to deploy your own functions.<br>In OpenFaaS, you can write these functions in many languages, not just the usual suspects (JavaScript, Python, Go etc.). This is because in OpenFaaS, you can deploy basically any container as a function, although this does mean you need to package your functions as containers in order to deploy them.</p>



<p>That also means that in order to create your own functions, you need to have <a href="https://www.docker.com/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Docker</a> installed on your workstation, and you will need to push the images to a Docker registry (either the official one or a private one).</p>



<p>If you need a private registry, you can <a href="https://docs.docker.com/registry/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">install one</a> on your OVH Managed Kubernetes cluster. For this tutorial we are choosing to deploy our image on the official Docker registry.</p>



<h2 class="wp-block-heading">Writing our first function</h2>



<p>For our first example, we are going to create and deploy a <em>hello world</em> function in JavaScript, using <a href="https://nodejs.org/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">NodeJS</a>. Let&#8217;s begin by creating and scaffolding the function folder:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">mkdir hello-js-project
cd hello-js-project
../faas-cli new hello-js --lang node</code></pre>



<p>The CLI will download a JS function template from the OpenFaaS repository, generate a function description file (<code>hello-js.yml</code> in this case) and a folder for the function source code (<code>hello-js</code>). For NodeJS, you will find a <code>package.json</code> (to declare any dependencies your function may need, for example) and a <code>handler.js</code> (the function&#8217;s main code) in this folder.</p>



<p>Edit <code>hello-js.yml</code> to set the name of the image you want to upload to the Docker registry:</p>



<pre title="hello-js.yaml" class="wp-block-code"><code lang="yaml" class="language-yaml">version: 1.0
provider:
  name: openfaas
  gateway: http://6d6rt657vc.lb.c1.gra.k8s.ovh.net:8080
functions:
  hello-js:
    lang: node
    handler: ./hello-js
    image: ovhplatform/openfaas-hello-js:latest</code></pre>



<p>The function described in the <code>handler.js</code> file is really simple. It exports a function with two parameters: a <code>context</code>, where you will receive the request data, and a <code>callback</code>, which you call at the end of your function, passing it the response data.</p>



<pre title="handler.js" class="wp-block-code"><code lang="javascript" class="language-javascript">"use strict"

module.exports = (context, callback) => {
    callback(undefined, {status: "done"});
}</code></pre>



<p>Let&#8217;s edit it to send back our <em>hello world</em> message:</p>



<pre title="handler.js" class="wp-block-code"><code lang="javascript" class="language-javascript">"use strict"

module.exports = (context, callback) => {
    callback(undefined, {message: 'Hello world'});
}</code></pre>
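

<p>If you have Node.js installed on your workstation, you can smoke-test the handler logic locally before building any image. This is purely an optional convenience, not part of the OpenFaaS workflow: it loads the module and calls it directly, bypassing the watchdog entirely (the temp-file path is illustrative):</p>



```shell
# Write the handler to a temporary file and invoke it directly with node,
# outside any container, to check the callback wiring
cat > /tmp/handler.js <<'EOF'
"use strict"

module.exports = (context, callback) => {
    callback(undefined, {message: 'Hello world'});
}
EOF
node -e "require('/tmp/handler.js')(null, (err, res) => console.log(JSON.stringify(res)))"
```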



<p>Now you can build the Docker image and push it to the public Docker registry:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash"># Build the image
../faas-cli build -f hello-js.yml
# Log in to the Docker registry (needed to push the image)
docker login     
# Push the image to the registry
../faas-cli push -f hello-js.yml</code></pre>



<p>With the image in the registry, let&#8217;s deploy and invoke the function with the OpenFaaS CLI:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash"># Deploy the function
../faas-cli deploy -f hello-js.yml
# Invoke the function
../faas-cli invoke hello-js</code></pre>



<p>Congratulations! You have just written and deployed your first OpenFaaS function.</p>



<h3 class="wp-block-heading">Using the OpenFaaS UI Portal</h3>



<p>You can test the UI Portal by pointing your browser to your OpenFaaS gateway URL (the one stored in the <code>$OPENFAAS_URL</code> variable) and, when prompted, entering the <code>admin</code>&nbsp;user and the password stored in the <code>$PASSWORD</code> variable.</p>



<figure class="wp-block-image"><img decoding="async" width="963" height="579" src="https://www.ovh.com/blog/wp-content/uploads/2019/05/ui-portal-01.jpg" alt="OpenFaaS UI Portal" class="wp-image-15495" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/05/ui-portal-01.jpg 963w, https://blog.ovhcloud.com/wp-content/uploads/2019/05/ui-portal-01-300x180.jpg 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/05/ui-portal-01-768x462.jpg 768w" sizes="(max-width: 963px) 100vw, 963px" /></figure>



<p>In the UI Portal, you will find the list of the deployed functions. For each function, you can find its description, invoke it and see the result.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="828" height="768" src="https://www.ovh.com/blog/wp-content/uploads/2019/05/ui-portal-02.jpg" alt="OpenFaaS UI Portal" class="wp-image-15496" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/05/ui-portal-02.jpg 828w, https://blog.ovhcloud.com/wp-content/uploads/2019/05/ui-portal-02-300x278.jpg 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/05/ui-portal-02-768x712.jpg 768w" sizes="auto, (max-width: 828px) 100vw, 828px" /></figure>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="832" height="899" src="https://www.ovh.com/blog/wp-content/uploads/2019/05/ui-portal-03.jpg" alt="OpenFaaS UI Portal" class="wp-image-15497" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/05/ui-portal-03.jpg 832w, https://blog.ovhcloud.com/wp-content/uploads/2019/05/ui-portal-03-278x300.jpg 278w, https://blog.ovhcloud.com/wp-content/uploads/2019/05/ui-portal-03-768x830.jpg 768w" sizes="auto, (max-width: 832px) 100vw, 832px" /></figure>



<h3 class="wp-block-heading">Where do we go from here?</h3>



<p>So you now have a working OpenFaaS platform on your OVH Managed Kubernetes cluster.</p>



<p>To learn more about OpenFaaS, and how you can get the most out of it, please refer to the <a href="https://docs.openfaas.com/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">official OpenFaaS documentation</a>. You can also follow the <a href="https://github.com/openfaas/workshop" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OpenFaaS workshops</a>&nbsp;for more practical tips and advice.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Deploying game servers with Agones on OVH Managed Kubernetes</title>
		<link>https://blog.ovhcloud.com/deploying-game-servers-with-agones-on-ovh-managed-kubernetes/</link>
		
		<dc:creator><![CDATA[Horacio Gonzalez]]></dc:creator>
		<pubDate>Fri, 12 Apr 2019 10:01:44 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Agones]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[OVHcloud Managed Kubernetes]]></category>
		<category><![CDATA[OVHcloud Platform]]></category>
		<guid isPermaLink="false">https://blog.ovh.com/fr/blog/?p=15322</guid>

					<description><![CDATA[One of the key advantages of using Kubernetes is the formidable ecosystem around it. From Rancher to Istio, from Rook to Fission, from gVisor to KubeDB, the Kubernetes ecosystem is rich, vibrant and ever-growing. We are getting to the point where for most deployment needs we can say there is a K8s-based open-source project for [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>One of the key advantages of using Kubernetes is the formidable ecosystem around it. From <a href="http://rancher.com/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Rancher</a> to <a href="https://istio.io/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Istio</a>, from <a href="https://rook.io/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Rook</a> to <a href="https://fission.io/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Fission</a>, from <a href="https://gvisor.dev/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">gVisor</a> to <a href="https://kubedb.com/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">KubeDB</a>, the Kubernetes ecosystem is rich, vibrant and ever-growing. We are getting to the point where for most deployment needs we can say <em>there is a K8s-based open-source project for that</em>.</p>



<p>One of the latest additions to this ecosystem is the <a href="https://agones.dev" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Agones</a> project, an open-source platform for hosting dedicated multiplayer game servers, built on Kubernetes and developed by Google in collaboration with <a href="https://www.ubisoft.com/en-us/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Ubisoft</a>. The project was <a href="https://cloud.google.com/blog/products/gcp/introducing-agones-open-source-multiplayer-dedicated-game-server-hosting-built-on-kubernetes" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">announced in March</a>, and has already made quite a bit of noise&#8230;</p>



<p>In the OVH Platform Team, we are fans of both online gaming and Kubernetes, so we told ourselves that we needed to test Agones. And what better way to test it than deploying it on our <a href="https://www.ovh.com/fr/kubernetes/" rel="nofollow" data-wpel-link="exclude">OVH Managed Kubernetes</a> service, installing a <a href="http://www.xonotic.org/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Xonotic</a> game server cluster and playing some old-school deathmatches with colleagues?</p>



<div class="wp-block-image"><figure class="aligncenter is-resized"><img loading="lazy" decoding="async" src="/blog/wp-content/uploads/2019/04/DD101A52-234E-460C-8B52-B723DE785563.jpeg" alt="Agones on OVH Managed Kubernetes" width="599" height="301"/></figure></div>



<p>And of course, we needed to write about it to share the experience&#8230;</p>



<h3 class="wp-block-heading">Why Agones?</h3>



<p>Agones (<a href="https://www.merriam-webster.com/dictionary/agones" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">derived from the Greek word <em>agōn</em></a>, contests held during public festivals or more generally &#8220;contest&#8221; or &#8220;competition at games&#8221;) aims to replace the usual proprietary solutions to deploy, scale and manage game servers.</p>



<p>Agones enriches Kubernetes with a <a href="https://kubernetes.io/docs/concepts/api-extension/custom-resources/#custom-controllers" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Custom Controller</a> and a <a href="https://kubernetes.io/docs/concepts/api-extension/custom-resources/#customresourcedefinitions" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Custom Resource Definition</a>. With them, you can standardise Kubernetes tooling and APIs to create, scale and manage game server clusters.</p>



<h4 class="wp-block-heading">Wait, what game servers are you talking about?</h4>



<p>Well, Agones&#8217;s main focus is online multiplayer games such as <a href="https://en.wikipedia.org/wiki/First-person_shooter" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">FPS</a>s and <a href="https://en.wikipedia.org/wiki/Multiplayer_online_battle_arena" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">MOBA</a>s, fast-paced games requiring dedicated, low-latency game servers that synchronize the state of the game between players and serve as a source of truth for gaming situations.</p>



<p>These kinds of games call for relatively ephemeral, dedicated game servers, with every match running on its own server instance. The servers need to be stateful (they must keep the game status), with the state usually held in memory for the duration of the match.</p>



<p>Latency is a key concern, as the competitive, real-time aspects of these games demand quick responses from the server. That means the connection from the player&#8217;s device to the game server should be as direct as possible, ideally bypassing any intermediate server such as a load balancer.</p>



<h4 class="wp-block-heading">And how do you connect the players to the right server?</h4>



<p>Every game publisher used to have their own proprietary solution, but most of them follow a similar flow: a matchmaking service groups players into a match, asks a cluster manager to provision a dedicated game server instance, and sends its IP address and port to the players, allowing them to connect directly to the server and play the game.</p>



<div class="wp-block-image"><figure class="aligncenter is-resized"><img loading="lazy" decoding="async" src="/blog/wp-content/uploads/2019/04/1779D5AB-BF4B-4588-99E7-0BC6A888AE33.jpeg" alt="Online gaming matchmaking and game server asignation" width="407" height="232"/></figure></div>



<p>Agones and its Custom Controller and Custom Resource Definition replace the complex cluster management infrastructure with standardised, Kubernetes-based tooling and APIs. The matchmaker services interact with these APIs to spawn new game server pods and send their IP addresses and ports to the concerned players.</p>



<div class="wp-block-image"><figure class="aligncenter is-resized"><img loading="lazy" decoding="async" src="/blog/wp-content/uploads/2019/04/3D4C3CDD-5938-4CD8-89AE-8A97D7BF540F.jpeg" alt="Online gaming matchmaking and game server asignation with " width="449" height="397"/></figure></div>



<h4 class="wp-block-heading">The cherry on the cake</h4>



<p>Using Kubernetes for these tasks also brings some nice additional bonuses, like being able to deploy the full gaming infrastructure in a development environment (or even in a <a href="https://github.com/kubernetes/minikube" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">minikube</a>), or easily clone it to deploy in a new data center or cloud region. It also offers a whole platform to host all the additional services needed to build a game: account management, leaderboards, inventory&#8230;</p>



<p>And of course, there is the simplicity of operating Kubernetes-based platforms, especially when they are dynamic, heterogeneous and distributed, as most online gaming platforms are.</p>



<h3 class="wp-block-heading">Deploying Agones on OVH Managed Kubernetes</h3>



<p>There are several ways to install Agones in a Kubernetes cluster. For our test we chose the easiest one: installing with <a href="https://helm.sh/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Helm</a>.</p>



<h4 class="wp-block-heading">Enabling creation of RBAC resources</h4>



<p>The first step to install Agones is to set up a service account with enough permissions to create some special RBAC resource types.</p>



<pre class="wp-block-code"><code class="">kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin --serviceaccount=kube-system:default</code></pre>



<p>Now we have the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Cluster Role Binding</a> needed for the installation.</p>



<h4 class="wp-block-heading">Installing the Agones chart</h4>



<p>Now let&#8217;s continue by adding the Agones repository to Helm&#8217;s repository list:</p>



<pre class="wp-block-code"><code class="">helm repo add agones https://agones.dev/chart/stable</code></pre>



<p>And then installing the stable Agones chart:</p>



<pre class="wp-block-code"><code class="">helm install --name my-agones --namespace agones-system agones/agones</code></pre>



<p>The installation we have just done isn&#8217;t suited for production, as the <a href="https://agones.dev/site/docs/installation/helm/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">official install instructions</a> recommend running Agones and the game servers in separate, dedicated pools of nodes. But for the needs of our test, the basic setup is enough.</p>



<h3 class="wp-block-heading">Confirming Agones started successfully</h3>



<p>To verify that Agones is running on our Kubernetes cluster, we can look at the pods in the <code>agones-system</code> namespace:</p>



<pre class="wp-block-code"><code class="">kubectl get --namespace agones-system pods</code></pre>



<p>If everything is ok, you should see an <code>agones-controller</code> pod with a <em>Running</em> status:</p>



<pre class="wp-block-code console"><code class="">$ kubectl get --namespace agones-system pods
NAME                                 READY   STATUS    RESTARTS   AGE
agones-controller-5f766fc567-xf4vv   1/1     Running   0          5d15h
agones-ping-889c5954d-6kfj4          1/1     Running   0          5d15h
agones-ping-889c5954d-mtp4g          1/1     Running   0          5d15h
</code></pre>



<p>You can also see more details using:</p>



<pre class="wp-block-code"><code class="">kubectl describe --namespace agones-system pods</code></pre>



<p>Looking at the <code>agones-controller</code> description, you should see something like:</p>



<pre class="wp-block-code console"><code class="">$ kubectl describe --namespace agones-system pods
Name:               agones-controller-5f766fc567-xf4vv
Namespace:          agones-system
[...]
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
</code></pre>



<p>Where all the <code>Conditions</code> should have status <code>True</code>.</p>



<h3 class="wp-block-heading">Deploying a game server</h3>



<p>The Agones <em>Hello world</em> is rather boring, a simple <a href="https://github.com/GoogleCloudPlatform/agones/tree/release-0.9.0/examples/simple-udp" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">UDP echo server</a>, so we decided to skip it and go directly to something more interesting: a <a href="https://github.com/GoogleCloudPlatform/agones/blob/release-0.9.0/examples/xonotic" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Xonotic game server</a>.</p>



<p><a href="https://www.xonotic.org/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Xonotic</a> is an open-source multi-player FPS, and a rather good one, with lots of interesting game modes, maps, weapons and customization options.</p>



<p>Deploying a Xonotic game server over Agones is rather easy:</p>



<pre class="wp-block-code"><code class="">kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/agones/release-0.9.0/examples/xonotic/gameserver.yaml</code></pre>



<p>The game server deployment can take some moments, so we need to wait until its status is <code>Ready</code> before using it. We can fetch the status with:</p>



<pre class="wp-block-code"><code class="">kubectl get gameserver</code></pre>



<p>We wait until the fetch gives a <code>Ready</code> status on our game server:</p>



<pre class="wp-block-code console"><code class="">$ kubectl get gameserver
NAME      STATE   ADDRESS         PORT   NODE       AGE
xonotic   Ready   51.83.xxx.yyy   7094   node-zzz   5d
</code></pre>



<p>When the game server is ready, we also get the address and the port we should use to connect to our deathmatch game (in my example, <code>51.83.xxx.yyy:7094</code>).</p>



<h3 class="wp-block-heading">It&#8217;s frag time</h3>



<p>So now that we have a server, let&#8217;s test it!</p>



<p>We downloaded the Xonotic client for our computers (it runs on Windows, Linux and macOS, so there is no excuse), and launched it:</p>



<div class="wp-block-image"><figure class="aligncenter is-resized"><img loading="lazy" decoding="async" src="https://www.ovh.com/blog/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-13-1024x576.png" alt="xonotic" class="wp-image-15335" width="768" height="432" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-13-1024x576.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-13-300x169.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-13-768x432.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-13-1200x675.png 1200w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-13.png 1920w" sizes="auto, (max-width: 768px) 100vw, 768px" /></figure></div>



<p>Then we go to the <em>Multiplayer</em> menu and enter the address and port of our game server:</p>



<div class="wp-block-image"><figure class="aligncenter is-resized"><img loading="lazy" decoding="async" src="https://www.ovh.com/blog/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-41-1024x576.png" alt="" class="wp-image-15336" width="768" height="432" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-41-1024x576.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-41-300x169.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-41-768x432.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-41-1200x675.png 1200w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-41.png 1920w" sizes="auto, (max-width: 768px) 100vw, 768px" /></figure></div>



<p>And we are ready to play!</p>



<div class="wp-block-image"><figure class="aligncenter is-resized"><img loading="lazy" decoding="async" src="https://www.ovh.com/blog/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-35-36-1024x576.png" alt="" class="wp-image-15337" width="768" height="432" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-35-36-1024x576.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-35-36-300x169.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-35-36-768x432.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-35-36-1200x675.png 1200w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-35-36.png 1920w" sizes="auto, (max-width: 768px) 100vw, 768px" /></figure></div>



<h4 class="wp-block-heading">And on the server side?</h4>



<p>On the server side, we can spy how things are going for our game server, using <code>kubectl logs</code>. Let&#8217;s begin by finding the pod running the game:</p>



<pre class="wp-block-code"><code class="">kubectl get pods</code></pre>



<p>We see that our game server is running in a pod called <code>xonotic</code>:</p>



<pre class="wp-block-code console"><code class="">$ kubectl get pods 
NAME      READY   STATUS    RESTARTS   AGE
xonotic   2/2     Running   0          5d15h
</code></pre>



<p>We can then use <code>kubectl logs</code> on it. The pod runs two containers, the main <code>xonotic</code> one and an Agones <em>sidecar</em>, so we must specify that we want the logs of the <code>xonotic</code> container:</p>



<pre class="wp-block-code console"><code class="">$ kubectl logs xonotic
Error from server (BadRequest): a container name must be specified for pod xonotic, choose one of: [xonotic agones-gameserver-sidecar]
$ kubectl logs xonotic xonotic
>>> Connecting to Agones with the SDK
>>> Starting health checking
>>> Starting wrapper for Xonotic!
>>> Path to Xonotic server script: /home/xonotic/Xonotic/server_linux.sh 
Game is Xonotic using base gamedir data
gamename for server filtering: Xonotic
Xonotic Linux 22:03:50 Mar 31 2017 - release
Current nice level is below the soft limit - cannot use niceness
Skeletal animation uses SSE code path
execing quake.rc
[...]
Authenticated connection to 109.190.xxx.yyy:42475 has been established: client is v6xt9/GlzxBH+xViJCiSf4E/SCn3Kx47aY3EJ+HOmZo=@Xon//Ks, I am /EpGZ8F@~Xon//Ks
LostInBrittany is connecting...
url_fclose: failure in crypto_uri_postbuf
Receiving player stats failed: -1
LostInBrittany connected
LostInBrittany connected
LostInBrittany is now spectating
[BOT]Eureka connected
[BOT]Hellfire connected
[BOT]Lion connected
[BOT]Scorcher connected
unconnected changed name to [BOT]Eureka
unconnected changed name to [BOT]Hellfire
unconnected changed name to [BOT]Lion
unconnected changed name to [BOT]Scorcher
[BOT]Scorcher picked up Strength
[BOT]Scorcher drew first blood! 
[BOT]Hellfire was gunned down by [BOT]Scorcher's Shotgun
[BOT]Scorcher slapped [BOT]Lion around a bit with a large Shotgun
[BOT]Scorcher was gunned down by [BOT]Eureka's Shotgun, ending their 2 frag spree
[BOT]Scorcher slapped [BOT]Lion around a bit with a large Shotgun
[BOT]Scorcher was shot to death by [BOT]Eureka's Blaster
[BOT]Hellfire slapped [BOT]Eureka around a bit with a large Shotgun, ending their 2 frag spree
[BOT]Eureka slapped [BOT]Scorcher around a bit with a large Shotgun
[BOT]Eureka was gunned down by [BOT]Hellfire's Shotgun
[BOT]Hellfire was shot to death by [BOT]Lion's Blaster, ending their 2 frag spree
[BOT]Scorcher was cooked by [BOT]Lion
[BOT]Eureka turned into hot slag
[...]
</code></pre>



<h4 class="wp-block-heading">Add some friends&#8230;</h4>



<p>The next step is the most enjoyable one: asking colleagues to connect to the server and playing a true deathmatch, just like in the Quake 2 days.</p>



<h3 class="wp-block-heading">And now?</h3>



<p>We have a working game server, but we have barely scratched the surface of Agones&#8217; possibilities: deploying a <a href="https://agones.dev/site/docs/reference/fleet/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">fleet</a> (a set of warm GameServers that are available to be allocated from), testing the <a href="https://agones.dev/site/docs/reference/fleetautoscaler/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">FleetAutoscaler</a> (to automatically scale a Fleet up and down in response to demand), or building a dummy <a href="https://agones.dev/site/docs/tutorials/allocator-service-go/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">allocator service</a>. In future blog posts, we will dive deeper and explore those possibilities.</p>
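<p>To give a taste of what&#8217;s coming, a Fleet is declared much like a GameServer, wrapped in a replica template. A hedged sketch (field names as in Agones 0.9; verify against the fleet reference linked above before using):</p>

<pre class="wp-block-code"><code class="">apiVersion: "stable.agones.dev/v1alpha1"
kind: Fleet
metadata:
  name: xonotic-fleet
spec:
  replicas: 3             # keep 3 warm GameServers ready to be allocated
  template:               # a GameServer spec template
    spec:
      ports:
      - name: default
        portPolicy: Dynamic
        containerPort: 26000
      template:
        spec:
          containers:
          - name: xonotic
            image: gcr.io/agones-images/xonotic-example:0.5  # illustrative tag</code></pre>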



<p>And in a wider context, we are going to continue our exploratory journey with Agones. The project is still very young, in early alpha, but it already shows some impressive potential.</p>
<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fdeploying-game-servers-with-agones-on-ovh-managed-kubernetes%2F&amp;action_name=Deploying%20game%20servers%20with%20Agones%20on%20OVH%20Managed%20Kubernetes&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Getting external traffic into Kubernetes &#8211; ClusterIp, NodePort, LoadBalancer, and Ingress</title>
		<link>https://blog.ovhcloud.com/getting-external-traffic-into-kubernetes-clusterip-nodeport-loadbalancer-and-ingress/</link>
		
		<dc:creator><![CDATA[Horacio Gonzalez]]></dc:creator>
		<pubDate>Fri, 22 Feb 2019 15:20:44 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[OVHcloud Managed Kubernetes]]></category>
		<category><![CDATA[OVHcloud Platform]]></category>
		<guid isPermaLink="false">https://blog.ovh.com/fr/blog/?p=14674</guid>

					<description><![CDATA[For the last few months, I have been acting as Developer Advocate for the OVH Managed Kubernetes beta, following our beta testers, getting feedback, writing docs and tutorials, and generally helping to make sure the product matches our users' needs as closely as possible.

In the next few posts, I am going to tell you some stories about this beta phase. We'll be taking a look at feedback from some of our beta testers, technical insights, and some fun anecdotes about the development of this new service.



Today, we'll start with one of the most frequent questions I got during the early days of the beta: How do I route external traffic into my Kubernetes service? The question came up a lot as our customers began to explore Kubernetes, and when I tried to answer it, I realised that part of the problem was the sheer number of possible answers, and the concepts needed to understand them.<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fgetting-external-traffic-into-kubernetes-clusterip-nodeport-loadbalancer-and-ingress%2F&amp;action_name=Getting%20external%20traffic%20into%20Kubernetes%20%26%238211%3B%20ClusterIp%2C%20NodePort%2C%20LoadBalancer%2C%20and%20Ingress&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[<p>For the last few months, I have been acting as <strong>Developer Advocate</strong> for the <strong><a href="https://labs.ovh.com/kubernetes-k8s" data-wpel-link="exclude">OVH Managed Kubernetes beta</a></strong>, following our beta testers, getting feedback, writing docs and tutorials, and generally helping to make sure<strong> the product matches our users&#8217; needs</strong> as closely as possible.</p>
<p>In the next few posts, I am going to tell you some <strong>stories about this beta phase</strong>. We&#8217;ll be taking a look at feedback from some of our beta testers, technical insights, and some fun anecdotes about the development of this new service.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-medium wp-image-14708" src="/blog/wp-content/uploads/2019/02/1FEEF258-644A-481F-B324-7C05AD45B8CC-300x169.png" alt="" width="300" height="169" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/02/1FEEF258-644A-481F-B324-7C05AD45B8CC-300x169.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/02/1FEEF258-644A-481F-B324-7C05AD45B8CC-768x432.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/02/1FEEF258-644A-481F-B324-7C05AD45B8CC.png 885w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<p>Today, we&#8217;ll start with one of the most frequent questions I got during the early days of the beta: <em><strong>How do I route external traffic into my Kubernetes service?</strong> </em>The question came up a lot as our customers began to explore Kubernetes, and when I tried to answer it, I realised that part of the problem was the <strong>sheer number of</strong> <strong>possible answers</strong>, and the <strong>concepts</strong> needed to understand them.</p>
<p>Related to that question was a <strong>feature request</strong>: most users wanted a load balancing tool. As the beta phase is all about confirming the stability of the product and validating the feature set prioritisation, we were able to quickly confirm <code>LoadBalancer</code> as a key feature of our first commercial release.</p>
<p>To try to better answer the external traffic question, and to make the adoption of <code>LoadBalancer</code> easier, we wrote a tutorial and added some drawings, which got nice feedback. This helped people understand the concepts underlying the routing of external traffic on Kubernetes.</p>
<p>This blog post is an expanded version of this tutorial. We hope that you will find it useful!</p>
<h2 id="some-concepts-clusterip-nodeport-ingress-and-loadbalancer" class="code-line" data-line="26">Some concepts:  <code>ClusterIP</code>,  <code>NodePort</code>,  <code>Ingress</code> and  <code>LoadBalancer</code></h2>
<p class="code-line" data-line="28">When you begin to use Kubernetes for real-world applications, one of the first questions to ask is how to get external traffic into your cluster. The <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">official documentation</a> offers a comprehensive (but rather dry) explanation of this topic, but here we are going to explain it in a more practical, need-to-know way.</p>
<p class="code-line" data-line="30">There are several ways to route external traffic into your cluster:</p>
<ul>
<li class="code-line" data-line="32">
<p class="code-line" data-line="32">Using Kubernetes proxy and <code>ClusterIP</code>: The default Kubernetes <code>ServiceType</code> is <code>ClusterIp</code>, which exposes the <code>Service</code> on a cluster-internal IP. To reach the <code>ClusterIp</code> from an external source, you can open a Kubernetes proxy between the external source and the cluster. This is usually only used for development.</p>
</li>
<li class="code-line" data-line="34">
<p class="code-line" data-line="34">Exposing services as <code>NodePort</code>: Declaring a <code>Service</code> as <code>NodePort</code>exposes it on each Node’s IP at a static port (referred to as the <code>NodePort</code>). You can then access the <code>Service</code> from outside the cluster by requesting <code>&lt;NodeIp&gt;:&lt;NodePort&gt;</code>. This can also be used for production, albeit with some limitations.</p>
</li>
<li class="code-line" data-line="36">
<p class="code-line" data-line="36">Exposing services as <code>LoadBalancer</code>: Declaring a <code>Service</code> as <code>LoadBalancer</code> exposes it externally, using a cloud provider’s load balancer solution. The cloud provider will provision a load balancer for the <code>Service</code>, and map it to its automatically assigned <code>NodePort</code>. This is the most widely used method in production environments.</p>
</li>
</ul>
<h3 id="using-kubernetes-proxy-and-clusterip" class="code-line" data-line="38">Using Kubernetes proxy and <code>ClusterIP</code></h3>
<p class="code-line" data-line="40">The default Kubernetes <code>ServiceType</code> is <code>ClusterIp</code>, which exposes the <code>Service</code> on a cluster-internal IP. To reach the <code>ClusterIp</code> from an external computer, you can open a Kubernetes proxy between the external computer and the cluster.</p>
<p class="code-line" data-line="42">You can use <code>kubectl</code> to create such a proxy. When the proxy is up, you&#8217;re directly connected to the cluster, and you can use the internal IP (ClusterIp) for that<code>Service</code>.</p>
<p><figure id="attachment_14701" aria-describedby="caption-attachment-14701" style="width: 376px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-14701" src="/blog/wp-content/uploads/2019/02/1D7F7733-BE79-4408-919C-C9D8F8AF3A9E.jpeg" alt="kubectl proxy and ClusterIP" width="376" height="600" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/02/1D7F7733-BE79-4408-919C-C9D8F8AF3A9E.jpeg 502w, https://blog.ovhcloud.com/wp-content/uploads/2019/02/1D7F7733-BE79-4408-919C-C9D8F8AF3A9E-188x300.jpeg 188w" sizes="auto, (max-width: 376px) 100vw, 376px" /><figcaption id="caption-attachment-14701" class="wp-caption-text">kubectl proxy and ClusterIP</figcaption></figure></p>
<div class="imageFrame">
<p class="code-line" data-line="46">This method isn&#8217;t suitable for a production environment, but it&#8217;s useful for development, debugging, and other quick-and-dirty operations.</p>
</div>
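<p>As an illustration, this is what the default case looks like: a plain <code>Service</code> with no explicit <code>type</code>, which defaults to <code>ClusterIP</code>, reached here through <code>kubectl proxy</code>. The service and app names are hypothetical:</p>

<pre class="wp-block-code"><code class="">apiVersion: v1
kind: Service
metadata:
  name: my-internal-service    # hypothetical name
spec:
  selector:
    app: my-app                # selects the pods of a hypothetical deployment
  ports:
  - port: 80                   # port exposed on the cluster-internal IP
    targetPort: 8080           # port the pods listen on
# With `kubectl proxy` running, the service is reachable at:
# http://localhost:8001/api/v1/namespaces/default/services/my-internal-service:80/proxy/</code></pre>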
<h3 id="exposing-services-as-nodeport" class="code-line" data-line="52">Exposing services as <code>NodePort</code></h3>
<p class="code-line" data-line="54">Declaring a service as <code>NodePort</code> exposes the <code>Service</code> on each Node’s IP at the <code>NodePort</code> (a fixed port for that <code>Service</code>, in the default range of 30000-32767). You can then access the <code>Service</code> from outside the cluster by requesting <code>&lt;NodeIp&gt;:&lt;NodePort&gt;</code>. Every service you deploy as <code>NodePort</code> will be exposed in its own port, on every Node.</p>
<p><figure id="attachment_14702" aria-describedby="caption-attachment-14702" style="width: 500px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-14702" src="/blog/wp-content/uploads/2019/02/BDFD96AE-11F9-4079-B375-250FA40B7CE9.jpeg" alt="NodePort" width="500" height="542" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/02/BDFD96AE-11F9-4079-B375-250FA40B7CE9.jpeg 738w, https://blog.ovhcloud.com/wp-content/uploads/2019/02/BDFD96AE-11F9-4079-B375-250FA40B7CE9-277x300.jpeg 277w" sizes="auto, (max-width: 500px) 100vw, 500px" /><figcaption id="caption-attachment-14702" class="wp-caption-text">NodePort</figcaption></figure></p>
<div class="imageFrame">
<p class="code-line" data-line="58">
</div>
<p class="code-line" data-line="62">It&#8217;s rather cumbersome to use <code>NodePort</code>for <code>Services</code>that are in production. As you are using non-standard ports, you often need to set-up an external load balancer that listens to the standard ports and redirects the traffic to the <code>&lt;NodeIp&gt;:&lt;NodePort&gt;</code>.</p>
<h3 id="exposing-services-as-loadbalancer" class="code-line" data-line="65">Exposing services as <code>LoadBalancer</code></h3>
<p class="code-line" data-line="67">Declaring a service of type <code>LoadBalancer</code> exposes it externally using a cloud provider’s load balancer. The cloud provider will provision a load balancer for the <code>Service</code>, and map it to its automatically assigned <code>NodePort</code>. How the traffic from that external load balancer is routed to the <code>Service</code> pods depends on the cluster provider.</p>
<p><figure id="attachment_14703" aria-describedby="caption-attachment-14703" style="width: 500px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-14703" src="/blog/wp-content/uploads/2019/02/81CC04AA-9585-4FCD-A53C-1C1CACDCBAB4.jpeg" alt="LoadBalancer" width="500" height="559" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/02/81CC04AA-9585-4FCD-A53C-1C1CACDCBAB4.jpeg 716w, https://blog.ovhcloud.com/wp-content/uploads/2019/02/81CC04AA-9585-4FCD-A53C-1C1CACDCBAB4-269x300.jpeg 269w" sizes="auto, (max-width: 500px) 100vw, 500px" /><figcaption id="caption-attachment-14703" class="wp-caption-text">LoadBalancer</figcaption></figure></p>
<div class="imageFrame">
<p class="code-line" data-line="71">
</div>
<p class="code-line" data-line="76">The <code>LoadBalancer</code> is the best option for a production environment, with two caveats:</p>
<ul>
<li class="code-line" data-line="78">Every <code>Service</code> that you deploy as <code>LoadBalancer</code> will get it&#8217;s own IP.</li>
<li class="code-line" data-line="79">The <code>LoadBalancer</code> is usually billed based on the number of exposed services, which can be expensive.</li>
</ul>
<blockquote class="code-line" data-line="81">
<p class="code-line" data-line="81">We are currently offering the OVH Managed Kubernetes LoadBalancer service as a free preview, until the end of summer 2019.</p>
</blockquote>
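<p>Declaring a <code>LoadBalancer</code> service is just as simple. A sketch with hypothetical names (the external IP is provisioned by the cloud provider after the service is created):</p>

<pre class="wp-block-code"><code class="">apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-service  # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: my-app                  # selects the pods to expose
  ports:
  - port: 80                     # port exposed by the external load balancer
    targetPort: 8080             # port the pods listen on</code></pre>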
<h3 id="what-about-ingress" class="code-line" data-line="84">What about <code>Ingress</code>?</h3>
<p class="code-line" data-line="86">According to the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">official documentation</a>, an <code>Ingress</code> is an API object that manages external access to the services in a cluster (typically HTTP). So what&#8217;s the difference between this and <code>LoadBalancer</code> or <code>NodePort</code>?</p>
<p class="code-line" data-line="88"><code>Ingress</code> isn&#8217;t a type of <code>Service</code>, but rather an object that acts as a <a href="https://en.wikipedia.org/wiki/Reverse_proxy" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">reverse proxy</a> and single entry-point to your cluster that routes the request to different services. The most basic <code>Ingress</code> is the <a href="https://github.com/kubernetes/ingress-nginx" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">NGINX Ingress Controller</a>, where the NGINX takes on the role of reverse proxy, while also functioning as SSL.</p>
<p><figure id="attachment_14699" aria-describedby="caption-attachment-14699" style="width: 450px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-14699" src="/blog/wp-content/uploads/2019/02/AF5F301F-ADDE-4ED7-9B80-4E2BA51DA6DB-225x300.png" alt="Ingress" width="450" height="600" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/02/AF5F301F-ADDE-4ED7-9B80-4E2BA51DA6DB-225x300.png 225w, https://blog.ovhcloud.com/wp-content/uploads/2019/02/AF5F301F-ADDE-4ED7-9B80-4E2BA51DA6DB.png 600w" sizes="auto, (max-width: 450px) 100vw, 450px" /><figcaption id="caption-attachment-14699" class="wp-caption-text">Ingress</figcaption></figure></p>
<p class="code-line" data-line="90">Ingress is exposed to the outside of the cluster via <code>ClusterIP</code> and Kubernetes proxy, <code>NodePort</code>, or <code>LoadBalancer</code>, and routes incoming traffic according to the configured rules.</p>
<p><figure id="attachment_14706" aria-describedby="caption-attachment-14706" style="width: 450px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-14706" src="/blog/wp-content/uploads/2019/02/61EC6FAF-0BDF-4273-9DAD-13480B755E02.png" alt="Ingress behind LoadBalancer" width="450" height="600" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/02/61EC6FAF-0BDF-4273-9DAD-13480B755E02.png 600w, https://blog.ovhcloud.com/wp-content/uploads/2019/02/61EC6FAF-0BDF-4273-9DAD-13480B755E02-225x300.png 225w" sizes="auto, (max-width: 450px) 100vw, 450px" /><figcaption id="caption-attachment-14706" class="wp-caption-text">Ingress behind LoadBalancer</figcaption></figure></p>
<p class="code-line" data-line="92">The main advantage of using an <code>Ingress</code> behind a <code>LoadBalancer</code> is the cost: you can have lots of services behind a single <code>LoadBalancer</code>.</p>
<h2 data-line="92">Which one should I use?</h2>
<p>Well, that&#8217;s the one million dollar question, and one which will probably elicit a different response depending on who you ask!</p>
<p>You could go 100% <code>LoadBalancer</code>, getting an individual <code>LoadBalancer</code> for each service. Conceptually, it&#8217;s simple: every service is independent, with no extra configuration needed. The downside is the price (you will be paying for one <code>LoadBalancer</code> per service), and also the difficulty of managing lots of different IPs.</p>
<p>You could also use only one <code>LoadBalancer</code> and an <code>Ingress</code> behind it. All your services would be under the same IP, each one under a different path. It&#8217;s a cheaper approach, as you only pay for one <code>LoadBalancer</code>, but if your services don&#8217;t have a logical relationship, it can quickly become chaotic.</p>
<p>If you want my personal opinion, I would try to use a combination of the two&#8230;</p>
<p>An approach I like is having a <code>LoadBalancer</code> for every related set of services, and then routing to those services using an <code>Ingress</code> behind the <code>LoadBalancer</code>. For example, let&#8217;s say you have two different microservice-based APIs, each one with around 10 services. I would put one <code>LoadBalancer</code> in front of one <code>Ingress</code> for each API, the <code>LoadBalancer</code> being the single public entry-point, and the <code>Ingress</code> routing traffic to the API&#8217;s different services.</p>
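<p>As a hedged sketch, the single public entry-point for one such API can simply be the ingress controller&#8217;s <code>Service</code> of type <code>LoadBalancer</code> (the names here are invented for the example):</p>
<pre><code># Illustrative Service: exposes one API's ingress controller
# as that API's single public entry-point
apiVersion: v1
kind: Service
metadata:
  name: orders-api-ingress
spec:
  type: LoadBalancer
  selector:
    app: orders-api-ingress-controller
  ports:
  - port: 80
    targetPort: 80
</code></pre>
<p>Each related set of services gets one such <code>Service</code> (and so one public IP), with its <code>Ingress</code> rules doing the per-service routing behind it.</p>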
<p>But if your architecture is quite complex (especially if you&#8217;re using microservices), you will soon find that manually managing everything with <code>LoadBalancer</code> and <code>Ingress</code> is rather cumbersome. If that&#8217;s the case, the answer could be to delegate those tasks to a service mesh&#8230;</p>
<h2>What&#8217;s a service mesh?</h2>
<p>You may have heard of <a href="https://istio.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Istio</a> or <a href="https://linkerd.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Linkerd</a>, and how they make it easier to build microservice architectures on Kubernetes, adding nifty perks like A/B testing, canary releases, rate limiting, access control, and end-to-end authentication.</p>
<p>Istio, Linkerd, and similar tools are service meshes, which allow you to build networks of microservices and define their interactions, while simultaneously adding some high-value features that make the setup and operation of microservice-based architectures easier.</p>
<p>There&#8217;s a lot to talk about when it comes to using service meshes on Kubernetes, but as they say, that&#8217;s a story for another time&#8230;</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Why OVH Managed Kubernetes?</title>
		<link>https://blog.ovhcloud.com/why-ovh-managed-kubernetes/</link>
		
		<dc:creator><![CDATA[Horacio Gonzalez]]></dc:creator>
		<pubDate>Thu, 17 Jan 2019 16:22:15 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[OVHcloud Managed Kubernetes]]></category>
		<category><![CDATA[OVHcloud Platform]]></category>
		<guid isPermaLink="false">https://blog.ovh.com/fr/blog/?p=14087</guid>

					<description><![CDATA[Using Kubernetes is a great experience, operating it in production is way less simple. And building a managed Kubernetes platform is even worse…<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fwhy-ovh-managed-kubernetes%2F&amp;action_name=Why%20OVH%20Managed%20Kubernetes%3F&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<blockquote class="wp-block-quote is-style-large is-layout-flow wp-block-quote-is-layout-flow"><p>Using Kubernetes is a great experience; operating it in production is way less simple. And building a managed Kubernetes platform is even&nbsp;worse…</p></blockquote>



<p>In November 2018 we released the beta version of our <a href="https://www.ovh.com/fr/kubernetes/" target="_blank" rel="noreferrer noopener" data-wpel-link="exclude">Managed Kubernetes service</a>. It was the outcome of a journey that took us from being Kubernetes users to building a fully managed Kubernetes service, becoming a certified Kubernetes platform, and learning a lot about building, operating and taming Kubernetes at scale…</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="885" height="443" src="/blog/wp-content/uploads/2019/01/kubernetesblog02.jpg" alt="" class="wp-image-14278" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/01/kubernetesblog02.jpg 885w, https://blog.ovhcloud.com/wp-content/uploads/2019/01/kubernetesblog02-300x150.jpg 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/01/kubernetesblog02-768x384.jpg 768w" sizes="auto, (max-width: 885px) 100vw, 885px" /></figure>



<p>As the <strong>beta is now running,</strong> and the last issues are being worked out for the final release, we&#8217;re taking some time to share some of the <strong>lessons we have learnt</strong>, the <strong>technological choices</strong> we have made, and the <strong>tooling</strong> we have built in the process.</p>



<p>In today’s post we will introduce our Managed Kubernetes, explaining why we built it. In the next posts we will look more closely at some aspects of the architecture, like scaling the etcd, or how we run our customers’ Kubernetes masters inside the worker nodes of our master Kubernetes…</p>



<p>And of course, if you want to know more about our Managed Kubernetes, or if you would like to see a post on a particular topic, <strong>don’t hesitate to leave a comment</strong>!</p>



<h3 class="wp-block-heading">The Kubernetes journey&nbsp;</h3>



<p>The first time you play with Minikube is often <strong>astonishing</strong>. No more worrying about managing the instances, no need to monitor whether the containers are running; you stop an instance and Kubernetes re-creates the containers in another instance… <strong>It’s a kind of magic</strong>!</p>



<p>Then, as a new believer, you tell yourself that you should try to build a true cluster, and deploy some bigger apps on it. You create some VMs, you learn to use <code>kubeadm</code>, and some time later you have spawned a fresh Kubernetes cluster to deploy your apps on. The magic is still there, but you begin to feel that, like in most tales, <strong>magic comes with a price</strong>…</p>
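<p>That hand-built cluster often starts from a small <code>kubeadm</code> configuration file along these lines (a sketch only; the Kubernetes version and pod subnet are illustrative):</p>
<pre><code># Illustrative kubeadm config for a self-managed cluster
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.2
networking:
  podSubnet: 10.244.0.0/16
</code></pre>
<p>Followed by <code>kubeadm init --config</code> on the master, and <code>kubeadm join</code> on each worker node.</p>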



<h3 class="wp-block-heading" id="mce_6">Putting Kubernetes in production?</h3>



<p>And when you try to deploy your first production Kubernetes cluster on-premises, on a hypervisor or bare-metal platform, you discover that the price can be a bit steep…&nbsp;</p>



<p>Deploying the Kubernetes cluster is only the beginning: in order to consider it <em><strong>prod ready</strong>,</em> you also need to ensure that:&nbsp;</p>



<ul class="wp-block-list"><li>The installation process is <strong>automatable</strong> and <strong>repeatable</strong></li><li>The <strong>upgrade/rollback</strong> process is safe&nbsp;</li><li>A <strong>recovery procedure</strong> exists, and is documented and tested</li><li><strong>Performance</strong> is predictable and consistent, especially when using persistent volumes</li><li>The cluster is <strong>operable</strong>, with enough <strong>traces</strong>, <strong>metrics</strong> and <strong>logs</strong> to detect and debug failures and problems</li><li>The service is <strong>secure</strong> and <strong>highly available</strong></li></ul>



<h3 class="wp-block-heading">Our answer to this operational complexity</h3>



<p>Well, if you thought deploying your new Kubernetes cluster was going to give you this whole <a href="https://www.youtube.com/watch?v=ajT90pC3ris" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">NoOps</a> thing, it seems you were wrong. To keep with the magic metaphor, learning to master magic takes a long time, and it’s not without risk…</p>



<p>So, as with many powerful technologies, the apparent simplicity and versatility of Kubernetes on the Dev side comes with a <strong>high complexity on the Ops side</strong>. No wonder most users look towards managed Kubernetes offerings when they need to move from proof-of-concept to production.</p>



<p>At OVH, as a user-focused company, we wanted to answer that demand by creating <strong>our managed Kubernetes solution</strong>: fully based on open source, without vendor lock-in, and fully compatible with any pure Kubernetes solution. Our objective was to give our users a fully managed, turnkey Kubernetes cluster, ready to use, without the hassle of installation or operation.</p>



<h3 class="wp-block-heading">On the shoulders of&nbsp;giants…</h3>



<p>So we wanted to build a <a href="https://www.ovh.com/fr/kubernetes/" data-wpel-link="exclude">managed Kubernetes solution</a>, but how? The first step was simple: we needed to be sure that the underlying infrastructure was rock solid, so we decided to base it on our own, OpenStack-based, <a href="https://www.ovh.com/world/public-cloud/instances/technologies/" target="_blank" rel="noreferrer noopener" data-wpel-link="exclude">Public Cloud offering</a>.</p>



<div class="wp-block-image"><figure class="alignright is-resized"><img loading="lazy" decoding="async" src="/blog/wp-content/uploads/2019/01/certified_kubernetes_color-222x300.png" alt="Certified Kubernetes Hosting" class="wp-image-14145" width="111" height="150"/></figure></div>



<p>Building our platform on a mature, highly available, standards-based product like the OVH Public Cloud allowed us to concentrate our efforts on the real problem at hand: creating a highly scalable, easy-to-operate, <a href="https://landscape.cncf.io/category=certified-kubernetes-hosted&amp;grouping=category&amp;selected=ovh-managed-kubernetes-service" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">CNCF certified</a>, managed Kubernetes service.</p>



<h3 class="wp-block-heading">What’s next?</h3>



<p>In the next posts in the series, we are going to dive into the architecture of the <a href="https://www.ovh.com/fr/blog/why-ovh-managed-kubernetes/" data-wpel-link="exclude"><strong>OVH Managed Kubernetes service</strong></a>, detailing some of our technological choices, and explaining why we made them and how we made them work.</p>



<div class="wp-block-image"><figure class="aligncenter is-resized"><img loading="lazy" decoding="async" src="https://www.ovh.com/blog/wp-content/uploads/2019/01/kubinception-01-1024-777x1024.jpg" alt="" class="wp-image-14142" width="583" height="768" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/01/kubinception-01-1024-777x1024.jpg 777w, https://blog.ovhcloud.com/wp-content/uploads/2019/01/kubinception-01-1024-228x300.jpg 228w, https://blog.ovhcloud.com/wp-content/uploads/2019/01/kubinception-01-1024-768x1013.jpg 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/01/kubinception-01-1024.jpg 1024w" sizes="auto, (max-width: 583px) 100vw, 583px" /></figure></div>



<p>We will begin with one of our boldest decisions: running <strong>Kubernetes over Kubernetes</strong>, or as we like to call it, the <em>Kubinception</em>.&nbsp;</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
