<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>OVHcloud Managed Kubernetes Archives - OVHcloud Blog</title>
	<atom:link href="https://blog.ovhcloud.com/tag/ovhcloud-managed-kubernetes/feed/" rel="self" type="application/rss+xml" />
	<link>https://blog.ovhcloud.com/tag/ovhcloud-managed-kubernetes/</link>
	<description>Innovation for Freedom</description>
	<lastBuildDate>Fri, 06 Feb 2026 15:23:05 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://blog.ovhcloud.com/wp-content/uploads/2019/07/cropped-cropped-nouveau-logo-ovh-rebranding-32x32.gif</url>
	<title>OVHcloud Managed Kubernetes Archives - OVHcloud Blog</title>
	<link>https://blog.ovhcloud.com/tag/ovhcloud-managed-kubernetes/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Moving Beyond Ingress: Why should OVHcloud Managed Kubernetes Service (MKS) users start looking at the Gateway API?</title>
		<link>https://blog.ovhcloud.com/moving-beyond-ingress-why-should-ovhcloud-managed-kubernetes-service-mks-users-start-looking-at-the-gateway-api/</link>
		
		<dc:creator><![CDATA[Aurélie Vache&nbsp;and&nbsp;Antonin Anchisi]]></dc:creator>
		<pubDate>Mon, 15 Dec 2025 09:26:36 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[Tranches de Tech & co]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[OVHcloud Managed Kubernetes]]></category>
		<category><![CDATA[Public Cloud]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=30016</guid>

					<description><![CDATA[For years, the Kubernetes Ingress API, and the popular Ingress NGINX controller (ingress-nginx), have been the default way to expose applications running inside a Kubernetes cluster. But the ecosystem is changing: the Kubernetes SIG network has announced the retirement of Ingress NGINX in March 2026. After March 2026 the Ingress NGINX will no longer get [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fmoving-beyond-ingress-why-should-ovhcloud-managed-kubernetes-service-mks-users-start-looking-at-the-gateway-api%2F&amp;action_name=Moving%20Beyond%20Ingress%3A%20Why%20should%20OVHcloud%20Managed%20Kubernetes%20Service%20%28MKS%29%20users%20start%20looking%20at%20the%20Gateway%20API%3F&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image aligncenter size-large is-resized"><img fetchpriority="high" decoding="async" width="1024" height="680" src="https://blog.ovhcloud.com/wp-content/uploads/2025/12/Gribouillis-2025-12-02-13.47.59.631-1024x680.png" alt="" class="wp-image-30084" style="width:669px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/12/Gribouillis-2025-12-02-13.47.59.631-1024x680.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/12/Gribouillis-2025-12-02-13.47.59.631-300x199.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/12/Gribouillis-2025-12-02-13.47.59.631.png 1505w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>For years, the Kubernetes <strong>Ingress</strong> API, and the popular Ingress NGINX controller (ingress-nginx), have been the default way to expose applications running inside a Kubernetes cluster.</p>



<p>But the ecosystem is changing: the Kubernetes SIG Network has announced the <a href="https://kubernetes.io/blog/2025/11/11/ingress-nginx-retirement/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">retirement of Ingress NGINX</a> in March 2026.</p>



<p>After <strong>March 2026</strong>, Ingress NGINX will no longer receive new features, new releases, security patches or bug fixes.</p>



<p>Furthermore, the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Kubernetes project <strong>recommends using Gateway instead of Ingress</strong></a>.</p>



<p>The Ingress API has already been frozen: it is no longer being developed and will receive no further changes or updates. That said, the Kubernetes project has no plans to remove Ingress from Kubernetes.</p>



<p>While OVHcloud Managed Kubernetes Service (MKS) does not yet provide a native <strong>GatewayClass</strong>, you can already benefit from Gateway API capabilities today by deploying your own controller 💪.</p>



<p>Also, until Gateway API becomes fully integrated with OpenStack providers, there is an <strong>intermediate option</strong>: using a <strong>modern, actively maintained Ingress controller</strong> other than ingress-nginx.</p>



<h3 class="wp-block-heading">The limitations of the current Ingress controller model</h3>



<p>The traditional Kubernetes Ingress model was intentionally simple: define an <code>Ingress</code>, install an <code>Ingress Controller</code>, and let it configure a single proxy (usually Nginx) to route traffic.</p>



<p>This design works, but it comes with limitations:</p>



<p>&#8211; Single monolithic entry point: all HTTP routing for the entire cluster goes through <strong>one shared proxy</strong>, which adds complexity, configuration conflicts and scaling challenges.<br>&#8211; Protocol limitations: the API only covers <strong>HTTP and HTTPS</strong>. Support for gRPC, HTTP/2, TCP, UDP or TLS passthrough is inconsistent and controller-specific.<br>&#8211; Heavy reliance on annotations: advanced features (timeouts, rewrites, header handling&#8230;) rely on custom annotations.<br>&#8211; Fragmented third-party and cloud Load Balancer support: each of the many <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/#additional-controllers" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Ingress controllers</a> comes with its own specialized annotations.</p>
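

<p>As an illustration of the annotation problem, here is a hedged sketch of an Ingress that relies on several ingress-nginx-specific annotations for behaviour the Ingress API itself cannot express (the hostnames and service names are examples, not a recommended configuration):</p>



<pre class="wp-block-code"><code class="">apiVersion: networking.k8s.io/v1<br>kind: Ingress<br>metadata:<br>  name: demo<br>  annotations:<br>    # None of these settings are portable to another Ingress controller<br>    nginx.ingress.kubernetes.io/rewrite-target: /<br>    nginx.ingress.kubernetes.io/proxy-read-timeout: "30"<br>    nginx.ingress.kubernetes.io/ssl-redirect: "true"<br>spec:<br>  ingressClassName: nginx<br>  rules:<br>    - host: demo.example.com<br>      http:<br>        paths:<br>          - path: /api<br>            pathType: Prefix<br>            backend:<br>              service:<br>                name: demo-service<br>                port:<br>                  number: 80</code></pre>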



<p>Finally, as mentioned, the most used Ingress controller, Ingress NGINX, will be retired in March 2026.</p>



<h3 class="wp-block-heading">A Transitional Solution: Using a Modern Ingress Controller (Traefik, Contour, HAProxy…)</h3>



<p>Before moving to the Gateway API, as a transitional solution, OVHcloud MKS users can simply replace Ingress NGINX with a <strong>modern, actively maintained Ingress controller</strong>.</p>



<p>This allows you to:</p>



<p>&#8211; keep using your existing <code>Ingress</code> manifests<br>&#8211; keep the same architecture: Service type LoadBalancer → OVHcloud Public Cloud Load Balancer → Ingress Controller<br>&#8211; avoid relying on unsupported or deprecated components<br>&#8211; gain features (better gRPC support, built‑in dashboards, improved L7 behaviour&#8230;)</p>



<h4 class="wp-block-heading">Popular alternatives:</h4>



<p><a href="https://doc.traefik.io/traefik/providers/kubernetes-ingress/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer"><strong>Traefik</strong></a>:<br>&#8211; Very easy to deploy<br>&#8211; Excellent support for HTTP/2, gRPC, WebSockets<br>&#8211; Built‑in dashboard<br>&#8211; Supports both Ingress and Gateway API<br>&#8211; Actively maintained<br>&#8211; Seamless migration from NGINX Ingress Controller to Traefik with <a href="https://doc.traefik.io/traefik/reference/routing-configuration/kubernetes/ingress-nginx/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">NGINX annotation compatibility</a></p>



<p><strong><a href="https://projectcontour.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Contour</a> (Envoy)</strong>:<br>&#8211; Envoy-based Ingress Controller<br>&#8211; Excellent performance<br>&#8211; Good stepping‑stone toward Gateway API</p>



<p><a href="https://www.haproxy.com/documentation/kubernetes-ingress/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer"><strong>HAProxy Ingress</strong></a>:<br>&#8211; Extremely performant<br>&#8211; Enterprise-grade L7 routing<br>&#8211; Optional Gateway API support</p>



<p><strong><a href="https://docs.nginx.com/nginx-gateway-fabric/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">NGINX Gateway Fabric</a> (NGF)</strong>:<br>&#8211; The successor to Ingress NGINX<br>&#8211; Built directly around Gateway API<br>&#8211; Still maturing but a strong long‑term candidate</p>



<p>If you are interested, you can read the more <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">exhaustive list of Ingress controllers</a>.</p>



<h3 class="wp-block-heading">Installing an Alternative Ingress Controller on OVHcloud MKS</h3>



<p>We will show you how to install <strong>Traefik</strong> as an alternative Ingress controller and use it to provision a single OVHcloud Public Cloud Load Balancer (based on OpenStack Octavia).</p>



<p>Install Traefik:</p>



<pre class="wp-block-code"><code class="">helm repo add traefik https://traefik.github.io/charts<br>helm repo update<br><br>helm install traefik traefik/traefik --namespace traefik --create-namespace --set service.type=LoadBalancer</code></pre>



<p>This automatically triggers:<br>&#8211; the OpenStack CCM (used by OVHcloud)<br>&#8211; the creation of an OVHcloud Public Cloud Load Balancer<br>&#8211; exposure of Traefik through a public IP</p>



<figure class="wp-block-image size-large"><img decoding="async" width="1024" height="179" src="https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-11-1024x179.png" alt="" class="wp-image-30035" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-11-1024x179.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-11-300x52.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-11-768x134.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-11-1536x268.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-11-2048x358.png 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>After several seconds, the Load Balancer will be active.</p>



<p>Check that Traefik is running:</p>



<pre class="wp-block-code"><code class="">$ kubectl get all -n traefik<br>NAME                           READY   STATUS    RESTARTS   AGE<br>pod/traefik-6777c5db85-pddd6   1/1     Running   0          31s<br><br>NAME              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE<br>service/traefik   LoadBalancer   10.3.129.188   &lt;pending&gt;     80:30267/TCP,443:30417/TCP   31s<br><br>NAME                      READY   UP-TO-DATE   AVAILABLE   AGE<br>deployment.apps/traefik   1/1     1            1           31s<br><br>NAME                                 DESIRED   CURRENT   READY   AGE<br>replicaset.apps/traefik-6777c5db85   1         1         1       31s</code></pre>



<p>Then in order to use it, create an <code>ingress.yaml</code> file with the following content:</p>



<pre class="wp-block-code"><code class="">apiVersion: networking.k8s.io/v1<br>kind: Ingress<br>metadata:<br>  name: my-app-ingress<br>  namespace: default<br>spec:<br>  ingressClassName: traefik  # Selects Traefik as the Ingress controller<br>  rules:<br>    - host: my-app.local<br>      http:<br>        paths:<br>          - path: /<br>            pathType: Prefix<br>            backend:<br>              service:<br>                name: my-app-service<br>                port:<br>                  number: 80</code></pre>



<p>And apply it in your cluster:</p>



<pre class="wp-block-code"><code class="">kubectl apply -f ingress.yaml</code></pre>
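

<p>Note that the Ingress references a <code>my-app-service</code> that must already exist in the cluster. A minimal sketch of such a Service, assuming a Deployment whose Pods carry the label <code>app: my-app</code> and listen on port 8080 (both are illustrative values):</p>



<pre class="wp-block-code"><code class="">apiVersion: v1<br>kind: Service<br>metadata:<br>  name: my-app-service<br>  namespace: default<br>spec:<br>  selector:<br>    app: my-app        # must match your Deployment's Pod labels<br>  ports:<br>    - port: 80         # port referenced by the Ingress<br>      targetPort: 8080 # port your container listens on</code></pre>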



<p>Using this type of alternative provides a <strong>fully supported, modern Ingress Controller</strong> while you prepare a long‑term transition to the Gateway API.</p>



<h3 class="wp-block-heading">Gateway API: A modern, flexible networking model</h3>



<p>The <strong>Gateway API</strong> is the next-generation Kubernetes networking specification. It introduces clearer roles and more flexible architectures.</p>



<p>Gateway API splits responsibilities across:<br>&#8211; <strong>GatewayClass</strong>: defines the type of gateway and which controller manages it<br>&#8211; <strong>Gateway</strong>: the actual entry point (e.g., a Load Balancer)<br>&#8211; <strong>Routes</strong>: routing rules, protocol-specific (HTTPRoute, TLSRoute, GRPCRoute, TCPRoute…)</p>
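

<p>These layers reference each other by name: a <code>Gateway</code> points to a <code>GatewayClass</code> through <code>gatewayClassName</code>, and a route attaches to a <code>Gateway</code> through <code>parentRefs</code>. A minimal hedged sketch (resource and backend names are illustrative):</p>



<pre class="wp-block-code"><code class="">apiVersion: gateway.networking.k8s.io/v1<br>kind: Gateway<br>metadata:<br>  name: my-gateway<br>spec:<br>  gatewayClassName: my-class   # provided by the controller you install<br>  listeners:<br>    - name: http<br>      port: 80<br>      protocol: HTTP<br>---<br>apiVersion: gateway.networking.k8s.io/v1<br>kind: HTTPRoute<br>metadata:<br>  name: my-route<br>spec:<br>  parentRefs:<br>    - name: my-gateway         # attaches this route to the Gateway above<br>  hostnames:<br>    - app.example.com<br>  rules:<br>    - backendRefs:<br>        - name: my-service<br>          port: 80</code></pre>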



<figure class="wp-block-image size-full is-resized"><img decoding="async" width="800" height="700" src="https://blog.ovhcloud.com/wp-content/uploads/2025/12/image-1.png" alt="" class="wp-image-30065" style="width:558px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/12/image-1.png 800w, https://blog.ovhcloud.com/wp-content/uploads/2025/12/image-1-300x263.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/12/image-1-768x672.png 768w" sizes="(max-width: 800px) 100vw, 800px" /></figure>



<p>Gateway API supports:<br>&#8211; HTTP(S)<br>&#8211; HTTP/2<br>&#8211; gRPC<br>&#8211; TCP<br>&#8211; TLS passthrough<br>…in a consistent and portable way.</p>



<p>Unlike Ingress, Gateway API is explicitly designed to allow providers like OVHcloud, AWS, GCP, Azure to:<br>&#8211; provision Load Balancers (LB)<br>&#8211; manage listeners<br>&#8211; expose multiple ports<br>&#8211; integrate with their LB features<br>This paves the way for native OVHcloud <strong>GatewayClass</strong> support.</p>



<h3 class="wp-block-heading">How does it work today on OVHcloud MKS?</h3>



<p>OVHcloud MKS relies on the OpenStack Cloud Controller Manager (CCM) to provision OVHcloud <strong>Public Cloud</strong> Load Balancers in response to a Service of type <code>LoadBalancer</code>.</p>



<p>Since MKS does not yet include a native <code>GatewayClass</code>, you can use Gateway API today as follows:</p>



<p>1. You deploy an existing Gateway controller (Envoy Gateway, Traefik, Contour/Envoy&#8230;) and its GatewayClass.<br>2. The controller deploys a data-plane proxy inside the cluster.<br>3. To expose that proxy, you still have to create a <code>Service</code> of type <strong>LoadBalancer</strong> (and your app, of course).<br>4. The CCM provisions an OVHcloud Public Cloud Load Balancer and forwards traffic to your proxy.</p>
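

<p>Step 3 boils down to a classic Service manifest. A hedged sketch, assuming the controller&#8217;s data-plane Pods carry the label <code>app: my-gateway-proxy</code> (many controllers, including Envoy Gateway, can create this Service for you):</p>



<pre class="wp-block-code"><code class="">apiVersion: v1<br>kind: Service<br>metadata:<br>  name: gateway-proxy<br>spec:<br>  type: LoadBalancer   # triggers the OpenStack CCM to provision an OVHcloud LB<br>  selector:<br>    app: my-gateway-proxy<br>  ports:<br>    - name: http<br>      port: 80<br>      targetPort: 8080</code></pre>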



<p>With this in place, you will have a fully functional Gateway API setup. The workflow is very similar to the one used with the NGINX Ingress controller.</p>



<h3 class="wp-block-heading">Using the Gateway API on OVHcloud MKS today</h3>



<p>You can already use the Gateway API by deploying your preferred controller.</p>



<p>Here&#8217;s an example using <a href="https://gateway.envoyproxy.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Envoy Gateway</a>, one of the most future-proof options.</p>



<p>Install Gateway API CRDs:</p>



<pre class="wp-block-code"><code class="">kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/latest/download/standard-install.yaml</code></pre>



<p>Deploy Envoy Gateway:</p>



<pre class="wp-block-code"><code class="">helm install eg oci://docker.io/envoyproxy/gateway-helm -n envoy-gateway-system --create-namespace</code></pre>



<p>You should have a result like this:</p>



<pre class="wp-block-code"><code class="">$ helm install eg oci://docker.io/envoyproxy/gateway-helm -n envoy-gateway-system --create-namespace<br><br>Pulled: docker.io/envoyproxy/gateway-helm:1.6.0<br>Digest: sha256:5c55e7844ae8cff3152ca00330234ef61b1f9fa3d466f50db2c63a279f1cd1df<br>NAME: eg<br>LAST DEPLOYED: Mon Dec  1 16:27:07 2025<br>NAMESPACE: envoy-gateway-system<br>STATUS: deployed<br>REVISION: 1<br>TEST SUITE: None<br>NOTES:<br>**************************************************************************<br>*** PLEASE BE PATIENT: Envoy Gateway may take a few minutes to install ***<br>**************************************************************************<br><br>Envoy Gateway is an open source project for managing Envoy Proxy as a standalone or Kubernetes-based application gateway.<br><br>Thank you for installing Envoy Gateway! 🎉<br><br>Your release is named: eg. 🎉<br><br>Your release is in namespace: envoy-gateway-system. 🎉<br><br>To learn more about the release, try:<br><br>  $ helm status eg -n envoy-gateway-system<br>  $ helm get all eg -n envoy-gateway-system<br><br>To have a quickstart of Envoy Gateway, please refer to https://gateway.envoyproxy.io/latest/tasks/quickstart.<br><br>To get more details, please visit https://gateway.envoyproxy.io and https://github.com/envoyproxy/gateway.</code></pre>



<p>Check that Envoy Gateway is running:</p>



<pre class="wp-block-code"><code class="">$ kubectl get po -n envoy-gateway-system<br>NAME                            READY   STATUS    RESTARTS   AGE<br>envoy-gateway-9cbbc577c-5h5qw   1/1     Running   0          16m</code></pre>



<p>As a quickstart, you can directly install the <a href="https://gateway-api.sigs.k8s.io/api-types/gatewayclass/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">GatewayClass</a>, <a href="https://gateway-api.sigs.k8s.io/api-types/gateway/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Gateway</a> and <a href="https://gateway-api.sigs.k8s.io/api-types/httproute/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">HTTPRoute</a> resources, along with an example app:</p>



<pre class="wp-block-code"><code class="">kubectl apply -f https://github.com/envoyproxy/gateway/releases/download/latest/quickstart.yaml -n default</code></pre>



<p>This command deploys a <code>GatewayClass</code>, a <code>Gateway</code>, an <code>HTTPRoute</code>, and a sample app packaged as a Deployment and exposed through a Service:</p>



<pre class="wp-block-code"><code class="">gatewayclass.gateway.networking.k8s.io/eg created<br>gateway.gateway.networking.k8s.io/eg created<br>serviceaccount/backend created<br>service/backend created<br>deployment.apps/backend created<br>httproute.gateway.networking.k8s.io/backend created</code></pre>



<p>As you can see, a GatewayClass has been deployed:</p>



<pre class="wp-block-code"><code class="">$ kubectl get gatewayclass -o yaml | kubectl neat<br>apiVersion: v1<br>items:<br>- apiVersion: gateway.networking.k8s.io/v1<br>  kind: GatewayClass<br>  metadata:<br>    name: eg<br>  spec:<br>    controllerName: gateway.envoyproxy.io/gatewayclass-controller<br>kind: List<br>metadata:<br>  resourceVersion: ""</code></pre>



<p>Note that a GatewayClass is a cluster-scoped resource, so you don&#8217;t have to specify any namespace.</p>



<p>A Gateway has also been deployed:</p>



<pre class="wp-block-code"><code class="">$ kubectl get gateway -o yaml -n default | kubectl neat<br>apiVersion: v1<br>items:<br>- apiVersion: gateway.networking.k8s.io/v1<br>  kind: Gateway<br>  metadata:<br>    name: eg<br>    namespace: default<br>  spec:<br>    gatewayClassName: eg<br>    listeners:<br>    - allowedRoutes:<br>        namespaces:<br>          from: Same<br>      name: http<br>      port: 80<br>      protocol: HTTP<br>kind: List<br>metadata:<br>  resourceVersion: ""</code></pre>



<p>An HTTPRoute as well:</p>



<pre class="wp-block-code"><code class="">$ kubectl get httproute -o yaml -n default | kubectl neat<br>apiVersion: v1<br>items:<br>- apiVersion: gateway.networking.k8s.io/v1<br>  kind: HTTPRoute<br>  metadata:<br>    name: backend<br>    namespace: default<br>  spec:<br>    hostnames:<br>    - www.example.com<br>    parentRefs:<br>    - group: gateway.networking.k8s.io<br>      kind: Gateway<br>      name: eg<br>    rules:<br>    - backendRefs:<br>      - group: ""<br>        kind: Service<br>        name: backend<br>        port: 3000<br>        weight: 1<br>      matches:<br>      - path:<br>          type: PathPrefix<br>          value: /<br>kind: List<br>metadata:<br>  resourceVersion: ""</code></pre>



<p>In order to retrieve the external IP of the external Load Balancer, you just have to query the Gateway and export the address in an environment variable:</p>



<pre class="wp-block-code"><code class="">$ kubectl get gateway eg<br>NAME   CLASS   ADDRESS        PROGRAMMED   AGE<br>eg     eg      xx.xxx.xx.xxx   True        18m<br><br>$ export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')<br><br>$ echo $GATEWAY_HOST<br>xx.xxx.xx.xxx</code></pre>



<p>And finally, a <code>backend</code> Service has been deployed along with its Deployment:</p>



<pre class="wp-block-code"><code class="">$ kubectl get pod,svc -l app=backend -n default<br>NAME                           READY   STATUS    RESTARTS   AGE<br>pod/backend-765694d47f-zr6hh   1/1     Running   0          21m<br><br>NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE<br>service/backend   ClusterIP   10.3.114.179   &lt;none&gt;        3000/TCP   21m</code></pre>
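

<p>With the address exported, you can check that routing works end to end by sending a request with the quickstart&#8217;s hostname (<code>www.example.com</code>) in the <code>Host</code> header; the echo backend should reply with a JSON description of the request:</p>



<pre class="wp-block-code"><code class="">curl --header "Host: www.example.com" http://$GATEWAY_HOST/</code></pre>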



<p>In order to create your own <code>Gateway</code> and <code>*Route</code> resources, don&#8217;t hesitate to take a look at the <a href="https://gateway-api.sigs.k8s.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Gateway API website</a>.</p>



<h3 class="wp-block-heading">Conclusion</h3>



<p>Two migration paths are currently available for OVHcloud MKS users:</p>



<ul class="wp-block-list">
<li>Short-term: switch to a modern Ingress Controller (Traefik, Contour, HAProxy, NGF&#8230;). It provides full support for current Ingress usage, without requiring API changes.</li>



<li>Long-term: adopt the Gateway API. Gateway API brings multi‑protocol support, clearer separation of roles, and is the strategic direction of Kubernetes networking.</li>
</ul>



<p>Which approach and which tool should you choose? Well, it’s up to you, depending on your use cases, your teams, your needs… 🙂</p>



<p>As we have seen in this blog post, OVHcloud MKS users can begin adopting these technologies today, safely and incrementally.</p>



<p>This ecosystem is evolving quickly, so stay tuned to find out about the coming release of a pre-installed official GatewayClass (based on OpenStack Octavia) 💪.</p>
<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fmoving-beyond-ingress-why-should-ovhcloud-managed-kubernetes-service-mks-users-start-looking-at-the-gateway-api%2F&amp;action_name=Moving%20Beyond%20Ingress%3A%20Why%20should%20OVHcloud%20Managed%20Kubernetes%20Service%20%28MKS%29%20users%20start%20looking%20at%20the%20Gateway%20API%3F&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Solutions at OVHcloud to overcome the Docker Hub pull rate limits</title>
		<link>https://blog.ovhcloud.com/solutions-at-ovhcloud-to-overcome-the-docker-hub-pull-rate-limits/</link>
		
		<dc:creator><![CDATA[Aurélie Vache]]></dc:creator>
		<pubDate>Fri, 11 Apr 2025 06:53:38 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[Tranches de Tech & co]]></category>
		<category><![CDATA[Docker Hub]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[OVHcloud]]></category>
		<category><![CDATA[OVHcloud Managed Kubernetes]]></category>
		<category><![CDATA[OVHcloud Managed Private Registry]]></category>
		<category><![CDATA[Public Cloud]]></category>
		<category><![CDATA[registry]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=28623</guid>

					<description><![CDATA[For the past few months, Docker has been announcing the implementation of new pull rate limits for the Docker Hub. The most significant change is the 10 pulls-per-hour limit, per IP address, for unauthenticated users that can quickly lead to a &#8220;You have reached your pull rate limit&#8221; error message. Even if these changes have [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fsolutions-at-ovhcloud-to-overcome-the-docker-hub-pull-rate-limits%2F&amp;action_name=Solutions%20at%20OVHcloud%20to%20overcome%20the%20Docker%20Hub%20pull%20rate%20limits&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="960" height="540" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/ovh_solutions_overcome_docker_hub_pull_rate_limits-1.png" alt="" class="wp-image-28707" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/ovh_solutions_overcome_docker_hub_pull_rate_limits-1.png 960w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/ovh_solutions_overcome_docker_hub_pull_rate_limits-1-300x169.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/ovh_solutions_overcome_docker_hub_pull_rate_limits-1-768x432.png 768w" sizes="auto, (max-width: 960px) 100vw, 960px" /></figure>



<p>For the past few months, <a href="https://www.docker.com/blog/revisiting-docker-hub-policies-prioritizing-developer-experience/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Docker has been announcing the implementation of new pull rate limits for the Docker Hub</a>. The most significant change is the 10 pulls-per-hour limit, per IP address, for unauthenticated users, which can quickly lead to a &#8220;You have reached your pull rate limit&#8221; error message.</p>



<p>Even though these changes were implemented and then rolled back as of April 1, 2025, at OVHcloud we are aware that such changes could impact your deployments and daily work.</p>



<p>In this blog post, you will find several solutions and best practices that can help you reduce Docker pull commands and avoid hitting Docker Hub&#8217;s pull rate limit.</p>



<h3 class="wp-block-heading">Use OVHcloud Managed Private Registry and activate the proxy cache</h3>



<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="800" height="800" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/managed_private_registry.png" alt="" class="wp-image-28658" style="width:181px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/managed_private_registry.png 800w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/managed_private_registry-300x300.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/managed_private_registry-150x150.png 150w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/managed_private_registry-768x768.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/managed_private_registry-70x70.png 70w" sizes="auto, (max-width: 800px) 100vw, 800px" /></figure>



<p><a href="https://www.ovhcloud.com/en/public-cloud/managed-private-registry/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud Managed Private Registry</a> (MPR) is a container image registry based on the CNCF project Harbor. It allows you to store and manage Docker (or OCI-compliant) container images and artifacts in a private, secure, and scalable environment hosted on OVHcloud&#8217;s infrastructure.</p>



<p>MPR provides a <strong>proxy cache</strong> feature that helps you mirror and cache images from external registries, like <strong>Docker Hub</strong>, <strong>GitHub Container Registry</strong>, <strong>Quay</strong>, <strong>JFrog Artifactory Registry</strong>, etc. External registries can be private or public. This improves performance and reduces the impact of rate limits imposed by external registries 💪.</p>
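

<p>Once such a proxy cache project is configured (named <code>docker-hub</code>, for example, as shown below), workloads can reference Docker Hub images through it instead of pulling directly. A hedged sketch with a placeholder registry URL; note that Docker official images live under the <code>library/</code> namespace:</p>



<pre class="wp-block-code"><code class=""># Direct pull, counted against Docker Hub rate limits:<br>#   image: nginx:1.27<br># Through the Managed Private Registry proxy cache project instead:<br>image: &lt;your-registry-url&gt;/docker-hub/library/nginx:1.27</code></pre>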



<h4 class="wp-block-heading">Configure proxy cache in OVHcloud Managed Private Registry</h4>



<p>If you haven&#8217;t deployed an MPR yet, you can deploy it through the <a href="https://help.ovhcloud.com/csm/en-gb-public-cloud-private-registry-creation?id=kb_article_view&amp;sysparm_article=KB0050325" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud Control Panel</a>, the <a href="https://help.ovhcloud.com/csm/en-public-cloud-private-registry-creation-via-terraform?id=kb_article_view&amp;sysparm_article=KB0050330" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud Terraform provider</a>, the <a href="https://help.ovhcloud.com/csm/en-public-cloud-private-registry-creation-with-pulumi?id=kb_article_view&amp;sysparm_article=KB0061073" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud Pulumi provider</a>, or even the API. Follow the guide that matches your needs.</p>
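

<p>For illustration, deploying an MPR with Terraform can look like the following hedged sketch (the resource name comes from the OVH Terraform provider; the values are placeholders to replace with your own, and the linked guide remains the reference for the exact arguments):</p>



<pre class="wp-block-code"><code class=""># Hedged sketch: replace the placeholder values with your own<br>resource "ovh_cloud_project_containerregistry" "registry" {<br>  service_name = var.public_cloud_project_id  # your Public Cloud project ID<br>  name         = "my-registry"<br>  region       = "GRA"<br>  plan_id      = var.registry_plan_id<br>}</code></pre>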



<p>First, log in to the <a href="https://help.ovhcloud.com/csm/en-gb-public-cloud-private-registry-connect-to-ui?id=kb_article_view&amp;sysparm_article=KB0050321" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Harbor user interface</a> of your private registry, following the guide if needed.</p>



<p>⚠️ In order to activate the proxy cache, you need to log in to the Harbor UI with an administrator account.</p>



<h5 class="wp-block-heading">Registry endpoint creation</h5>



<p>In the left sidebar, click on <strong>Registries</strong> (inside the Administration section).</p>



<p>Then click on the <strong>New endpoint</strong> button.</p>



<p>Select Docker Hub in the provider list, enter a name (&#8220;Docker Hub&#8221; for example), fill in your Docker Hub login in the Access ID field and your Docker Hub password in the Access Secret field.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="674" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.21-1024x674.png" alt="" class="wp-image-28663" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.21-1024x674.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.21-300x197.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.21-768x505.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.21-1536x1010.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.21.png 1818w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>⚠️ Note that we <strong>strongly recommend</strong> using a <strong>Docker account</strong> (even a free one) when pulling images, to <strong>avoid the stricter rate limits</strong> applied to unauthenticated users. Without authentication, Docker Hub enforces strict pull limits, which may cause failures when pulling frequently used images.</p>



<p>Click on the <strong>Test connection</strong> button to test if your login and password are correct.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="620" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.39-1024x620.png" alt="" class="wp-image-28664" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.39-1024x620.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.39-300x182.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.39-768x465.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.39.png 1228w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Now click on the <strong>OK</strong> button in order to create the new endpoint.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="330" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.56-1024x330.png" alt="" class="wp-image-28665" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.56-1024x330.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.56-300x97.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.56-768x247.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.56-1536x494.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.56-2048x659.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>The Docker Hub endpoint is created 🎉</p>



<h5 class="wp-block-heading">Proxy cache project creation</h5>



<p>In the left sidebar, click on <strong>Projects</strong>, then click on the <strong>New project</strong> button.</p>



<p>Enter a project name (&#8220;docker-hub&#8221; for example), enable the Proxy Cache, click on the Docker Hub endpoint in the list and click on the <strong>OK</strong> button.</p>
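<p>If you prefer to automate this step, the equivalent project can also be created through the Harbor API. Below is a minimal sketch (the registry hostname and admin credentials are placeholders, and the <code>registry_id</code> must match the Docker Hub endpoint created earlier):</p>

<pre class="wp-block-code"><code class="">$ curl -u "admin:xxxxxxxx" -X POST \
    "https://xxxxxxxx.c1.de1.container-registry.ovh.net/api/v2.0/projects" \
    -H "Content-Type: application/json" \
    -d '{"project_name": "docker-hub", "registry_id": 1, "metadata": {"public": "true"}}'</code></pre>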



<p>ℹ️ Note that a project is private by default, so you have to tick the Public checkbox if you want to change the visibility of a project.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="735" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-33-1024x735.png" alt="" class="wp-image-28669" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-33-1024x735.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-33-300x215.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-33-768x551.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-33.png 1182w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>⚠️ The name of a proxy cache project should not contain dots, as they can cause issues with external tools like Kaniko.</p>



<p>Your proxy cache project has been created 🎉</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="373" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-34-1024x373.png" alt="" class="wp-image-28670" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-34-1024x373.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-34-300x109.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-34-768x280.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-34-1536x560.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-34-2048x746.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>⚠️ A proxy cache project works similarly to a normal Harbor project, except that you cannot push images to it.</p>



<p>Now, when you want to pull a Docker image hosted on Docker Hub through your proxy cache, instead of pulling directly from Docker Hub, configure your docker/podman pull commands and Kubernetes Pod manifests to pull images from the OVHcloud Managed Private Registry:</p>



<pre class="wp-block-code"><code class="">$ docker pull xxxxxxxx.c1.de1.container-registry.ovh.net/docker-hub/ovhcom/ovh-platform-hello:latest
latest: Pulling from docker-hub/ovhcom/ovh-platform-hello
1f3e46996e29: Pull complete 
6aa905c35cc0: Pull complete 
Digest: sha256:fddb76f0eb92d95b3721bfa0ea87350c5d39ea262e90cd30d66f429bb40c8b07
Status: Downloaded newer image for xxxxxxxx.c1.de1.container-registry.ovh.net/docker-hub/ovhcom/ovh-platform-hello:latest
xxxxxxxx.c1.de1.container-registry.ovh.net/docker-hub/ovhcom/ovh-platform-hello:latest</code></pre>
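<p>The same proxy-cached path is used in the <code>image</code> field of a Kubernetes Pod manifest (the registry hostname below is a placeholder for your own):</p>

<pre class="wp-block-code"><code class="">apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
  - name: hello
    image: xxxxxxxx.c1.de1.container-registry.ovh.net/docker-hub/ovhcom/ovh-platform-hello:latest</code></pre>

<p>ℹ️ Note that for Docker official images, the <code>library/</code> namespace must be included in the path, for example <code>docker-hub/library/nginx:latest</code>.</p>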



<h3 class="wp-block-heading">Disable the AlwaysPullImages admission plugin on your MKS cluster</h3>



<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="200" height="200" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/Managed-Kubernetes-Service.png" alt="" class="wp-image-28702" style="width:186px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/Managed-Kubernetes-Service.png 200w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Managed-Kubernetes-Service-150x150.png 150w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Managed-Kubernetes-Service-70x70.png 70w" sizes="auto, (max-width: 200px) 100vw, 200px" /></figure>



<p>By default, the <strong>AlwaysPullImages</strong> Kubernetes admission plugin is enabled in your OVHcloud Managed Kubernetes (MKS) cluster.</p>



<p>⚠️ When it is enabled, this plugin forces the imagePullPolicy of every container to <strong>Always</strong>, regardless of what is specified when creating the resource.</p>



<p>This is useful in a multitenant cluster so that users can be assured that their private images can only be used by those who have the credentials to pull them. Without this admission controller, once an image has been pulled to a node, any pod from any user can use it by knowing the image&#8217;s name (assuming the Pod is scheduled onto the right node), without any authorization check against the image.</p>



<p>However, it can generate a large number of pull requests to Docker Hub, so you may reach the rate limits.</p>



<p>One solution is to deactivate the AlwaysPullImages admission plugin in your MKS cluster.</p>
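<p>Once the plugin is disabled, the <code>imagePullPolicy</code> declared in your manifests is honored again. For example, with <code>IfNotPresent</code> the image is only pulled when it is not already present on the node (the registry hostname is a placeholder):</p>

<pre class="wp-block-code"><code class="">spec:
  containers:
  - name: hello
    image: xxxxxxxx.c1.de1.container-registry.ovh.net/docker-hub/ovhcom/ovh-platform-hello:latest
    imagePullPolicy: IfNotPresent # honored once AlwaysPullImages is disabled</code></pre>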



<p>In this blog post, we will deactivate it in the OVHcloud Control Panel.</p>



<h5 class="wp-block-heading">Enable/Disable MKS admission plugins</h5>



<p>Log in to the OVHcloud Control Panel. In the left sidebar, click on <strong>Managed Kubernetes Service</strong> and then click on the desired MKS cluster.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="777" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-15.35.01-1024x777.png" alt="" class="wp-image-28687" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-15.35.01-1024x777.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-15.35.01-300x227.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-15.35.01-768x582.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-15.35.01-1536x1165.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-15.35.01.png 2044w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>In the <strong>Cluster information</strong> section, scroll down and click on <strong>Enable/disable plugin</strong>. A popup will appear.</p>



<p>Then click on <strong>Disable</strong> for the Always Pull Images plugin and click on the <strong>Save</strong> button.</p>



<figure class="wp-block-image size-large is-resized"><img loading="lazy" decoding="async" width="896" height="1024" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-36-896x1024.png" alt="" class="wp-image-28691" style="width:387px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-36-896x1024.png 896w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-36-262x300.png 262w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-36-768x878.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-36.png 936w" sizes="auto, (max-width: 896px) 100vw, 896px" /></figure>



<p>⚠️ Any change to the admission plugins requires a redeployment of the MKS cluster API server (without data loss), so the API server may be temporarily unavailable during the redeployment.</p>



<figure class="wp-block-image size-large is-resized"><img loading="lazy" decoding="async" width="541" height="1024" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-37-541x1024.png" alt="" class="wp-image-28695" style="width:228px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-37-541x1024.png 541w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-37-159x300.png 159w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-37.png 572w" sizes="auto, (max-width: 541px) 100vw, 541px" /></figure>



<h3 class="wp-block-heading">Conclusion</h3>



<p>To learn more about how to use and configure <a href="https://help.ovhcloud.com/csm/fr-documentation-public-cloud-containers-orchestration-managed-private-registry?id=kb_browse_cat&amp;kb_id=574a8325551974502d4c6e78b7421938&amp;kb_category=7939e6a464282d10476b3689cb0d0ed7&amp;spa=1" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud private registries</a> and <a href="https://help.ovhcloud.com/csm/world-documentation-public-cloud-containers-orchestration-managed-kubernetes-k8s?id=kb_browse_cat&amp;kb_id=574a8325551974502d4c6e78b7421938&amp;kb_category=f334d555f49801102d4ca4d466a7fdd2&amp;spa=1" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud MKS clusters</a>, don&#8217;t hesitate to follow our guides.</p>
<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fsolutions-at-ovhcloud-to-overcome-the-docker-hub-pull-rate-limits%2F&amp;action_name=Solutions%20at%20OVHcloud%20to%20overcome%20the%20Docker%20Hub%20pull%20rate%20limits&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Create Kubernetes clusters with OVHcloud Managed Rancher Service</title>
		<link>https://blog.ovhcloud.com/create-kubernetes-clusters-with-ovhcloud-managed-rancher-service/</link>
		
		<dc:creator><![CDATA[Aurélie Vache]]></dc:creator>
		<pubDate>Wed, 20 Nov 2024 10:25:20 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[Tranches de Tech & co]]></category>
		<category><![CDATA[Managed Rancher Services]]></category>
		<category><![CDATA[OVHcloud]]></category>
		<category><![CDATA[OVHcloud Managed Kubernetes]]></category>
		<category><![CDATA[Public Cloud]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=27656</guid>

					<description><![CDATA[Container orchestration is now essential for modern application deployment, providing scalability, flexibility, and resource efficiency. It has become common to have to manage several Kubernetes clusters, but doing so effectively requires the right tools. Fortunately, OVHcloud offers a solution that enables you to manage all your Kubernetes clusters from a single, centralized management tool: Managed [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fcreate-kubernetes-clusters-with-ovhcloud-managed-rancher-service%2F&amp;action_name=Create%20Kubernetes%20clusters%20with%20OVHcloud%20Managed%20Rancher%20Service&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<p>Container orchestration is now essential for modern application deployment, providing scalability, flexibility, and resource efficiency. It has become common to have to manage several Kubernetes clusters, but doing so effectively requires the right tools. Fortunately, OVHcloud offers a solution that enables you to manage all your Kubernetes clusters from a single, centralized management tool: Managed Rancher Service (MRS).</p>



<p>In this blog post we will see what MRS is and how to create several different Kubernetes clusters through the Rancher UI.</p>



<h2 class="wp-block-heading">Managed Rancher Service</h2>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="733" src="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-11-1024x733.png" alt="" class="wp-image-27673" srcset="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-11-1024x733.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-11-300x215.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-11-768x550.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-11.png 1288w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Managed Rancher Service (MRS), in General Availability since September 2024, is based on Rancher, an open-source container management platform that simplifies the deployment and management of Kubernetes clusters. Managed Rancher Service by OVHcloud provides a powerful platform for orchestrating Kubernetes clusters seamlessly.</p>



<p><img decoding="async" width="956px;" height="646px;" id="docs-internal-guid-afd7b855-7fff-5cf0-82f4-9e29acef9958" src="https://lh7-rt.googleusercontent.com/slidesz/AGV_vUfgiMMaUkWZCI6LMYjaP9Uh7L6wHmIC4VjR6ewWTkGE6P1FgEYX4K6KZm6FBuarHRuBnzzGOGB_HzH62a2AwS-ZG1SL8b3jckh8xqpmeMTIhTJUND-SmnVZyubejfZtLIC3Tw7ElewOupAGmVBmVdqMZjenNteU=s2048?key=wQK7eM6IZ3-befkwZxY5vw"></p>



<p>With the Managed&nbsp;<a href="https://www.ovhcloud.com/en-gb/learn/what-is-rancher/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Rancher</a>&nbsp;Service it becomes easy to manage and create multiple Kubernetes clusters on any platform and location including:</p>



<ul class="wp-block-list">
<li><strong>Hosted Kubernetes provider</strong> (e.g. OVHcloud MKS, AWS EKS, GCP GKE, etc).</li>



<li><strong>Infrastructure Provider</strong> &#8211; Public Cloud or Private Cloud (vSphere, Nutanix, etc).</li>



<li>Bare-metal servers, cloud hosted or on premise.</li>



<li>Virtual machines, cloud hosted or on premise.</li>
</ul>



<p>You can also <a href="https://help.ovhcloud.com/csm/en-ie-public-cloud-managed-rancher-service-import-kubernetes?id=kb_article_view&amp;sysparm_article=KB0064294" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">import your existing Kubernetes clusters</a> and then manage them for a multi-cloud purpose:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="502" src="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-6-1024x502.png" alt="" class="wp-image-27667" srcset="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-6-1024x502.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-6-300x147.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-6-768x377.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-6.png 1400w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Find more information on our&nbsp;dedicated<a href="https://www.ovhcloud.com/fr/public-cloud/managed-rancher-service/" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">&nbsp;Managed Rancher Services page</a>.</p>



<h2 class="wp-block-heading">How To</h2>



<p>Through this blog post we will show how to create several Kubernetes clusters:</p>



<ul class="wp-block-list">
<li>a Managed Kubernetes (MKS) cluster (Hosted Kubernetes provider)</li>



<li>a Kubernetes cluster running on OVHcloud Public Cloud Compute Instances (PCI) (Infrastructure Provider)</li>



<li>a K3s Kubernetes cluster using existing nodes (Custom driver)</li>
</ul>



<h3 class="wp-block-heading">Create an OVHcloud Managed Kubernetes (MKS) cluster</h3>



<p>In this part of the blog post, we will create an MKS cluster with 3 nodes based on the b3-8 flavor:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="650" src="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-12-1024x650.png" alt="" class="wp-image-27674" srcset="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-12-1024x650.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-12-300x190.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-12-768x487.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-12-1536x975.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-12.png 1576w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Log in to your Managed Rancher Service UI and then click on the <strong>Create</strong> button.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="257" src="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-1-1024x257.png" alt="" class="wp-image-27660" srcset="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-1-1024x257.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-1-300x75.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-1-768x193.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-1-1536x385.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-1-2048x513.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>To create an MKS cluster, choose the&nbsp;<strong>Hosted Kubernetes provider</strong>&nbsp;option and click on the&nbsp;<code>OVHcloud MKS</code>&nbsp;driver.</p>



<p>First, enter an MKS cluster name, for example&nbsp;<code>my-rancher-mks-cluster</code>:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="188" src="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-2-1024x188.png" alt="" class="wp-image-27661" srcset="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-2-1024x188.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-2-300x55.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-2-768x141.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-2-1536x282.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-2-2048x376.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>At this step, you can optionally configure <strong>Member Roles</strong> or <strong>Labels &amp; Annotations</strong>; <a href="https://help.ovhcloud.com/csm/en-ie-public-cloud-managed-rancher-service-creating-mks?id=kb_article_view&amp;sysparm_article=KB0064279#creating-a-managed-kubernetes-service-mks-cluster" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">follow our guide</a> to learn more.</p>



<p>For the&nbsp;<strong>Account Configuration</strong>, you need to provide your OVHcloud API credentials (<code>Application Key</code>,&nbsp;<code>Application Secret</code>&nbsp;and&nbsp;<code>Consumer Key</code>). If you don&#8217;t have OVHcloud API credentials, you can follow our guide on how to&nbsp;<a href="https://help.ovhcloud.com/csm/en-ie-api-getting-started-ovhcloud-api?id=kb_article_view&amp;sysparm_article=KB0042786#advanced-usage-pair-ovhcloud-apis-with-an-application" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Generate your OVHcloud API keys</a>.</p>
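<p>As an indication, the token you generate must be allowed to manage your Public Cloud project; a typical set of rights (an assumption, adjust it to your own security policy) looks like:</p>

<pre class="wp-block-code"><code class="">GET    /cloud/project/*
POST   /cloud/project/*
PUT    /cloud/project/*
DELETE /cloud/project/*</code></pre>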



<p>Also provide your&nbsp;<code>Public Cloud project ID</code>. The project ID is where your Managed Kubernetes Service (MKS) cluster will be deployed. You can follow the guide on&nbsp;<a href="https://help.ovhcloud.com/csm/en-ie-public-cloud-compute-create-project?id=kb_article_view&amp;sysparm_article=KB0050609" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">How to create your first Project</a>&nbsp;or if already existing, you can copy/paste it from the&nbsp;<a href="https://www.ovh.com/auth/?action=gotomanager&amp;from=https://www.ovh.ie/&amp;ovhSubsidiary=ie" data-wpel-link="exclude">OVHcloud Control Panel</a>&nbsp;or&nbsp;<a href="https://eu.api.ovh.com/console-preview/?section=%2Fcloud&amp;branch=v1#get-/cloud/project" data-wpel-link="exclude">API</a>.</p>



<p>And finally select the OVHcloud API endpoint, depending on your location: <code>ovh-eu</code>, <code>ovh-ca</code> or <code>ovh-us</code>.</p>



<p>For the&nbsp;<strong>Cluster Configuration</strong>, select the&nbsp;<code>Region</code>&nbsp;where your cluster will be deployed, the&nbsp;<code>Kubernetes Version</code>&nbsp;and the&nbsp;<code>Update Policy</code>. For further information, refer to the&nbsp;<a href="https://help.ovhcloud.com/csm/en-ie-public-cloud-kubernetes-change-security-update?id=kb_article_view&amp;sysparm_article=KB0049651" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Managed Kubernetes Update Policies</a>&nbsp;guide.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="250" src="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-3-1024x250.png" alt="" class="wp-image-27662" srcset="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-3-1024x250.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-3-300x73.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-3-768x188.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-3-1536x375.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-3-2048x501.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>For the&nbsp;<strong>Network Configuration</strong>, in the&nbsp;<code>Private Network ID</code>&nbsp;field, select an existing OVHcloud Public Cloud private network or choose&nbsp;<code>None</code>&nbsp;if you want to create a cluster with nodes using only public interfaces.</p>



<p>For the&nbsp;<strong>NodePools Configuration</strong>, for every NodePool you want to add, you have to:</p>



<ul class="wp-block-list">
<li>Enter the&nbsp;<strong>Name</strong>&nbsp;of the NodePool. The name must be unique inside a same MKS cluster.</li>



<li>Choose an OVHcloud instance&nbsp;<strong>Flavor</strong>&nbsp;used by this NodePool.</li>



<li>Enable or disable the Autoscaling.</li>



<li>Enter the number of nodes you want; this is the&nbsp;<strong>Size</strong>&nbsp;of your NodePool. If autoscaling is enabled, choose the minimum and maximum number of nodes instead.</li>



<li>Enable the&nbsp;<strong>Monthly Billing</strong>&nbsp;(Hourly billing by default).</li>



<li>Click on the&nbsp;<code>Add Node Pool</code>&nbsp;button to add the node pool in the list below.</li>
</ul>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="338" src="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-4-1024x338.png" alt="" class="wp-image-27663" srcset="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-4-1024x338.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-4-300x99.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-4-768x253.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-4-1536x506.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-4-2048x675.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Click on the&nbsp;<code>Finish &amp; Create Cluster</code>&nbsp;button.</p>



<p>Your MKS cluster is provisioning. Creation takes around 3-4 minutes for the cluster itself and another 3-4 minutes for the node pool with 3 nodes and the Rancher agent deployed on them.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="238" src="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-5-1024x238.png" alt="" class="wp-image-27665" srcset="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-5-1024x238.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-5-300x70.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-5-768x178.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-5-1536x357.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-5-2048x476.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<h3 class="wp-block-heading">Create a Kubernetes cluster based on OVHcloud Public Cloud Compute Instances&nbsp;</h3>



<p>In this part of the blog post, we will create a Kubernetes cluster with 3 nodes based on the b3-16 flavor for etcd and control plane, and 2 nodes based on the b3-8 flavor for workers.</p>



<p>In the Rancher UI, you first have to <a href="https://help.ovhcloud.com/csm/en-ie-public-cloud-managed-rancher-service-create-kubernetes-compute-instances?id=kb_article_view&amp;sysparm_article=KB0064332#creating-ovhcloud-public-cloud-credentials" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">create OVHcloud Public Cloud credentials</a>.</p>



<p>Then, go back to the Rancher UI Home and click on the <strong>Create</strong> button.</p>



<p>This time, you will create a Kubernetes cluster running in Compute Instances, so you have to <strong>provision new nodes and create a cluster using RKE2/K3s</strong> through the Infrastructure provider and specifically the <code>OVHcloud Public Cloud</code>&nbsp;driver:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="510" src="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-7-1024x510.png" alt="" class="wp-image-27668" srcset="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-7-1024x510.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-7-300x149.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-7-768x383.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-7-1536x765.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-7-2048x1020.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Select the OVHcloud Public Cloud credential created earlier in this blog post:</p>



<figure class="wp-block-image"><img decoding="async" src="https://help.ovhcloud.com/public_cloud-containers_orchestration-managed_rancher_service-creating-kubernetes-pci-compute-instances-images-rancher-select-creds.png" alt="OVHcloud Managed Rancher Service Create Kubernetes PCI"/></figure>



<p>Then, define the cluster name, <code>my-rancher-k8s-pci</code> for example.</p>



<figure class="wp-block-image"><img decoding="async" src="https://help.ovhcloud.com/public_cloud-containers_orchestration-managed_rancher_service-creating-kubernetes-pci-compute-instances-images-rancher-cluster-name.png" alt="OVHcloud Managed Rancher Service Cluster Name"/></figure>



<p>In the&nbsp;<strong>Machine Pools</strong>&nbsp;section you will configure your cluster. When you configure a machine pool in Rancher, there are three roles that can be assigned to nodes:&nbsp;<code>etcd</code>,&nbsp;<code>Control Plane</code>&nbsp;and&nbsp;<code>Worker</code>.</p>






<p>There are some good practices:</p>



<ul class="wp-block-list">
<li>At least 3 machines/nodes with the role&nbsp;<code>etcd</code>&nbsp;are needed to survive a loss of 1 node and have a minimum high availability configuration for etcd. 3&nbsp;<code>etcd</code>&nbsp;nodes are generally sufficient for smaller and medium clusters, and 5&nbsp;<code>etcd</code>&nbsp;nodes for large clusters.</li>



<li>At least 2 machines/nodes with the role&nbsp;<code>Control Plane</code>&nbsp;for master component high availability.</li>



<li>You can set both the&nbsp;<code>etcd</code>&nbsp;and&nbsp;<code>Control Plane</code>&nbsp;roles for one instance.</li>



<li>The&nbsp;<code>Worker</code>&nbsp;role should not be used or added to nodes with the&nbsp;<code>etcd</code>&nbsp;or&nbsp;<code>Control Plane</code>&nbsp;role.</li>



<li>At least 2 machines/nodes with the&nbsp;<code>Worker</code>&nbsp;role for workload rescheduling upon node failure.</li>
</ul>



<p>For each of the machine pools, you have to:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="551" src="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-8-1024x551.png" alt="" class="wp-image-27669" srcset="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-8-1024x551.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-8-300x161.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-8-768x413.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-8-1536x826.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-8-2048x1101.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<ul class="wp-block-list">
<li>Define the pool name (<code>node-pool-1</code>&nbsp;for example for the first machine pool).</li>



<li>Define machine count (3 for example for the first machine pool).</li>



<li>Select the roles (check&nbsp;<code>etcd</code>&nbsp;and&nbsp;<code>Control Plane</code>&nbsp;for the first machine pool).</li>



<li>Choose the region (<code>GRA11</code>&nbsp;for example for the first machine pool). If you want to check the availability of specific products that you plan to use alongside Kubernetes, you can refer to the&nbsp;<a href="https://www.ovhcloud.com/en-ie/public-cloud/regions-availability/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Availability of Public Cloud Product</a>&nbsp;page.</li>



<li>Choose the flavor (<code>b3-16</code>&nbsp;for example). You can refer to the&nbsp;<a href="https://www.ovhcloud.com/en-ie/public-cloud/prices/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud Flavor list</a>.</li>



<li>Choose the image for the Operating System (OS) used for your machines/nodes. Please refer to&nbsp;<a href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">Rancher Operating Systems and Container Runtime Requirements</a>.</li>



<li>Choose a Key Pair (optional). It&#8217;s the SSH Key Pair that will be used to access your nodes. Please refer to this guide on&nbsp;<a href="https://help.ovhcloud.com/csm/en-ie-public-cloud-compute-getting-started?id=kb_article_view&amp;sysparm_article=KB0051014" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">how to create a SSH KeyPair and add it to your Public Cloud project</a>. If you leave this field empty, a new keypair will be generated automatically.</li>



<li>Choose the Security Group that will be applied to created instances. You can leave the field empty.</li>



<li>Choose the Availability Zone (only&nbsp;<code>nova</code>&nbsp;is supported at the moment).</li>



<li>Choose the Floating IP Pools (only&nbsp;<code>Ext-Net</code>&nbsp;is supported at the moment).</li>



<li>Choose the Networks. You need to choose a private network (with a gateway). The compute instances will be created in this private network.</li>
</ul>



<p>At the bottom of the&nbsp;<strong>Machine Pools</strong>&nbsp;section, click on the&nbsp;<code>+</code>&nbsp;button to add the second machine pool with 2&nbsp;<code>workers</code>&nbsp;machines/nodes and the same configuration.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="551" src="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-9-1024x551.png" alt="" class="wp-image-27670" srcset="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-9-1024x551.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-9-300x161.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-9-768x413.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-9-1536x826.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-9-2048x1101.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>As you can see, we can choose another flavor type for worker machines/nodes.</p>



<p>In the&nbsp;<strong>Cluster Configuration</strong>&nbsp;section, choose the Kubernetes version and distribution: you need to choose between RKE2 and K3s. For a production environment, we recommend choosing RKE2.</p>



<p>You also need to choose the container network interface (CNI). We decided to choose <a href="https://cilium.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Cilium</a> for this blog post, but you can select <a href="https://www.tigera.io/project-calico/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Calico</a> or <a href="https://kops.sigs.k8s.io/networking/canal/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Canal</a> instead, depending on your needs.</p>



<p>Select the&nbsp;<code>Container Network</code>, choose whether you want to activate Project Network Isolation, and select the System Services tooling you want to install in your cluster.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="480" src="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-10-1024x480.png" alt="" class="wp-image-27671" srcset="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-10-1024x480.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-10-300x141.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-10-768x360.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-10-1536x720.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-10-2048x960.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Follow the&nbsp;<a href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">RKE2 cluster configuration reference</a>&nbsp;for the Cluster Configuration.</p>



<p>In the&nbsp;<strong>Member Roles</strong>&nbsp;tab, you can add members for users that need to access the cluster. You can also add members after the cluster has been created.</p>



<p>Finally, click the&nbsp;<code>Create</code>&nbsp;button to create your Kubernetes cluster with the OVHcloud Public Cloud (PCI) driver.</p>



<p>The cluster creation can take several minutes (depending on the OS and on the number of nodes you want).</p>



<figure class="wp-block-image"><img decoding="async" src="https://help.ovhcloud.com/public_cloud-containers_orchestration-managed_rancher_service-creating-kubernetes-pci-compute-instances-images-rancher-cluster-created.png" alt="OVHcloud Managed Rancher Service Cluster Created"/></figure>



<h3 class="wp-block-heading">Create a Kubernetes cluster with existing nodes</h3>



<p>Another possibility through MRS is to create a Kubernetes cluster based on existing nodes. You can bring your own nodes and create a Kubernetes cluster running on them 🙂</p>



<p>The prerequisite is to have existing machines (virtual or physical) that are accessible through SSH.</p>



<p>In the Rancher UI, click on the <strong>Create</strong> button, scroll down and select the <strong>Custom</strong> driver:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="472" src="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-13-1024x472.png" alt="" class="wp-image-27679" srcset="https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-13-1024x472.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-13-300x138.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-13-768x354.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-13-1536x708.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2024/10/image-13-2048x944.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Fill in a cluster name (<code>custom-kube-cluster</code>&nbsp;for example) and choose the Kubernetes version: you can choose between K3s and RKE2. For production needs, we recommend RKE2. Then choose the container network (<code>calico</code>&nbsp;by default).</p>



<figure class="wp-block-image"><img decoding="async" src="https://help.ovhcloud.com/public_cloud-containers_orchestration-managed_rancher_service-creating-kubernetes-custom-nodes-images-rancher-cluster-creation.png" alt="Rancher Custom Cluster Creation"/></figure>



<p>Click on the different tabs to configure your cluster depending on your needs and then click on the <strong>Create</strong> button.</p>



<p>As we already said in the previous section,&nbsp;in Rancher, there are three roles that can be assigned to nodes:&nbsp;<code>etcd</code>,&nbsp;<code>Control Plane</code>&nbsp;and&nbsp;<code>Worker</code>.</p>



<p>For the configuration of our&nbsp;<code>etcd</code>&nbsp;+&nbsp;<code>Control Plane</code>&nbsp;nodes, check only the&nbsp;<code>etcd</code>&nbsp;and&nbsp;<code>Control Plane</code>&nbsp;Nodes Roles:</p>



<figure class="wp-block-image"><img decoding="async" src="https://help.ovhcloud.com/public_cloud-containers_orchestration-managed_rancher_service-creating-kubernetes-custom-nodes-images-rancher-cluster-roles.png" alt="Rancher cluster roles"/></figure>



<figure class="wp-block-image"><img decoding="async" src="https://help.ovhcloud.com/public_cloud-containers_orchestration-managed_rancher_service-creating-kubernetes-custom-nodes-images-rancher-cluster-roles-command.png" alt="Rancher command roles for etcd and control plane"/></figure>



<p>SSH to your machines/nodes you created for&nbsp;<code>etcd</code>&nbsp;and&nbsp;<code>Control Plane</code>&nbsp;and copy/paste the registration command.</p>



<pre class="wp-block-code"><code class="">ssh xxxxx@xxx.xxx.xxx.xxx

curl -fL https://xxxxxx.xxxx.rancher.ovh.net/system-agent-install.sh | sudo sh -s - --server https://xxxxxx.xxxx.rancher.ovh.net --label 'cattle.io/os=linux' --token z2r458coqudhfilgdsifgdsqilgfqsdigfidsufgoisdnvzj --etcd --controlplane</code></pre>



<p>For the configuration of our&nbsp;<code>Worker</code>&nbsp;nodes, uncheck the&nbsp;<code>etcd</code>&nbsp;and&nbsp;<code>Control Plane</code>&nbsp;checkboxes and check only the&nbsp;<code>Worker</code>&nbsp;checkbox:</p>



<figure class="wp-block-image"><img decoding="async" src="https://help.ovhcloud.com/public_cloud-containers_orchestration-managed_rancher_service-creating-kubernetes-custom-nodes-images-rancher-cluster-roles-worker.png" alt="Rancher cluster roles"/></figure>



<figure class="wp-block-image"><img decoding="async" src="https://help.ovhcloud.com/public_cloud-containers_orchestration-managed_rancher_service-creating-kubernetes-custom-nodes-images-rancher-command-worker.png" alt="Rancher command for workers"/></figure>



<p>SSH to the machines/nodes you created for the&nbsp;<code>Worker</code>&nbsp;role and copy/paste the registration command.</p>



<pre class="wp-block-code"><code class="">ssh xxxxx@xxx.xxx.xxx.xxx

curl -fL https://xxxxxx.xxxx.rancher.ovh.net/system-agent-install.sh | sudo sh -s - --server https://xxxxxx.xxxx.rancher.ovh.net --label 'cattle.io/os=linux' --token z2r458coqudhfilgdsifgdsqilgfqsdigfidsufgoisdnvzj --worker</code></pre>
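<p>Note that the registration commands differ only by the role flags appended at the end. If you script the registration of many nodes, a small helper can assemble the command for each role set. This is only a sketch: the server URL and token below are placeholders, and you must copy the real values from the registration command displayed in the Rancher UI:</p>

```shell
# Build a Rancher node registration command for a given set of roles.
# The server URL and token are placeholders: copy the real values
# from the registration command shown in the Rancher UI.
build_registration_cmd() {
  server="$1"
  token="$2"
  shift 2
  cmd="curl -fL ${server}/system-agent-install.sh | sudo sh -s - --server ${server} --label 'cattle.io/os=linux' --token ${token}"
  for role in "$@"; do
    cmd="${cmd} --${role}"
  done
  printf '%s\n' "${cmd}"
}

# etcd + control plane nodes:
build_registration_cmd "https://xxxxxx.xxxx.rancher.ovh.net" "PLACEHOLDER_TOKEN" etcd controlplane

# worker nodes:
build_registration_cmd "https://xxxxxx.xxxx.rancher.ovh.net" "PLACEHOLDER_TOKEN" worker
```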



<p>After executing these commands on the machines/nodes, wait until the cluster is in the&nbsp;<code>Active</code>&nbsp;state in the Rancher UI.</p>



<h2 class="wp-block-heading">Conclusion</h2>



<p>Managed Rancher Service can help you create, import and manage your new and existing Kubernetes clusters through a centralised interface. In this blog post you saw three ways to create a Kubernetes cluster, but we encourage you to test the other possibilities and explore the Rancher UI.</p>



<p><strong>Want to go further?</strong></p>



<p>Visit our technical&nbsp;<a href="https://help.ovhcloud.com/csm/en-gb-documentation-public-cloud-containers-orchestration-managed-rancher-service?id=kb_browse_cat&amp;kb_id=574a8325551974502d4c6e78b7421938&amp;kb_category=ba1cdc8ff1a082502d4cea09e7c8beb9&amp;spa=1" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">guides and tutorials about OVHcloud Managed Rancher Service</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Using GPU on Managed Kubernetes Service with NVIDIA GPU operator</title>
		<link>https://blog.ovhcloud.com/using-gpu-on-managed-kubernetes-service-with-nvidia-gpu-operator/</link>
		
		<dc:creator><![CDATA[Maxime Hurtrel]]></dc:creator>
		<pubDate>Wed, 19 Jan 2022 15:53:13 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[OVHcloud Managed Kubernetes]]></category>
		<category><![CDATA[Partnership]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=21533</guid>

					<description><![CDATA[Two years after launching our Managed Kubernetes service, we&#8217;re seeing a lot of diversity in the workloads that run in production. We have been challenged by some customers looking for GPU acceleration, and have teamed up with our partner NVIDIA to deliver high performance GPUs on Kubernetes. We&#8217;ve done it in a way that combines [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Two years after launching our Managed Kubernetes service, we&#8217;re seeing a lot of diversity in the workloads that run in production. We have been challenged by some customers looking for GPU acceleration, and have teamed up with our partner NVIDIA to deliver high performance GPUs on Kubernetes. We&#8217;ve done it in a way that combines <strong>simplicity, day-2-maintainability and total flexibility</strong>. The solution<strong> is now available in all OVHcloud regions where we offer Kubernetes and GPUs.</strong></p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="574" src="https://blog.ovhcloud.com/wp-content/uploads/2021/12/Capture-décran-2021-12-29-à-10.50.52-1-1024x574.png" alt="" class="wp-image-21539" srcset="https://blog.ovhcloud.com/wp-content/uploads/2021/12/Capture-décran-2021-12-29-à-10.50.52-1-1024x574.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2021/12/Capture-décran-2021-12-29-à-10.50.52-1-300x168.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2021/12/Capture-décran-2021-12-29-à-10.50.52-1-768x431.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2021/12/Capture-décran-2021-12-29-à-10.50.52-1.png 1416w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<h2 class="wp-block-heading">The challenge behind a fully managed service</h2>



<p>Readers unfamiliar with our orchestration service and/or GPUs may be surprised that we did not yet offer this integration in general availability. This lies in the fact that our team is focused on providing a <strong>totally managed experience, including patching the OS (Operating System) and Kubelet of each node whenever it is required</strong>. To achieve this goal, we have built and maintained a single hardened image for the dozens of flavors, in each of the 10+ regions.<br>Based on the experience of selected beta users, we found that this approach doesn&#8217;t always work for use cases that require a very specific NVIDIA driver configuration. Working with our technical partners at NVIDIA, we found a solution to leverage GPUs in a simple way that still allows fine-tuning, such as the <a href="https://en.wikipedia.org/wiki/CUDA" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">CUDA</a> configuration, for example.</p>



<h2 class="wp-block-heading">NVIDIA to the rescue </h2>



<p>This Keep-It-Simple-Stupid (KISS) solution relies on the great work of NVIDIA building and maintaining an <strong>official <a href="https://github.com/NVIDIA/gpu-operator" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">NVIDIA GPU operator</a></strong><a href="https://github.com/NVIDIA/gpu-operator" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">.</a> The Apache 2.0 licensed software uses the operator framework within Kubernetes. It does this to <strong>automate the management of all NVIDIA software components needed to use GPUs</strong>, such as NVIDIA drivers, Kubernetes device plugin for GPUs, and others. </p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="895" src="https://blog.ovhcloud.com/wp-content/uploads/2021/12/gpu-operator-1024x895.png" alt="" class="wp-image-21547" srcset="https://blog.ovhcloud.com/wp-content/uploads/2021/12/gpu-operator-1024x895.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2021/12/gpu-operator-300x262.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2021/12/gpu-operator-768x671.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2021/12/gpu-operator.png 1394w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>We ensured it was compliant with our fully maintained Operating System (OS), based on a recent Ubuntu LTS version. After testing it, <a href="https://docs.ovh.com/gb/en/kubernetes/deploying-gpu-application" data-wpel-link="exclude">we documented how to use it on our Managed Kubernetes Service.</a> We appreciate that this solution leverages open source software that you can use on any compatible NVIDIA hardware. This allows you to guarantee consistent behavior in hybrid or multicloud scenarios, aligned with our <a href="https://www.ovhcloud.com/en/lp/manifesto/" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">SMART</a> motto.</p>
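<p>As a reminder of the general shape of the installation (see our documentation above for the authoritative steps), the operator is typically deployed with Helm. The following is a minimal sketch based on NVIDIA&#8217;s public Helm repository; the release name and namespace are arbitrary examples:</p>

```shell
# Add NVIDIA's Helm repository and deploy the GPU operator.
# The release name "gpu-operator" and the namespace are examples.
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update
helm install gpu-operator nvidia/gpu-operator \
  --namespace gpu-operator --create-namespace
```

<p>Once the operator pods are running, GPU nodes expose the <code>nvidia.com/gpu</code> resource to the Kubernetes scheduler.</p>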



<p>Here is an illustration describing the <strong>shared responsibility model </strong>of the stack:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="772" src="https://blog.ovhcloud.com/wp-content/uploads/2021/12/ovh-nvidia3-1024x772.png" alt="" class="wp-image-21560" srcset="https://blog.ovhcloud.com/wp-content/uploads/2021/12/ovh-nvidia3-1024x772.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2021/12/ovh-nvidia3-300x226.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2021/12/ovh-nvidia3-768x579.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2021/12/ovh-nvidia3-1536x1157.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2021/12/ovh-nvidia3-2048x1543.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>All our OVHcloud Public Cloud customers can now <a href="https://docs.ovh.com/gb/en/kubernetes/deploying-gpu-application" data-wpel-link="exclude">leverage the feature, adding a GPU node pool to any of their existing or new clusters.</a> This can be done in the regions where both Kubernetes and T1 or T2 instances are available: GRA5, GRA7 and GRA9 (France), DE1 (Germany) (available in the upcoming weeks) and BHS5 (Canada) at the date this blog post is published.<br>Note that GPU worker nodes are <strong>compatible with all released features, including <a href="https://docs.ovh.com/gb/en/kubernetes/using_vrack/" data-wpel-link="exclude">vRack technology</a> and <a href="https://docs.ovh.com/gb/en/kubernetes/using-cluster-autoscaler/" data-wpel-link="exclude">cluster autoscaling</a></strong>, for example.</p>
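<p>A quick way to check that a GPU node pool is usable is to schedule a pod that requests a GPU. Here is a sketch that writes a minimal manifest; the pod name and container image are illustrative examples, not values from our documentation:</p>

```shell
# Write a minimal Pod manifest requesting one NVIDIA GPU.
# The pod name and image tag are illustrative examples.
cat <<'EOF' > gpu-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda-vector-add
    image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.2.1
    resources:
      limits:
        nvidia.com/gpu: 1
EOF

# Apply it to your cluster, then check the pod logs:
#   kubectl apply -f gpu-test.yaml
#   kubectl logs cuda-vector-add
```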



<p>Having Kubernetes clusters with GPU options means deploying typical AI/ML applications, such as Kubeflow, MLflow, JupyterHub or NVIDIA NGC, is easy and flexible. Do not hesitate to discuss this feature with other Kubernetes users on our <a href="https://gitter.im/ovh/kubernetes" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Gitter Channel</a>. You may also have a look at our fully managed <a href="https://www.ovhcloud.com/en/public-cloud/ai-notebook/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Notebook</a> or <a href="https://www.ovhcloud.com/en/public-cloud/ai-training/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Training</a> services for an even simpler out-of-the-box experience and per-minute pricing!</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>OVHcloud Managed Kubernetes certified Kubernetes 1.19</title>
		<link>https://blog.ovhcloud.com/ovhcloud-managed-kubernetes-certified-kubernetes-1-19/</link>
		
		<dc:creator><![CDATA[Sébastien Jardin&nbsp;and&nbsp;Horacio Gonzalez]]></dc:creator>
		<pubDate>Tue, 17 Nov 2020 14:42:34 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[OVHcloud Managed Kubernetes]]></category>
		<guid isPermaLink="false">https://www.ovh.com/blog/?p=19496</guid>

					<description><![CDATA[Our OVHcloud Managed Kubernetes product has now been available for more than one year on general availability. From now on, Kubernetes version 1.19 is certified by the CNCF on our platform. Kubernetes is in constant evolution and amelioration, every new version brings a lot of new feature and fixes. The 1.19 is not the exception: [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Our OVHcloud Managed Kubernetes product has now been available for more than one year on general availability. From now on, Kubernetes version 1.19 is certified by the CNCF on our platform.</p>



<div class="wp-block-image"><figure class="aligncenter size-large is-resized"><img loading="lazy" decoding="async" src="https://www.ovh.com/blog/wp-content/uploads/2020/11/IMG_0367-1024x537.png" alt="" class="wp-image-19923" width="768" height="403" srcset="https://blog.ovhcloud.com/wp-content/uploads/2020/11/IMG_0367-1024x537.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2020/11/IMG_0367-300x157.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2020/11/IMG_0367-768x403.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2020/11/IMG_0367.png 1200w" sizes="auto, (max-width: 768px) 100vw, 768px" /></figure></div>



<p>Kubernetes is constantly evolving and improving; every new version brings a lot of new features and fixes. Version 1.19 is no exception:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>Finally, we have arrived with Kubernetes 1.19, the second release for 2020, and by far the longest release cycle lasting 20 weeks in total. It consists of 34 enhancements: 10 enhancements are moving to stable, 15 enhancements in beta, and 9 enhancements in alpha.</p></blockquote>



<div class="wp-block-image"><figure class="aligncenter"><img decoding="async" src="https://d33wubrfki0l68.cloudfront.net/407c7c66b2f50a2b4c81f707f83d11f389a737e8/cfc36/images/blog/2020-08-26-kubernetes-1.19-release-announcement/accentuate.png" alt="Kubernetes 1.19 Release Logo"/><figcaption>Kubernetes 1.19 Accentuate the Paw-sitive logo by @emanate_design</figcaption></figure></div>



<h3 class="wp-block-heading">So what&#8217;s new in Kubernetes 1.19?</h3>



<p>You can find the whole change list in <a href="https://kubernetes.io/docs/setup/release/notes/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">the official release notes</a>, and a more casual reading version on the <a href="https://kubernetes.io/blog/2020/08/26/kubernetes-release-1.19-accentuate-the-paw-sitive/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Kubernetes blog</a>, but in this post we want to have a look at some of the (in our highly subjective point of view) major themes of this release:</p>



<h4 class="wp-block-heading">Ingress graduates to General Availability</h4>



<p>Ingress is a Kubernetes API object that manages external access to the services in a cluster, typically HTTP. It may provide load balancing, SSL termination and name-based virtual hosting. </p>



<p>Ingress is nowadays a central feature of Kubernetes, widely used in production. And surprisingly enough, it was still officially a beta. By graduating it to GA, the Kubernetes community acknowledges its importance and its <em>de facto</em> standard status. And of course, it opens the way to working on an Ingress v2, or some extensions, to give it even more features.</p>



<p>To get an overview of what Ingress is and how it compares to (and interacts with) other ways to get external traffic into your Kubernetes cluster, you can read our <a href="https://www.ovh.com/blog/getting-external-traffic-into-kubernetes-clusterip-nodeport-loadbalancer-and-ingress/" data-wpel-link="exclude">Getting external traffic into Kubernetes – ClusterIp, NodePort, LoadBalancer, and Ingress</a> post.</p>



<figure class="wp-block-image size-large"><a href="https://www.ovh.com/blog/getting-external-traffic-into-kubernetes-clusterip-nodeport-loadbalancer-and-ingress/" data-wpel-link="exclude"><img decoding="async" src="https://www.ovh.com/blog/wp-content/uploads/2019/02/E69267D8-9239-43D4-9DA3-DAA5A54F879B.png" alt=""/></a></figure>



<p>And you can, of course, find more information about Ingress in the official <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">documentation</a>.</p>



<h4 class="wp-block-heading">Avoiding permanent beta API Versions</h4>



<p>Ingress isn&#8217;t the only API that has been in beta status for ages; in fact, it&#8217;s only one of numerous examples of a semi-permanent beta status for new APIs, where widely used features remain in beta, release after release, even though they are considered production-ready by most users.</p>



<p>From Kubernetes v1.20 onwards, the SIG community has decided on a new policy to avoid features staying in beta for a long time: when a new feature&#8217;s API reaches beta, a countdown starts. The beta-quality API now has <strong>three releases</strong> (about nine calendar months) to either:</p>



<ul class="wp-block-list"><li>reach general availability (GA), and deprecate the beta, or</li><li>have a new beta version (<em>and deprecate the previous beta</em>).</li></ul>



<p>More information in this <a href="https://kubernetes.io/blog/2020/08/21/moving-forward-from-beta/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Kubernetes blog post</a>.</p>



<h4 class="wp-block-heading">Increase Kubernetes support window to one year</h4>



<p>Until release 1.19, minor versions of Kubernetes had a support window of nine months. From Kubernetes 1.19 onwards, the support window for minor releases increases to one year.</p>



<p>As a Managed Kubernetes provider, we are particularly happy about this change, as our observations fully align with those of the Long Term Support (LTS) working group (WG), showing that a significant proportion of Kubernetes users didn&#8217;t upgrade within the nine-month support period.</p>



<h4 class="wp-block-heading">Other APIs graduate to GA, and some new betas</h4>



<p>Some other Kubernetes APIs have been upgraded to general availability, and the old API endpoints have been deprecated:</p>



<ul class="wp-block-list"><li><code>apiextensions.k8s.io/v1beta1</code> -&gt; <code>apiextensions.k8s.io/v1</code></li><li><code>apiregistration.k8s.io/v1beta1</code> -&gt; <code>apiregistration.k8s.io/v1</code></li><li><code>authentication.k8s.io/v1beta1</code> -&gt; <code>authentication.k8s.io/v1</code></li><li><code>authorization.k8s.io/v1beta1</code> -&gt; <code>authorization.k8s.io/v1</code></li><li><code>coordination.k8s.io/v1beta1</code> -&gt; <code>coordination.k8s.io/v1</code></li></ul>



<p>Some APIs also get a new beta version:</p>



<ul class="wp-block-list"><li><code>autoscaling/v2beta1</code> -&gt;&nbsp;<code>autoscaling/v2beta2</code></li></ul>
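<p>If you keep your manifests in a Git repository, a simple text search is often enough to spot files that still use deprecated endpoints like the ones listed above. A quick local sketch (the directory and file names are examples):</p>

```shell
# Create a sample manifest that still uses a deprecated API version,
# then search for it -- normally you would scan your own manifests.
mkdir -p manifests
cat <<'EOF' > manifests/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
EOF

# List files referencing the deprecated apiextensions v1beta1 endpoint.
grep -rl "apiextensions.k8s.io/v1beta1" manifests/
```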



<h4 class="wp-block-heading">Better Logging</h4>



<p>Dealing with logs in Kubernetes has always been a tricky task. As there was no uniform structure, neither for log messages in the Kubernetes control plane nor for the references to Kubernetes objects in the logs, automating log management relied on ad-hoc solutions and, ultimately, on dreaded regular expressions&#8230; As a consequence, building analytical solutions using those logs was not only complicated to develop but also hard to maintain.</p>



<p>In version 1.19, SIG Instrumentation begins the migration towards a new structured logging paradigm. The migration is ongoing and not all the logs are structured yet, so you still have to handle unstructured log messages. But the promise is that, once the migration is done, most common logs will be more easily queryable, with standardized log messages and references to Kubernetes objects.</p>



<p>You can get more information in <a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1602-structured-logging" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">the corresponding KEP</a> (Kubernetes Enhancement Proposal) and in the<a href="https://kubernetes.io/docs/concepts/cluster-administration/system-logs/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer"> system logs</a> section of the documentation.</p>



<h3 class="wp-block-heading">Did you know?</h3>



<h4 class="wp-block-heading">Deprecated detection</h4>



<p>In Kubernetes 1.19, SIG API Machinery has implemented deprecation warnings on Kubernetes APIs. From this version onwards, making a request to a deprecated REST API endpoint returns a&nbsp;<code>Warning</code>&nbsp;header in the response, a deprecation annotation on the audit event associated with the API call, and some metrics.</p>



<p>The warning includes details about the release in which the API will no longer be available, and the replacement API version. The idea is to inform both the end-user and the cluster administrator when deprecated APIs are used in the cluster.</p>



<p>To help you upgrade those deprecated APIs, <code>kubectl</code> may warn you when the resource you&#8217;re using is deprecated. Simply use the <code>--raw</code> or <code>--warnings-as-errors</code> flag in your <code>kubectl</code> calls:</p>



<pre class="wp-block-code"><code class="">~$ kubectl get --raw /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions > /dev/null
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition</code></pre>



<p>More information on <a href="https://kubernetes.io/blog/2020/09/03/warnings/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">this Kubernetes blog post</a>.</p>



<h4 class="wp-block-heading">Ecosystem</h4>



<p>The <a href="https://www.cncf.io/blog/2020/07/15/certified-kubernetes-security-specialist-cks-coming-in-november/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Certified Kubernetes Security Specialist</a> (CKS) is coming in November! CKS focuses on cluster &amp; system hardening, minimizing microservice vulnerabilities and the security of the supply chain.</p>



<h4 class="wp-block-heading">Community</h4>



<p><a href="https://www.kubernetes.dev" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Kubernetes.dev</a>, a Kubernetes contributor-focused website, has been launched. It brings contributor documentation, resources and project event information into one central location.</p>



<h3 class="wp-block-heading">Some useful links</h3>



<p><a href="https://kubernetes.io/docs/setup/release/notes/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://kubernetes.io/docs/setup/release/notes/</a></p>



<p><a href="https://www.kubernetes.dev/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://www.kubernetes.dev/</a></p>



<p><a href="https://www.cncf.io/blog/2020/07/15/certified-kubernetes-security-specialist-cks-coming-in-november/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://www.cncf.io/blog/2020/07/15/certified-kubernetes-security-specialist-cks-coming-in-november/</a></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>MyBinder and OVH partnership</title>
		<link>https://blog.ovhcloud.com/mybinder-and-ovh-partnership/</link>
		
		<dc:creator><![CDATA[Mael Le Gal]]></dc:creator>
		<pubDate>Mon, 24 Jun 2019 12:16:55 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Docker]]></category>
		<category><![CDATA[Jupyter]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[OVHcloud Managed Kubernetes]]></category>
		<category><![CDATA[Public Cloud]]></category>
		<guid isPermaLink="false">https://blog.ovh.com/fr/blog/?p=15606</guid>

					<description><![CDATA[Last month, OVH and Binder team partnered together in order to support the growth of the BinderHub ecosystem around the world. With approximately 100,000 weekly users of the mybinder.org public deployment and 3,000 unique git repositories hosting Binder badges, the need for more resources and computing time was felt. Today, we are thrilled to announce [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fmybinder-and-ovh-partnership%2F&amp;action_name=MyBinder%20and%20OVH%20partnership&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<p class="part">Last month, the <strong>OVH</strong> and <strong>Binder</strong> teams partnered to support the growth of the <strong>BinderHub</strong> ecosystem around the world.</p>



<div class="wp-block-image"><figure class="aligncenter"><img loading="lazy" decoding="async" width="999" height="493" src="/blog/wp-content/uploads/2019/06/IMG_0301.png" alt="OVH loves Binder and the Jupyter project" class="wp-image-15666" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/06/IMG_0301.png 999w, https://blog.ovhcloud.com/wp-content/uploads/2019/06/IMG_0301-300x148.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/06/IMG_0301-768x379.png 768w" sizes="auto, (max-width: 999px) 100vw, 999px" /></figure></div>



<p class="part">With approximately 100,000 weekly users of the <a href="http://mybinder.org/" target="_blank" rel="noopener noreferrer nofollow external" data-wpel-link="external">mybinder.org</a> public deployment and 3,000 unique git repositories hosting Binder badges, the need for more resources and computing time was felt.</p>



<p class="part">Today, we are thrilled to announce that <strong>OVH</strong> is now part of the world-wide federation of BinderHubs powering <a rel="noopener noreferrer nofollow external" href="http://mybinder.org/" target="_blank" data-wpel-link="external">mybinder.org</a>. All traffic to <a rel="noopener noreferrer nofollow external" href="http://mybinder.org/" target="_blank" data-wpel-link="external">mybinder.org</a> is now split between two BinderHubs &#8211; one run by the <strong>Binder team</strong>, and another run on <strong>OVH</strong> infrastructure.</p>



<p class="part">So for those who don’t already know <a href="http://mybinder.org/" target="_blank" rel="noopener noreferrer nofollow external" data-wpel-link="external">mybinder.org</a>, here&#8217;s a summary.</p>



<h2 class="part wp-block-heading" id="What-is-Jupyter">What is Jupyter?</h2>



<p class="part"><a href="https://jupyter.org/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Jupyter</a> is an awesome open-source project that allows users to create, visualise and edit interactive notebooks. It supports many popular programming languages, such as <strong>Python</strong>, <strong>R</strong> and <strong>Scala</strong>, as well as presentation features such as Markdown, code snippets and chart visualisations.</p>



<div class="wp-block-image"><figure class="aligncenter"><img loading="lazy" decoding="async" width="300" height="223" src="/blog/wp-content/uploads/2019/06/jupyer_notebook-300x223.png" alt="" class="wp-image-15609" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/06/jupyer_notebook-300x223.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/06/jupyer_notebook-768x571.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/06/jupyer_notebook-1024x762.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2019/06/jupyer_notebook-1200x893.png 1200w, https://blog.ovhcloud.com/wp-content/uploads/2019/06/jupyer_notebook.png 1214w" sizes="auto, (max-width: 300px) 100vw, 300px" /></figure></div>



<p class="part"><em>Example of a local Jupyter Notebook reading a notebook inside the OVH GitHub repository <a href="https://github.com/ovh/prescience-client" target="_blank" rel="noopener noreferrer nofollow external" data-wpel-link="external">prescience client</a>.</em></p>



<p>The main use case is the ability to share your work with tons of people, who can try, use and edit the work directly from their web browser.</p>



<p>Many researchers and professors are now able to work remotely on the same projects, without any infrastructure or environment issues. It&#8217;s a major improvement for communities.</p>



<p>Here is, for example, a notebook (<a href="https://github.com/rhiever/Data-Analysis-and-Machine-Learning-Projects/blob/master/example-data-science-notebook/Example%20Machine%20Learning%20Notebook.ipynb" target="_blank" rel="noopener noreferrer nofollow external" data-wpel-link="external">Github project</a>) allowing you to use Machine Learning, from dataset ingestion to classification:</p>



<div class="wp-block-image"><figure class="aligncenter"><img loading="lazy" decoding="async" width="1157" height="777" src="/blog/wp-content/uploads/2019/06/jupyer.machine.learning.png" alt="jupyter machine learning notebook example" class="wp-image-15642" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/06/jupyer.machine.learning.png 1157w, https://blog.ovhcloud.com/wp-content/uploads/2019/06/jupyer.machine.learning-300x201.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/06/jupyer.machine.learning-768x516.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/06/jupyer.machine.learning-1024x688.png 1024w" sizes="auto, (max-width: 1157px) 100vw, 1157px" /></figure></div>



<p><em>Example of a Machine Learning Jupyter Notebook<br></em></p>



<h2 class="part wp-block-heading" id="What-is-JupyterHub">What is JupyterHub?</h2>



<p class="part"><a href="https://jupyter.org/hub" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">JupyterHub</a> is an even more awesome open-source project, bringing multi-user support to <strong>Jupyter</strong> notebooks. With several pluggable authentication mechanisms (e.g. PAM, OAuth), it allows <strong>Jupyter</strong> notebooks to be spawned on the fly from a centralised infrastructure. Users can then easily share their notebooks and access rights with each other. That makes <strong>JupyterHub</strong> perfect for companies, classrooms and research labs.</p>



<h2 class="part wp-block-heading" id="What-is-BinderHub">What is BinderHub?</h2>



<p class="part">Finally, <a href="https://binderhub.readthedocs.io/en/latest/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">BinderHub</a> is the cherry on the cake: it allows users to turn any Git repository (such as GitHub) into a collection of interactive <strong>Jupyter</strong> notebooks with only one click.</p>



<div class="wp-block-image"><figure class="aligncenter"><img loading="lazy" decoding="async" width="300" height="276" src="/blog/wp-content/uploads/2019/06/binder-300x276.png" alt="" class="wp-image-15611" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/06/binder-300x276.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/06/binder-768x707.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/06/binder.png 962w" sizes="auto, (max-width: 300px) 100vw, 300px" /></figure></div>



<p><em>Landing page of the Binder project</em></p>



<p class="part">The <strong>Binder</strong> instance deployed by OVH can be accessed <a rel="noopener noreferrer nofollow external" href="https://ovh.mybinder.org/" target="_blank" data-wpel-link="external">here</a>.</p>



<ul class="part wp-block-list"><li>Just choose a publicly accessible Git repository (better if it already contains some <strong>Jupyter</strong> notebooks).</li><li>Copy the URL of the chosen repository into the correct Binder field.</li><li>Click the launch button.</li><li>If it is the first time that Binder sees the repository you provide, you will see compilation logs appear. Your repository is being analysed and prepared for the start of a related <strong>Jupyter</strong> notebook.</li><li>Once the compilation is complete, you will be automatically redirected to your dedicated instance.</li><li>You can then start interacting and hacking inside the notebook.</li><li>On the initial Binder page you will see a link to share your repository with others.</li></ul>
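

<p>The launch link mentioned in the last step follows a simple, predictable URL scheme. As a sketch (assuming the standard Binder scheme of <code>/v2/gh/owner/repo/ref</code> for GitHub repositories), you can build a launch URL for the OVH instance yourself:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash"># Build a Binder launch URL for a public GitHub repository
# (owner, repo and ref below are example values)
OWNER="ovh"
REPO="prescience-client"
REF="master"
echo "https://ovh.mybinder.org/v2/gh/${OWNER}/${REPO}/${REF}"</code></pre>



<p>Opening the resulting URL in a browser triggers the build-and-launch sequence described below.</p>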



<h2 class="part wp-block-heading" id="How-it-works">How does it work?</h2>



<h3 class="part wp-block-heading" id="Tools-used-by-BinderHub">Tools used by BinderHub</h3>



<p class="part">BinderHub connects several services together to provide on-the-fly creation and registry of Docker images. It uses the following tools:</p>



<ul class="part wp-block-list"><li class="" data-startline="49" data-endline="49">A cloud provider such as OVH.</li><li class="" data-startline="50" data-endline="50">Kubernetes to manage resources in the cloud.</li><li class="" data-startline="51" data-endline="51">Helm to configure and control Kubernetes.</li><li class="" data-startline="52" data-endline="52">Docker to use containers that standardise computing environments.</li><li class="" data-startline="53" data-endline="53">A BinderHub UI that users can access to specify Git repos they want built.</li><li class="" data-startline="54" data-endline="54">BinderHub to generate Docker images using the URL of a Git repository.</li><li class="" data-startline="55" data-endline="55">A Docker registry that hosts container images.</li><li class="" data-startline="56" data-endline="57">JupyterHub to deploy temporary containers for users.</li></ul>



<h3 class="part wp-block-heading" id="What-happens-when-a-user-clicks-a-Binder-link">What happens when a user clicks a Binder link?</h3>



<p class="part">After a user clicks a Binder link, the following chain of events happens:</p>



<ol class="part wp-block-list"><li class="" data-startline="62" data-endline="62">BinderHub resolves the link to the repository.</li><li class="" data-startline="63" data-endline="63">BinderHub determines whether a Docker image already exists for the repository at the latest reference (git commit hash, branch, or tag).</li><li class="" data-startline="64" data-endline="67">If the image doesn’t exist, BinderHub creates a build pod that uses repo2docker to:
<ul>
<li class="" data-startline="65" data-endline="65">Fetch the repository associated with the link.</li>
<li class="" data-startline="66" data-endline="66">Build a Docker container image containing the environment specified in configuration files in the repository.</li>
<li class="" data-startline="67" data-endline="67">Push that image to a Docker registry, and send the registry information to the BinderHub for future reference.</li>
</ul>
</li><li class="" data-startline="68" data-endline="68">BinderHub sends the Docker image registry to JupyterHub.</li><li class="" data-startline="69" data-endline="69">JupyterHub creates a Kubernetes pod for the user that serves the built Docker image for the repository.</li><li class="" data-startline="70" data-endline="71">JupyterHub monitors the user’s pod for activity, and destroys it after a short period of inactivity.</li></ol>
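

<p>The build step in the chain above can also be reproduced locally with <code>repo2docker</code>, the same tool BinderHub runs in its build pods. A minimal sketch (it requires Python, pip and a local Docker daemon; the repository URL is just an example):</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash"># Install repo2docker
pip install jupyter-repo2docker

# Build a Docker image from a Git repository and
# launch a Jupyter notebook server inside it
jupyter-repo2docker https://github.com/ovh/prescience-client</code></pre>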



<h3 class="wp-block-heading">A diagram of the BinderHub architecture</h3>



<div class="wp-block-image"><figure class="aligncenter"><img loading="lazy" decoding="async" width="1024" height="818" src="https://www.ovh.com/blog/wp-content/uploads/2019/06/IMG_0300-1024x818.png" alt="MyBinder Architecture" class="wp-image-15663" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/06/IMG_0300-1024x818.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2019/06/IMG_0300-300x240.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/06/IMG_0300-768x614.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/06/IMG_0300-1200x959.png 1200w, https://blog.ovhcloud.com/wp-content/uploads/2019/06/IMG_0300.png 1468w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure></div>



<h2 class="part wp-block-heading">How we deployed it</h2>



<h3 class="wp-block-heading">Powered by OVH Kubernetes</h3>



<p class="part">One great thing about the <strong>Binder</strong> project is that it is completely cloud agnostic: you just need a <strong>Kubernetes</strong> cluster to deploy it on.</p>



<p class="part">Kubernetes is one of the best choices for scalability in a micro-services architecture. The managed Kubernetes solution is powered by OVH’s Public Cloud instances. With OVH Load Balancers and integrated additional disks, you can host all types of workloads, with total reversibility.</p>



<p class="part">To this end, we used two services in the OVH Public Cloud:</p>



<ul class="part wp-block-list"><li class="" data-startline="85" data-endline="85">A <a href="https://www.ovh.co.uk/public-cloud/kubernetes/" target="_blank" rel="noopener noreferrer nofollow external" data-wpel-link="external">Kubernetes Cluster</a> today consuming 6 nodes of <code>C2-15</code> VM instances (it will grow in the future)</li><li class="" data-startline="86" data-endline="87">A <a href="https://labs.ovh.com/private-registry" target="_blank" rel="noopener noreferrer" data-wpel-link="exclude">Docker Registry</a></li></ul>



<p class="part">We also ordered a specific domain name so that our binder stack could be publicly accessible from anywhere.</p>



<h3 class="part wp-block-heading" id="Installation-of-HELM-on-our-new-cluster">Installation of Helm on our new cluster</h3>



<p class="part">Once the automatic installation of our Kubernetes cluster was complete, we downloaded the administration YAML file (the <code>kubeconfig</code>), allowing us to manage our cluster and run <code>kubectl</code> commands on it.</p>



<p class="part"><code>kubectl</code> is the official and most popular tool for administering a Kubernetes cluster. More information about how to install it can be found <a rel="noopener noreferrer nofollow external" href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" target="_blank" data-wpel-link="external">here</a>.</p>
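

<p>As an illustration, once the YAML file is downloaded, pointing <code>kubectl</code> at the cluster is just a matter of exporting its path (the file name below is an example):</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash"># Use the administration file downloaded from the OVH control panel
export KUBECONFIG=$HOME/kubeconfig.yml

# Check that the cluster answers and that its nodes are Ready
kubectl get nodes</code></pre>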



<p class="part">The automatic deployment of the full Binder stack is already prepared in the form of a Helm package. Helm is a package manager for Kubernetes; it needs a client part (<code>helm</code>) and a server part (<code>tiller</code>) to work.</p>



<p class="part">All information about installing <code>helm</code> and <code>tiller</code> can be found <a href="https://helm.sh/docs/using_helm/#installing-helm" target="_blank" rel="noopener noreferrer nofollow external" data-wpel-link="external">here</a>.</p>
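

<p>At the time of writing this meant a Helm 2 setup. As a sketch, the usual installation gives <code>tiller</code> a dedicated service account with cluster-admin rights before initialising it in the cluster:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash"># Create a service account for tiller with cluster-admin rights
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller

# Install tiller in the cluster and verify the installation
helm init --service-account tiller
helm version</code></pre>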



<h3 class="part wp-block-heading" id="Configuration-of-our-HELM-deployment">Configuration of our Helm deployment</h3>



<p class="part">With <code>tiller</code> installed on our cluster, everything was ready to automate the deployment of Binder on our OVH infrastructure.</p>



<p class="part">The configuration of the <code>helm</code> deployment is pretty straightforward and all the steps have been described by the Binder team <a rel="noopener noreferrer nofollow external" href="https://binderhub.readthedocs.io/en/latest/setup-binderhub.html" target="_blank" data-wpel-link="external">here</a>.</p>
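

<p>To give an idea of its shape, here is a heavily simplified sketch of such a deployment (the chart lives in the JupyterHub Helm repository; <code>secret.yaml</code> and <code>config.yaml</code> are the files you write by following the guide linked above):</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash"># Add the JupyterHub Helm repository, which hosts the BinderHub chart
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart
helm repo update

# Deploy BinderHub with the prepared configuration files:
# secret.yaml holds the registry credentials and tokens,
# config.yaml the BinderHub settings (registry, image prefix...)
helm install jupyterhub/binderhub --name binder \
    --namespace binder -f secret.yaml -f config.yaml</code></pre>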



<h3 class="part wp-block-heading" id="Integration-into-the-binderhub-CDCI-process">Integration into the binderhub CD/CI process</h3>



<p class="part">The <strong>Binder</strong> team already had an existing Travis workflow automating their test and deployment processes. Everything is transparent, and they expose all their configuration (except secrets) on <a rel="noopener noreferrer nofollow external" href="https://github.com/jupyterhub/mybinder.org-deploy" target="_blank" data-wpel-link="external">their GitHub project</a>. We just had to integrate with their current workflow and push our specific configuration to their repository.</p>



<p class="part">We then waited for the next run of their Travis workflow, and it worked.</p>



<p class="part">From this moment onward, the OVH stack for Binder was running and accessible to anyone, from anywhere, at this address: <a href="https://ovh.mybinder.org/" target="_blank" rel="noopener noreferrer nofollow external" data-wpel-link="external">https://ovh.mybinder.org/</a>.</p>



<h2 class="part wp-block-heading" id="What-comes-next">What comes next?</h2>



<p class="part"><strong>OVH</strong> will continue engaging with the data open-source community, and keep building a strong relationship with the <strong>Jupyter</strong> foundation and, more generally, the Python community.</p>



<p class="part">This first collaborative experience with such a data-driven open-source organisation helped us to establish the best team organisation and management to ensure that both <strong>OVH</strong> and the community achieve their goals in the best way possible.</p>



<p class="part">Working with open source is very different from the industry as it requires a different mindset: very human-centric, where everyone has different objectives, priorities, timeline and points of view that should all be considered.</p>



<h2 class="part wp-block-heading" id="Special-Thanks">Special Thanks</h2>



<p>We are grateful to the Binder, Jupyter and QuantStack teams for their help, to the OVH K8s team for the OVH Managed Kubernetes and OVH Managed Private Registry, and to the OVH MLS team for their support. You rock, people!</p>
<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fmybinder-and-ovh-partnership%2F&amp;action_name=MyBinder%20and%20OVH%20partnership&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Deploying a FaaS platform on OVH Managed Kubernetes using OpenFaaS</title>
		<link>https://blog.ovhcloud.com/deploying-a-faas-platform-on-ovh-managed-kubernetes-using-openfaas/</link>
		
		<dc:creator><![CDATA[Horacio Gonzalez]]></dc:creator>
		<pubDate>Fri, 24 May 2019 16:40:47 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[FaaS]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[OpenFaaS]]></category>
		<category><![CDATA[OVHcloud Managed Kubernetes]]></category>
		<category><![CDATA[OVHcloud Platform]]></category>
		<guid isPermaLink="false">https://blog.ovh.com/fr/blog/?p=15487</guid>

					<description><![CDATA[Several weeks ago, I was taking part in a meetup about Kubernetes, when one of the attendees made a remark that resonated deeply with me&#8230; Hey, Horacio, that Kubernetes thing is rather cool, but what I would have loved to see is a Functions-as-a-Service platform. Most of my apps could be easily done with a [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fdeploying-a-faas-platform-on-ovh-managed-kubernetes-using-openfaas%2F&amp;action_name=Deploying%20a%20FaaS%20platform%20on%20OVH%20Managed%20Kubernetes%20using%20OpenFaaS&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<p>Several weeks ago, I was taking part in a meetup about Kubernetes, when one of the attendees made a remark that resonated deeply with me&#8230;</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p style="text-align: left;"><em>Hey, Horacio, that Kubernetes thing is rather cool, but what I would have loved to see is a Functions-as-a-Service platform. Most of my apps could be easily done with a database and several serverless functions!</em></p></blockquote>



<p>It wasn&#8217;t the first time I&#8217;d got that question&#8230;</p>



<p>Being, above all, a web developer, I can definitely relate. Kubernetes is a wonderful product – you can install complicated web architectures with a click –&nbsp;but what about the <em>database + some functions</em> model?</p>



<p>Well, you can also do it with Kubernetes!</p>



<p>That&#8217;s the beauty of the rich Kubernetes ecosystem: you can find projects to address many different use cases, from <a href="https://www.ovh.com/fr/blog/deploying-game-servers-with-agones-on-ovh-managed-kubernetes/" data-wpel-link="exclude">game servers with Agones</a> to FaaS platforms&#8230;</p>



<h3 class="wp-block-heading">There is a Helm chart for that!</h3>



<p>Saying <em>&#8220;You can do it with Kubernetes!&#8221;</em> is almost the new &#8220;<em>There is an app for that!&#8221;</em>, but it doesn&#8217;t help a lot of people who are looking for solutions. As the question had come up several times, we decided to prepare a small tutorial on how to deploy and use a FaaS platform on OVH Managed Kubernetes.</p>



<p>We began by testing several FaaS platforms on our Kubernetes. Our objective was to find a solution that was:</p>



<ul class="wp-block-list"><li>Easy to deploy (ideally with a simple <a href="https://github.com/helm/helm" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Helm chart</a>)</li><li>Manageable with both a UI and a CLI, because different customers have different needs</li><li>Auto-scalable, in both the upscaling and downscaling senses</li><li>Supported by comprehensive documentation</li></ul>



<p>We tested lots of platforms, like <a href="https://kubeless.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Kubeless</a>, <a href="https://github.com/apache/incubator-openwhisk" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OpenWhisk</a>, <a href="https://github.com/openfaas/faas" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OpenFaaS</a> and <a href="https://github.com/fission/fission" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Fission</a>, and I must say that all of them performed quite well.&nbsp;In the end though, the one that scored the best in terms of our objectives was OpenFaaS, so we decided to use it as the reference for this blog post.</p>



<h3 class="wp-block-heading">OpenFaaS –&nbsp;a Kubernetes-native FaaS platform</h3>



<div class="wp-block-image"><figure class="aligncenter"><img loading="lazy" decoding="async" width="745" height="167" src="https://www.ovh.com/blog/wp-content/uploads/2019/05/CAA4B336-0797-4587-B92D-6F83A5C7197B.png" alt="OpenFaaS" class="wp-image-15505" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/05/CAA4B336-0797-4587-B92D-6F83A5C7197B.png 745w, https://blog.ovhcloud.com/wp-content/uploads/2019/05/CAA4B336-0797-4587-B92D-6F83A5C7197B-300x67.png 300w" sizes="auto, (max-width: 745px) 100vw, 745px" /></figure></div>



<p><a href="https://github.com/openfaas/faas" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OpenFaaS</a> is an open-source framework for building serverless functions with Docker and Kubernetes. The project is already mature, popular and active, with more than 14k stars on GitHub, hundreds of contributors, and lots of users (both corporate and private).</p>



<p>OpenFaaS is very simple to deploy, using a Helm chart (including an operator for CRDs, i.e. <code>kubectl get functions</code>). It has both a CLI and a UI, manages auto-scaling effectively, and its documentation is really comprehensive (with a Slack channel to discuss it, as a nice bonus!).</p>



<p>Technically, OpenFaaS is composed of several functional blocks:</p>



<ul class="wp-block-list"><li>The <em>Function Watchdog.</em>&nbsp;A tiny golang HTTP server that transforms any Docker image into a serverless function</li><li>The <em>API Gateway</em>, which provides&nbsp;an external route into functions and collects metrics</li><li>The <em>UI Portal</em>, which creates and invokes functions</li><li>The <em>CLI</em> (essentially a REST client for the <em>API Gateway</em>), which can deploy any container as a function</li></ul>



<p>Functions can be written in many languages (although I mainly used JavaScript, Go and Python for testing purposes), using handy templates or a simple Dockerfile.</p>



<div class="wp-block-image"><figure class="aligncenter"><img loading="lazy" decoding="async" width="1024" height="665" src="https://www.ovh.com/blog/wp-content/uploads/2019/05/F39BD4F4-2C54-4F5F-B6F4-2D59E634B798-1024x665.png" alt="OpenFaaS Architecture" class="wp-image-15508" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/05/F39BD4F4-2C54-4F5F-B6F4-2D59E634B798-1024x665.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2019/05/F39BD4F4-2C54-4F5F-B6F4-2D59E634B798-300x195.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/05/F39BD4F4-2C54-4F5F-B6F4-2D59E634B798-768x499.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/05/F39BD4F4-2C54-4F5F-B6F4-2D59E634B798.png 1118w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure></div>



<h3 class="wp-block-heading">Deploying OpenFaaS on OVH Managed Kubernetes</h3>



<p>There are several ways to install OpenFaaS on a Kubernetes cluster. In this post we&#8217;re looking at the easiest one: installing with <a href="https://helm.sh/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Helm</a>.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p style="text-align: left;">If you need information on how to install and use Helm on your OVH Managed Kubernetes cluster, you can follow <a href="https://docs.ovh.com/gb/en/kubernetes/installing-helm/" data-wpel-link="exclude">our tutorial</a>.</p></blockquote>



<p>The official Helm chart for OpenFaas is <a href="https://github.com/openfaas/faas-netes/blob/master/chart/openfaas" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">available on the faas-netes repository</a>.</p>



<h3 class="wp-block-heading">Adding the OpenFaaS Helm chart</h3>



<p>The OpenFaaS Helm chart isn&#8217;t available in Helm&#8217;s standard <code>stable</code> repository, so you&#8217;ll need to add their repository to your Helm installation:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">helm repo add openfaas https://openfaas.github.io/faas-netes/
helm repo update</code></pre>



<h3 class="wp-block-heading">Creating the namespaces</h3>



<p>OpenFaaS guidelines recommend creating two <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">namespaces</a>, one for OpenFaaS core services and one for the functions:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml</code></pre>



<h3 class="wp-block-heading">Generating secrets</h3>



<p>A FaaS platform that&#8217;s open to the internet seems like a bad idea. That&#8217;s why we are generating secrets, to enable authentication on the gateway:</p>



<pre class="wp-block-code language-bash"><code lang="bash" class="language-bash"># generate a random password
PASSWORD=$(head -c 12 /dev/urandom | shasum | cut -d' ' -f1)

kubectl -n openfaas create secret generic basic-auth \
    --from-literal=basic-auth-user=admin \
    --from-literal=basic-auth-password="$PASSWORD"</code></pre>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p style="text-align: left;"><strong>Note:</strong> you will need this password later in the tutorial (to access the UI portal, for example). You can view it at any point in the terminal session with&nbsp;<code>echo $PASSWORD</code>.</p></blockquote>



<h3 class="wp-block-heading">Deploying the Helm chart</h3>



<p>The Helm chart can be deployed in three modes: <code>LoadBalancer</code>, <code>NodePort</code> and <code>Ingress</code>. For our purposes, the simplest way is to use our external Load Balancer, so we will deploy in <code>LoadBalancer</code> mode, with the <code>--set serviceType=LoadBalancer</code> option.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p style="text-align: left;">If you want to better understand the difference between these three modes, you can read our <a href="https://www.ovh.com/fr/blog/getting-external-traffic-into-kubernetes-clusterip-nodeport-loadbalancer-and-ingress/" data-wpel-link="exclude">Getting external traffic into Kubernetes – ClusterIp, NodePort, LoadBalancer, and Ingress</a> blog post.</p></blockquote>



<p>Deploy the Helm chart as follows:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">helm upgrade openfaas --install openfaas/openfaas \
    --namespace openfaas  \
    --set basic_auth=true \
    --set functionNamespace=openfaas-fn \
    --set serviceType=LoadBalancer</code></pre>



<p>As suggested in the install message, you can verify that OpenFaaS has started by running:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">kubectl --namespace=openfaas get deployments -l "release=openfaas, app=openfaas"</code></pre>



<p>If it&#8217;s working, you should see a list of the available OpenFaaS <code>deployment</code> objects:</p>



<pre class="wp-block-code console"><code class="">$ kubectl --namespace=openfaas get deployments -l "release=openfaas, app=openfaas"
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
alertmanager   1         1         1            1           33s
faas-idler     1         1         1            1           33s
gateway        1         1         1            1           33s
nats           1         1         1            1           33s
prometheus     1         1         1            1           33s
queue-worker   1         1         1            1           33s
</code></pre>



<h3 class="wp-block-heading">Install the FaaS CLI and log in to the API Gateway</h3>



<p>The easiest way to interact with your new OpenFaaS platform is by installing <code>faas-cli</code>, the command line client for OpenFaaS on a Linux or Mac (or in a WSL linux terminal in Windows):</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">curl -sL https://cli.openfaas.com | sh</code></pre>



<p>You can now use the CLI to log in to the gateway. The CLI will need the public URL of the OpenFaaS <code>LoadBalancer</code>, which you can get via <code>kubectl</code>:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">kubectl get svc -n openfaas gateway-external -o wide</code></pre>



<p>Export the URL to an&nbsp;<code>OPENFAAS_URL</code> variable:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">export OPENFAAS_URL=[THE_URL_OF_YOUR_LOADBALANCER]:[THE_EXTERNAL_PORT]</code></pre>
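

<p>If you prefer not to copy the URL by hand, it can be extracted directly from the service with a <code>jsonpath</code> query. A sketch (the OVH Load Balancer exposes a hostname; on other providers the field may be <code>ip</code> instead):</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash"># Extract the Load Balancer hostname of the gateway and append the port
export OPENFAAS_URL=$(kubectl get svc -n openfaas gateway-external \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'):8080
echo $OPENFAAS_URL</code></pre>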



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p style="text-align: left;"><strong>Note:</strong> you will need this URL later in the tutorial, for example to access the UI portal. You can view it at any point in the terminal session with <code>echo $OPENFAAS_URL</code>.</p></blockquote>



<p>And connect to the gateway:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">echo -n $PASSWORD | ./faas-cli login -g $OPENFAAS_URL -u admin --password-stdin</code></pre>



<p>Now you&#8217;re connected to the gateway, and you can send commands to the OpenFaaS platform.</p>



<p>By default, there is no function installed on your OpenFaaS platform, as you can verify with the <code>faas-cli list</code> command.</p>



<p>In my own deployment (URLs and IPs changed for this example), the preceding operations gave:</p>



<pre class="wp-block-code console"><code class="">$ kubectl get svc -n openfaas gateway-external -o wide
 NAME               TYPE           CLUSTER-IP    EXTERNAL-IP                        PORT(S)          AGE     SELECTOR
 gateway-external   LoadBalancer   10.3.xxx.yyy   xxxrt657xx.lb.c1.gra.k8s.ovh.net   8080:30012/TCP   9m10s   app=gateway

 $ export OPENFAAS_URL=xxxrt657xx.lb.c1.gra.k8s.ovh.net:8080

 $ echo -n $PASSWORD | ./faas-cli login -g $OPENFAAS_URL -u admin --password-stdin
 Calling the OpenFaaS server to validate the credentials...
 WARNING! Communication is not secure, please consider using HTTPS. Letsencrypt.org offers free SSL/TLS certificates.
 credentials saved for admin http://xxxrt657xx.lb.c1.gra.k8s.ovh.net:8080

$ ./faas-cli version
  ___                   _____           ____
 / _ \ _ __   ___ _ __ |  ___|_ _  __ _/ ___|
| | | | '_ \ / _ \ '_ \| |_ / _` |/ _` \___ \
| |_| | |_) |  __/ | | |  _| (_| | (_| |___) |
 \___/| .__/ \___|_| |_|_|  \__,_|\__,_|____/
      |_|
CLI:
 commit:  b42d0703b6136cac7b0d06fa2b212c468b0cff92
 version: 0.8.11
Gateway
 uri:     http://xxxrt657xx.lb.c1.gra.k8s.ovh.net:8080
 version: 0.13.0
 sha:     fa93655d90d1518b04e7cfca7d7548d7d133a34e
 commit:  Update test for metrics server
Provider
 name:          faas-netes
 orchestration: kubernetes
 version:       0.7.5 
 sha:           4d3671bae8993cf3fde2da9845818a668a009617

$ ./faas-cli list
Function                          Invocations     Replicas</code></pre>



<h3 class="wp-block-heading">Deploying and invoking functions</h3>



<p>You can easily deploy functions on your OpenFaaS platform using the CLI, with this command:&nbsp;<code>faas-cli up</code>.</p>



<p>Let&#8217;s try out&nbsp;<a href="https://raw.githubusercontent.com/openfaas/faas/master/stack.yml" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">some sample functions</a> from the OpenFaaS repository:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">./faas-cli deploy -f https://raw.githubusercontent.com/openfaas/faas/master/stack.yml</code></pre>



<p>Running a <code>faas-cli list</code> command now will show the deployed functions:</p>



<pre class="wp-block-code console"><code class="">$ ./faas-cli list
Function                          Invocations     Replicas
base64                            0               1    
echoit                            0               1    
hubstats                          0               1    
markdown                          0               1    
nodeinfo                          0               1    
wordcount                         0               1    
</code></pre>



<p>As an example, let&#8217;s invoke&nbsp;<code>wordcount</code> (a function that takes the syntax of the unix <a href="https://en.wikipedia.org/wiki/Wc_(Unix)" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer"><code>wc</code></a> command, giving us the number of lines, words and characters of the input data):</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">echo 'I love when a plan comes together' | ./faas-cli invoke wordcount</code></pre>



<pre class="wp-block-code console"><code class="">
$ echo 'I love when a plan comes together' | ./faas-cli invoke wordcount
       1         7        34
</code></pre>
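

<p>Since <code>wordcount</code> mimics the <code>wc</code> syntax, you can sanity-check the result locally: <code>echo</code> appends a trailing newline, so the input counts as 1 line, 7 words and 34 characters (<code>awk</code> is only used here to normalise the spacing of the <code>wc</code> output):</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">OUT=$(echo 'I love when a plan comes together' | wc | awk '{print $1, $2, $3}')
echo "$OUT"   # 1 7 34</code></pre>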



<h3 class="wp-block-heading">Invoking a function without the CLI</h3>



<p>You can use the <code>faas-cli describe</code> command to get the public URL of your function, and then call it directly with your favorite HTTP library (or the good old <code>curl</code>):</p>



<pre class="wp-block-code console"><code class="">$ ./faas-cli describe wordcount
Name:                wordcount
Status:              Ready
Replicas:            1
Available replicas:  1
Invocations:         1
Image:               functions/alpine:latest
Function process:    
URL:                 http://xxxxx657xx.lb.c1.gra.k8s.ovh.net:8080/function/wordcount
Async URL:           http://xxxxx657xx.lb.c1.gra.k8s.ovh.net:8080/async-function/wordcount
Labels:              faas_function : wordcount
Annotations:         prometheus.io.scrape : false

$ curl -X POST --data-binary "I love when a plan comes together" "http://xxxxx657xx.lb.c1.gra.k8s.ovh.net:8080/function/wordcount"
       0         7        33
</code></pre>
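

<p>Note the slightly different result: <code>curl --data-binary</code> sends the payload without a trailing newline, so <code>wc</code> counts 0 lines and 33 characters instead of the 1 and 34 we got through the CLI. You can reproduce the difference locally with <code>printf</code>, which adds no newline:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">OUT=$(printf 'I love when a plan comes together' | wc | awk '{print $1, $2, $3}')
echo "$OUT"   # 0 7 33</code></pre>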



<h3 class="wp-block-heading">Containers everywhere&#8230;</h3>



<p>The most attractive part of a FaaS platform is being able to deploy your own functions.<br>In OpenFaaS, you can write these functions in many languages, not just the usual suspects (JavaScript, Python, Go, etc.). This is because in OpenFaaS you can deploy basically any container as a function, although it also means you need to package your functions as containers in order to deploy them.</p>



<p>That also means that in order to create your own functions, you need to have <a href="https://www.docker.com/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Docker</a> installed on your workstation, and you will need to push the images to a Docker registry (either the official one or a private one).</p>



<p>If you need a private registry, you can <a href="https://docs.docker.com/registry/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">install one</a> on your OVH Managed Kubernetes cluster. For this tutorial we are choosing to deploy our image on the official Docker registry.</p>



<h2 class="wp-block-heading">Writing our first function</h2>



<p>For our first example, we are going to create and deploy a <em>hello world</em> function in JavaScript, using <a href="https://nodejs.org/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">NodeJS</a>. Let&#8217;s begin by creating and scaffolding the function folder:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">mkdir hello-js-project
cd hello-js-project
../faas-cli new hello-js --lang node</code></pre>



<p>The CLI will download a JS function template from the OpenFaaS repository, generate a function description file (<code>hello-js.yml</code> in this case) and a folder for the function source code (<code>hello-js</code>). For NodeJS, this folder will contain a <code>package.json</code> (to declare any dependencies your function may have) and a <code>handler.js</code> (the function&#8217;s main code).</p>
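

<p>To visualise the layout, here is the scaffold recreated with plain <code>mkdir</code> and <code>touch</code> (the file names come from the paragraph above; the real template includes a few extra files, so treat this as a minimal sketch):</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">mkdir -p hello-js-project/hello-js
touch hello-js-project/hello-js.yml \
      hello-js-project/hello-js/handler.js \
      hello-js-project/hello-js/package.json
find hello-js-project -type f | sort
# hello-js-project/hello-js.yml
# hello-js-project/hello-js/handler.js
# hello-js-project/hello-js/package.json</code></pre>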



<p>Edit <code>hello-js.yml</code> to set the name of the image you want to upload to the Docker registry:</p>



<pre title="hello-js.yaml" class="wp-block-code"><code lang="yaml" class="language-yaml">version: 1.0
provider:
  name: openfaas
  gateway: http://6d6rt657vc.lb.c1.gra.k8s.ovh.net:8080
functions:
  hello-js:
    lang: node
    handler: ./hello-js
    image: ovhplatform/openfaas-hello-js:latest</code></pre>



<p>The function described in the <code>handler.js</code> file is really simple. It exports a function with two parameters: a <code>context</code> where you will receive the request data, and a <code>callback</code> that you will call at the end of your function and where you will pass the response data.</p>



<pre title="handler.js" class="wp-block-code"><code lang="javascript" class="language-javascript">"use strict"

module.exports = (context, callback) => {
    callback(undefined, {status: "done"});
}</code></pre>



<p>Let&#8217;s edit it to send back our <em>hello world</em> message:</p>



<pre title="handler.js" class="wp-block-code"><code lang="javascript" class="language-javascript">"use strict"

module.exports = (context, callback) => {
    callback(undefined, {message: 'Hello world'});
}</code></pre>



<p>Now you can build the Docker image and push it to the public Docker registry:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash"># Build the image
../faas-cli build -f hello-js.yml
# Login at Docker Registry, needed to push the image
docker login     
# Push the image to the registry
../faas-cli push -f hello-js.yml</code></pre>



<p>With the image in the registry, let&#8217;s deploy and invoke the function with the OpenFaaS CLI (the <code>faas-cli up</code> command mentioned earlier bundles these build, push and deploy steps into one):</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash"># Deploy the function
../faas-cli deploy -f hello-js.yml
# Invoke the function
../faas-cli invoke hello-js</code></pre>



<p>Congratulations! You have just written and deployed your first OpenFaaS function.</p>



<h3 class="wp-block-heading">Using the OpenFaaS UI Portal</h3>



<p>You can test the UI Portal by pointing your browser to your OpenFaaS gateway URL (the one you set in the <code>$OPENFAAS_URL</code> variable), and entering the <code>admin</code>&nbsp;user and the password you set in the <code>$PASSWORD</code> variable when prompted.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="963" height="579" src="https://www.ovh.com/blog/wp-content/uploads/2019/05/ui-portal-01.jpg" alt="OpenFaaS UI Portal" class="wp-image-15495" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/05/ui-portal-01.jpg 963w, https://blog.ovhcloud.com/wp-content/uploads/2019/05/ui-portal-01-300x180.jpg 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/05/ui-portal-01-768x462.jpg 768w" sizes="auto, (max-width: 963px) 100vw, 963px" /></figure>



<p>In the UI Portal, you will find the list of the deployed functions. For each function, you can find its description, invoke it and see the result.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="828" height="768" src="https://www.ovh.com/blog/wp-content/uploads/2019/05/ui-portal-02.jpg" alt="OpenFaaS UI Portal" class="wp-image-15496" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/05/ui-portal-02.jpg 828w, https://blog.ovhcloud.com/wp-content/uploads/2019/05/ui-portal-02-300x278.jpg 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/05/ui-portal-02-768x712.jpg 768w" sizes="auto, (max-width: 828px) 100vw, 828px" /></figure>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="832" height="899" src="https://www.ovh.com/blog/wp-content/uploads/2019/05/ui-portal-03.jpg" alt="OpenFaaS UI Portal" class="wp-image-15497" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/05/ui-portal-03.jpg 832w, https://blog.ovhcloud.com/wp-content/uploads/2019/05/ui-portal-03-278x300.jpg 278w, https://blog.ovhcloud.com/wp-content/uploads/2019/05/ui-portal-03-768x830.jpg 768w" sizes="auto, (max-width: 832px) 100vw, 832px" /></figure>



<h3 class="wp-block-heading">Where do we go from here?</h3>



<p>So you now have a working OpenFaaS platform on your OVH Managed Kubernetes cluster.</p>



<p>To learn more about OpenFaaS, and how you can get the most out of it, please refer to the <a href="https://docs.openfaas.com/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">official OpenFaaS documentation</a>. You can also follow the <a href="https://github.com/openfaas/workshop" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OpenFaaS workshops</a>&nbsp;for more practical tips and advice.</p>
<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fdeploying-a-faas-platform-on-ovh-managed-kubernetes-using-openfaas%2F&amp;action_name=Deploying%20a%20FaaS%20platform%20on%20OVH%20Managed%20Kubernetes%20using%20OpenFaaS&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Deploying game servers with Agones on OVH Managed Kubernetes</title>
		<link>https://blog.ovhcloud.com/deploying-game-servers-with-agones-on-ovh-managed-kubernetes/</link>
		
		<dc:creator><![CDATA[Horacio Gonzalez]]></dc:creator>
		<pubDate>Fri, 12 Apr 2019 10:01:44 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Agones]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[OVHcloud Managed Kubernetes]]></category>
		<category><![CDATA[OVHcloud Platform]]></category>
		<guid isPermaLink="false">https://blog.ovh.com/fr/blog/?p=15322</guid>

					<description><![CDATA[One of the key advantages of using Kubernetes is the formidable ecosystem around it. From Rancher to Istio, from Rook to Fission, from gVisor to KubeDB, the Kubernetes ecosystem is rich, vibrant and ever-growing. We are getting to the point where for most deployment needs we can say there is a K8s-based open-source project for [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fdeploying-game-servers-with-agones-on-ovh-managed-kubernetes%2F&amp;action_name=Deploying%20game%20servers%20with%20Agones%20on%20OVH%20Managed%20Kubernetes&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<p>One of the key advantages of using Kubernetes is the formidable ecosystem around it. From <a href="http://rancher.com/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Rancher</a> to <a href="https://istio.io/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Istio</a>, from <a href="https://rook.io/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Rook</a> to <a href="https://fission.io/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Fission</a>, from <a href="https://gvisor.dev/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">gVisor</a> to <a href="https://kubedb.com/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">KubeDB</a>, the Kubernetes ecosystem is rich, vibrant and ever-growing. We are getting to the point where for most deployment needs we can say <em>there is a K8s-based open-source project for that</em>.</p>



<p>One of the latest additions to this ecosystem is the <a href="https://agones.dev" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Agones</a> project, an open-source, multiplayer, dedicated game-server hosting solution built on Kubernetes, developed by Google in collaboration with <a href="https://www.ubisoft.com/en-us/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Ubisoft</a>. The project was <a href="https://cloud.google.com/blog/products/gcp/introducing-agones-open-source-multiplayer-dedicated-game-server-hosting-built-on-kubernetes" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">announced in March</a>, and has already made quite a bit of noise&#8230;</p>



<p>In the OVH Platform Team we are fans of both online gaming and Kubernetes, so we told ourselves that we needed to test Agones. And what better way to test it than deploying it on our <a href="https://www.ovh.com/fr/kubernetes/" rel="nofollow" data-wpel-link="exclude">OVH Managed Kubernetes</a> service, installing a <a href="http://www.xonotic.org/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Xonotic</a> game server cluster and playing some old-school deathmatches with colleagues?</p>



<div class="wp-block-image"><figure class="aligncenter is-resized"><img loading="lazy" decoding="async" src="/blog/wp-content/uploads/2019/04/DD101A52-234E-460C-8B52-B723DE785563.jpeg" alt="Agones on OVH Managed Kubernetes" width="599" height="301"/></figure></div>



<p>And of course, we needed to write about it to share the experience&#8230;</p>



<h3 class="wp-block-heading">Why Agones?</h3>



<p>Agones (<a href="https://www.merriam-webster.com/dictionary/agones" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">derived from the Greek word <em>agōn</em></a>, contests held during public festivals or more generally &#8220;contest&#8221; or &#8220;competition at games&#8221;) aims to replace the usual proprietary solutions to deploy, scale and manage game servers.</p>



<p>Agones enriches Kubernetes with a <a href="https://kubernetes.io/docs/concepts/api-extension/custom-resources/#custom-controllers" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Custom Controller</a> and a <a href="https://kubernetes.io/docs/concepts/api-extension/custom-resources/#customresourcedefinitions" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Custom Resource Definition</a>. With them, you can standardise Kubernetes tooling and APIs to create, scale and manage game server clusters.</p>



<h4 class="wp-block-heading">Wait, what game servers are you talking about?</h4>



<p>Well, Agones&#8217;s main focus is online multiplayer games such as <a href="https://en.wikipedia.org/wiki/First-person_shooter" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">FPS</a>s and <a href="https://en.wikipedia.org/wiki/Multiplayer_online_battle_arena" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">MOBA</a>s, fast-paced games requiring dedicated, low-latency game servers that synchronize the state of the game between players and serve as a source of truth for gaming situations.</p>



<p>These kinds of games ask for relatively ephemeral dedicated gaming servers, with every match running on a server instance. The servers need to be stateful (they must keep the game status), with the state usually held in memory for the duration of the match.</p>



<p>Latency is a key concern, as the competitive, real-time aspects of these games demand quick responses from the server. That means the connection from the player&#8217;s device to the game server should be as direct as possible, ideally bypassing any intermediate server, such as a load-balancer.</p>



<h4 class="wp-block-heading">And how do you connect the players to the right server?</h4>



<p>Every game publisher used to have their own proprietary solutions, but most of them follow a similar flow: a matchmaking service groups players into a match, asks a cluster manager to provision a dedicated game server instance, and sends its IP address and port to the players, allowing them to connect directly to the server and play the game.</p>



<div class="wp-block-image"><figure class="aligncenter is-resized"><img loading="lazy" decoding="async" src="/blog/wp-content/uploads/2019/04/1779D5AB-BF4B-4588-99E7-0BC6A888AE33.jpeg" alt="Online gaming matchmaking and game server asignation" width="407" height="232"/></figure></div>



<p>Agones, with its Custom Controller and Custom Resource Definition, replaces the complex cluster management infrastructure with standardised, Kubernetes-based tooling and APIs. The matchmaker services interact with these APIs to spawn new game server pods and send their IP addresses and ports to the concerned players.</p>



<div class="wp-block-image"><figure class="aligncenter is-resized"><img loading="lazy" decoding="async" src="/blog/wp-content/uploads/2019/04/3D4C3CDD-5938-4CD8-89AE-8A97D7BF540F.jpeg" alt="Online gaming matchmaking and game server asignation with " width="449" height="397"/></figure></div>



<h4 class="wp-block-heading">The cherry on the cake</h4>



<p>Using Kubernetes for these tasks also brings some nice additional bonuses, like being able to deploy the full gaming infrastructure in a development environment (or even in a <a href="https://github.com/kubernetes/minikube" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">minikube</a>), or to easily clone it to deploy in a new data center or cloud region, while also offering a whole platform to host all the additional services needed to build a game: account management, leaderboards, inventory&#8230;</p>



<p>And of course, there is the simplicity of operating Kubernetes-based platforms, especially when they are dynamic, heterogeneous and distributed, as most online gaming platforms are.</p>



<h3 class="wp-block-heading">Deploying Agones on OVH Managed Kubernetes</h3>



<p>There are several ways to install Agones in a Kubernetes cluster. For our test we chose the easiest one: installing with <a href="https://helm.sh/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Helm</a>.</p>



<h4 class="wp-block-heading">Enabling creation of RBAC resources</h4>



<p>The first step in installing Agones is to set up a service account with enough permissions to create some special RBAC resource types.</p>



<pre class="wp-block-code"><code class="">kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin --serviceaccount=kube-system:default</code></pre>



<p>Now we have the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Cluster Role Binding</a> needed for the installation.</p>



<h4 class="wp-block-heading">Installing the Agones chart</h4>



<p>Now let&#8217;s continue by adding the Agones repository to Helm&#8217;s repository list.</p>



<pre class="wp-block-code"><code class="">helm repo add agones https://agones.dev/chart/stable</code></pre>



<p>And then installing the stable Agones chart:</p>



<pre class="wp-block-code"><code class="">helm install --name my-agones --namespace agones-system agones/agones</code></pre>



<p>The installation we have just done isn&#8217;t suited for production, as the <a href="https://agones.dev/site/docs/installation/helm/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">official install instructions</a> recommend running Agones and the game servers in separate, dedicated pools of nodes. But for the needs of our test, the basic setup is enough.</p>



<h3 class="wp-block-heading">Confirming Agones started successfully</h3>



<p>To verify that Agones is running on our Kubernetes cluster, we can look at the pods in the <code>agones-system</code> namespace:</p>



<pre class="wp-block-code"><code class="">kubectl get --namespace agones-system pods</code></pre>



<p>If everything is ok, you should see an <code>agones-controller</code> pod with a <em>Running</em> status:</p>



<pre class="wp-block-code console"><code class="">$ kubectl get --namespace agones-system pods
NAME                                 READY   STATUS    RESTARTS   AGE
agones-controller-5f766fc567-xf4vv   1/1     Running   0          5d15h
agones-ping-889c5954d-6kfj4          1/1     Running   0          5d15h
agones-ping-889c5954d-mtp4g          1/1     Running   0          5d15h
</code></pre>



<p>You can also see more details using:</p>



<pre class="wp-block-code"><code class="">kubectl describe --namespace agones-system pods</code></pre>



<p>Looking at the <code>agones-controller</code> description, you should see something like:</p>



<pre class="wp-block-code console"><code class="">$ kubectl describe --namespace agones-system pods
Name:               agones-controller-5f766fc567-xf4vv
Namespace:          agones-system
[...]
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
</code></pre>



<p>Where all the <code>Conditions</code> should have status <code>True</code>.</p>



<h3 class="wp-block-heading">Deploying a game server</h3>



<p>The Agones <em>Hello world</em> is rather boring, a simple <a href="https://github.com/GoogleCloudPlatform/agones/tree/release-0.9.0/examples/simple-udp" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">UDP echo server</a>, so we decided to skip it and go directly to something more interesting: a <a href="https://github.com/GoogleCloudPlatform/agones/blob/release-0.9.0/examples/xonotic" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Xonotic game server</a>.</p>



<p><a href="https://www.xonotic.org/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">Xonotic</a> is an open-source multi-player FPS, and a rather good one, with lots of interesting game modes, maps, weapons and customization options.</p>



<p>Deploying a Xonotic game server over Agones is rather easy:</p>



<pre class="wp-block-code"><code class="">kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/agones/release-0.9.0/examples/xonotic/gameserver.yaml</code></pre>



<p>The game server deployment can take some moments, so we need to wait until its status is <code>Ready</code> before using it. We can fetch the status with:</p>



<pre class="wp-block-code"><code class="">kubectl get gameserver</code></pre>



<p>We wait until the fetch gives a <code>Ready</code> status on our game server:</p>



<pre class="wp-block-code console"><code class="">$ kubectl get gameserver
NAME      STATE   ADDRESS         PORT   NODE       AGE
xonotic   Ready   51.83.xxx.yyy   7094   node-zzz   5d
</code></pre>
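

<p>Rather than re-running the command by hand, the wait can be scripted as a small polling loop. On a live cluster the state would come from <code>kubectl get gameserver xonotic -o jsonpath='{.status.state}'</code>; here a stub stands in for it so the loop logic can be exercised anywhere (a sketch, not official Agones tooling):</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash"># get_state is a stub simulating a server that becomes Ready on the 3rd poll;
# on a real cluster, replace its body with the kubectl jsonpath query above
polls=0
state=""
get_state() {
  polls=$((polls + 1))
  if [ "$polls" -ge 3 ]; then state="Ready"; else state="Scheduled"; fi
}
until [ "$state" = "Ready" ]; do
  get_state
  # sleep 5   # uncomment when polling a real cluster
done
echo "GameServer Ready after $polls polls"</code></pre>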



<p>When the game server is ready, we also get the address and the port we should use to connect to our deathmatch game (in my example, <code>51.83.xxx.yyy:7094</code>).</p>
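

<p>The connection string can also be extracted programmatically, for example to feed a matchmaker. Here is a sketch that parses the table output with <code>awk</code> (the sample line is a stand-in for the live command; on a real cluster, <code>kubectl get gameserver xonotic -o jsonpath='{.status.address}:{.status.ports[0].port}'</code> should give the same result, assuming Agones 0.9 field names):</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash"># Stand-in for: kubectl get gameserver | grep xonotic
LINE='xonotic   Ready   51.83.xxx.yyy   7094   node-zzz   5d'
CONNECT=$(echo "$LINE" | awk '{print $3 ":" $4}')
echo "$CONNECT"   # 51.83.xxx.yyy:7094</code></pre>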



<h3 class="wp-block-heading">It&#8217;s frag time</h3>



<p>So now that we have a server, let&#8217;s test it!</p>



<p>We downloaded the Xonotic client for our computers (it runs on Windows, Linux and macOS, so there is no excuse), and launched it:</p>



<div class="wp-block-image"><figure class="aligncenter is-resized"><img loading="lazy" decoding="async" src="https://www.ovh.com/blog/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-13-1024x576.png" alt="xonotic" class="wp-image-15335" width="768" height="432" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-13-1024x576.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-13-300x169.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-13-768x432.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-13-1200x675.png 1200w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-13.png 1920w" sizes="auto, (max-width: 768px) 100vw, 768px" /></figure></div>



<p>Then we go to the <em>Multiplayer</em> menu and enter the address and port of our game server:</p>



<div class="wp-block-image"><figure class="aligncenter is-resized"><img loading="lazy" decoding="async" src="https://www.ovh.com/blog/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-41-1024x576.png" alt="" class="wp-image-15336" width="768" height="432" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-41-1024x576.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-41-300x169.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-41-768x432.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-41-1200x675.png 1200w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-28-41.png 1920w" sizes="auto, (max-width: 768px) 100vw, 768px" /></figure></div>



<p>And we are ready to play!</p>



<div class="wp-block-image"><figure class="aligncenter is-resized"><img loading="lazy" decoding="async" src="https://www.ovh.com/blog/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-35-36-1024x576.png" alt="" class="wp-image-15337" width="768" height="432" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-35-36-1024x576.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-35-36-300x169.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-35-36-768x432.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-35-36-1200x675.png 1200w, https://blog.ovhcloud.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-10-02-35-36.png 1920w" sizes="auto, (max-width: 768px) 100vw, 768px" /></figure></div>



<h4 class="wp-block-heading">And on the server side?</h4>



<p>On the server side, we can spy how things are going for our game server, using <code>kubectl logs</code>. Let&#8217;s begin by finding the pod running the game:</p>



<pre class="wp-block-code"><code class="">kubectl get pods</code></pre>



<p>We see that our game server is running in a pod called <code>xonotic</code>:</p>



<pre class="wp-block-code console"><code class="">$ kubectl get pods 
NAME      READY   STATUS    RESTARTS   AGE
xonotic   2/2     Running   0          5d15h
</code></pre>



<p>We can then use <code>kubectl logs</code> on it. The pod contains two containers, the main <code>xonotic</code> one and an Agones <em>sidecar</em>, so we must specify that we want the logs of the <code>xonotic</code> container:</p>



<pre class="wp-block-code console"><code class="">$ kubectl logs xonotic
Error from server (BadRequest): a container name must be specified for pod xonotic, choose one of: [xonotic agones-gameserver-sidecar]
$ kubectl logs xonotic xonotic
>>> Connecting to Agones with the SDK
>>> Starting health checking
>>> Starting wrapper for Xonotic!
>>> Path to Xonotic server script: /home/xonotic/Xonotic/server_linux.sh 
Game is Xonotic using base gamedir data
gamename for server filtering: Xonotic
Xonotic Linux 22:03:50 Mar 31 2017 - release
Current nice level is below the soft limit - cannot use niceness
Skeletal animation uses SSE code path
execing quake.rc
[...]
Authenticated connection to 109.190.xxx.yyy:42475 has been established: client is v6xt9/GlzxBH+xViJCiSf4E/SCn3Kx47aY3EJ+HOmZo=@Xon//Ks, I am /EpGZ8F@~Xon//Ks
LostInBrittany is connecting...
url_fclose: failure in crypto_uri_postbuf
Receiving player stats failed: -1
LostInBrittany connected
LostInBrittany connected
LostInBrittany is now spectating
[BOT]Eureka connected
[BOT]Hellfire connected
[BOT]Lion connected
[BOT]Scorcher connected
unconnected changed name to [BOT]Eureka
unconnected changed name to [BOT]Hellfire
unconnected changed name to [BOT]Lion
unconnected changed name to [BOT]Scorcher
[BOT]Scorcher picked up Strength
[BOT]Scorcher drew first blood! 
[BOT]Hellfire was gunned down by [BOT]Scorcher's Shotgun
[BOT]Scorcher slapped [BOT]Lion around a bit with a large Shotgun
[BOT]Scorcher was gunned down by [BOT]Eureka's Shotgun, ending their 2 frag spree
[BOT]Scorcher slapped [BOT]Lion around a bit with a large Shotgun
[BOT]Scorcher was shot to death by [BOT]Eureka's Blaster
[BOT]Hellfire slapped [BOT]Eureka around a bit with a large Shotgun, ending their 2 frag spree
[BOT]Eureka slapped [BOT]Scorcher around a bit with a large Shotgun
[BOT]Eureka was gunned down by [BOT]Hellfire's Shotgun
[BOT]Hellfire was shot to death by [BOT]Lion's Blaster, ending their 2 frag spree
[BOT]Scorcher was cooked by [BOT]Lion
[BOT]Eureka turned into hot slag
[...]
</code></pre>



<h4 class="wp-block-heading">Add some friends&#8230;</h4>



<p>The next step is the most enjoyable: asking our colleagues to connect to the server and having a true deathmatch, just like in the Quake 2 days.</p>



<h3 class="wp-block-heading">And now?</h3>



<p>We have a working game server, but we have barely uncovered the possibilities of Agones: deploying a <a href="https://agones.dev/site/docs/reference/fleet/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">fleet</a> (a set of warm GameServers that are available to be allocated from), testing the <a href="https://agones.dev/site/docs/reference/fleetautoscaler/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">FleetAutoscaler</a> (to automatically scale a Fleet up and down in response to demand), or building a dummy <a href="https://agones.dev/site/docs/tutorials/allocator-service-go/" rel="nofollow external noopener noreferrer" data-wpel-link="external" target="_blank">allocator service</a>. In future blog posts we will dive deeper into it, and explore those possibilities.</p>



<p>And in a wider context, we are going to continue our exploratory journey with Agones. The project is still very young, in early alpha, but it already shows some impressive potential.</p>
<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fdeploying-game-servers-with-agones-on-ovh-managed-kubernetes%2F&amp;action_name=Deploying%20game%20servers%20with%20Agones%20on%20OVH%20Managed%20Kubernetes&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>How to monitor your Kubernetes Cluster with OVH Observability</title>
		<link>https://blog.ovhcloud.com/how-to-monitor-your-kubernetes-cluster-with-ovh-observability/</link>
		
		<dc:creator><![CDATA[Adrien Carreira]]></dc:creator>
		<pubDate>Fri, 08 Mar 2019 13:33:55 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Beamium]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[Metrics]]></category>
		<category><![CDATA[Noderig]]></category>
		<category><![CDATA[Observability]]></category>
		<category><![CDATA[OVHcloud Managed Kubernetes]]></category>
		<category><![CDATA[OVHcloud Observability]]></category>
		<guid isPermaLink="false">https://blog.ovh.com/fr/blog/?p=14897</guid>

					<description><![CDATA[Our colleagues in the K8S team launched the OVH Managed Kubernetes solution&#160;last week,&#160;in which they manage the Kubernetes master components and spawn your nodes on top of our Public Cloud solution. I will not describe the details of how it works here, but there are already many blog posts about it (here&#160;and&#160;here, to get you [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fhow-to-monitor-your-kubernetes-cluster-with-ovh-observability%2F&amp;action_name=How%20to%20monitor%20your%20Kubernetes%20Cluster%20with%20OVH%20Observability&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<p class="graf graf--p">Our colleagues in the K8S team launched the OVH Managed Kubernetes solution&nbsp;<a class="markup--anchor markup--p-anchor" href="https://www.ovh.com/fr/kubernetes/" target="_blank" rel="noopener noreferrer" data-href="https://www.ovh.com/fr/kubernetes/" data-wpel-link="exclude">last week,</a>&nbsp;in which they manage the Kubernetes master components and spawn your nodes on top of our Public Cloud solution. I will not describe the details of how it works here, but there are already many blog posts about it (<a class="markup--anchor markup--p-anchor" href="https://www.ovh.com/fr/blog/kubinception-and-etcd/" target="_blank" rel="noopener noreferrer" data-href="https://www.ovh.com/fr/blog/kubinception-and-etcd/" data-wpel-link="exclude">here</a>&nbsp;and&nbsp;<a class="markup--anchor markup--p-anchor" href="https://www.ovh.com/fr/blog/kubinception-using-kubernetes-to-run-kubernetes/" target="_blank" rel="noopener noreferrer" data-href="https://www.ovh.com/fr/blog/kubinception-using-kubernetes-to-run-kubernetes/" data-wpel-link="exclude">here,</a> to get you started).</p>



<p>In the <a href="https://labs.ovh.com/machine-learning-platform" data-wpel-link="exclude">Prescience team</a>, we have used Kubernetes for more than a year now. Our cluster includes 40 nodes, running on top of PCI. We continuously run about 800 pods, and generate a lot of metrics as a result.</p>



<p>Today, we&#8217;ll look at how we handle these metrics to monitor our Kubernetes Cluster, and (equally importantly!) how to do this with your own cluster.</p>



<h3 class="graf graf--h3 wp-block-heading">OVH Metrics</h3>



<p class="graf graf--p">Like any infrastructure, you need to monitor your Kubernetes Cluster. You need to know exactly how your nodes, cluster and applications behave once they have been deployed in order to provide reliable services to your customers. To do this with our own Cluster, we use <a href="https://www.ovh.com/fr/data-platforms/metrics/" data-wpel-link="exclude">OVH Observability</a>.</p>



<p class="graf graf--p">OVH Observability is backend-agnostic, so we can push metrics in one format and query them in another. It can handle:</p>



<ul class="postList wp-block-list"><li class="graf graf--li">Graphite</li><li class="graf graf--li">InfluxDB</li><li class="graf graf--li">Metrics2.0</li><li class="graf graf--li">OpenTSDB</li><li class="graf graf--li">Prometheus</li><li class="graf graf--li">Warp10</li></ul>



<p class="graf graf--p">It also incorporates a managed <a class="markup--anchor markup--p-anchor" href="https://grafana.metrics.ovh.net" target="_blank" rel="noopener noreferrer nofollow external" data-href="https://grafana.metrics.ovh.net" data-wpel-link="external">Grafana</a>, in order to display metrics and create monitoring dashboards.</p>



<h3 class="graf graf--h3 wp-block-heading">Monitoring Nodes</h3>



<p class="graf graf--p">The first thing to monitor is the health of nodes. Everything else starts from there.</p>



<p class="graf graf--p">In order to monitor your nodes, we will use <a class="markup--anchor markup--p-anchor" href="https://github.com/ovh/noderig" target="_blank" rel="noopener noreferrer nofollow external" data-href="https://github.com/ovh/noderig" data-wpel-link="external">Noderig</a> and <a class="markup--anchor markup--p-anchor" href="https://github.com/ovh/beamium" target="_blank" rel="noopener noreferrer nofollow external" data-href="https://github.com/ovh/beamium" data-wpel-link="external">Beamium</a>, as described <a href="/monitoring-guidelines-for-ovh-observability/" data-wpel-link="internal">here</a>. We will also use Kubernetes DaemonSets to start the process on all our nodes.</p>



<div class="wp-block-image"><figure class="aligncenter is-resized"><img loading="lazy" decoding="async" src="https://www.ovh.com/blog/wp-content/uploads/2019/03/IMG_0135-1024x770.jpg" alt="" class="wp-image-15024" width="768" height="578" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/03/IMG_0135-1024x770.jpg 1024w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/IMG_0135-300x226.jpg 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/IMG_0135-768x578.jpg 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/IMG_0135-1200x903.jpg 1200w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/IMG_0135.jpg 2039w" sizes="auto, (max-width: 768px) 100vw, 768px" /></figure></div>



<p class="graf graf--p">So let’s start creating a namespace&#8230;</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">kubectl create namespace metrics</code></pre>



<p class="graf graf--p">Next, create a secret containing your Metrics write token, which you can find in the OVH Control Panel.</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">kubectl create secret generic w10-credentials --from-literal=METRICS_TOKEN=your-token -n metrics</code></pre>



<p class="graf graf--p">Then copy the following manifest into a file named <code class="markup--code markup--p-code">metrics.yml</code>&#8230;</p>



<pre title="metrics.yml" class="wp-block-code"><code lang="yaml" class="language-yaml"># This will configure Beamium to scrape Noderig
# and push the metrics to Warp 10
# We also add the HOSTNAME to the labels of the metrics pushed
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: beamium-config
  namespace: metrics
data:
  config.yaml: |
    scrapers:
      noderig:
        url: http://0.0.0.0:9100/metrics
        period: 30000
        format: sensision
        labels:
          app: noderig

    sinks:
      warp:
        url: https://warp10.gra1.metrics.ovh.net/api/v0/update
        token: $METRICS_TOKEN

    labels:
      host: $HOSTNAME

    parameters:
      log-file: /dev/stdout
---
# This is a custom collector that reports the uptime of the node
apiVersion: v1
kind: ConfigMap
metadata:
  name: noderig-collector
  namespace: metrics
data:
  uptime.sh: |
    #!/bin/sh
    echo 'os.uptime' `date +%s%N | cut -b1-10` `awk '{print $1}' /proc/uptime`
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: metrics-daemon
  namespace: metrics
spec:
  selector:
    matchLabels:
      name: metrics-daemon
  template:
    metadata:
      labels:
        name: metrics-daemon
    spec:
      terminationGracePeriodSeconds: 10
      hostNetwork: true
      volumes:
      - name: config
        configMap:
          name: beamium-config
      - name: noderig-collector
        configMap:
          name: noderig-collector
          defaultMode: 0777
      - name: beamium-persistence
        emptyDir: {}
      containers:
      - image: ovhcom/beamium:latest
        imagePullPolicy: Always
        name: beamium
        env:
        - name: HOSTNAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: TEMPLATE_CONFIG
          value: /config/config.yaml
        envFrom:
        - secretRef:
            name: w10-credentials
            optional: false
        resources:
          limits:
            cpu: "0.05"
            memory: 128Mi
          requests:
            cpu: "0.01"
            memory: 128Mi
        workingDir: /beamium
        volumeMounts:
        - mountPath: /config
          name: config
        - mountPath: /beamium
          name: beamium-persistence
      - image: ovhcom/noderig:latest
        imagePullPolicy: Always
        name: noderig
        args: ["-c", "/collectors", "--net", "3"]
        volumeMounts:
        - mountPath: /collectors/60/uptime.sh
          name: noderig-collector
          subPath: uptime.sh
        resources:
          limits:
            cpu: "0.05"
            memory: 128Mi
          requests:
            cpu: "0.01"
            memory: 128Mi</code></pre>
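


<p>If you want to check what the custom collector will emit, you can run its command locally on any Linux machine (it only needs <code class="markup--code markup--p-code">date</code>, <code class="markup--code markup--p-code">cut</code> and <code class="markup--code markup--p-code">awk</code>): it prints a Sensision-style line with the class name, a 10-digit epoch timestamp, and the node uptime in seconds.</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash"># Same command as in the uptime.sh collector:
# prints e.g. "os.uptime 1552050000 12345.67"
echo 'os.uptime' `date +%s%N | cut -b1-10` `awk '{print $1}' /proc/uptime`</code></pre>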



<p class="graf graf--p"><em class="markup--em markup--p-em">Don’t hesitate to change the collector levels if you need more information.</em></p>



<p>Then apply the configuration with kubectl&#8230;</p>



<pre class="wp-block-code console"><code class="">$ kubectl apply -f metrics.yml
# Then, just wait a minute for the pods to start
$ kubectl get all -n metrics
NAME                       READY   STATUS    RESTARTS   AGE
pod/metrics-daemon-2j6jh   2/2     Running   0          5m15s
pod/metrics-daemon-t6frh   2/2     Running   0          5m14s

NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/metrics-daemon    40        40        40      40           40          &lt;none>          122d</code></pre>



<p class="graf graf--p">You can import our dashboard into your Grafana from <a class="markup--anchor markup--p-anchor" href="https://grafana.com/dashboards/9876" target="_blank" rel="noopener noreferrer nofollow external" data-href="https://grafana.com/dashboards/9876" data-wpel-link="external">here</a>, and view some metrics about your nodes straight away.</p>



<div class="wp-block-image"><figure class="aligncenter"><img loading="lazy" decoding="async" width="1842" height="631" src="/blog/wp-content/uploads/2019/03/Screen-Shot-2019-03-05-at-15.09.08.png" alt="" class="wp-image-14899" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/03/Screen-Shot-2019-03-05-at-15.09.08.png 1842w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/Screen-Shot-2019-03-05-at-15.09.08-300x103.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/Screen-Shot-2019-03-05-at-15.09.08-768x263.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/Screen-Shot-2019-03-05-at-15.09.08-1024x351.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/Screen-Shot-2019-03-05-at-15.09.08-1200x411.png 1200w" sizes="auto, (max-width: 1842px) 100vw, 1842px" /></figure></div>



<h3 class="graf graf--h3 wp-block-heading">Kube Metrics</h3>



<p>As the OVH Kube is a managed service, you don&#8217;t need to monitor the apiserver, etcd, or control plane: the OVH Kubernetes team takes care of this. So we will focus on <a href="https://github.com/google/cadvisor/blob/master/info/v1/container.go" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">cAdvisor</a> metrics and <a href="https://github.com/kubernetes/kube-state-metrics" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Kube state metrics</a>.</p>



<p>The most mature solution for dynamically scraping metrics inside the Kube (for now) is <a href="https://github.com/prometheus/prometheus" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Prometheus</a>.</p>



<div class="wp-block-image"><figure class="aligncenter is-resized"><img loading="lazy" decoding="async" src="https://www.ovh.com/blog/wp-content/uploads/2019/03/IMG_0144-1024x770.jpg" alt="Kube metrics" class="wp-image-15033" width="512" height="385" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/03/IMG_0144-1024x770.jpg 1024w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/IMG_0144-300x226.jpg 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/IMG_0144-768x578.jpg 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/IMG_0144-1200x903.jpg 1200w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/IMG_0144.jpg 2039w" sizes="auto, (max-width: 512px) 100vw, 512px" /></figure></div>



<p class="graf graf--p"><em class="markup--em markup--p-em">In the next Beamium release, we should be able to reproduce the features of the Prometheus scraper.</em></p>



<p class="graf graf--p">To install the Prometheus server, you need to install Helm on the cluster&#8230;</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
    --clusterrole cluster-admin \
    --serviceaccount=kube-system:tiller
helm init --service-account tiller</code></pre>



<p class="graf graf--p">You then need to create the following two files:&nbsp;<code class="markup--code markup--p-code">prometheus.yml</code> and <code class="markup--code markup--p-code">values.yml</code>.</p>



<pre title="prometheus.yml" class="wp-block-code"><code lang="yaml" class="language-yaml"># Based on https://github.com/prometheus/prometheus/blob/release-2.2/documentation/examples/prometheus-kubernetes.yml
serverFiles:
  prometheus.yml:
    remote_write:
    - url: "https://prometheus.gra1.metrics.ovh.net/remote_write"
      remote_timeout: 120s
      bearer_token: $TOKEN
      write_relabel_configs:
      # Filter metrics to keep
      - action: keep
        source_labels: [__name__]
        regex: "eagle.*|\
            kube_node_info.*|\
            kube_node_spec_taint.*|\
            container_start_time_seconds|\
            container_last_seen|\
            container_cpu_usage_seconds_total|\
            container_fs_io_time_seconds_total|\
            container_fs_write_seconds_total|\
            container_fs_usage_bytes|\
            container_fs_limit_bytes|\
            container_memory_working_set_bytes|\
            container_memory_rss|\
            container_memory_usage_bytes|\
            container_network_receive_bytes_total|\
            container_network_transmit_bytes_total|\
            machine_memory_bytes|\
            machine_cpu_cores"

    scrape_configs:
    # Scrape config for Kubelet cAdvisor.
    - job_name: 'kubernetes-cadvisor'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      
      relabel_configs:
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
        
      metric_relabel_configs:
      # Only keep important systemd services like docker|containerd|kubelet, and kubepods
      # We also want machine_cpu_cores, which has no id, so we concatenate the metric name in order to match it
      # The resulting string concatenates id and name, separated by a ;
      # `/;container_cpu_usage_seconds_total` OK
      # `/system.slice;container_cpu_usage_seconds_total` OK
      # `/system.slice/minion.service;container_cpu_usage_seconds_total` NOK, Useless
      # `/kubepods/besteffort/e2514ad43202;container_cpu_usage_seconds_total` Best Effort POD OK
      # `/kubepods/burstable/e2514ad43202;container_cpu_usage_seconds_total` Burstable POD OK
      # `/kubepods/e2514ad43202;container_cpu_usage_seconds_total` Guaranteed POD OK
      # `/docker/pod104329ff;container_cpu_usage_seconds_total` OK, Container that run on docker but not managed by kube
      # `;machine_cpu_cores` OK, there is no id on these metrics, but we want to keep them also
      - source_labels: [id,__name__]
        regex: "^((/(system.slice(/(docker|containerd|kubelet).service)?|(kubepods|docker).*)?);.*|;(machine_cpu_cores|machine_memory_bytes))$"
        action: keep
      # Remove Useless parents keys like `/kubepods/burstable` or `/docker`
      - source_labels: [id]
        regex: "(/kubepods/burstable|/kubepods/besteffort|/kubepods|/docker)"
        action: drop
        # cAdvisor gives metrics per container, and sometimes sums them up per pod
        # As we already have the children, we will sum them up ourselves, so we drop the POD metrics and keep the container metrics
        # Metrics for the POD don't have container_name, so we drop entries that only have pod_name
      - source_labels: [container_name,pod_name]
        regex: ";(.+)"
        action: drop
    
    # Scrape config for service endpoints.
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name

    # Example scrape config for pods
    #
    # The relabeling allows the actual pod scrape endpoint to be configured via the
    # following annotations:
    #
    # * `prometheus.io/scrape`: Only scrape pods that have a value of `true`
    # * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
    # * `prometheus.io/port`: Scrape the pod on the indicated port instead of the
    # pod's declared ports (default is a port-free target if none are declared).
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod

      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod_name
      - source_labels: [__meta_kubernetes_pod_node_name]
        action: replace
        target_label: host
      - action: labeldrop
        regex: (pod_template_generation|job|release|controller_revision_hash|workload_user_cattle_io_workloadselector|pod_template_hash)
</code></pre>



<pre title="values.yml" class="wp-block-code"><code lang="yaml" class="language-yaml">alertmanager:
  enabled: false
pushgateway:
  enabled: false
nodeExporter:
  enabled: false
server:
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: traefik
      ingress.kubernetes.io/auth-type: basic
      ingress.kubernetes.io/auth-secret: basic-auth
    hosts:
    - prometheus.domain.com
  image:
    tag: v2.7.1
  persistentVolume:
    enabled: false
</code></pre>



<p class="graf graf--p">Don’t forget to replace your token!</p>



<p>The Prometheus scraper is quite powerful. You can relabel your time series, keep a few that match your regex, etc. This config removes a lot of useless metrics, so don’t hesitate to tweak it if you want to see more cAdvisor metrics (for example).</p>
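


<p>These keep/drop rules are easy to get wrong, so it can be worth dry-running a relabelling regex with <code class="markup--code markup--p-code">grep -E</code> before deploying. (Prometheus uses RE2, but this particular pattern behaves the same under POSIX ERE.) Using the <code class="markup--code markup--p-code">id;__name__</code> examples from the comments in the cAdvisor job:</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash"># The keep regex from the cAdvisor metric_relabel_configs
re='^((/(system.slice(/(docker|containerd|kubelet).service)?|(kubepods|docker).*)?);.*|;(machine_cpu_cores|machine_memory_bytes))$'

# These ids are kept (grep prints the matching line):
echo '/system.slice;container_cpu_usage_seconds_total' | grep -E "$re"
echo ';machine_cpu_cores' | grep -E "$re"

# This one is dropped (no output):
echo '/system.slice/minion.service;container_cpu_usage_seconds_total' | grep -E "$re" || true</code></pre>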



<p class="graf graf--p">&nbsp;Install it with Helm&#8230;</p>



<pre class="wp-block-code"><code lang="bash" class="language-bash">helm install stable/prometheus \
    --namespace=metrics \
    --name=metrics \
    --values=values.yml \
    --values=prometheus.yml</code></pre>



<p class="graf graf--p">Then add a basic-auth secret&#8230;</p>



<pre class="wp-block-code console"><code class="">$ htpasswd -c auth foo
New password: &lt;bar>
Re-type new password: &lt;bar>
Adding password for user foo
$ kubectl create secret generic basic-auth --from-file=auth -n metrics
secret "basic-auth" created</code></pre>



<p class="graf graf--p">You can access the Prometheus server interface through <code class="markup--code markup--p-code">prometheus.domain.com</code>.</p>



<div class="wp-block-image"><figure class="aligncenter"><img loading="lazy" decoding="async" width="1876" height="809" src="/blog/wp-content/uploads/2019/03/Screen-Shot-2019-03-06-at-10.01.21.png" alt="" class="wp-image-14933" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/03/Screen-Shot-2019-03-06-at-10.01.21.png 1876w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/Screen-Shot-2019-03-06-at-10.01.21-300x129.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/Screen-Shot-2019-03-06-at-10.01.21-768x331.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/Screen-Shot-2019-03-06-at-10.01.21-1024x442.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/Screen-Shot-2019-03-06-at-10.01.21-1200x517.png 1200w" sizes="auto, (max-width: 1876px) 100vw, 1876px" /></figure></div>



<p class="graf graf--p">You will see all the metrics for your Cluster, although only the ones you have filtered will be pushed to OVH Metrics.</p>



<p>The Prometheus interface is a good way to explore your metrics, as it&#8217;s quite straightforward to display them and monitor your infrastructure. You can find our dashboard <a class="markup--anchor markup--p-anchor" href="https://grafana.com/dashboards/9880" target="_blank" rel="noopener noreferrer nofollow external" data-href="https://grafana.com/dashboards/9880" data-wpel-link="external">here.</a></p>



<div class="wp-block-image"><figure class="aligncenter"><img loading="lazy" decoding="async" width="1843" height="653" src="/blog/wp-content/uploads/2019/03/Screen-Shot-2019-03-05-at-16.07.20.png" alt="" class="wp-image-14900" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/03/Screen-Shot-2019-03-05-at-16.07.20.png 1843w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/Screen-Shot-2019-03-05-at-16.07.20-300x106.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/Screen-Shot-2019-03-05-at-16.07.20-768x272.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/Screen-Shot-2019-03-05-at-16.07.20-1024x363.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/Screen-Shot-2019-03-05-at-16.07.20-1200x425.png 1200w" sizes="auto, (max-width: 1843px) 100vw, 1843px" /></figure></div>



<h3 class="graf graf--h3 wp-block-heading">Resources Metrics</h3>



<p class="graf graf--p">As @<a class="markup--user markup--p-user" href="https://medium.com/u/7dfbd8de8b55" target="_blank" rel="noopener noreferrer nofollow external" data-href="https://medium.com/u/7dfbd8de8b55" data-anchor-type="2" data-user-id="7dfbd8de8b55" data-action-value="7dfbd8de8b55" data-action="show-user-card" data-action-type="hover" data-wpel-link="external">Martin Schneppenheim</a> said in this <a class="markup--anchor markup--p-anchor" href="https://medium.com/@martin.schneppenheim/utilizing-and-monitoring-kubernetes-cluster-resources-more-effectively-using-this-tool-df4c68ec2053" target="_blank" rel="noopener noreferrer nofollow external" data-href="https://medium.com/@martin.schneppenheim/utilizing-and-monitoring-kubernetes-cluster-resources-more-effectively-using-this-tool-df4c68ec2053" data-wpel-link="external">post</a>, in order to correctly manage a Kubernetes Cluster, you also need to monitor pod resources.</p>



<p>We will install <a class="markup--anchor markup--p-anchor" href="https://github.com/google-cloud-tools/kube-eagle" target="_blank" rel="noopener noreferrer nofollow external" data-href="https://github.com/google-cloud-tools/kube-eagle" data-wpel-link="external">Kube Eagle</a>, which will fetch and expose some metrics about CPU and RAM requests and limits, so they can be fetched by the Prometheus server you just installed.</p>



<div class="wp-block-image"><figure class="aligncenter is-resized"><img loading="lazy" decoding="async" src="https://www.ovh.com/blog/wp-content/uploads/2019/03/IMG_0145-1024x443.jpg" alt="Kube Eagle" class="wp-image-15035" width="512" height="222" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/03/IMG_0145-1024x443.jpg 1024w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/IMG_0145-300x130.jpg 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/IMG_0145-768x333.jpg 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/IMG_0145-1200x520.jpg 1200w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/IMG_0145.jpg 2039w" sizes="auto, (max-width: 512px) 100vw, 512px" /></figure></div>



<p>Create a file named <code class="markup--code markup--p-code">eagle.yml</code>.</p>



<pre title="eagle.yml" class="wp-block-code"><code lang="yaml" class="language-yaml">apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  labels:
    app: kube-eagle
  name: kube-eagle
  namespace: kube-eagle
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - pods
  verbs:
  - get
  - list
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  labels:
    app: kube-eagle
  name: kube-eagle
  namespace: kube-eagle
subjects:
- kind: ServiceAccount
  name: kube-eagle
  namespace: kube-eagle
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-eagle
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: kube-eagle
  labels:
    app: kube-eagle
  name: kube-eagle
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: kube-eagle
  name: kube-eagle
  labels:
    app: kube-eagle
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-eagle
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/metrics"
      labels:
        app: kube-eagle
    spec:
      serviceAccountName: kube-eagle
      containers:
      - name: kube-eagle
        image: "quay.io/google-cloud-tools/kube-eagle:1.0.0"
        imagePullPolicy: IfNotPresent
        env:
        - name: PORT
          value: "8080"
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: http
        readinessProbe:
          httpGet:
            path: /health
            port: http
</code></pre>



<pre class="wp-block-code console"><code class="">$ kubectl create namespace kube-eagle
$ kubectl apply -f eagle.yml</code></pre>



<p class="graf graf--p">Next, import this <a class="markup--anchor markup--p-anchor" href="https://grafana.com/dashboards/9875/revisions" target="_blank" rel="noopener noreferrer nofollow external" data-href="https://grafana.com/dashboards/9875/revisions" data-wpel-link="external">Grafana dashboard</a> (it’s the same dashboard as the Kube Eagle one, but ported to Warp10).</p>



<div class="wp-block-image"><figure class="aligncenter"><img loading="lazy" decoding="async" width="1838" height="784" src="/blog/wp-content/uploads/2019/03/Screen-Shot-2019-03-05-at-15.06.50.png" alt="" class="wp-image-14901" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/03/Screen-Shot-2019-03-05-at-15.06.50.png 1838w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/Screen-Shot-2019-03-05-at-15.06.50-300x128.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/Screen-Shot-2019-03-05-at-15.06.50-768x328.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/Screen-Shot-2019-03-05-at-15.06.50-1024x437.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/Screen-Shot-2019-03-05-at-15.06.50-1200x512.png 1200w" sizes="auto, (max-width: 1838px) 100vw, 1838px" /></figure></div>



<p class="graf graf--p">You now have an easy way of monitoring your pod resources in the Cluster!</p>



<h3 class="graf graf--h3 wp-block-heading">Custom Metrics</h3>



<p>How does Prometheus know that it needs to scrape kube-eagle? If you look at the metadata in <code class="markup--code markup--p-code">eagle.yml</code>, you&#8217;ll see the following:</p>



<pre class="wp-block-code"><code lang="yaml" class="language-yaml">annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "8080" # The port where to find the metrics
  prometheus.io/path: "/metrics" # The path where to find the metrics</code></pre>



<p>These annotations will trigger the Prometheus auto-discovery process (described in <code class="markup--code markup--p-code">prometheus.yml</code> line 114).</p>



<p>This means you can easily add these annotations to pods or services that contain a Prometheus exporter, and then forward these metrics to OVH Observability. <a href="https://prometheus.io/docs/instrumenting/exporters/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">You can find a non-exhaustive list of Prometheus exporters here</a>.</p>
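


<p>For instance, a hypothetical Service fronting an exporter could be annotated as in the sketch below (the name and port are placeholders for your own exporter): Prometheus would then discover and scrape it automatically, without any change to its own configuration.</p>



<pre title="exporter-service.yml" class="wp-block-code"><code lang="yaml" class="language-yaml"># Hypothetical example: name, namespace and port are placeholders
apiVersion: v1
kind: Service
metadata:
  name: my-exporter
  namespace: metrics
  annotations:
    prometheus.io/scrape: "true"   # opt this Service into auto-discovery
    prometheus.io/port: "9113"     # the port where the exporter listens
    prometheus.io/path: "/metrics" # the path where the metrics are exposed
spec:
  selector:
    app: my-exporter
  ports:
  - port: 9113</code></pre>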



<div class="wp-block-image"><figure class="aligncenter is-resized"><img loading="lazy" decoding="async" src="https://www.ovh.com/blog/wp-content/uploads/2019/03/IMG_0141-1024x443.jpg" alt="" class="wp-image-15027" width="512" height="222" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/03/IMG_0141-1024x443.jpg 1024w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/IMG_0141-300x130.jpg 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/IMG_0141-768x333.jpg 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/IMG_0141-1200x520.jpg 1200w, https://blog.ovhcloud.com/wp-content/uploads/2019/03/IMG_0141.jpg 2039w" sizes="auto, (max-width: 512px) 100vw, 512px" /></figure></div>



<h3 class="graf graf--h3 wp-block-heading">Volumetrics Analysis</h3>



<p>As you saw in the&nbsp;<code class="markup--code markup--p-code">prometheus.yml</code>, we&#8217;ve tried to filter out a lot of useless metrics. For example, cAdvisor on a fresh cluster, with only three real production pods plus the whole kube-system and Prometheus namespaces, produces about 2,600 metrics per node. With a smart cleaning approach, you can reduce this to 126 series.</p>



<p>Here&#8217;s a table to show the approximate number of metrics you will generate, based on the number of nodes&nbsp;<strong>(N)</strong> and the number of production pods <strong>(P) </strong>you have:</p>



<figure class="wp-block-table"><table><tbody><tr><td>&nbsp;</td><td><strong>Noderig</strong></td><td><strong>cAdvisor</strong></td><td><strong>Kube State</strong></td><td><strong>Eagle</strong></td><td><strong>Total</strong></td></tr><tr><td>nodes</td><td>N * 13<sup id="cite_ref-ned_1-3" class="reference">(1)</sup></td><td>N * 2<sup id="cite_ref-ned_1-3" class="reference">(2)</sup></td><td>N * 1<sup id="cite_ref-ned_1-3" class="reference">(3)</sup></td><td>N * 8<sup id="cite_ref-ned_1-3" class="reference">(4)</sup></td><td><strong>N * 24</strong></td></tr><tr><td>system.slice</td><td>0</td><td>N * 5<sup id="cite_ref-ned_1-3" class="reference">(5)</sup> * 16<sup id="cite_ref-ned_1-3" class="reference">(6)</sup></td><td>0</td><td>0</td><td><strong>N * 80</strong></td></tr><tr><td>kube-system + kube-proxy + metrics</td><td>0</td><td>N * 5<sup id="cite_ref-ned_1-3" class="reference">(9)</sup> * 26<sup id="cite_ref-ned_1-3" class="reference">(6)</sup></td><td>0</td><td>N * 5<sup id="cite_ref-ned_1-3" class="reference">(9)</sup> * 6<sup id="cite_ref-ned_1-3" class="reference">(10)</sup></td><td><strong>N * 160</strong></td></tr><tr><td>Production Pods</td><td>0</td><td>P * 26<sup id="cite_ref-ned_1-3" class="reference">(6)</sup></td><td>0</td><td>P * 6<sup id="cite_ref-ned_1-3" class="reference">(10)</sup></td><td><strong>P * 32</strong></td></tr></tbody></table></figure>



<p>For example, if you run three nodes with 60 production pods, you will generate 264 * 3 + 32 * 60 ~= 2,700 metrics.</p>



<p><em>NB: A pod has a unique name, so if you redeploy a deployment, you will create 32 new metrics each time.</em></p>



<p><sup id="cite_ref-ned_1-3" class="reference">(1) Noderig metrics: <code class="markup--code markup--p-code">os.mem / os.cpu / os.disk.fs / os.load1 / os.net.dropped (in/out) / os.net.errs (in/out) / os.net.packets (in/out) / os.net.bytes (in/out)/ os.uptime</code></sup></p>



<p><sup id="cite_ref-ned_1-3" class="reference">(2) cAdvisor nodes metrics: <code class="markup--code markup--p-code">machine_memory_bytes / machine_cpu_cores</code></sup></p>



<p><sup id="cite_ref-ned_1-3" class="reference">(3) Kube state nodes metrics: <code class="markup--code markup--p-code">kube_node_info</code></sup></p>



<p><sup id="cite_ref-ned_1-3" class="reference">(4) Kube Eagle nodes metrics: <code class="markup--code markup--p-code">eagle_node_resource_allocatable_cpu_cores / eagle_node_resource_allocatable_memory_bytes / eagle_node_resource_limits_cpu_cores / eagle_node_resource_limits_memory_bytes / eagle_node_resource_requests_cpu_cores / eagle_node_resource_requests_memory_bytes / eagle_node_resource_usage_cpu_cores / eagle_node_resource_usage_memory_bytes</code></sup></p>



<p><sup id="cite_ref-ned_1-3" class="reference">(5) With our filters, we will monitor around five system.slices&nbsp;</sup></p>



<p><sup id="cite_ref-ned_1-3" class="reference">(6) Metrics are reported per container. A pod is a set of containers (a minimum of two: your container, plus the pause container for the network), so we can count (2 * 10) + 6 = 26 metrics per pod: 10 cAdvisor metrics for each of the two containers, plus six for the network (see below). For system.slice we will have 10 + 6 = 16, because it&#8217;s treated as a single container.</sup></p>



<p><sup id="cite_ref-ned_1-3" class="reference">(7) cAdvisor will provide these metrics for each container</sup><sup id="cite_ref-ned_1-3" class="reference">: </sup><sup id="cite_ref-ned_1-3" class="reference"><code class="markup--code markup--p-code">container_start_time_seconds / container_last_seen / container_cpu_usage_seconds_total / container_fs_io_time_seconds_total / container_fs_write_seconds_total / container_fs_usage_bytes / container_fs_limit_bytes / container_memory_working_set_bytes / container_memory_rss / container_memory_usage_bytes </code></sup></p>



<p><sup id="cite_ref-ned_1-3" class="reference">(8) cAdvisor will provide these metrics for each network interface: <code class="markup--code markup--p-code">container_network_receive_bytes_total / container_network_transmit_bytes_total</code> (one series per interface)</sup></p>



<p><sup id="cite_ref-ned_1-3" class="reference">(9) <code class="markup--code markup--p-code">kube-dns / beamium-noderig-metrics / kube-proxy / canal / metrics-server&nbsp;</code></sup></p>



<p><sup id="cite_ref-ned_1-3" class="reference">(10) Kube Eagle pods metrics: <code class="markup--code markup--p-code"> eagle_pod_container_resource_limits_cpu_cores /  eagle_pod_container_resource_limits_memory_bytes / eagle_pod_container_resource_requests_cpu_cores / eagle_pod_container_resource_requests_memory_bytes / eagle_pod_container_resource_usage_cpu_cores / eagle_pod_container_resource_usage_memory_bytes</code></sup></p>



<h3 class="graf graf--h3 wp-block-heading">Conclusion</h3>



<p class="graf graf--p">As you can see, monitoring your Kubernetes Cluster with OVH Observability is easy. You don&#8217;t need to worry about how and where to store your metrics, leaving you free to focus on leveraging your Kubernetes Cluster to handle your business workloads effectively, like we have in the Machine Learning Services Team.</p>



<p class="graf graf--p">The next step will be to add an alerting system, to notify you when your nodes are down (for example). For this, you can use the free&nbsp;<a class="markup--anchor markup--p-anchor" href="https://studio.metrics.ovh.net/" target="_blank" rel="noopener noreferrer nofollow external" data-href="https://studio.metrics.ovh.net/" data-wpel-link="external">OVH Alert Monitoring</a>&nbsp;tool.</p>



<h4 class="graf graf--h4 graf-after--p wp-block-heading" id="a936">Stay in&nbsp;touch</h4>



<p class="graf graf--p graf-after--h4 graf--trailing">For any questions, feel free to&nbsp;<a href="https://gitter.im/ovh/metrics" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">join the Observability Gitter</a>&nbsp;or <a href="https://gitter.im/ovh/kubernetes" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Kubernetes Gitter!</a><br>Follow us on Twitter: <a href="https://twitter.com/OVH" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">@OVH</a></p>
<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fhow-to-monitor-your-kubernetes-cluster-with-ovh-observability%2F&amp;action_name=How%20to%20monitor%20your%20Kubernetes%20Cluster%20with%20OVH%20Observability&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Getting external traffic into Kubernetes &#8211; ClusterIp, NodePort, LoadBalancer, and Ingress</title>
		<link>https://blog.ovhcloud.com/getting-external-traffic-into-kubernetes-clusterip-nodeport-loadbalancer-and-ingress/</link>
		
		<dc:creator><![CDATA[Horacio Gonzalez]]></dc:creator>
		<pubDate>Fri, 22 Feb 2019 15:20:44 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[OVHcloud Managed Kubernetes]]></category>
		<category><![CDATA[OVHcloud Platform]]></category>
		<guid isPermaLink="false">https://blog.ovh.com/fr/blog/?p=14674</guid>

					<description><![CDATA[For the last few months, I have been acting as Developer Advocate for the OVH Managed Kubernetes beta, following our beta testers, getting feedback, writing docs and tutorials, and generally helping to make sure the product matches our users' needs as closely as possible.

In the next few posts, I am going to tell you some stories about this beta phase. We'll be taking a look at feedback from some of our beta testers, technical insights, and some fun anecdotes about the development of this new service.



Today, we'll start with one of the most frequent questions I got during the early days of the beta: How do I route external traffic into my Kubernetes service? The question came up a lot as our customers began to explore Kubernetes, and when I tried to answer it, I realised that part of the problem was the sheer number of possible answers, and the concepts needed to understand them.<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fgetting-external-traffic-into-kubernetes-clusterip-nodeport-loadbalancer-and-ingress%2F&amp;action_name=Getting%20external%20traffic%20into%20Kubernetes%20%26%238211%3B%20ClusterIp%2C%20NodePort%2C%20LoadBalancer%2C%20and%20Ingress&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[<p>For the last few months, I have been acting as <strong>Developer Advocate</strong> for the <strong><a href="https://labs.ovh.com/kubernetes-k8s" data-wpel-link="exclude">OVH Managed Kubernetes beta</a></strong>, following our beta testers, getting feedback, writing docs and tutorials, and generally helping to make sure<strong> the product matches our users&#8217; needs</strong> as closely as possible.</p>
<p>In the next few posts, I am going to tell you some <strong>stories about this beta phase</strong>. We&#8217;ll be taking a look at feedback from some of our beta testers, technical insights, and some fun anecdotes about the development of this new service.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-medium wp-image-14708" src="/blog/wp-content/uploads/2019/02/1FEEF258-644A-481F-B324-7C05AD45B8CC-300x169.png" alt="" width="300" height="169" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/02/1FEEF258-644A-481F-B324-7C05AD45B8CC-300x169.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2019/02/1FEEF258-644A-481F-B324-7C05AD45B8CC-768x432.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2019/02/1FEEF258-644A-481F-B324-7C05AD45B8CC.png 885w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<p>Today, we&#8217;ll start with one of the most frequent questions I got during the early days of the beta: <em><strong>How do I route external traffic into my Kubernetes service?</strong> </em>The question came up a lot as our customers began to explore Kubernetes, and when I tried to answer it, I realised that part of the problem was the <strong>sheer number of</strong> <strong>possible answers</strong>, and the <strong>concepts</strong> needed to understand them.</p>
<p>Related to that question was a <strong>feature request</strong>: most users wanted a load balancing tool. As the beta phase is all about confirming the stability of the product and validating the feature set prioritisation, we were able to quickly confirm <code>LoadBalancer</code> as a key feature of our first commercial release.</p>
<p>To try to better answer the external traffic question, and to make the adoption of <code>LoadBalancer</code> easier, we wrote a tutorial and added some drawings, which got nice feedback. This helped people to understand the concepts underlying the routing of external traffic on Kubernetes.</p>
<p>This blog post is an expanded version of this tutorial. We hope that you will find it useful!</p>
<h2 id="some-concepts-clusterip-nodeport-ingress-and-loadbalancer" class="code-line" data-line="26">Some concepts:  <code>ClusterIP</code>,  <code>NodePort</code>,  <code>Ingress</code> and  <code>LoadBalancer</code></h2>
<p class="code-line" data-line="28">When you begin to use Kubernetes for real-world applications, one of the first questions to ask is how to get external traffic into your cluster. The <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">official documentation</a> offers a comprehensive (but rather dry) explanation of this topic, but here we are going to explain it in a more practical, need-to-know way.</p>
<p class="code-line" data-line="30">There are several ways to route external traffic into your cluster:</p>
<ul>
<li class="code-line" data-line="32">
<p class="code-line" data-line="32">Using Kubernetes proxy and <code>ClusterIP</code>: The default Kubernetes <code>ServiceType</code> is <code>ClusterIP</code>, which exposes the <code>Service</code> on a cluster-internal IP. To reach the <code>ClusterIP</code> from an external source, you can open a Kubernetes proxy between the external source and the cluster. This is usually only used for development.</p>
</li>
<li class="code-line" data-line="34">
<p class="code-line" data-line="34">Exposing services as <code>NodePort</code>: Declaring a <code>Service</code> as <code>NodePort</code> exposes it on each Node’s IP at a static port (referred to as the <code>NodePort</code>). You can then access the <code>Service</code> from outside the cluster by requesting <code>&lt;NodeIp&gt;:&lt;NodePort&gt;</code>. This can also be used for production, albeit with some limitations.</p>
</li>
<li class="code-line" data-line="36">
<p class="code-line" data-line="36">Exposing services as <code>LoadBalancer</code>: Declaring a <code>Service</code> as <code>LoadBalancer</code> exposes it externally, using a cloud provider’s load balancer solution. The cloud provider will provision a load balancer for the <code>Service</code>, and map it to its automatically assigned <code>NodePort</code>. This is the most widely used method in production environments.</p>
</li>
</ul>
<h3 id="using-kubernetes-proxy-and-clusterip" class="code-line" data-line="38">Using Kubernetes proxy and <code>ClusterIP</code></h3>
<p class="code-line" data-line="40">The default Kubernetes <code>ServiceType</code> is <code>ClusterIP</code>, which exposes the <code>Service</code> on a cluster-internal IP. To reach the <code>ClusterIP</code> from an external computer, you can open a Kubernetes proxy between the external computer and the cluster.</p>
<p class="code-line" data-line="42">You can use <code>kubectl</code> to create such a proxy. When the proxy is up, you&#8217;re directly connected to the cluster, and you can use the internal IP (ClusterIP) for that <code>Service</code>.</p>
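<p>As a quick sketch, assuming a <code>Service</code> named <code>my-service</code> in the <code>default</code> namespace (both names are hypothetical), it could look like this:</p>
<pre class="wp-block-code"><code lang="bash" class="language-bash"># Open a proxy between your computer and the cluster API server
kubectl proxy --port=8080

# In another terminal, reach the Service through the proxy
curl http://localhost:8080/api/v1/namespaces/default/services/my-service/proxy/</code></pre>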
<p><figure id="attachment_14701" aria-describedby="caption-attachment-14701" style="width: 376px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-14701" src="/blog/wp-content/uploads/2019/02/1D7F7733-BE79-4408-919C-C9D8F8AF3A9E.jpeg" alt="kubectl proxy and ClusterIP" width="376" height="600" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/02/1D7F7733-BE79-4408-919C-C9D8F8AF3A9E.jpeg 502w, https://blog.ovhcloud.com/wp-content/uploads/2019/02/1D7F7733-BE79-4408-919C-C9D8F8AF3A9E-188x300.jpeg 188w" sizes="auto, (max-width: 376px) 100vw, 376px" /><figcaption id="caption-attachment-14701" class="wp-caption-text">kubectl proxy and ClusterIP</figcaption></figure></p>
<div class="imageFrame">
<p class="code-line" data-line="46">This method isn&#8217;t suitable for a production environment, but it&#8217;s useful for development, debugging, and other quick-and-dirty operations.</p>
</div>
<h3 id="exposing-services-as-nodeport" class="code-line" data-line="52">Exposing services as <code>NodePort</code></h3>
<p class="code-line" data-line="54">Declaring a service as <code>NodePort</code> exposes the <code>Service</code> on each Node’s IP at the <code>NodePort</code> (a fixed port for that <code>Service</code>, in the default range of 30000-32767). You can then access the <code>Service</code> from outside the cluster by requesting <code>&lt;NodeIp&gt;:&lt;NodePort&gt;</code>. Every service you deploy as <code>NodePort</code> will be exposed on its own port, on every Node.</p>
<p><figure id="attachment_14702" aria-describedby="caption-attachment-14702" style="width: 500px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-14702" src="/blog/wp-content/uploads/2019/02/BDFD96AE-11F9-4079-B375-250FA40B7CE9.jpeg" alt="NodePort" width="500" height="542" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/02/BDFD96AE-11F9-4079-B375-250FA40B7CE9.jpeg 738w, https://blog.ovhcloud.com/wp-content/uploads/2019/02/BDFD96AE-11F9-4079-B375-250FA40B7CE9-277x300.jpeg 277w" sizes="auto, (max-width: 500px) 100vw, 500px" /><figcaption id="caption-attachment-14702" class="wp-caption-text">NodePort</figcaption></figure></p>
<p class="code-line" data-line="62">It&#8217;s rather cumbersome to use <code>NodePort</code>for <code>Services</code>that are in production. As you are using non-standard ports, you often need to set-up an external load balancer that listens to the standard ports and redirects the traffic to the <code>&lt;NodeIp&gt;:&lt;NodePort&gt;</code>.</p>
<h3 id="exposing-services-as-loadbalancer" class="code-line" data-line="65">Exposing services as <code>LoadBalancer</code></h3>
<p class="code-line" data-line="67">Declaring a service of type <code>LoadBalancer</code> exposes it externally using a cloud provider’s load balancer. The cloud provider will provision a load balancer for the <code>Service</code>, and map it to its automatically assigned <code>NodePort</code>. How the traffic from that external load balancer is routed to the <code>Service</code> pods depends on the cluster provider.</p>
<p><figure id="attachment_14703" aria-describedby="caption-attachment-14703" style="width: 500px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-14703" src="/blog/wp-content/uploads/2019/02/81CC04AA-9585-4FCD-A53C-1C1CACDCBAB4.jpeg" alt="LoadBalancer" width="500" height="559" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/02/81CC04AA-9585-4FCD-A53C-1C1CACDCBAB4.jpeg 716w, https://blog.ovhcloud.com/wp-content/uploads/2019/02/81CC04AA-9585-4FCD-A53C-1C1CACDCBAB4-269x300.jpeg 269w" sizes="auto, (max-width: 500px) 100vw, 500px" /><figcaption id="caption-attachment-14703" class="wp-caption-text">LoadBalancer</figcaption></figure></p>
<p class="code-line" data-line="76">The <code>LoadBalancer</code> is the best option for a production environment, with two caveats:</p>
<ul>
<li class="code-line" data-line="78">Every <code>Service</code> that you deploy as <code>LoadBalancer</code> will get its own IP.</li>
<li class="code-line" data-line="79">The <code>LoadBalancer</code> is usually billed based on the number of exposed services, which can be expensive.</li>
</ul>
<blockquote class="code-line" data-line="81">
<p class="code-line" data-line="81">We are currently offering the OVH Managed Kubernetes LoadBalancer service as a free preview, until the end of summer 2019.</p>
</blockquote>
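<p>Switching a <code>Service</code> to <code>LoadBalancer</code> is mostly a matter of changing its <code>type</code>; here is a minimal sketch (the name, labels and ports are hypothetical):</p>
<pre class="wp-block-code"><code lang="yaml" class="language-yaml">apiVersion: v1
kind: Service
metadata:
  name: my-app                 # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80                 # public port on the load balancer
      targetPort: 8080         # port on the pods</code></pre>
<p>Once the cloud provider has provisioned the load balancer, its public IP appears in the <code>EXTERNAL-IP</code> column of <code>kubectl get services</code>.</p>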
<h3 id="what-about-ingress" class="code-line" data-line="84">What about <code>Ingress</code>?</h3>
<p class="code-line" data-line="86">According to the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">official documentation</a>, an <code>Ingress</code> is an API object that manages external access to the services in a cluster (typically HTTP). So what&#8217;s the difference between this and <code>LoadBalancer</code> or <code>NodePort</code>?</p>
<p class="code-line" data-line="88"><code>Ingress</code> isn&#8217;t a type of <code>Service</code>, but rather an object that acts as a <a href="https://en.wikipedia.org/wiki/Reverse_proxy" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">reverse proxy</a> and single entry-point to your cluster that routes requests to different services. The most basic <code>Ingress</code> is the <a href="https://github.com/kubernetes/ingress-nginx" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">NGINX Ingress Controller</a>, where NGINX takes on the role of reverse proxy, while also handling SSL termination.</p>
<p><figure id="attachment_14699" aria-describedby="caption-attachment-14699" style="width: 450px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-14699" src="/blog/wp-content/uploads/2019/02/AF5F301F-ADDE-4ED7-9B80-4E2BA51DA6DB-225x300.png" alt="Ingress" width="450" height="600" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/02/AF5F301F-ADDE-4ED7-9B80-4E2BA51DA6DB-225x300.png 225w, https://blog.ovhcloud.com/wp-content/uploads/2019/02/AF5F301F-ADDE-4ED7-9B80-4E2BA51DA6DB.png 600w" sizes="auto, (max-width: 450px) 100vw, 450px" /><figcaption id="caption-attachment-14699" class="wp-caption-text">Ingress</figcaption></figure></p>
<p class="code-line" data-line="90">Ingress is exposed to the outside of the cluster via <code>ClusterIP</code> and Kubernetes proxy, <code>NodePort</code>, or <code>LoadBalancer</code>, and routes incoming traffic according to the configured rules.</p>
<p><figure id="attachment_14706" aria-describedby="caption-attachment-14706" style="width: 450px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-14706" src="/blog/wp-content/uploads/2019/02/61EC6FAF-0BDF-4273-9DAD-13480B755E02.png" alt="Ingress behind LoadBalancer" width="450" height="600" srcset="https://blog.ovhcloud.com/wp-content/uploads/2019/02/61EC6FAF-0BDF-4273-9DAD-13480B755E02.png 600w, https://blog.ovhcloud.com/wp-content/uploads/2019/02/61EC6FAF-0BDF-4273-9DAD-13480B755E02-225x300.png 225w" sizes="auto, (max-width: 450px) 100vw, 450px" /><figcaption id="caption-attachment-14706" class="wp-caption-text">Ingress behind LoadBalancer</figcaption></figure></p>
<p class="code-line" data-line="92">The main advantage of using an <code>Ingress</code> behind a <code>LoadBalancer</code> is the cost: you can have lots of services behind a single <code>LoadBalancer</code>.</p>
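<p>As a sketch, an <code>Ingress</code> routing two hypothetical services behind one entry point could look like this (on recent clusters the Ingress API group is <code>networking.k8s.io/v1</code>; older clusters used <code>extensions/v1beta1</code>, with a slightly different backend syntax):</p>
<pre class="wp-block-code"><code lang="yaml" class="language-yaml">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress             # hypothetical name
spec:
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service    # hypothetical service
                port:
                  number: 80
          - path: /web
            pathType: Prefix
            backend:
              service:
                name: web-service    # hypothetical service
                port:
                  number: 80</code></pre>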
<h2 data-line="92">Which one should I use?</h2>
<p>Well, that&#8217;s the one million dollar question, and one which will probably elicit a different response depending on who you ask!</p>
<p>You could go 100% <code>LoadBalancer</code>, getting an individual <code>LoadBalancer</code> for each service. Conceptually, it&#8217;s simple: every service is independent, with no extra configuration needed. The downside is the price (you will be paying for one <code>LoadBalancer</code> per service), and also the difficulty of managing lots of different IPs.</p>
<p>You could also use only one <code>LoadBalancer</code> and an <code>Ingress</code> behind it. All your services would be under the same IP, each one in a different path. It&#8217;s a cheaper approach, as you only pay for one <code>LoadBalancer</code>, but if your services don&#8217;t have a logical relationship, it can quickly become chaotic.</p>
<p>If you want my personal opinion, I would try to use a combination of the two&#8230;</p>
<p>An approach I like is having a <code>LoadBalancer</code> for every related set of services, and then routing to those services using an <code>Ingress</code> behind the <code>LoadBalancer</code>. For example, let&#8217;s say you have two different microservice-based APIs, each one with around 10 services. I would put one <code>LoadBalancer</code> in front of one <code>Ingress</code> for each API, the <code>LoadBalancer</code> being the single public entry-point, and the <code>Ingress</code> routing traffic to the API&#8217;s different services.</p>
<p>But if your architecture is quite complex (especially if you&#8217;re using microservices), you will soon find that manually managing everything with <code>LoadBalancer</code> and <code>Ingress</code> is rather cumbersome. If that&#8217;s the case, the answer could be to delegate those tasks to a service mesh&#8230;</p>
<h2>What&#8217;s a service mesh?</h2>
<p>You may have heard of <a href="https://istio.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Istio</a> or <a href="https://linkerd.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Linkerd</a>, and how they make it easier to build microservice architectures on Kubernetes, adding nifty perks like A/B testing, canary releases, rate limiting, access control, and end-to-end authentication.</p>
<p>Istio, Linkerd, and similar tools are service meshes, which allow you to build networks of microservices and define their interactions, while simultaneously adding some high-value features that make the setup and operation of microservice-based architectures easier.</p>
<p>There&#8217;s a lot to talk about when it comes to using service meshes on Kubernetes, but as they say, that&#8217;s a story for another time&#8230;</p>
<p>&nbsp;<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fgetting-external-traffic-into-kubernetes-clusterip-nodeport-loadbalancer-and-ingress%2F&amp;action_name=Getting%20external%20traffic%20into%20Kubernetes%20%26%238211%3B%20ClusterIp%2C%20NodePort%2C%20LoadBalancer%2C%20and%20Ingress&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" /></p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
