<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Aurélie Vache, Author at OVHcloud Blog</title>
	<atom:link href="https://blog.ovhcloud.com/author/aurelie-vache/feed/" rel="self" type="application/rss+xml" />
	<link>https://blog.ovhcloud.com/author/aurelie-vache/</link>
	<description>Innovation for Freedom</description>
	<lastBuildDate>Tue, 14 Apr 2026 07:03:14 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://blog.ovhcloud.com/wp-content/uploads/2019/07/cropped-cropped-nouveau-logo-ovh-rebranding-32x32.gif</url>
	<title>Aurélie Vache, Author at OVHcloud Blog</title>
	<link>https://blog.ovhcloud.com/author/aurelie-vache/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Discover the External Secret Operator (ESO) OVHcloud Provider to manage your Kubernetes secrets  🎉</title>
		<link>https://blog.ovhcloud.com/discover-the-external-secret-operator-eso-ovhcloud-provider-to-manage-your-kubernetes-secrets-%f0%9f%8e%89/</link>
		
		<dc:creator><![CDATA[Aurélie Vache]]></dc:creator>
		<pubDate>Tue, 14 Apr 2026 07:02:22 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[Tranches de Tech & co]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=31032</guid>

					<description><![CDATA[Several months ago, we released the Beta version of the OVHcloud Secret Manager and we guided you how to manage your secrets thanks to the existing External Secret Operator (ESO) Hashicorp Vault provider. As our Secret Manager is now in General Availability, our teams worked on the development of an OVHcloud ESO Provider now available [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fdiscover-the-external-secret-operator-eso-ovhcloud-provider-to-manage-your-kubernetes-secrets-%25f0%259f%258e%2589%2F&amp;action_name=Discover%20the%20External%20Secret%20Operator%20%28ESO%29%20OVHcloud%20Provider%20to%20manage%20your%20Kubernetes%20secrets%20%20%F0%9F%8E%89&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image aligncenter size-large is-resized"><img fetchpriority="high" decoding="async" width="1024" height="681" src="https://blog.ovhcloud.com/wp-content/uploads/2026/04/Gribouillis-2026-04-10-15.57.01.910-1024x681.png" alt="" class="wp-image-31204" style="aspect-ratio:1.503658927864753;width:524px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/04/Gribouillis-2026-04-10-15.57.01.910-1024x681.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/04/Gribouillis-2026-04-10-15.57.01.910-300x200.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/04/Gribouillis-2026-04-10-15.57.01.910-768x511.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/04/Gribouillis-2026-04-10-15.57.01.910.png 1532w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Several months ago, we released the Beta version of the OVHcloud Secret Manager and we showed you <a href="https://blog.ovhcloud.com/manage-your-secrets-through-ovhcloud-secret-manager-thanks-to-external-secrets-operator-eso-on-ovhcloud-managed-kubernetes-service-mks/" data-wpel-link="internal">how to manage your secrets thanks to the existing External Secret Operator (ESO) HashiCorp Vault provider</a>.</p>



<p>As our Secret Manager is now in General Availability, our teams developed an OVHcloud ESO Provider, now available in the <a href="https://github.com/external-secrets/external-secrets/releases/tag/v2.3.0" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">new ESO v2.3.0 release</a> 🎉.</p>



<p>In this blog post, you will learn how to create a new secret in the OVHcloud Secret Manager and how to manage it within your Kubernetes clusters through the <a href="https://external-secrets.io/latest/provider/ovhcloud/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud ESO provider</a>.</p>



<h3 class="wp-block-heading">External Secrets Operator (ESO)</h3>



<figure class="wp-block-image size-full"><img decoding="async" width="225" height="225" src="https://blog.ovhcloud.com/wp-content/uploads/2026/04/image.png" alt="" class="wp-image-31088" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/04/image.png 225w, https://blog.ovhcloud.com/wp-content/uploads/2026/04/image-150x150.png 150w, https://blog.ovhcloud.com/wp-content/uploads/2026/04/image-70x70.png 70w" sizes="(max-width: 225px) 100vw, 225px" /></figure>



<p>The <strong>External Secrets Operator</strong> (ESO), a CNCF sandbox project since 2022, is a Kubernetes operator that integrates external secret management systems.</p>



<p>The operator reads secret data from external APIs and automatically injects the values into a <a href="https://kubernetes.io/docs/concepts/configuration/secret/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Kubernetes Secret</a>. If the secret changes in the external API, the operator updates the secret in the Kubernetes cluster.</p>



<p>The ESO connects to an external Secret Manager, such as <a href="https://external-secrets.io/latest/provider/ovhcloud/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud</a>, Vault, AWS, or GCP, via a provider configured in a <strong>(Cluster)SecretStore.</strong> An <strong>ExternalSecret</strong> resource then specifies which secrets to retrieve. ESO fetches those values and creates a corresponding Kubernetes Secret within the cluster.</p>



<figure class="wp-block-image aligncenter size-large is-resized"><img decoding="async" width="1024" height="943" src="https://blog.ovhcloud.com/wp-content/uploads/2026/04/Gribouillis-2026-04-09-14.55.33.553-1024x943.png" alt="" class="wp-image-31170" style="aspect-ratio:1.0859073039196323;width:484px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/04/Gribouillis-2026-04-09-14.55.33.553-1024x943.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/04/Gribouillis-2026-04-09-14.55.33.553-300x276.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/04/Gribouillis-2026-04-09-14.55.33.553-768x707.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/04/Gribouillis-2026-04-09-14.55.33.553.png 1097w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>For more details, read the <a href="https://external-secrets.io/" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">ESO official documentation</a>.</p>



<h3 class="wp-block-heading">Prerequisites</h3>



<p>To be able to use the ESO OVHcloud provider, you need to meet a few prerequisites:</p>



<ul class="wp-block-list">
<li>An OVHcloud account</li>



<li>An <a href="https://www.ovhcloud.com/en/identity-security-operations/key-management-service/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OKMS</a> domain created (&#8220;<em>305db938-331f-454d-83a7-3a0a29291661</em>&#8221; in this blog post)</li>



<li>An <a href="https://github.com/ovh/public-cloud-examples/tree/main/iam/create-user-and-generate-pat-token-with-cli" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">IAM local user</a> created (&#8220;<em>secretmanager-305db938-331f-454d-83a7-3a0a29291661</em>&#8221; in this blog post)</li>



<li>The <a href="https://github.com/ovh/ovhcloud-cli/?tab=readme-ov-file#installation" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud CLI</a> installed</li>



<li>A Kubernetes cluster</li>
</ul>



<p>The ESO OVHcloud provider supports both <code><em>token</em></code> and <code><em>mTLS</em></code> authentication. In this blog post, we will use the token authentication mode. Please follow the <a href="https://external-secrets.io/latest/provider/ovhcloud/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud ESO provider</a> guide if you wish to use the mTLS authentication mode.</p>



<h4 class="wp-block-heading">Generate a PAT token (For token authentication only)</h4>



<p>The ESO <strong>(Cluster)SecretStore</strong> needs permission to fetch secrets from the Secret Manager.</p>



<p>If you want to use token authentication, you’ll need a Personal Access Token (PAT). You can generate one with the OVHcloud CLI:</p>



<pre class="wp-block-code"><code class="">PAT_TOKEN=$(ovhcloud iam user token create &lt;iam-local-user-name&gt; --name pat-&lt;iam-local-user-name&gt; --description "PAT secret manager for domain &lt;okms-id&gt;" -o json  | jq .details.token |  tr -d '"')<br><br>echo $PAT_TOKEN<br>&lt;your-token&gt;</code></pre>



<p>You should have a result like this:</p>



<pre class="wp-block-code"><code class="">$ PAT_TOKEN=$(ovhcloud iam user token create secretmanager-305db938-331f-454d-83a7-3a0a29291661 --name pat-secretmanager-305db938-331f-454d-83a7-3a0a29291661 --description "PAT secret manager for domain 305db938-331f-454d-83a7-3a0a29291661" -o json  | jq .details.token |  tr -d '"')<br>2026/04/07 14:07:45 Final parameters:<br>{<br> "description": "PAT secret manager for domain 305db938-331f-454d-83a7-3a0a29291661",<br> "name": "pat-secretmanager-305db938-331f-454d-83a7-3a0a29291661"<br>}<br><br>$ echo $PAT_TOKEN<br>eyJhbGciOiJFZERTQSIsImtpZCI6IjgzMkFGNUE5ODg3MzFCMDNGM0EzMTRFMDJFRUJFRjBGNDE5MUY0Q0YiLCJraW5kIjoicGF0IiwidHlwIjoiSldUIn0.eyJ0b2tlbiI6InBBSFh1WE5JdVNHYVpmV3F2OUFzVmJrU3UwR2UySTJrdFU0OGdTZkwyZ1k9In0.-VDbiUf4vNm1KB9qSv7i4sGMCvxs_EuZFAETB-eaOFf3IX8-9m7akN800--ASgXy55_DDFHdy4Z5uSq8lww-Bw</code></pre>



<p>Encode the PAT token in Base64 and save it in an environment variable:</p>



<pre class="wp-block-code"><code class="">export PAT_TOKEN_B64=$(echo -n $PAT_TOKEN | base64)<br>echo $PAT_TOKEN_B64</code></pre>
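


<p>Note: GNU <code>base64</code> (the default on most Linux distributions) wraps its output at 76 characters, which would corrupt a long token once stored in a manifest. On Linux, disable wrapping explicitly (the macOS <code>base64</code> does not wrap by default), and sanity-check that decoding returns the original token:</p>



<pre class="wp-block-code"><code class="">export PAT_TOKEN_B64=$(echo -n $PAT_TOKEN | base64 -w 0)<br><br># Decoding should print the original token<br>echo $PAT_TOKEN_B64 | base64 -d</code></pre>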



<h4 class="wp-block-heading">Retrieve and save the KMS information</h4>



<p>List the OKMS domains:</p>



<pre class="wp-block-code"><code class="">$ ovhcloud okms list<br>┌──────────────────────────────────────┬─────────────┐<br>│                  id                  │   region    │<br>├──────────────────────────────────────┼─────────────┤<br>│ 305db938-331f-454d-83a7-3a0a29291661 │ eu-west-par │<br>│ xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx │ eu-west-par │<br>└──────────────────────────────────────┴─────────────┘</code></pre>



<p>Save the KMS endpoint and the OKMS ID in two environment variables. For example:</p>



<pre class="wp-block-code"><code class="">export OKMS_ID="305db938-331f-454d-83a7-3a0a29291661"<br>export KMS_ENDPOINT=$(ovhcloud okms get 305db938-331f-454d-83a7-3a0a29291661 -o json | jq .restEndpoint | xargs)</code></pre>
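


<p>As a side note, instead of piping through <code>xargs</code> to strip the surrounding quotes, you can ask <code>jq</code> for raw output directly with its <code>-r</code> flag:</p>



<pre class="wp-block-code"><code class="">export KMS_ENDPOINT=$(ovhcloud okms get $OKMS_ID -o json | jq -r .restEndpoint)</code></pre>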



<h4 class="wp-block-heading">Create a secret in the Secret Manager</h4>



<p>In the <a href="https://www.ovh.com/manager" data-wpel-link="exclude">OVHcloud Control Panel</a> (UI), go to the ‘Secret Manager’ section and click on the <strong>Create a secret</strong> button.</p>



<p>To create a secret named ‘prod/eu-west-par/dockerconfigjson’ in the Europe region (France – Paris), choose the eu-west-par region:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="695" height="674" src="https://blog.ovhcloud.com/wp-content/uploads/2026/04/Capture-decran-2026-04-13-a-14.13.25.png" alt="" class="wp-image-31231" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/04/Capture-decran-2026-04-13-a-14.13.25.png 695w, https://blog.ovhcloud.com/wp-content/uploads/2026/04/Capture-decran-2026-04-13-a-14.13.25-300x291.png 300w" sizes="auto, (max-width: 695px) 100vw, 695px" /></figure>



<p>Then choose the OKMS domain, enter &#8220;prod/eu-west-par/dockerconfigjson&#8221; as the path, and fill in the content:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="704" height="718" src="https://blog.ovhcloud.com/wp-content/uploads/2026/04/Capture-decran-2026-04-13-a-14.13.15.png" alt="" class="wp-image-31232" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/04/Capture-decran-2026-04-13-a-14.13.15.png 704w, https://blog.ovhcloud.com/wp-content/uploads/2026/04/Capture-decran-2026-04-13-a-14.13.15-294x300.png 294w, https://blog.ovhcloud.com/wp-content/uploads/2026/04/Capture-decran-2026-04-13-a-14.13.15-70x70.png 70w" sizes="auto, (max-width: 704px) 100vw, 704px" /></figure>



<p>Finally, click on the <strong>Create</strong> button to finalise the creation of the new secret.</p>



<h4 class="wp-block-heading">Install or update the ESO</h4>



<p>If you&#8217;ve never installed ESO in your Kubernetes cluster, you can install it via Helm:</p>



<pre class="wp-block-code"><code class="">helm repo add external-secrets https://charts.external-secrets.io<br>helm repo update<br><br>helm install external-secrets \<br>   external-secrets/external-secrets \<br>    -n external-secrets \<br>    --create-namespace \<br>    --set installCRDs=true</code></pre>



<p>If you have already installed it, upgrade it in order to use this new provider:</p>



<pre class="wp-block-code"><code class="">helm upgrade external-secrets external-secrets/external-secrets -n external-secrets</code></pre>



<p>⚠️ In order to use the OVHcloud provider, you need a running instance of ESO at version <strong>2.3.0</strong> or later.</p>



<pre class="wp-block-code"><code class="">$ helm list -n external-secrets<br><br>NAME            	NAMESPACE       	REVISION	UPDATED                              	STATUS  	CHART                 	APP VERSION<br>external-secrets	external-secrets	1       	2026-04-13 13:56:29.071329 +0200 CEST	deployed	external-secrets-2.3.0	v2.3.0</code></pre>



<h3 class="wp-block-heading">Let&#8217;s deploy a Secret in Kubernetes using the ESO provider!</h3>



<h4 class="wp-block-heading">Deploy a ClusterSecretStore to connect ESO to Secret Manager</h4>



<p>Set up a <strong>ClusterSecretStore</strong> to manage synchronization with the Secret Manager.<br>It will use the OVHcloud provider with the token authentication mode, and the OKMS endpoint as the backend.</p>



<p>Create a <strong>clustersecretstore.yaml.template</strong> file with the content below:</p>



<pre class="wp-block-code"><code class="">apiVersion: external-secrets.io/v1<br>kind: ClusterSecretStore<br>metadata:<br>  name: secret-store-ovh<br>spec:<br>  provider:<br>    ovh:<br>      server: "$KMS_ENDPOINT" # for example: "https://eu-west-rbx.okms.ovh.net"<br>      okmsid: "$OKMS_ID" # for example: "734b9b45-8b1a-469c-b140-b10bd6540017"<br>      auth:<br>        token:<br>          tokenSecretRef:<br>            name: ovh-token<br>            namespace: external-secrets<br>            key: token<br>---<br>apiVersion: v1<br>kind: Secret<br>metadata:<br>  name: ovh-token<br>  namespace: external-secrets<br>data:<br>  token: $PAT_TOKEN_B64</code></pre>



<p>Generate the <strong>clustersecretstore.yaml</strong> file from the environment variables you defined:</p>



<pre class="wp-block-code"><code class="">envsubst &lt; clustersecretstore.yaml.template &gt; clustersecretstore.yaml</code></pre>



<p>You should obtain a file filled with the OVHcloud KMS information:</p>



<pre class="wp-block-code"><code class="">apiVersion: external-secrets.io/v1<br>kind: ClusterSecretStore<br>metadata:<br>  name: secret-store-ovh<br>spec:<br>  provider:<br>    ovh:<br>      server: "https://eu-west-par.okms.ovh.net" # for example: "https://eu-west-rbx.okms.ovh.net"<br>      okmsid: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" # for example: "734b9b45-8b1a-469c-b140-b10bd6540017"<br>      auth:<br>        token:<br>          tokenSecretRef:<br>            name: ovh-token<br>            namespace: external-secrets<br>            key: token<br>---<br>apiVersion: v1<br>kind: Secret<br>metadata:<br>  name: ovh-token<br>  namespace: external-secrets<br>data:<br>  token: ZXlK...UJ3</code></pre>



<p>Apply it in your Kubernetes cluster:</p>



<pre class="wp-block-code"><code class="">kubectl apply -f clustersecretstore.yaml</code></pre>



<p>Check:</p>



<pre class="wp-block-code"><code class="">$ kubectl get clustersecretstore.external-secrets.io/secret-store-ovh<br><br>NAME               AGE   STATUS   CAPABILITIES   READY<br>secret-store-ovh   7s    Valid    ReadWrite      True</code></pre>
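


<p>If the store does not reach the <em>Valid</em>/<em>Ready</em> state, you can inspect its status conditions and events to diagnose authentication or connectivity issues:</p>



<pre class="wp-block-code"><code class="">kubectl describe clustersecretstore secret-store-ovh</code></pre>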



<h3 class="wp-block-heading">Create an ExternalSecret</h3>



<p>Create an <strong>externalsecret.yaml</strong> file with the content below:</p>



<pre class="wp-block-code"><code class="">apiVersion: external-secrets.io/v1<br>kind: ExternalSecret<br>metadata:<br>  name: docker-config-secret<br>  namespace: external-secrets<br>spec:<br>  refreshInterval: 30m<br>  secretStoreRef:<br>    name: secret-store-ovh<br>    kind: ClusterSecretStore<br>  target:<br>    template:<br>      type: kubernetes.io/dockerconfigjson<br>      data:<br>        .dockerconfigjson: "{{ .mysecret | toString }}"<br>    name: ovhregistrycred<br>    creationPolicy: Owner<br>  data:<br>  - secretKey: ovhregistrycred<br>    remoteRef:<br>      key: prod/eu-west-par/dockerconfigjson</code></pre>



<p>Apply it:</p>



<pre class="wp-block-code"><code class="">$ kubectl apply -f externalsecret.yaml<br><br>externalsecret.external-secrets.io/docker-config-secret created</code></pre>



<p>Check:</p>



<pre class="wp-block-code"><code class="">$ kubectl get externalsecret.external-secrets.io/docker-config-secret -n external-secrets <br><br>NAME                   STORETYPE            STORE              REFRESH INTERVAL   STATUS         READY   LAST SYNC<br>docker-config-secret   ClusterSecretStore   secret-store-ovh   30m                SecretSynced   True    4s</code></pre>



<p>Once the ExternalSecret is synced, ESO creates a Kubernetes Secret object:</p>



<pre class="wp-block-code"><code class="">$ kubectl get secret ovhregistrycred -n external-secrets<br><br>NAME              TYPE                             DATA   AGE<br>ovhregistrycred   kubernetes.io/dockerconfigjson   1      49s</code></pre>



<p>The Kubernetes <strong>Secret</strong> has been created 🎉</p>
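


<p>If you want to double-check the synchronised content, you can decode the secret data (the key name contains a dot, so it must be escaped in the JSONPath expression):</p>



<pre class="wp-block-code"><code class="">kubectl get secret ovhregistrycred -n external-secrets -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d</code></pre>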



<p>We created a Secret directly from a single key, but the OVHcloud ESO provider also allows you to fetch secrets in other ways (the whole secret, nested values, multiple secrets at once…), according to your needs.</p>
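


<p>For example, to fetch a whole JSON secret and turn each of its top-level keys into a key of the resulting Kubernetes Secret, you can use <code>dataFrom</code> with <code>extract</code> instead of <code>data</code>. This is a minimal sketch based on the generic ESO API, using hypothetical names (adapt the resource name and the remote key to your setup):</p>



<pre class="wp-block-code"><code class="">apiVersion: external-secrets.io/v1<br>kind: ExternalSecret<br>metadata:<br>  name: whole-secret<br>  namespace: external-secrets<br>spec:<br>  refreshInterval: 30m<br>  secretStoreRef:<br>    name: secret-store-ovh<br>    kind: ClusterSecretStore<br>  target:<br>    name: whole-secret<br>    creationPolicy: Owner<br>  dataFrom:<br>  - extract:<br>      key: prod/eu-west-par/dockerconfigjson</code></pre>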



<h3 class="wp-block-heading">Conclusion</h3>



<p>In this blog post, we’ve explained how to create secrets in the OVHcloud Secret Manager and then integrate them directly into your Kubernetes clusters using the new ESO OVHcloud provider.</p>



<p>With this brand new OVHcloud provider, you get a smoother integration between the Secret Manager and your Kubernetes clusters through ESO.</p>



<p>Our teams are working on several other integrations, so stay tuned, and please share your thoughts with us!</p>
<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fdiscover-the-external-secret-operator-eso-ovhcloud-provider-to-manage-your-kubernetes-secrets-%25f0%259f%258e%2589%2F&amp;action_name=Discover%20the%20External%20Secret%20Operator%20%28ESO%29%20OVHcloud%20Provider%20to%20manage%20your%20Kubernetes%20secrets%20%20%F0%9F%8E%89&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Secure your Software Supply Chain with OVHcloud Managed Private Registry (MPR)</title>
		<link>https://blog.ovhcloud.com/secure-your-software-supply-chain-with-ovhcloud-managed-private-registry-mpr/</link>
		
		<dc:creator><![CDATA[Aurélie Vache]]></dc:creator>
		<pubDate>Fri, 13 Feb 2026 16:40:51 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[Tranches de Tech & co]]></category>
		<category><![CDATA[OVHcloud Managed Private Registry]]></category>
		<category><![CDATA[Public Cloud]]></category>
		<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=30357</guid>

					<description><![CDATA[Before an application go to production, it passes through several stages: source code, build, packaging and distribution. But Malicious code &#8211; such as a compromised dependency, breached CI pipeline, or modified package in a registry &#8211; can be introduced at any point in the development cycle, potentially impacting thousands of projects This is precisely where [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fsecure-your-software-supply-chain-with-ovhcloud-managed-private-registry-mpr%2F&amp;action_name=Secure%20your%20Software%20Supply%20Chain%20with%20OVHcloud%20Managed%20Private%20Registry%20%28MPR%29&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="1012" height="1011" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/Gribouillis-2026-01-30-13.25.17.911.png" alt="" class="wp-image-30442" style="aspect-ratio:1.0009787401988517;width:437px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/Gribouillis-2026-01-30-13.25.17.911.png 1012w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/Gribouillis-2026-01-30-13.25.17.911-300x300.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/Gribouillis-2026-01-30-13.25.17.911-150x150.png 150w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/Gribouillis-2026-01-30-13.25.17.911-768x767.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/Gribouillis-2026-01-30-13.25.17.911-70x70.png 70w" sizes="auto, (max-width: 1012px) 100vw, 1012px" /></figure>



<p>Before an application goes to production, it passes through several stages: source code, build, packaging and distribution. But malicious code &#8211; such as a compromised dependency, a breached CI pipeline, or a modified package in a registry &#8211; can be introduced at any point in the development cycle, potentially impacting thousands of projects.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="581" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-13-1024x581.png" alt="" class="wp-image-30358" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-13-1024x581.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-13-300x170.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-13-768x436.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-13.png 1292w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>This is precisely where <strong>Software Supply Chain Security </strong>(SSCS) comes in: to protect not just the code itself, but also how it’s built, delivered, and utilised.</p>



<p>Attacks like SolarWinds and Log4Shell aren’t isolated incidents, but rather warning signs of a threat that has escalated in severity.</p>



<figure class="wp-block-image aligncenter is-resized"><img loading="lazy" decoding="async" width="800" height="800" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/managed_private_registry.png" alt="" class="wp-image-28658" style="width:145px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/managed_private_registry.png 800w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/managed_private_registry-300x300.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/managed_private_registry-150x150.png 150w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/managed_private_registry-768x768.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/managed_private_registry-70x70.png 70w" sizes="auto, (max-width: 800px) 100vw, 800px" /></figure>



<p>This blog post explores recommended solutions and best practices for <a href="https://www.ovhcloud.com/en/public-cloud/managed-rancher-service/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer"><u>OVHcloud Managed Private Registry</u></a> (MPR), an OCI-compliant artifact registry, to help you enhance your Software Supply Chain Security.</p>



<h3 class="wp-block-heading">Generate a Software Bill Of Materials (SBOM)</h3>



<p>An SBOM provides a list of all the ingredients (OS, libraries, code) that compose the images that will run on your Kubernetes cluster.</p>



<figure class="wp-block-image aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="383" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-14-1024x383.png" alt="" class="wp-image-30360" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-14-1024x383.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-14-300x112.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-14-768x287.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-14.png 1256w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>From that list, you can find out more about the image, its vulnerabilities, and licenses.</p>



<h4 class="wp-block-heading">Generate an SBOM manually</h4>



<p>To manually generate an SBOM from your image, click the <strong>‘GENERATE SBOM’</strong> button:</p>



<figure class="wp-block-image aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="280" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.28.13-1024x280.png" alt="" class="wp-image-30361" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.28.13-1024x280.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.28.13-300x82.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.28.13-768x210.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.28.13-1536x420.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.28.13-2048x560.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Within seconds, the <em>SBOM </em>column for your image will display <em>“Queued”</em>, then change to <em>“Generating”</em>, and a <em>“SBOM details”</em> link will appear.</p>



<figure class="wp-block-image aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="226" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-31-1024x226.png" alt="" class="wp-image-30393" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-31-1024x226.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-31-300x66.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-31-768x170.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-31-1536x340.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-31-2048x453.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Click the <strong>&#8216;SBOM details&#8217;</strong> link to view the SBOM:</p>



<figure class="wp-block-image aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="557" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.26.38-1024x557.png" alt="" class="wp-image-30367" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.26.38-1024x557.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.26.38-300x163.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.26.38-768x418.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.26.38-1536x835.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.26.38-2048x1114.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Your application’s SBOM is generated by <strong>Trivy </strong>in <strong>SPDX </strong>format. This item is then listed as an accessory for your image in the registry.</p>



<figure class="wp-block-image aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="130" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-17-1024x130.png" alt="" class="wp-image-30371" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-17-1024x130.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-17-300x38.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-17-768x98.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-17-1536x195.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-17-2048x260.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Click the <strong>&#8216;sbom.harbor&#8217;</strong> accessory type for more details:</p>



<figure class="wp-block-image aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="629" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-25-1024x629.png" alt="" class="wp-image-30379" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-25-1024x629.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-25-300x184.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-25-768x472.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-25-1536x944.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-25-2048x1259.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>
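


<p>If you want to produce the same kind of artifact locally, Trivy can also generate an SPDX SBOM directly from an image on your workstation (a sketch; the image reference below is a placeholder to replace with your own):</p>



<pre class="wp-block-code"><code class="">trivy image --format spdx-json --output sbom.spdx.json my-registry.ovh/my-project/my-image:1.0.0</code></pre>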



<h4 class="wp-block-heading">Generate an SBOM automatically</h4>



<p>Manually generating an SBOM is a good practice, but automating the process is even better. The private registry can automatically generate the SBOM for you once an image is pushed to the desired project.</p>



<p>Click the project your image is part of, navigate to the <em>‘Configuration’</em> tab, then tick the <strong>SBOM generation </strong>checkbox:</p>



<figure class="wp-block-image aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="538" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-15-1024x538.png" alt="" class="wp-image-30365" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-15-1024x538.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-15-300x158.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-15-768x403.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-15-1536x806.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-15-2048x1075.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<h3 class="wp-block-heading">Vulnerabilities scanning</h3>



<p>We recommend running vulnerability scans on the images to confirm that:</p>



<ul class="wp-block-list">
<li>the images provided are free of any known vulnerabilities (CVEs);</li>



<li>security patches are well integrated before deployment;</li>



<li>the images used in production comply with security and compliance policies.</li>
</ul>



<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="406" height="232" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-32.png" alt="" class="wp-image-30395" style="width:329px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-32.png 406w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-32-300x171.png 300w" sizes="auto, (max-width: 406px) 100vw, 406px" /></figure>



<p>There are several vulnerability scanners available, like <a href="https://trivy.dev/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer"><u>Trivy</u></a>, <a href="https://docs.docker.com/scout/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer"><u>Docker Scout</u></a>, and <a href="https://github.com/anchore/grype" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer"><u>Grype</u></a>.</p>
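You can also run these scanners locally or in CI, before an image even reaches the registry. As a minimal sketch with Trivy (the image reference is a placeholder, and the flags shown assume a recent Trivy release):

```shell
# Scan a local image and fail the pipeline (non-zero exit code)
# if HIGH or CRITICAL vulnerabilities are found
$ trivy image --severity HIGH,CRITICAL --exit-code 1 my-registry.example/demo:latest
```

Failing the build early keeps vulnerable images from ever being pushed, complementing the registry-side scans described below.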



<p>The OVHcloud Managed Private Registry uses Trivy as its default vulnerability scanner, but you can add more scanners if needed. Go to the <em>Administration</em> panel, click <em>‘<strong>Interrogation Services</strong>’</em>, then navigate to the <em>‘<strong>Scanners</strong>’</em> tab:</p>



<figure class="wp-block-image aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="437" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-33-1024x437.png" alt="" class="wp-image-30400" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-33-1024x437.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-33-300x128.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-33-768x328.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-33-1536x655.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-33-2048x873.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<h4 class="wp-block-heading">Scan your image manually</h4>



<p>To manually run a vulnerability scan on your image, go to your project and click the <strong>SCAN VULNERABILITIES</strong> button:</p>



<figure class="wp-block-image aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="186" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-35-1024x186.png" alt="" class="wp-image-30406" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-35-1024x186.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-35-300x55.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-35-768x140.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-35-1536x279.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-35-2048x372.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Within a few seconds, a scan will run and reveal any vulnerabilities detected in your image.</p>



<figure class="wp-block-image aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="442" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.25.21-1024x442.png" alt="" class="wp-image-30404" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.25.21-1024x442.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.25.21-300x129.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.25.21-768x331.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.25.21-1536x662.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.25.21-2048x883.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Click your image to take a look at the CVEs list:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="557" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.25.39-1-1024x557.png" alt="" class="wp-image-30414" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.25.39-1-1024x557.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.25.39-1-300x163.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.25.39-1-768x418.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.25.39-1-1536x835.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/Capture-decran-2026-01-29-a-14.25.39-1-2048x1114.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<h4 class="wp-block-heading">Scan your image automatically</h4>



<p>To automatically scan images on push, click the project your image is part of, then the <em>‘Configuration’ </em>tab, and tick the <strong>‘Vulnerabilities scanning’</strong> checkbox:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="390" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-36-1024x390.png" alt="" class="wp-image-30408" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-36-1024x390.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-36-300x114.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-36-768x293.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-36-1536x585.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-36-2048x781.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<h4 class="wp-block-heading">Schedule vulnerability scans</h4>



<p>Another way to stay informed is to configure your vulnerability scanner to run scans every day. Go to the <em>Administration </em>panel, click <em>‘<strong>Interrogation</strong> <strong>Services</strong>’</em>, then the <em>‘<strong>Vulnerability</strong>’</em> tab:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="264" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-34-1024x264.png" alt="" class="wp-image-30401" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-34-1024x264.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-34-300x77.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-34-768x198.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-34-1536x396.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-34-2048x528.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>You can schedule the scan to run Hourly, Daily or Weekly, or customize exactly when it is triggered.</p>



<p>Scheduled scans ensure that existing images are periodically analyzed for newly discovered vulnerabilities (CVEs).</p>



<h4 class="wp-block-heading">Prevent vulnerable images from running</h4>



<p>You can also configure a project to prevent vulnerable images from being pulled. To do so, tick the <strong>Prevent vulnerable images from running</strong> checkbox.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="206" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-40-1024x206.png" alt="" class="wp-image-30430" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-40-1024x206.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-40-300x60.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-40-768x154.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-40.png 1424w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Select the severity level of vulnerabilities to prevent images from running, from None to Critical.</p>



<p>With this configuration, images cannot be pulled if they contain vulnerabilities with a severity equal to or higher than the selected level.</p>



<h3 class="wp-block-heading">Exploitable vulnerabilities</h3>



<p>When a scanner finds vulnerabilities in your images, they are not necessarily exploitable in your application or image.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="170" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-41-1024x170.png" alt="" class="wp-image-30433" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-41-1024x170.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-41-300x50.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-41-768x128.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-41-1536x255.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-41.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>In this example, my application is built with golang 1.25-alpine, but Trivy found several CVEs that are only exploitable in golang 1.19.1 or earlier.</p>



<p>A solution exists to remove or skip these &#8220;false positives&#8221;.</p>



<p>VEX (Vulnerability Exploitability eXchange) is a <strong>standard “format”</strong> to state whether a vulnerability is <strong>exploitable</strong> or not in a specific context.</p>



<figure class="wp-block-image aligncenter size-large is-resized"><img loading="lazy" decoding="async" width="1024" height="609" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-43-1024x609.png" alt="" class="wp-image-30435" style="aspect-ratio:1.6814258951355643;width:452px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-43-1024x609.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-43-300x178.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-43-768x456.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-43-1536x913.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-43.png 1681w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>You can generate a VEX file with <a href="https://github.com/openvex/vexctl" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">vexctl</a> or <a href="https://pkg.go.dev/golang.org/x/vuln/cmd/govulncheck" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">govulncheck</a> tools.</p>



<p>Example:</p>



<pre class="wp-block-code"><code class=""># With vexctl<br>$ VULN_ID="CVE-2022-27664"<br>$ PRODUCT="pkg:golang/golang.org/x/net@v0.0.0-20220127200216-cd36cc0744dd"<br>$ vexctl create --file vex.json --author 'Aurélie Vache' --product "pkg:oci/demo@sha256:$HASH?repository_url=$REGISTRY/$HARBOR_PROJECT/demo" --vuln "$VULN_ID" --status 'not_affected' --justification 'vulnerable_code_not_present' --impact-statement "HTTP/2 vulnerability $VULN_ID is not exploitable because the image is compiled with Go 1.20, which contains the patched library."<br><br># With govulncheck (for Go apps)<br>$ govulncheck -format openvex ./... &gt; ../demo.vex.json</code></pre>



<p>For the moment, OVHcloud MPR (managed Harbor) does not support VEX files (and the OpenVEX format) <a href="https://github.com/goharbor/harbor/issues/22720" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">but it is planned in the future</a>.</p>
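In the meantime, nothing prevents you from consuming your VEX document client-side. For instance, recent versions of Trivy accept a local VEX file to suppress vulnerabilities declared as not exploitable (the file and image names below reuse the illustrative ones from the example above):

```shell
# Apply the VEX document during the scan; statements with status
# "not_affected" suppress the matching CVEs from the report
$ trivy image --vex demo.vex.json --show-suppressed my-image:latest
```

This lets your CI pipeline benefit from VEX today, even before registry-side support lands in Harbor.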



<p>💡The good news is that you can configure a CVE allowlist with the list of non-exploitable CVEs to ignore during vulnerability scanning:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="522" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-42-1024x522.png" alt="" class="wp-image-30434" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-42-1024x522.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-42-300x153.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-42-768x391.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-42-1536x782.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-42.png 1814w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>You can optionally uncheck the <strong>Never expires</strong> checkbox and use the calendar selector to set an expiry date for the allowlist.</p>



<h3 class="wp-block-heading">Sign your images</h3>



<p>It’s recommended to sign your images to ensure they haven’t been modified and originate from your pipeline (CI/CD).</p>



<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="278" height="282" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-38.png" alt="" class="wp-image-30412" style="width:128px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-38.png 278w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-38-70x70.png 70w" sizes="auto, (max-width: 278px) 100vw, 278px" /></figure>



<p>Signing your images is crucial for protecting them against compromised registries and unauthorised image replacements.</p>



<p><strong>Without a signature, there’s no guarantee the deployed image is the one you originally built!</strong></p>



<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="818" height="302" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-37.png" alt="" class="wp-image-30410" style="aspect-ratio:2.708559106290115;width:482px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-37.png 818w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-37-300x111.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-37-768x284.png 768w" sizes="auto, (max-width: 818px) 100vw, 818px" /></figure>



<p>You can sign your images with <a href="https://github.com/sigstore/cosign" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer"><u>Sigstore Cosign</u></a> or <a href="https://github.com/notaryproject/notation" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer"><u>Notation</u></a> tools:</p>



<pre class="wp-block-code"><code class="">$ export HARBOR_PROJECT=supply-chain<br>$ export IMAGE=xxxxxx.c1.de1.container-registry.ovh.net/$HARBOR_PROJECT/demo<br>$ export HASH=$(skopeo inspect docker://${IMAGE}:latest | jq -r .Digest | sed "s/^sha256://")<br><br># Sign with Cosign<br>## Generate a private and a public key<br>$ cosign generate-key-pair<br>## Sign the image with the OCI 1.1 Referrers API<br>$ cosign sign -y --key cosign.key $IMAGE@sha256:$HASH<br><br># Sign with Notation<br>## Generate an RSA key &amp; a self-signed X.509 test certificate<br>$ notation cert generate-test --default "test"<br><br>## Sign the image with the OCI 1.1 Referrers API<br>$ export NOTATION_EXPERIMENTAL=1 ; notation sign -d --allow-referrers-api ${IMAGE}@sha256:${HASH}</code></pre>



<p>OVHcloud MPR supports both Cosign and Notation, so you can use either tool to sign your images.</p>



<p>Your signature will appear beside your image as an accessory, along with a green checkmark ✅ in the corresponding column:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="227" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-26-1024x227.png" alt="" class="wp-image-30382" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-26-1024x227.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-26-300x67.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-26-768x170.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-26-1536x341.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-26-2048x455.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>⚠️ Keep in mind, MPR (Harbor) doesn’t support signatures generated by Cosign v3 (the signature will upload and appear as an accessory, but the mark will stay red instead of turning green). This bug should <a href="https://github.com/goharbor/harbor/issues/22401" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer"><u>be fixed in Harbor 2.15</u></a> 💪.</p>



<p>Signing your OCI artifacts and linking them to your images is recommended, and you can do this using Cosign:</p>



<pre class="wp-block-code"><code class="">$ cosign attest -y --predicate sbom.spdx.json --key cosign.key $IMAGE@sha256:$HASH</code></pre>



<p>They will be uploaded to the OVHcloud private registry and listed as accessories.</p>



<h4 class="wp-block-heading">Ensure only verified images are pushed to your registry’s projects</h4>



<p>To allow only verified/signed images to be deployed on a project, click the project your image is part of, navigate to the <em>‘<strong>Configuration</strong>’</em> tab, and tick the <strong>Cosign</strong> and/or <strong>Notation </strong>checkbox:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="191" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-39-1024x191.png" alt="" class="wp-image-30418" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-39-1024x191.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-39-300x56.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-39-768x143.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-39.png 1406w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>When checked, the registry will only allow verified images to be pulled from the project. Verified images are determined by <strong>Cosign</strong> or <strong>Notation</strong>, depending on the policy you have checked. Note that if you have both Cosign and Notation policies enforced, then images will need to be signed by both Cosign and Notation to be pulled.</p>
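On the consumer side, you can also verify signatures explicitly from your deployment pipeline before rolling anything out. A minimal sketch, reusing the keys and the <code>IMAGE</code>/<code>HASH</code> variables from the signing example above (and assuming a Notation trust policy has been configured for the test certificate):

```shell
# Verify the Cosign signature with the public key generated earlier
$ cosign verify --key cosign.pub $IMAGE@sha256:$HASH

# Verify the Notation signature against the trusted certificate
$ notation verify ${IMAGE}@sha256:${HASH}
```

Both commands exit with a non-zero status when verification fails, so they can act as a gate in CI/CD.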



<h3 class="wp-block-heading">Tag immutability</h3>



<p>By default, tags are mutable, which means you can push an image <code>demo</code> with the tag <code>1.0.0</code>, modify the code, and push again to the same tag.</p>



<p>This can be useful for fixing a bug, but in terms of security, a mutable tag does not guarantee that the image you built and pushed for version 1.0.0 is the same image that currently exists in the registry.</p>



<p>Moreover, on Harbor (so on OVHcloud MPR), due to limitations in the upstream OCI Distribution specification, the registry does not enforce a strict link between a tag and an image digest.</p>



<p>As a result, a tag can be reassigned to a different artifact. This has a side effect on the registry: the tag migrates across artifacts, and every artifact that has its tag taken away becomes tagless.</p>



<p>To prevent this situation, you can configure tag immutability rules. Tag immutability guarantees that an immutable tagged artifact cannot be deleted, and also cannot be altered in any way such as through re-pushing, re-tagging, or replication from another target registry.</p>
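On the consumer side, a complementary practice is to reference images by digest rather than by tag: a digest pins the exact artifact, whatever happens to its tags afterwards. A sketch of a Kubernetes container spec, reusing the illustrative registry URL from the signing example (the digest itself is a placeholder):

```yaml
containers:
  - name: demo
    # A digest reference can never silently point to a different artifact
    image: xxxxxx.c1.de1.container-registry.ovh.net/supply-chain/demo@sha256:<digest>
```

Combining digest pinning in your manifests with immutability rules in the registry covers both sides of the problem.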



<p>To do that, click your project, open the <strong>Policy</strong> tab and select <strong>TAG IMMUTABILITY</strong>:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="469" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-44-1024x469.png" alt="" class="wp-image-30438" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-44-1024x469.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-44-300x137.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-44-768x352.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-44-1536x704.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-44.png 2030w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>And then click the <strong>ADD RULE</strong> button.</p>



<p>Fill in the repositories and tags lists according to your needs.</p>



<p>Example:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="522" src="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-45-1024x522.png" alt="" class="wp-image-30439" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-45-1024x522.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-45-300x153.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-45-768x392.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-45-1536x783.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/01/image-45-2048x1044.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>⚠️ You can add a maximum of 15 immutability rules per project.</p>



<h3 class="wp-block-heading">To wrap things up</h3>



<p>Software supply chain security is super important these days, and everything is changing quickly: the concepts, standards, and tools. Leveraging useful tools like OVHcloud MPR, and knowing how to set them up, can boost your Software Supply Chain Security efforts.</p>



<p>To learn more about how to use and configure <a href="https://help.ovhcloud.com/csm/fr-documentation-public-cloud-containers-orchestration-managed-private-registry?id=kb_browse_cat&amp;kb_id=574a8325551974502d4c6e78b7421938&amp;kb_category=7939e6a464282d10476b3689cb0d0ed7&amp;spa=1" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">OVHcloud private registries</a>, don’t hesitate to follow our guides.</p>
<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fsecure-your-software-supply-chain-with-ovhcloud-managed-private-registry-mpr%2F&amp;action_name=Secure%20your%20Software%20Supply%20Chain%20with%20OVHcloud%20Managed%20Private%20Registry%20%28MPR%29&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Moving Beyond Ingress: Why should OVHcloud Managed Kubernetes Service (MKS) users start looking at the Gateway API?</title>
		<link>https://blog.ovhcloud.com/moving-beyond-ingress-why-should-ovhcloud-managed-kubernetes-service-mks-users-start-looking-at-the-gateway-api/</link>
		
		<dc:creator><![CDATA[Aurélie Vache&#160;and&#160;Antonin Anchisi]]></dc:creator>
		<pubDate>Mon, 15 Dec 2025 09:26:36 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[Tranches de Tech & co]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[OVHcloud Managed Kubernetes]]></category>
		<category><![CDATA[Public Cloud]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=30016</guid>

					<description><![CDATA[For years, the Kubernetes Ingress API, and the popular Ingress NGINX controller (ingress-nginx), have been the default way to expose applications running inside a Kubernetes cluster. But the ecosystem is changing: the Kubernetes SIG network has announced the retirement of Ingress NGINX in March 2026. After March 2026 the Ingress NGINX will no longer get [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fmoving-beyond-ingress-why-should-ovhcloud-managed-kubernetes-service-mks-users-start-looking-at-the-gateway-api%2F&amp;action_name=Moving%20Beyond%20Ingress%3A%20Why%20should%20OVHcloud%20Managed%20Kubernetes%20Service%20%28MKS%29%20users%20start%20looking%20at%20the%20Gateway%20API%3F&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image aligncenter size-large is-resized"><img loading="lazy" decoding="async" width="1024" height="680" src="https://blog.ovhcloud.com/wp-content/uploads/2025/12/Gribouillis-2025-12-02-13.47.59.631-1024x680.png" alt="" class="wp-image-30084" style="width:669px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/12/Gribouillis-2025-12-02-13.47.59.631-1024x680.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/12/Gribouillis-2025-12-02-13.47.59.631-300x199.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/12/Gribouillis-2025-12-02-13.47.59.631.png 1505w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>For years, the Kubernetes <strong>Ingress</strong> API, and the popular Ingress NGINX controller (ingress-nginx), have been the default way to expose applications running inside a Kubernetes cluster.</p>



<p>But the ecosystem is changing: the Kubernetes SIG network has announced the <a href="https://kubernetes.io/blog/2025/11/11/ingress-nginx-retirement/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">retirement of Ingress NGINX</a> in March 2026.</p>



<p>After <strong>March 2026</strong>, Ingress NGINX will no longer receive new features, releases, security patches or bug fixes.</p>



<p>Furthermore, the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Kubernetes project <strong>recommends using Gateway instead of Ingress</strong></a>.</p>



<p>The Ingress API itself has already been frozen: it is no longer being developed and will receive no further changes or updates. However, the Kubernetes project has no plans to remove Ingress from Kubernetes.</p>



<p>While OVHcloud Managed Kubernetes Service (MKS) does not yet provide a native <strong>GatewayClass</strong>, you can already benefit from Gateway API capabilities today by deploying your own controller 💪 .</p>



<p>Also, until Gateway API becomes fully integrated with OpenStack providers, there is an <strong>intermediate option</strong>: using a <strong>modern, actively maintained Ingress controller</strong> other than ingress-nginx.</p>



<h3 class="wp-block-heading">The limitations of the current Ingress controller model</h3>



<p>The traditional Kubernetes Ingress model was intentionally simple: define an <code>Ingress</code>, install an <code>Ingress Controller</code>, and let it configure a single proxy (usually Nginx) to route traffic.</p>



<p>This design works, but it comes with limitations:</p>



<p>&#8211; Single monolithic entry point: all HTTP routing for the entire cluster goes through <strong>one shared proxy</strong>, which adds complexity, configuration conflicts and scaling challenges.<br>&#8211; Protocol limitations: only <strong>HTTP and HTTPS</strong> are covered. Support for gRPC, HTTP/2, TCP, UDP or TLS passthrough is inconsistent and controller-specific.<br>&#8211; Heavy reliance on annotations: advanced features (timeouts, rewrites, header handling&#8230;) rely on custom, non-portable annotations.<br>&#8211; Fragmented third-party and cloud Load Balancer support: every <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/#additional-controllers" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Ingress controller</a> comes with its own specialized annotations.</p>



<p>Finally, as mentioned, the most used Ingress controller, Ingress NGINX, will be retired in March 2026.</p>



<h3 class="wp-block-heading">A Transitional Solution: Using a Modern Ingress Controller (Traefik, Contour, HAProxy…)</h3>



<p>Before moving to the Gateway API, as a transitional solution, OVHcloud MKS users can simply replace Ingress Nginx with a <strong>modern, actively maintained Ingress controller</strong>.</p>



<p>This allows you to:</p>



<p>&#8211; keep using your existing <code>Ingress</code> manifests<br>&#8211; keep the same architecture: Service type LoadBalancer → OVHcloud Public Cloud Load Balancer → Ingress Controller<br>&#8211; avoid relying on unsupported or deprecated components<br>&#8211; gain features (better gRPC support, built‑in dashboards, improved L7 behaviour&#8230;)</p>



<h4 class="wp-block-heading">Popular alternatives:</h4>



<p><a href="https://doc.traefik.io/traefik/providers/kubernetes-ingress/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer"><strong>Traefik</strong></a>:<br>&#8211; Very easy to deploy<br>&#8211; Excellent support for HTTP/2, gRPC, WebSockets<br>&#8211; Built‑in dashboard<br>&#8211; Supports both Ingress and Gateway API<br>&#8211; Actively maintained<br>&#8211; Seamless migration from NGINX Ingress Controller to Traefik with <a href="https://doc.traefik.io/traefik/reference/routing-configuration/kubernetes/ingress-nginx/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">NGINX annotation compatibility</a></p>



<p><strong><a href="https://projectcontour.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Contour</a> (Envoy)</strong>:<br>&#8211; Envoy-based Ingress Controller<br>&#8211; Excellent performance<br>&#8211; Good stepping‑stone toward Gateway API</p>



<p><a href="https://www.haproxy.com/documentation/kubernetes-ingress/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer"><strong>HAProxy Ingress</strong></a>:<br>&#8211; Extremely performant<br>&#8211; Enterprise-grade L7 routing<br>&#8211; Optional Gateway API support</p>



<p><strong><a href="https://docs.nginx.com/nginx-gateway-fabric/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">NGINX Gateway Fabric</a> (NGF)</strong>:<br>&#8211; The successor to Ingress NGINX<br>&#8211; Built directly around Gateway API<br>&#8211; Still maturing but a strong long‑term candidate</p>



<p>If you are interested, you can read a <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">more exhaustive list of Ingress controllers</a>.</p>



<h3 class="wp-block-heading">Installing an Alternative Ingress Controller on OVHcloud MKS</h3>



<p>We will show you how to install <strong>Traefik</strong>, as an alternative Ingress controller and use it to spawn a single OVHcloud Public Cloud Load Balancer (based on OpenStack Octavia).</p>



<p>Install Traefik:</p>



<pre class="wp-block-code"><code class="">helm repo add traefik https://traefik.github.io/charts<br>helm repo update<br><br>helm install traefik traefik/traefik --namespace traefik --create-namespace --set service.type=LoadBalancer</code></pre>



<p>This automatically triggers, through the OpenStack Cloud Controller Manager (CCM) used by OVHcloud:<br>&#8211; the creation of an OVHcloud Public Cloud Load Balancer<br>&#8211; the exposure of Traefik through a public IP</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="179" src="https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-11-1024x179.png" alt="" class="wp-image-30035" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-11-1024x179.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-11-300x52.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-11-768x134.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-11-1536x268.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-11-2048x358.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>After several seconds, the Load Balancer will be active.</p>



<p>Check that Traefik is running:</p>



<pre class="wp-block-code"><code class="">$ kubectl get all -n traefik<br>NAME                           READY   STATUS    RESTARTS   AGE<br>pod/traefik-6777c5db85-pddd6   1/1     Running   0          31s<br><br>NAME              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE<br>service/traefik   LoadBalancer   10.3.129.188   &lt;pending&gt;     80:30267/TCP,443:30417/TCP   31s<br><br>NAME                      READY   UP-TO-DATE   AVAILABLE   AGE<br>deployment.apps/traefik   1/1     1            1           31s<br><br>NAME                                 DESIRED   CURRENT   READY   AGE<br>replicaset.apps/traefik-6777c5db85   1         1         1       31s</code></pre>



<p>Then in order to use it, create an <code>ingress.yaml</code> file with the following content:</p>



<pre class="wp-block-code"><code class="">apiVersion: networking.k8s.io/v1<br>kind: Ingress<br>metadata:<br>  name: my-app-ingress<br>  namespace: default<br>spec:<br>  ingressClassName: traefik  # Specifies Traefik as the ingress controller (replaces the deprecated kubernetes.io/ingress.class annotation)<br>  rules:<br>    - host: my-app.local<br>      http:<br>        paths:<br>          - path: /<br>            pathType: Prefix<br>            backend:<br>              service:<br>                name: my-app-service<br>                port:<br>                  number: 80</code></pre>



<p>And apply it in your cluster:</p>



<pre class="wp-block-code"><code class="">kubectl apply -f ingress.yaml</code></pre>
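

<p>Since <code>my-app.local</code> is not a real DNS record, you can test the routing by sending the expected <code>Host</code> header directly to the Load Balancer IP. A minimal sketch, assuming the Traefik Service has already received its external IP and that <code>my-app-service</code> exists:</p>


<pre class="wp-block-code"><code class=""># Retrieve the external IP of the Traefik LoadBalancer Service<br>LB_IP=$(kubectl get svc traefik -n traefik -o jsonpath='{.status.loadBalancer.ingress[0].ip}')<br><br># Send a request with the Host header matched by the Ingress rule<br>curl -H "Host: my-app.local" http://$LB_IP/</code></pre>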



<p>Using this type of alternative provides a <strong>fully supported, modern Ingress Controller</strong> while you prepare a long‑term transition to the Gateway API.</p>



<h3 class="wp-block-heading">Gateway API: A modern, flexible networking model</h3>



<p>The <strong>Gateway API</strong> is the next-generation Kubernetes networking specification. It introduces clearer roles and more flexible architectures.</p>



<p>Gateway API splits responsibilities across:<br>&#8211; <strong>GatewayClass</strong>: defines the type of gateway and which controller manages it<br>&#8211; <strong>Gateway</strong>: the actual entry point (e.g., a Load Balancer)<br>&#8211; <strong>Routes</strong>: routing rules, protocol-specific (HTTPRoute, TLSRoute, GRPCRoute, TCPRoute…)</p>



<figure class="wp-block-image size-full is-resized"><img loading="lazy" decoding="async" width="800" height="700" src="https://blog.ovhcloud.com/wp-content/uploads/2025/12/image-1.png" alt="" class="wp-image-30065" style="width:558px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/12/image-1.png 800w, https://blog.ovhcloud.com/wp-content/uploads/2025/12/image-1-300x263.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/12/image-1-768x672.png 768w" sizes="auto, (max-width: 800px) 100vw, 800px" /></figure>



<p>Gateway API supports:<br>&#8211; HTTP(S)<br>&#8211; HTTP/2<br>&#8211; gRPC<br>&#8211; TCP<br>&#8211; TLS passthrough<br>…in a consistent and portable way.</p>



<p>Unlike Ingress, Gateway API is explicitly designed to allow providers like OVHcloud, AWS, GCP, Azure to:<br>&#8211; provision Load Balancers (LB)<br>&#8211; manage listeners<br>&#8211; expose multiple ports<br>&#8211; integrate with their LB features<br>This paves the way for native OVHcloud <strong>GatewayClass</strong> support.</p>



<h3 class="wp-block-heading">How does it work today on OVHcloud MKS?</h3>



<p>OVHcloud MKS relies on the OpenStack Cloud Controller Manager (CCM) to provision OVHcloud <strong>Public Cloud</strong> Load Balancers in response to a Service of type <code>LoadBalancer</code>.</p>



<p>Since MKS does not yet include a native <code>GatewayClass</code>, you can use Gateway API today as follows:</p>



<p>1. You deploy an existing Gateway Controller (Envoy Gateway, Traefik, Contour/Envoy…) and its GatewayClass.<br>2. The controller deploys a Data Plane proxy inside the cluster.<br>3. To expose that proxy, you still have to create a <code>Service</code> of type <strong>LoadBalancer</strong> (and your app of course).<br>4. The CCM provisions an OVHcloud Public Cloud Load Balancer and forwards traffic to your proxy.</p>



<p>Thanks to that, you will have a fully functional Gateway API. The workflow is very similar to the one used with the NGINX Ingress controller.</p>



<h3 class="wp-block-heading">Using the Gateway API on OVHcloud MKS today</h3>



<p>You can already use the Gateway API by deploying your preferred controller.</p>



<p>Here’s an example using<a href="https://gateway.envoyproxy.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer"> Envoy Gateway</a>, one of the most future-proof options.</p>



<p>Install Gateway API CRDs:</p>



<pre class="wp-block-code"><code class="">kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/latest/download/standard-install.yaml</code></pre>



<p>Deploy Envoy Gateway:</p>



<pre class="wp-block-code"><code class="">helm install eg oci://docker.io/envoyproxy/gateway-helm -n envoy-gateway-system --create-namespace</code></pre>



<p>You should have a result like this:</p>



<pre class="wp-block-code"><code class="">$ helm install eg oci://docker.io/envoyproxy/gateway-helm -n envoy-gateway-system --create-namespace<br><br>Pulled: docker.io/envoyproxy/gateway-helm:1.6.0<br>Digest: sha256:5c55e7844ae8cff3152ca00330234ef61b1f9fa3d466f50db2c63a279f1cd1df<br>NAME: eg<br>LAST DEPLOYED: Mon Dec  1 16:27:07 2025<br>NAMESPACE: envoy-gateway-system<br>STATUS: deployed<br>REVISION: 1<br>TEST SUITE: None<br>NOTES:<br>**************************************************************************<br>*** PLEASE BE PATIENT: Envoy Gateway may take a few minutes to install ***<br>**************************************************************************<br><br>Envoy Gateway is an open source project for managing Envoy Proxy as a standalone or Kubernetes-based application gateway.<br><br>Thank you for installing Envoy Gateway! 🎉<br><br>Your release is named: eg. 🎉<br><br>Your release is in namespace: envoy-gateway-system. 🎉<br><br>To learn more about the release, try:<br><br>  $ helm status eg -n envoy-gateway-system<br>  $ helm get all eg -n envoy-gateway-system<br><br>To have a quickstart of Envoy Gateway, please refer to https://gateway.envoyproxy.io/latest/tasks/quickstart.<br><br>To get more details, please visit https://gateway.envoyproxy.io and https://github.com/envoyproxy/gateway.</code></pre>



<p>Check that Envoy Gateway is running:</p>



<pre class="wp-block-code"><code class="">$ kubectl get po -n envoy-gateway-system<br>NAME                            READY   STATUS    RESTARTS   AGE<br>envoy-gateway-9cbbc577c-5h5qw   1/1     Running   0          16m</code></pre>



<p>As a quickstart, you can directly install the <a href="https://gateway-api.sigs.k8s.io/api-types/gatewayclass/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">GatewayClass</a>, <a href="https://gateway-api.sigs.k8s.io/api-types/gateway/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Gateway</a>, <a href="https://gateway-api.sigs.k8s.io/api-types/httproute/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">HTTPRoute</a> and an example app:</p>



<pre class="wp-block-code"><code class="">kubectl apply -f https://github.com/envoyproxy/gateway/releases/latest/download/quickstart.yaml -n default</code></pre>



<p>This command deploys a <code>GatewayClass</code>, a <code>Gateway</code>, an <code>HTTPRoute</code> and an example app, running as a Deployment and exposed through a Service:</p>



<pre class="wp-block-code"><code class="">gatewayclass.gateway.networking.k8s.io/eg created<br>gateway.gateway.networking.k8s.io/eg created<br>serviceaccount/backend created<br>service/backend created<br>deployment.apps/backend created<br>httproute.gateway.networking.k8s.io/backend created</code></pre>



<p>As you can see, a GatewayClass has been deployed:</p>



<pre class="wp-block-code"><code class="">$ kubectl get gatewayclass -o yaml | kubectl neat<br>apiVersion: v1<br>items:<br>- apiVersion: gateway.networking.k8s.io/v1<br>  kind: GatewayClass<br>  metadata:<br>    name: eg<br>  spec:<br>    controllerName: gateway.envoyproxy.io/gatewayclass-controller<br>kind: List<br>metadata:<br>  resourceVersion: ""</code></pre>



<p>Note that a GatewayClass is a cluster-wide resource so you don&#8217;t have to specify any namespace.</p>



<p>A Gateway has also been deployed:</p>



<pre class="wp-block-code"><code class="">$ kubectl get gateway -o yaml -n default | kubectl neat<br>apiVersion: v1<br>items:<br>- apiVersion: gateway.networking.k8s.io/v1<br>  kind: Gateway<br>  metadata:<br>    name: eg<br>    namespace: default<br>  spec:<br>    gatewayClassName: eg<br>    listeners:<br>    - allowedRoutes:<br>        namespaces:<br>          from: Same<br>      name: http<br>      port: 80<br>      protocol: HTTP<br>kind: List<br>metadata:<br>  resourceVersion: ""</code></pre>



<p>An HTTPRoute, too:</p>



<pre class="wp-block-code"><code class="">$ kubectl get httproute -o yaml -n default | kubectl neat<br>apiVersion: v1<br>items:<br>- apiVersion: gateway.networking.k8s.io/v1<br>  kind: HTTPRoute<br>  metadata:<br>    name: backend<br>    namespace: default<br>  spec:<br>    hostnames:<br>    - www.example.com<br>    parentRefs:<br>    - group: gateway.networking.k8s.io<br>      kind: Gateway<br>      name: eg<br>    rules:<br>    - backendRefs:<br>      - group: ""<br>        kind: Service<br>        name: backend<br>        port: 3000<br>        weight: 1<br>      matches:<br>      - path:<br>          type: PathPrefix<br>          value: /<br>kind: List<br>metadata:<br>  resourceVersion: ""</code></pre>



<p>To retrieve the external IP of the Load Balancer, query the Gateway and export its address in an environment variable:</p>



<pre class="wp-block-code"><code class="">$ kubectl get gateway eg<br>NAME   CLASS   ADDRESS        PROGRAMMED   AGE<br>eg     eg      xx.xxx.xx.xxx   True        18m<br><br>$ export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')<br><br>$ echo $GATEWAY_HOST<br>xx.xxx.xx.xxx</code></pre>



<p>And finally, a <code>backend</code> Service has been deployed along with its Deployment:</p>



<pre class="wp-block-code"><code class="">$ kubectl get pod,svc -l app=backend -n default<br>NAME                           READY   STATUS    RESTARTS   AGE<br>pod/backend-765694d47f-zr6hh   1/1     Running   0          21m<br><br>NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE<br>service/backend   ClusterIP   10.3.114.179   &lt;none&gt;        3000/TCP   21m</code></pre>
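

<p>You can now verify the whole chain, from the OVHcloud Load Balancer through Envoy down to the backend Pod. A quick check, assuming the quickstart resources above (the <code>HTTPRoute</code> only matches the <code>www.example.com</code> hostname, hence the explicit <code>Host</code> header):</p>


<pre class="wp-block-code"><code class=""># Send a request through the external Load Balancer IP retrieved earlier<br>curl -H "Host: www.example.com" http://$GATEWAY_HOST/</code></pre>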



<p>In order to create your own <code>Gateway</code> and <code>*Route</code> resources, don&#8217;t hesitate to take a look at the <a href="https://gateway-api.sigs.k8s.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Gateway API website</a>.</p>



<h3 class="wp-block-heading">Conclusion</h3>



<p>Two migration paths are currently available for OVHcloud MKS users:</p>



<ul class="wp-block-list">
<li>Short-term: switch to a modern Ingress Controller (Traefik, Contour, HAProxy, NGF&#8230;). It provides full support for current Ingress usage, without requiring API changes.</li>



<li>Long-term: adopt the Gateway API. Gateway API brings multi‑protocol support, clearer separation of roles, and is the strategic direction of Kubernetes networking.</li>
</ul>



<p>Which approach and which tool should you choose? Well, it’s up to you, depending on your use cases, your teams, your needs… 🙂</p>



<p>As we have seen in this blog post, OVHcloud MKS users can begin adopting these technologies today, safely and incrementally.</p>



<p>This ecosystem is evolving quickly, so stay tuned to find out about the coming release of a pre-installed official GatewayClass (based on OpenStack Octavia) 💪.</p>
<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fmoving-beyond-ingress-why-should-ovhcloud-managed-kubernetes-service-mks-users-start-looking-at-the-gateway-api%2F&amp;action_name=Moving%20Beyond%20Ingress%3A%20Why%20should%20OVHcloud%20Managed%20Kubernetes%20Service%20%28MKS%29%20users%20start%20looking%20at%20the%20Gateway%20API%3F&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Manage your secrets using OVHcloud Secret Manager with External Secrets Operator (ESO) on OVHcloud Managed Kubernetes Service (MKS)</title>
		<link>https://blog.ovhcloud.com/manage-your-secrets-through-ovhcloud-secret-manager-thanks-to-external-secrets-operator-eso-on-ovhcloud-managed-kubernetes-service-mks/</link>
		
		<dc:creator><![CDATA[Aurélie Vache]]></dc:creator>
		<pubDate>Tue, 25 Nov 2025 14:44:52 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[Tranches de Tech & co]]></category>
		<category><![CDATA[IAM]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[MKS]]></category>
		<category><![CDATA[Public Cloud]]></category>
		<category><![CDATA[Secret Manager]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=29374</guid>

					<description><![CDATA[Secrets resources in Kubernetes help us keep sensitive information like logins, passwords, tokens, credentials and certificates secure. But just a heads up: Secrets in Kubernetes are base64 encoded, not encrypted so anyone can read and decode them if they know how. The good news is that OVHcloud has just launched the Secret Manager Beta, which [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fmanage-your-secrets-through-ovhcloud-secret-manager-thanks-to-external-secrets-operator-eso-on-ovhcloud-managed-kubernetes-service-mks%2F&amp;action_name=Manage%20your%20secrets%20using%20OVHcloud%20Secret%20Manager%20with%20External%20Secrets%20Operator%20%28ESO%29%20on%20OVHcloud%20Managed%20Kubernetes%20Service%20%28MKS%29&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image aligncenter size-large is-resized"><img loading="lazy" decoding="async" width="1024" height="675" src="https://blog.ovhcloud.com/wp-content/uploads/2025/11/IMG_1547-1-1024x675.jpg" alt="" class="wp-image-30006" style="width:638px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/11/IMG_1547-1-1024x675.jpg 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/IMG_1547-1-300x198.jpg 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/IMG_1547-1-768x507.jpg 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/IMG_1547-1.jpg 1536w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Secrets resources in Kubernetes help us keep sensitive information like logins, passwords, tokens, credentials and certificates secure. But just a heads up: Secrets in Kubernetes are base64-encoded, not encrypted, so anyone who can read them can decode them.</p>



<p>The good news is that OVHcloud has just launched the<a href="https://www.ovhcloud.com/fr/identity-security-operations/secret-manager/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer"> Secret Manager</a> Beta, which you can use within your Kubernetes clusters via the External Secrets Operator (ESO) 🎉.</p>



<h2 class="wp-block-heading">External Secrets Operator</h2>



<p>The External Secrets Operator (ESO) extends Kubernetes with Custom Resource Definitions (CRDs) that define <strong>where</strong> secrets are and <strong>how</strong> to sync them.</p>



<p>The controller <strong>retrieves secrets from an external API</strong> and <strong>creates Kubernetes Secrets</strong>. If the secret changes in the external API, the controller updates the secret in the Kubernetes cluster.</p>



<p>Basically, the ESO can connect to an external Secret Manager like OVHcloud, Vault, AWS, or GCP using a (Cluster)SecretStore, and an ExternalSecret to figure out which Secret it needs to fetch. It then creates a Secret in the Kubernetes cluster with the fetched secret’s value.</p>



<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="1020" height="942" src="https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-10.png" alt="" class="wp-image-29378" style="width:435px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-10.png 1020w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-10-300x277.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-10-768x709.png 768w" sizes="auto, (max-width: 1020px) 100vw, 1020px" /></figure>



<p>Plus, it can sync secrets across all the namespaces in your Kubernetes cluster (I love this feature ❤️):</p>



<figure class="wp-block-image aligncenter size-large is-resized"><img loading="lazy" decoding="async" width="1024" height="577" src="https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-11-1024x577.png" alt="" class="wp-image-29380" style="width:502px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-11-1024x577.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-11-300x169.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-11-768x433.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-11.png 1282w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>You can use External Secrets with different <a href="https://external-secrets.io/latest/provider/aws-secrets-manager/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Providers</a>, including AWS Secrets Manager, HashiCorp Vault and Google Secret Manager. In this blog post, I’ll show you how to create a secret in the new OVHcloud Secret Manager and sync it into Kubernetes using the <a href="https://external-secrets.io/latest/provider/hashicorp-vault/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">HashiCorp Vault</a> provider.</p>



<p>For more details, read the <a href="https://external-secrets.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">ESO official documentation</a>.</p>



<h2 class="wp-block-heading">Let&#8217;s jump in!</h2>



<h3 class="wp-block-heading">Create an IAM local user</h3>



<p>To fetch secrets from Secret Manager, you’ll need an IAM user with the right permissions. You can either create a new one or use an existing one.</p>



<p>In the<a href="https://www.ovh.com/manager" data-wpel-link="exclude"> OVHcloud Control Panel</a> (UI), go to ‘Identity and Access Management’, then ‘Identities’.</p>



<figure class="wp-block-image size-full is-resized"><img loading="lazy" decoding="async" width="760" height="636" src="https://blog.ovhcloud.com/wp-content/uploads/2025/11/identity.png" alt="" class="wp-image-29967" style="width:232px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/11/identity.png 760w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/identity-300x251.png 300w" sizes="auto, (max-width: 760px) 100vw, 760px" /></figure>



<p>Click the ‘Add user’ button to create an IAM local user and complete the fields as shown below:</p>



<figure class="wp-block-image size-large is-resized"><img loading="lazy" decoding="async" width="1024" height="907" src="https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-9-2-1024x907.png" alt="" class="wp-image-29994" style="width:561px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-9-2-1024x907.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-9-2-300x266.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-9-2-768x681.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-9-2.png 1194w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<figure class="wp-block-image size-large is-resized"><img loading="lazy" decoding="async" width="1024" height="473" src="https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-10-1-1024x473.png" alt="" class="wp-image-29995" style="width:560px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-10-1-1024x473.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-10-1-300x139.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-10-1-768x355.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-10-1.png 1194w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Quick note, I’ve named the user ‘secretmanager-’ followed by the ID of the OKMS domain I want to use.</p>



<p>The user needs to be an ADMIN or, ideally, be granted only the following permissions:</p>



<pre class="wp-block-code"><code class="">okms:apikms:secret/create<br>okms:apikms:secret/version/getData<br>okms:apiovh:secret/get</code></pre>



<h3 class="wp-block-heading">Get the Personal Access Token (PAT)</h3>



<p>The ESO ClusterSecretStore needs the permission to fetch secrets from Secret Manager, so you’ll need a token (PAT).</p>



<p>You can access it via our API, which you’ll find here: <a href="https://eu.api.ovh.com/console/?section=%2Fme&amp;branch=v1#post-/me/identity/user/-user-/token" data-wpel-link="exclude">https://eu.api.ovh.com/console/?section=%2Fme&amp;branch=v1#post-/me/identity/user/-user-/token</a></p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="542" src="https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-1-3-1024x542.png" alt="" class="wp-image-29997" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-1-3-1024x542.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-1-3-300x159.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-1-3-768x406.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-1-3-1536x813.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-1-3.png 1546w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p><strong>Path parameters</strong></p>



<p>user: secretmanager-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx</p>



<p><strong>Request body:</strong></p>



<pre class="wp-block-code"><code class="">{<br>  "description": "PAT secretmanager-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx",<br>  "name": "pat-secretmanager-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx"<br>}</code></pre>



<p>You should obtain a response like this:</p>



<pre class="wp-block-code"><code class="">{<br>  "creation": "2025-11-07T14:02:56.679157188Z",<br>  "description": "PAT secretmanager-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx",<br>  "expiresAt": null,<br>  "lastUsed": null,<br>  "name": "pat-secretmanager-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx",<br>  "token": "eyJhbGciOiJ...punpVAg"<br>}</code></pre>



<p>Save the token value, because you’ll need it in a bit.</p>



<h3 class="wp-block-heading">Create a secret in the Secret Manager</h3>



<p>Here’s how to create a secret with OVHcloud Managed Private Registry (MPR) credentials for use in your Kubernetes cluster(s).</p>



<p>In the<a href="https://www.ovh.com/manager" data-wpel-link="exclude"> OVHcloud Control Panel</a> (UI), go to ‘Secret Manager’, then create a secret ‘prod/va1/dockerconfigjson’ in the Europe region (France – Paris) eu-west-par:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="309" src="https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-5-1-1024x309.png" alt="" class="wp-image-29973" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-5-1-1024x309.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-5-1-300x91.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-5-1-768x232.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-5-1-1536x464.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-5-1-2048x618.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>You’ll need to activate the region if you’re selecting it for the first time:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="569" src="https://blog.ovhcloud.com/wp-content/uploads/2025/11/Capture-decran-2025-11-07-a-14.03.20-1024x569.png" alt="" class="wp-image-29911" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/11/Capture-decran-2025-11-07-a-14.03.20-1024x569.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/Capture-decran-2025-11-07-a-14.03.20-300x167.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/Capture-decran-2025-11-07-a-14.03.20-768x426.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/Capture-decran-2025-11-07-a-14.03.20-1536x853.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/Capture-decran-2025-11-07-a-14.03.20-2048x1137.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Select an OKMS domain:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="260" src="https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-6-3-1024x260.png" alt="" class="wp-image-29996" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-6-3-1024x260.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-6-3-300x76.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-6-3-768x195.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-6-3.png 1384w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Enter the path and value of your secret. For example:</p>



<figure class="wp-block-image size-large is-resized"><img loading="lazy" decoding="async" width="1024" height="708" src="https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-7-1-1024x708.png" alt="" class="wp-image-29975" style="width:558px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-7-1-1024x708.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-7-1-300x208.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-7-1-768x531.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-7-1.png 1402w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Your secret is all set!</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="417" src="https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-4-2-1024x417.png" alt="" class="wp-image-29990" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-4-2-1024x417.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-4-2-300x122.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-4-2-768x313.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-4-2-1536x625.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/11/image-4-2-2048x834.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>
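

<p>Optionally, you can check that the PAT is able to read this secret before involving Kubernetes at all. Since the ClusterSecretStore configured later uses the HashiCorp Vault provider, the Secret Manager endpoint speaks the Vault KV v2 HTTP API, so a plain read request should work. A hedged sketch (the region, <code>&lt;okms_id&gt;</code> and <code>&lt;token&gt;</code> below are placeholders to replace with your own values):</p>


<pre class="wp-block-code"><code class=""># &lt;token&gt; is the PAT obtained earlier, &lt;okms_id&gt; is your OKMS domain ID<br>curl -H "X-Vault-Token: &lt;token&gt;" \<br>  "https://eu-west-par.okms.ovh.net/api/&lt;okms_id&gt;/v1/secret/data/prod/va1/dockerconfigjson"</code></pre>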



<h3 class="wp-block-heading">Install External Secrets Operators on your cluster</h3>



<p>Add the External Secrets Helm chart repository and update it:</p>



<pre class="wp-block-code"><code class="">helm repo add external-secrets https://charts.external-secrets.io
helm repo update</code></pre>



<p>Install from the chart repository:</p>



<pre class="wp-block-code"><code class="">helm install external-secrets \<br>   external-secrets/external-secrets \<br>    -n external-secrets \<br>    --create-namespace \<br>    --set installCRDs=true</code></pre>



<p>Your result should look something like this:</p>



<pre class="wp-block-code"><code class="">$ helm install external-secrets \<br>   external-secrets/external-secrets \<br>    -n external-secrets \<br>    --create-namespace \<br>    --set installCRDs=true<br><br>NAME: external-secrets<br>LAST DEPLOYED: Mon Nov 24 17:08:58 2025<br>NAMESPACE: external-secrets<br>STATUS: deployed<br>REVISION: 1<br>TEST SUITE: None<br>NOTES:<br>external-secrets has been deployed successfully in namespace external-secrets!<br><br>In order to begin using ExternalSecrets, you will need to set up a SecretStore<br>or ClusterSecretStore resource (for example, by creating a 'vault' SecretStore).<br><br>More information on the different types of SecretStores and how to configure them<br>can be found in our Github: https://github.com/external-secrets/external-secrets</code></pre>



<p>This command will install the External Secrets Operator in your cluster.</p>



<p>Check that ESO is running:</p>



<pre class="wp-block-code"><code class="">$ kubectl get all -n external-secrets<br>NAME                                                    READY   STATUS    RESTARTS   AGE<br>pod/external-secrets-6b9f8ff5d4-jwd6g                   1/1     Running   0          25m<br>pod/external-secrets-cert-controller-7bf8fd894c-d24xb   1/1     Running   0          25m<br>pod/external-secrets-webhook-df488ddff-2xv4t            1/1     Running   0          25m<br><br>NAME                               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE<br>service/external-secrets-webhook   ClusterIP   10.3.106.32   &lt;none&gt;        443/TCP   25m<br><br>NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE<br>deployment.apps/external-secrets                   1/1     1            1           25m<br>deployment.apps/external-secrets-cert-controller   1/1     1            1           25m<br>deployment.apps/external-secrets-webhook           1/1     1            1           25m<br><br>NAME                                                          DESIRED   CURRENT   READY   AGE<br>replicaset.apps/external-secrets-6b9f8ff5d4                   1         1         1       25m<br>replicaset.apps/external-secrets-cert-controller-7bf8fd894c   1         1         1       25m<br>replicaset.apps/external-secrets-webhook-df488ddff            1         1         1       25m</code></pre>



<h3 class="wp-block-heading">Create a Secret containing the PAT</h3>



<p>Encode the PAT in base64:</p>



<pre class="wp-block-code"><code class="">$ echo -n "&lt;token&gt;" | base64<br><br>ZXlKaG...wVkFn</code></pre>



<p>Create a secret with it inside a <strong>secret.yaml</strong> file:</p>



<pre class="wp-block-code"><code class="">apiVersion: v1<br>kind: Secret<br>metadata:<br>  name: ovhcloud-vault-token<br>  namespace: external-secrets<br>data:<br>  token: ZXlKaG...wVkFn</code></pre>



<p>Apply the resource in your cluster:</p>



<pre class="wp-block-code"><code class="">kubectl apply -f secret.yaml</code></pre>



<p>Check that the secret has been created:</p>



<pre class="wp-block-code"><code class="">$ kubectl get secret ovhcloud-vault-token -n external-secrets<br>NAME                   TYPE     DATA   AGE<br>ovhcloud-vault-token   Opaque   1      5m</code></pre>
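

<p>Note that the manual base64 step can also be skipped: <code>kubectl create secret generic</code> encodes the value for you. An equivalent one-liner (replace <code>&lt;token&gt;</code> with your PAT):</p>


<pre class="wp-block-code"><code class="">kubectl create secret generic ovhcloud-vault-token -n external-secrets --from-literal=token='&lt;token&gt;'</code></pre>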



<h3 class="wp-block-heading">Deploy a ClusterSecretStore to connect ESO to Secret Manager</h3>



<p>Set up a ClusterSecretStore to manage synchronisation with Secret Manager.<br>It will use the HashiCorp Vault provider with token auth, and the OKMS endpoint as the backend.</p>



<p>Create a <strong>clustersecretstore.yaml</strong> file with the content below:</p>



<pre class="wp-block-code"><code class="">apiVersion: external-secrets.io/v1<br>kind: ClusterSecretStore<br>metadata:<br>  name: vault-secret-store<br>spec:<br>  provider:<br>      vault:<br>        server: "https://eu-west-par.okms.ovh.net/api/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" # OKMS endpoint, fill with the correct region and your okms_id<br>        path: "secret"<br>        version: "v2"<br>        auth:<br>            tokenSecretRef:<br>              name: ovhcloud-vault-token # The k8s secret that contain your PAT<br>              key: token</code></pre>



<p>Keep in mind, in our example, we’ve selected the “eu-west-par” region. You can enter a different server URL, depending on your desired region.</p>



<p>Apply it:</p>



<pre class="wp-block-code"><code class="">kubectl apply -f clustersecretstore.yaml</code></pre>



<p>Check:</p>



<pre class="wp-block-code"><code class="">$ kubectl get clustersecretstore.external-secrets.io/vault-secret-store<br>NAME                 AGE   STATUS   CAPABILITIES   READY<br>vault-secret-store   2m   Valid    ReadWrite      True</code></pre>



<h3 class="wp-block-heading">Create an ExternalSecret</h3>



<p>Create an <strong>externalsecret.yaml</strong> file with the content below:</p>



<pre class="wp-block-code"><code class="">apiVersion: external-secrets.io/v1<br>kind: ExternalSecret<br>metadata:<br>  name: docker-config-secret<br>  namespace: external-secrets<br>spec:<br>  refreshInterval: 30m<br>  secretStoreRef:<br>    name: vault-secret-store<br>    kind: ClusterSecretStore<br>  target:<br>    template:<br>      type: kubernetes.io/dockerconfigjson<br>      data:<br>        .dockerconfigjson: "{{ .mysecret | toString }}"<br>    name: ovhregistrycred<br>    creationPolicy: Owner<br>  data:<br>  - secretKey: mysecret<br>    remoteRef:<br>      key: prod/va1/dockerconfigjson</code></pre>
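<p>The remote key <code>prod/va1/dockerconfigjson</code> is expected to hold a complete Docker config JSON document. As an illustration of that payload format (the registry URL and credentials below are placeholders), such a document can be assembled like this:</p>

```shell
# Placeholder registry and credentials, for illustration only
REGISTRY="registry.example.com"
USER="myuser"
PASS="mypassword"

# Docker expects "user:password" base64-encoded in the auth field
AUTH=$(printf '%s:%s' "$USER" "$PASS" | base64)

# The JSON document to store as the secret value in Secret Manager
printf '{"auths":{"%s":{"auth":"%s"}}}\n' "$REGISTRY" "$AUTH"
```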



<p>Apply it:</p>



<pre class="wp-block-code"><code class="">$ kubectl apply -f externalsecret.yaml<br>externalsecret.external-secrets.io/docker-config-secret created</code></pre>



<p>Check:</p>



<pre class="wp-block-code"><code class="">$ kubectl get externalsecret.external-secrets.io/docker-config-secret -n external-secrets<br>NAME                   STORETYPE            STORE                REFRESH INTERVAL   STATUS         READY<br>docker-config-secret   ClusterSecretStore   vault-secret-store   30m0s              SecretSynced   True</code></pre>



<p>Once the ExternalSecret has synced, ESO creates the corresponding Kubernetes Secret object:</p>



<pre class="wp-block-code"><code class="">$ kubectl get secret -n external-secrets<br>NAME                                     TYPE                             DATA   AGE<br>...<br>ovhregistrycred                          kubernetes.io/dockerconfigjson   1      17d<br>...</code></pre>



<p>As you can see, the Secret is ready, and you can now use it as an imagePullSecret in your Pods!</p>



<h3 class="wp-block-heading">Conclusion</h3>



<p>In this blog, we’ve explained how to create secrets in the new OVHcloud Secret Manager and integrate them directly in your Kubernetes clusters using the ESO Vault provider.</p>



<p>And here’s some great news: our teams are working on an OVHcloud External Secret Operator provider, set to go live in the coming months 🎉.</p>



<p>Stay tuned and share your thoughts!</p>
<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fmanage-your-secrets-through-ovhcloud-secret-manager-thanks-to-external-secrets-operator-eso-on-ovhcloud-managed-kubernetes-service-mks%2F&amp;action_name=Manage%20your%20secrets%20using%20OVHcloud%20Secret%20Manager%20with%20External%20Secrets%20Operator%20%28ESO%29%20on%20OVHcloud%20Managed%20Kubernetes%20Service%20%28MKS%29&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Create encrypted Persistent Volumes on OVHcloud Managed Kubernetes clusters with LUKS</title>
		<link>https://blog.ovhcloud.com/create-encrypted-persistent-volumes-on-ovhcloud-managed-kubernetes-clusters-with-luks/</link>
		
		<dc:creator><![CDATA[Aurélie Vache]]></dc:creator>
		<pubDate>Tue, 19 Aug 2025 11:35:41 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[Tranches de Tech & co]]></category>
		<category><![CDATA[Block Storage]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[MKS]]></category>
		<category><![CDATA[Public Cloud]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=29532</guid>

					<description><![CDATA[Since this summer, it&#8217;s possible to create encrypted OVHcloud Block Storage with OMK (OVHcloud managed key) in RBX, SBG, Paris &#38; BHS regions. More regions will come in the coming months 💪. And the good news is that you can use encrypted Block Storage using Persistent Volumes in your OVHcloud Managed Kubernetes Service (MKS) clusters [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fcreate-encrypted-persistent-volumes-on-ovhcloud-managed-kubernetes-clusters-with-luks%2F&amp;action_name=Create%20encrypted%20Persistent%20Volumes%20on%20OVHcloud%20Managed%20Kubernetes%20clusters%20with%20LUKS&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image aligncenter size-large is-resized"><img loading="lazy" decoding="async" width="1024" height="681" src="https://blog.ovhcloud.com/wp-content/uploads/2025/08/Gribouillis-2025-08-19-11.53.11.513-1-1024x681.png" alt="" class="wp-image-29585" style="width:495px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/08/Gribouillis-2025-08-19-11.53.11.513-1-1024x681.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/08/Gribouillis-2025-08-19-11.53.11.513-1-300x200.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/08/Gribouillis-2025-08-19-11.53.11.513-1-768x511.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/08/Gribouillis-2025-08-19-11.53.11.513-1.png 1533w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Since this summer, it&#8217;s possible to create <a href="https://github.com/ovh/public-cloud-roadmap/issues/307" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">encrypted OVHcloud Block Storage with OMK (OVHcloud managed key)</a> in RBX, SBG, Paris &amp; BHS regions. More regions will come in the coming months 💪.</p>



<p>And the good news is that you can use encrypted <strong>Block Storage</strong> using <code>Persistent Volumes</code> in your OVHcloud <strong>Managed Kubernetes Service (MKS)</strong> clusters 🎉.</p>



<p>In this post, we’ll show you how to encrypt persistent volumes on an OVHcloud Managed Kubernetes (MKS) cluster using a&nbsp;<code>csi-cinder-high-speed-gen2-luks</code>&nbsp;<code>Storage Class</code>. Leveraging LUKS-based encryption at the storage layer, you’ll learn how to protect your data at rest without sacrificing the performance of NVMe-backed volumes. </p>



<p>We’ll guide you step by step: defining the <code>Storage Class</code>, creating a <code>Persistent Volume Claim</code> (PVC), and deploying a <code>Pod</code> that mounts the encrypted volume.  </p>



<p>This practical walkthrough is designed for developers and platform engineers looking to secure their Kubernetes workloads on OVHcloud in a straightforward way.</p>



<h2 class="wp-block-heading">How to</h2>



<p>You will create a <code>Persistent Volume Claim</code> (PVC), linked to a <code>Storage Class</code>, which will automatically create a <code>Persistent Volume</code> (PV) and, in turn, an associated encrypted Public Cloud <strong>Block Storage</strong> volume.<br>Then you will create a <code>Pod</code> attached to the <code>PVC</code>.</p>



<figure class="wp-block-image aligncenter size-large is-resized"><img loading="lazy" decoding="async" width="1024" height="970" src="https://blog.ovhcloud.com/wp-content/uploads/2025/08/image-1024x970.png" alt="" class="wp-image-29539" style="width:560px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/08/image-1024x970.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/08/image-300x284.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/08/image-768x728.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/08/image.png 1144w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<h3 class="wp-block-heading">Let’s create an encrypted Persistent Volume in our OVHcloud MKS cluster</h3>



<p>Prerequisite: Have an OVHcloud MKS cluster.</p>



<p>First, create a <code>csi-cinder-high-speed-gen2-luks.yaml</code> file with the following content:</p>



<p>💡 Note that if you deploy it on an MKS 1AZ cluster (instead of a 3AZ MKS cluster), you should set the <code>volumeBindingMode</code> to <code>Immediate</code> instead.</p>



<pre class="wp-block-code"><code class="">apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cinder-high-speed-gen2-luks
allowVolumeExpansion: true
parameters:
  fsType: ext4
  type: high-speed-gen2-luks
provisioner: cinder.csi.openstack.org
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer </code></pre>



<p>This StorageClass uses the same configuration as the existing <code>csi-cinder-high-speed-gen2</code> one, but with the <code>high-speed-gen2-luks</code> type.</p>



<p>The result is SSD disks with NVMe interfaces, encrypted with LUKS (Linux Unified Key Setup), a standard on-disk format for disk encryption.</p>



<p>Apply the manifest file:</p>



<pre class="wp-block-code"><code class="">kubectl apply -f csi-cinder-high-speed-gen2-luks.yaml</code></pre>



<p>⚠️ You can&#8217;t modify the <code>volumeBindingMode</code> value of an existing <code>Storage Class</code>; you have to delete it and create a new one.</p>



<p>List the <code>Storage Class</code>es in the cluster:</p>



<pre class="wp-block-code"><code class="">$ kubectl get sc
NAME                              PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
csi-cinder-high-speed (default)   cinder.csi.openstack.org   Delete          WaitForFirstConsumer   true                   33d
csi-cinder-high-speed-gen-2       cinder.csi.openstack.org   Delete          WaitForFirstConsumer   true                   33d
csi-cinder-high-speed-gen2-luks   cinder.csi.openstack.org   Delete          WaitForFirstConsumer   true                   4s</code></pre>



<p>Create a <code>pvc-luks.yaml</code> file with the following content:</p>



<pre class="wp-block-code"><code class="">apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-luks
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-cinder-high-speed-gen2-luks</code></pre>



<p>Create a new namespace and apply the manifest file into it:</p>



<pre class="wp-block-code"><code class="">kubectl create ns test-pvc-luks
kubectl apply -f pvc-luks.yaml -n test-pvc-luks</code></pre>



<p>Check the status of our newly created <code>PVC</code>:</p>



<pre class="wp-block-code"><code class="">$ kubectl get pvc -n test-pvc-luks<br>NAME       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS                      VOLUMEATTRIBUTESCLASS   AGE<br>pvc-luks   Pending                                      csi-cinder-high-speed-gen2-luks   &lt;unset&gt;                 3s<br><br><br>$ kubectl describe pvc pvc-luks -n test-pvc-luks<br>Name:          pvc-luks<br>Namespace:     test-pvc-luks<br>StorageClass:  csi-cinder-high-speed-gen2-luks<br>Status:        Pending<br>Volume:<br>Labels:        &lt;none&gt;<br>Annotations:   &lt;none&gt;<br>Finalizers:    [kubernetes.io/pvc-protection]<br>Capacity:<br>Access Modes:<br>VolumeMode:    Filesystem<br>Used By:       &lt;none&gt;<br>Events:<br>  Type    Reason                Age                From                         Message<br>  ----    ------                ----               ----                         -------<br>  Normal  WaitForFirstConsumer  10s (x2 over 10s)  persistentvolume-controller  waiting for first consumer to be created before binding</code></pre>



<p>As you can see, your <code>PVC</code> has been created with the LUKS <code>Storage Class</code>, and is <em><strong>Pending</strong></em> to be <strong><em>Bound</em></strong> until a <code>Pod</code> consuming the volume is created (because of the <code>WaitForFirstConsumer</code> value).</p>



<p>Create a <code>pod.yaml</code> file with the following content:</p>



<pre class="wp-block-code"><code class="">apiVersion: v1
kind: Pod
metadata:
  name: pod-with-encrypted-volume
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: encrypted-volume
  volumes:
  - name: encrypted-volume
    persistentVolumeClaim:
      claimName: pvc-luks</code></pre>



<p>Apply the manifest file in the same <code>namespace</code>:</p>



<pre class="wp-block-code"><code class="">kubectl apply -f pod.yaml -n test-pvc-luks</code></pre>



<p>The <code>PVC</code> should now be <strong><em>Bound</em></strong> and a new <code>PV</code> should be created:</p>



<pre class="wp-block-code"><code class="">$ kubectl get pvc -n test-pvc-luks
NAME       STATUS   VOLUME                                                                     CAPACITY   ACCESS MODES   STORAGECLASS                      VOLUMEATTRIBUTESCLASS   AGE
pvc-luks   Bound    ovh-managed-kubernetes-siti343p-pvc-3a3b1d2e-ebdf-41a2-8f8f-4ee6984b6149   10Gi       RWO            csi-cinder-high-speed-gen2-luks   &lt;unset&gt;                 3m27s

$ kubectl get pv -n test-pvc-luks
NAME                                                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS                      VOLUMEATTRIBUTESCLASS   REASON   AGE
ovh-managed-kubernetes-siti343p-pvc-3a3b1d2e-ebdf-41a2-8f8f-4ee6984b6149   10Gi       RWO            Delete           Bound    test-pvc-luks/pvc-luks   csi-cinder-high-speed-gen2-luks   &lt;unset&gt;                          32s</code></pre>



<p>At first, the <code>Pod</code> should be in the <code><strong><em>ContainerCreating</em></strong></code> state (waiting for the volume to be created and attached); after a few seconds it will be <em><strong>Running</strong></em>:</p>



<pre class="wp-block-code"><code class="">$ kubectl get pod pod-with-encrypted-volume -n test-pvc-luks
NAME                        READY   STATUS              RESTARTS   AGE
pod-with-encrypted-volume   0/1     ContainerCreating   0          44s

# Wait a little...

$ kubectl get pod pod-with-encrypted-volume -n test-pvc-luks
NAME                        READY   STATUS    RESTARTS   AGE
pod-with-encrypted-volume   1/1     Running   0          2m10s</code></pre>



<p>The <code>Pod</code> is now created with an attached volume:</p>



<pre class="wp-block-code"><code class="">$ kubectl describe pod pod-with-encrypted-volume -n test-pvc-luks<br>Name:             pod-with-encrypted-volume<br>Namespace:        test-pvc-luks<br>Priority:         0<br>Service Account:  default<br>Node:             my-pool-zone-c-h5xjf-7n7kt/192.168.142.174<br>Start Time:       Tue, 19 Aug 2025 10:10:41 +0200<br>Labels:           &lt;none&gt;<br>Annotations:      &lt;none&gt;<br>Status:           Running<br>IP:               10.240.0.203<br>IPs:<br>  IP:  10.240.0.203<br>Containers:<br>  nginx:<br>    Container ID:   containerd://c38c0a0e19970503ad1bfaa0c74b5cc320cb9df08456c7613b9a9a8c908b9190<br>    Image:          nginx<br>    Image ID:       docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57<br>    Port:           &lt;none&gt;<br>    Host Port:      &lt;none&gt;<br>    State:          Running<br>      Started:      Tue, 19 Aug 2025 10:11:42 +0200<br>    Ready:          True<br>    Restart Count:  0<br>    Environment:    &lt;none&gt;<br>    Mounts:<br>      /usr/share/nginx/html from encrypted-volume (rw)<br>      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vbcnk (ro)<br>Conditions:<br>  Type                        Status<br>  PodReadyToStartContainers   True<br>  Initialized                 True<br>  Ready                       True<br>  ContainersReady             True<br>  PodScheduled                True<br>Volumes:<br>  encrypted-volume:<br>    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)<br>    ClaimName:  pvc-luks<br>    ReadOnly:   false<br>  kube-api-access-vbcnk:<br>    Type:                    Projected (a volume that contains injected data from multiple sources)<br>    TokenExpirationSeconds:  3607<br>    ConfigMapName:           kube-root-ca.crt<br>    ConfigMapOptional:       &lt;nil&gt;<br>    DownwardAPI:             true<br>QoS Class:                   BestEffort<br>Node-Selectors:              &lt;none&gt;<br>Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s<br>                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s<br>Events:<br>  Type     Reason                  Age                    From                     Message<br>  ----     ------                  ----                   ----                     -------<br>  Normal   Scheduled               3m48s                  default-scheduler        Successfully assigned test-pvc-luks/pod-with-encrypted-volume to my-pool-zone-c-xxxx-xxxx<br>  ...<br>  Normal   SuccessfulAttachVolume  3m8s                   attachdetach-controller  AttachVolume.Attach succeeded for volume "ovh-managed-kubernetes-siti343p-pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"<br>  Normal   Pulling                 2m53s                  kubelet                  Pulling image "nginx"<br>  Normal   Pulled                  2m48s                  kubelet                  Successfully pulled image "nginx" in 5.072s (5.072s including waiting). Image size: 72324501 bytes.<br>  Normal   Created                 2m48s                  kubelet                  Created container: nginx<br>  Normal   Started                 2m48s                  kubelet                  Started container nginx</code></pre>



<p>Logging in to the OVHcloud Control Panel, you can see that the encrypted volume has been successfully created:</p>



<figure class="wp-block-image aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="310" src="https://blog.ovhcloud.com/wp-content/uploads/2025/08/image-1-1024x310.png" alt="" class="wp-image-29581" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/08/image-1-1024x310.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/08/image-1-300x91.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/08/image-1-768x233.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/08/image-1-1536x465.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/08/image-1.png 2020w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Finally, you can use your volume.</p>



<p>Execute a shell in the Nginx <code>Pod</code> and create an <code>index.html</code> file into it:</p>



<pre class="wp-block-code"><code class="">$ kubectl exec -it pod-with-encrypted-volume -n test-pvc-luks -- /bin/bash

root@pod-with-encrypted-volume:/# echo "Hello from OVHcloud encrypted Block Storage!" &gt; /usr/share/nginx/html/index.html</code></pre>



<p>And curl the webserver: </p>



<pre class="wp-block-code"><code class="">root@pod-with-encrypted-volume:/# apt update
root@pod-with-encrypted-volume:/# apt install curl
root@pod-with-encrypted-volume:/# curl http://localhost/
Hello from OVHcloud encrypted Block Storage!</code></pre>



<p>🎉</p>



<h2 class="wp-block-heading">What&#8217;s next?</h2>



<p>In this blog post we saw a basic (but concrete) usage of encrypted <code>Persistent Volumes</code> on OVHcloud Kubernetes clusters, a feature that has just been released. Don&#8217;t hesitate to consider it for your sensitive data.<br><br>In the coming months, encrypted <strong>Block Storage</strong> will be available worldwide. Follow the <a href="https://github.com/ovh/public-cloud-roadmap/issues/307" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Encrypted Block Volumes</a> issue on GitHub to stay informed.<br><br>And don&#8217;t hesitate to take a look at our <a href="https://github.com/orgs/ovh/projects/16" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Cloud Roadmap &amp; Changelog</a> to see the state of all the coming features in OVHcloud Public Cloud products.</p>
<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fcreate-encrypted-persistent-volumes-on-ovhcloud-managed-kubernetes-clusters-with-luks%2F&amp;action_name=Create%20encrypted%20Persistent%20Volumes%20on%20OVHcloud%20Managed%20Kubernetes%20clusters%20with%20LUKS&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Using OVHcloud S3-compatible Object Storage as Terraform Backend to store your Terraform/OpenTofu states</title>
		<link>https://blog.ovhcloud.com/using-ovhcloud-s3-compatible-object-storage-as-terraform-backend-to-store-your-terraform-opentofu-states/</link>
		
		<dc:creator><![CDATA[Aurélie Vache]]></dc:creator>
		<pubDate>Mon, 07 Jul 2025 06:27:02 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[Tranches de Tech & co]]></category>
		<category><![CDATA[IaC]]></category>
		<category><![CDATA[Object Storage]]></category>
		<category><![CDATA[Public Cloud]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=29299</guid>

					<description><![CDATA[When working on Infrastructure as Code projects, with Terraform or OpenTofu, Terraform States files are created and modified locally in a terraform.tfstate file. It&#8217;s a common usage and practice but not convenient when working as a team. Do you know that you can configure Terraform to store data remotely on OVHcloud S3-compatible Object Storage? OVHcloud [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fusing-ovhcloud-s3-compatible-object-storage-as-terraform-backend-to-store-your-terraform-opentofu-states%2F&amp;action_name=Using%20OVHcloud%20S3-compatible%20Object%20Storage%20as%20Terraform%20Backend%20to%20store%20your%20Terraform%2FOpenTofu%20states&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="1023" height="1022" src="https://blog.ovhcloud.com/wp-content/uploads/2025/07/ovh-object-storage-remote-backend-terraform-1.png" alt="" class="wp-image-29352" style="width:586px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/07/ovh-object-storage-remote-backend-terraform-1.png 1023w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/ovh-object-storage-remote-backend-terraform-1-300x300.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/ovh-object-storage-remote-backend-terraform-1-150x150.png 150w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/ovh-object-storage-remote-backend-terraform-1-768x767.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/ovh-object-storage-remote-backend-terraform-1-70x70.png 70w" sizes="auto, (max-width: 1023px) 100vw, 1023px" /></figure>



<p>When working on Infrastructure as Code projects with Terraform or OpenTofu, Terraform state files are created and modified locally, in a <code>terraform.tfstate</code> file. It&#8217;s a common practice, but it is not convenient when working as a team.</p>



<p>Did you know that you can configure Terraform to store its state remotely, on OVHcloud S3-compatible Object Storage?</p>



<h3 class="wp-block-heading">OVHcloud Terraform/OpenTofu provider</h3>



<p>To easily provision your infrastructures, OVHcloud provides a&nbsp;<a href="https://registry.terraform.io/providers/ovh/ovh/latest" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">Terraform provider</a>&nbsp;which is available in the <a href="https://registry.terraform.io/providers/ovh/ovh/latest/docs" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">official Terraform registry</a>.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="346" src="https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-1-1024x346.png" alt="" class="wp-image-29302" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-1-1024x346.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-1-300x102.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-1-768x260.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-1-1536x520.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-1-2048x693.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>The provider is synchronized in the <a href="https://search.opentofu.org/provider/opentofu/ovh/latest" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OpenTofu registry</a> also:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="370" src="https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-2-1024x370.png" alt="" class="wp-image-29322" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-2-1024x370.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-2-300x108.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-2-768x277.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-2-1536x555.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-2-2048x740.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Read the <a href="https://blog.ovhcloud.com/infrastructure-as-code-iac-on-ovhcloud-part-1-terraform-opentofu/" data-wpel-link="internal">Infrastructure as Code (IaC) on OVHcloud – part 1: Terraform / OpenTofu</a> blog post to have more information about the provider and IaC on OVHcloud.</p>



<p>Note that in the rest of the blog post we will be using <code>terraform</code> CLI and talking about Terraform, but you can also follow the blog post if you are using OpenTofu and <code>tofu</code> CLI instead 😉.</p>



<h3 class="wp-block-heading">How to</h3>



<p>In this blog post we will handle two projects:</p>



<ul class="wp-block-list">
<li><code>object-storage-tf</code>: creation of an OVHcloud S3-compatible Object Storage bucket, a user and the necessary policies</li>



<li><code>my-app</code>: usage of a <code>backend.tf</code> file that stores and retrieves TF states in your newly created S3-compatible bucket</li>
</ul>
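<p>As a preview of that second project, a <code>backend.tf</code> pointing at an OVHcloud S3-compatible bucket typically looks like the following sketch (Terraform &#8805; 1.6 syntax; the bucket name, region and endpoint are placeholders to adapt, and the <code>skip_*</code> flags are there because the backend is not AWS):</p>

```hcl
terraform {
  backend "s3" {
    bucket = "my-bucket-xxxxxxxxxxxxxxxx" # placeholder bucket name
    key    = "terraform.tfstate"
    region = "gra"                        # placeholder region

    endpoints = {
      s3 = "https://s3.gra.io.cloud.ovh.net/" # placeholder endpoint, match your region
    }

    skip_credentials_validation = true
    skip_region_validation      = true
    skip_requesting_account_id  = true
    skip_s3_checksum            = true
  }
}
```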



<p>Note that all the following source code is available on the <a href="https://github.com/ovh/public-cloud-examples/tree/main/use-cases/create-and-use-object-storage-as-tf-backend" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">OVHcloud Public Cloud examples</a> GitHub repository.</p>



<h4 class="wp-block-heading">Prerequisites:</h4>



<ul class="wp-block-list">
<li>Install the <a href="https://www.terraform.io/downloads.html" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Terraform</a> CLI</li>



<li>For non-Linux users, install gettext (which includes the `envsubst` command)</li>
</ul>



<pre class="wp-block-code"><code class="">$ brew install gettext

$ brew link --force gettext</code></pre>



<ul class="wp-block-list">
<li><a href="https://docs.ovh.com/gb/en/customer/first-steps-with-ovh-api/" data-wpel-link="exclude">Get the credentials</a> from the OVHCloud Public Cloud project</li>
</ul>



<h3 class="wp-block-heading">Let&#8217;s create an Object Storage with Terraform</h3>



<p>Create a new folder, named <code>object-storage-tf</code>, for example and go into it.</p>



<p>Create a <code>provider.tf</code> file:</p>



<pre class="wp-block-code"><code class="">terraform {
  required_providers {
    ovh = {
      source  = "ovh/ovh"
    }
    
    random = {
      source  = "hashicorp/random"
      version = "3.6.3"
    }
  }
}

provider "ovh" {
}</code></pre>



<p>The OVHcloud Terraform provider needs the endpoint, the secret keys and the Public Cloud project ID, which are retrieved from the following environment variables:</p>



<ul class="wp-block-list">
<li><code>OVH_ENDPOINT</code></li>



<li><code>OVH_APPLICATION_KEY</code></li>



<li><code>OVH_APPLICATION_SECRET</code></li>



<li><code>OVH_CONSUMER_KEY</code></li>



<li><code>OVH_CLOUD_PROJECT_SERVICE</code></li>
</ul>
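<p>These variables can be exported in your shell before running Terraform. The values below are placeholders to replace with your own credentials (the endpoint is typically <code>ovh-eu</code>, <code>ovh-ca</code> or <code>ovh-us</code> depending on your account region):</p>

```shell
# Placeholder values: replace with your own credentials
export OVH_ENDPOINT="ovh-eu"
export OVH_APPLICATION_KEY="xxxxxxxxxxxxxxxx"
export OVH_APPLICATION_SECRET="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export OVH_CONSUMER_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export OVH_CLOUD_PROJECT_SERVICE="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

# Quick sanity check that nothing is left empty
env | grep '^OVH_'
```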



<p>Then, create a <code>variables.tf.template</code> file with the following content:</p>



<pre class="wp-block-code"><code class="">variable "service_name" {
  default = "$OVH_CLOUD_PROJECT_SERVICE"
}


variable bucket_name {
  type        = string
}

variable bucket_region {
  type        = string
  default     = "GRA"
}</code></pre>



<p>Substitute the value of your <code>OVH_CLOUD_PROJECT_SERVICE</code> environment variable into the <code>variables.tf</code> file (in the service_name variable):</p>



<pre class="wp-block-code"><code class="">$ envsubst &lt; variables.tf.template &gt; variables.tf</code></pre>
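<p>If <code>envsubst</code> is not available on your machine, the same substitution can be sketched with <code>sed</code> (the project ID below is a placeholder standing in for the real environment variable):</p>

```shell
# Placeholder project ID standing in for the real environment variable
export OVH_CLOUD_PROJECT_SERVICE="abc123"

# Minimal template, mirroring the service_name variable above
printf 'variable "service_name" {\n  default = "$OVH_CLOUD_PROJECT_SERVICE"\n}\n' > variables.tf.template

# Same effect as: envsubst < variables.tf.template > variables.tf
sed "s|\$OVH_CLOUD_PROJECT_SERVICE|$OVH_CLOUD_PROJECT_SERVICE|" variables.tf.template > variables.tf
cat variables.tf
```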



<p>Define the resources you want to create in a new file called <code>s3.tf</code>:</p>



<pre class="wp-block-code"><code class="">resource "random_string" "bucket_name_suffix" {
  length  = 16
  special = false
  lower   = true
  upper   = false
}

resource "ovh_cloud_project_storage" "s3_bucket" {
  service_name = var.service_name
  region_name = var.bucket_region
  name = "${var.bucket_name}-${random_string.bucket_name_suffix.result}" # the name must be unique within OVHcloud
}

resource "ovh_cloud_project_user" "s3_user" {
  description	= "${var.bucket_name}-${random_string.bucket_name_suffix.result}"
  role_name	= "objectstore_operator"
}

resource "ovh_cloud_project_user_s3_credential" "s3_user_cred" {
  user_id	= ovh_cloud_project_user.s3_user.id
}

resource "ovh_cloud_project_user_s3_policy" "s3_user_policy" {
  service_name = var.service_name
  user_id      = ovh_cloud_project_user.s3_user.id
  policy = jsonencode({
    "Statement": [{
      "Action": ["s3:*"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::${ovh_cloud_project_storage.s3_bucket.name}","arn:aws:s3:::${ovh_cloud_project_storage.s3_bucket.name}/*"],
      "Sid": "AdminContainer"
    }]
  })
}</code></pre>



<p>In this file, we defined that we want to create an S3-compatible Object Storage bucket and a user (with credentials) that will have the rights (policies) to perform actions on this bucket.</p>
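<p>The <code>random_string</code> resource above produces a 16-character lowercase alphanumeric suffix so that the bucket name is globally unique within OVHcloud. A rough shell equivalent of that generation, for illustration outside Terraform:</p>

```shell
# Draw 16 lowercase alphanumeric characters from /dev/urandom,
# similar to random_string with lowercase letters and digits only
SUFFIX=$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | head -c 16)
echo "my-bucket-${SUFFIX}"
```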



<p>Define the information that you want to get after the creation of the resources, in an <code>output.tf</code> file:</p>



<pre class="wp-block-code"><code class="">output "s3_bucket" {
  value = ovh_cloud_project_storage.s3_bucket.name
}

output "access_key_id" {
  value = ovh_cloud_project_user_s3_credential.s3_user_cred.access_key_id
}

output "secret_access_key" {
  value     = ovh_cloud_project_user_s3_credential.s3_user_cred.secret_access_key
  sensitive = true
}</code></pre>



<p>Now we need to initialise Terraform:</p>



<pre class="wp-block-code"><code class="">$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/random versions matching "3.6.3"...
- Reusing previous version of ovh/ovh from the dependency lock file
- Installing hashicorp/random v3.6.3...
- Installed hashicorp/random v3.6.3 (signed by HashiCorp)
- Using previously-installed ovh/ovh v2.5.0

Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.</code></pre>



<p>Generate the plan and apply it:</p>



<pre class="wp-block-code"><code class="">$ terraform apply -var bucket_name=my-bucket

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

Terraform will perform the following actions:

  # ovh_cloud_project_storage.s3_bucket will be created
  + resource "ovh_cloud_project_storage" "s3_bucket" {
      + created_at    = (known after apply)
      + encryption    = (known after apply)
      + limit         = (known after apply)
      + marker        = (known after apply)
      + name          = (known after apply)
      + objects       = (known after apply)
      + objects_count = (known after apply)
      + objects_size  = (known after apply)
      + owner_id      = (known after apply)
      + prefix        = (known after apply)
      + region        = (known after apply)
      + region_name   = "GRA"
      + replication   = (known after apply)
      + service_name  = "xxxxxxxxxxx"
      + versioning    = (known after apply)
      + virtual_host  = (known after apply)
    }

  # ovh_cloud_project_user.s3_user will be created
  + resource "ovh_cloud_project_user" "s3_user" {
      + creation_date = (known after apply)
      + description   = (known after apply)
      + id            = (known after apply)
      + openstack_rc  = (known after apply)
      + password      = (sensitive value)
      + role_name     = "objectstore_operator"
      + roles         = (known after apply)
      + service_name  = "xxxxxxxxxxx"
      + status        = (known after apply)
      + username      = (known after apply)
    }

  # ovh_cloud_project_user_s3_credential.s3_user_cred will be created
  + resource "ovh_cloud_project_user_s3_credential" "s3_user_cred" {
      + access_key_id     = (known after apply)
      + id                = (known after apply)
      + internal_user_id  = (known after apply)
      + secret_access_key = (sensitive value)
      + service_name      = "xxxxxxxxx"
      + user_id           = (known after apply)
    }

  # ovh_cloud_project_user_s3_policy.s3_user_policy will be created
  + resource "ovh_cloud_project_user_s3_policy" "s3_user_policy" {
      + id           = (known after apply)
      + policy       = (known after apply)
      + service_name = "xxxxxxxx"
      + user_id      = (known after apply)
    }

  # random_string.bucket_name_suffix will be created
  + resource "random_string" "bucket_name_suffix" {
      + id          = (known after apply)
      + length      = 16
      + lower       = true
      + min_lower   = 0
      + min_numeric = 0
      + min_special = 0
      + min_upper   = 0
      + number      = true
      + numeric     = true
      + result      = (known after apply)
      + special     = false
      + upper       = false
    }

Plan: 5 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + access_key_id     = (known after apply)
  + s3_bucket         = (known after apply)
  + secret_access_key = (sensitive value)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

random_string.bucket_name_suffix: Creating...
random_string.bucket_name_suffix: Creation complete after 0s [id=4qiyj7ywrt2sspfe]
ovh_cloud_project_user.s3_user: Creating...
ovh_cloud_project_storage.s3_bucket: Creating...
ovh_cloud_project_storage.s3_bucket: Creation complete after 1s [name=my-bucket-4qiyj7ywrt2sspfe]
ovh_cloud_project_user.s3_user: Still creating... [10s elapsed]
ovh_cloud_project_user.s3_user: Creation complete after 20s [id=535967]
ovh_cloud_project_user_s3_credential.s3_user_cred: Creating...
ovh_cloud_project_user_s3_policy.s3_user_policy: Creating...
ovh_cloud_project_user_s3_credential.s3_user_cred: Creation complete after 0s [id=5ab69860beb34575acb42c7ba8553884]
ovh_cloud_project_user_s3_policy.s3_user_policy: Creation complete after 0s [id=xxxxxxxxxxx/535967]

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

Outputs:

access_key_id = "5ab69860beb34575acb42c7ba8553884"
s3_bucket = "my-bucket-4qiyj7ywrt2sspfe"
secret_access_key = &lt;sensitive&gt;</code></pre>



<p>🎉</p>



<p>Save the S3 user credentials in environment variables (required for the next section):</p>



<pre class="wp-block-code"><code class="">$ export AWS_ACCESS_KEY_ID=$(terraform output -raw access_key_id)
$ export AWS_SECRET_ACCESS_KEY=$(terraform output -raw secret_access_key)</code></pre>



<h3 class="wp-block-heading">Let&#8217;s configure an OVHcloud S3-compatible Object Storage as Terraform Backend</h3>



<p>Create a new folder, named <code>my-app</code>, and go into it.</p>



<p>Create a <code>backend.tf</code> file with the following content:</p>



<p>⚠️ If your Terraform version is older than 1.6.0:</p>



<pre class="wp-block-code"><code class="">terraform {
    backend "s3" {
      bucket = "&lt;my-bucket&gt;"
      key    = "my-app.tfstate"
      region = "gra"
      endpoint = "s3.gra.io.cloud.ovh.net"
      skip_credentials_validation = true
      skip_region_validation      = true
    }
}</code></pre>



<p>⚠️ From Terraform version 1.6.0 onwards:</p>



<pre class="wp-block-code"><code class="">terraform {
    backend "s3" {
      bucket = "&lt;my-bucket&gt;"
      key    = "my-app.tfstate"
      region = "gra"
      endpoints = {
        s3 = "https://s3.gra.io.cloud.ovh.net/"
      }
      skip_credentials_validation = true
      skip_region_validation      = true
      skip_requesting_account_id  = true
      skip_s3_checksum            = true
    }
}</code></pre>



<p>Replace <code>&lt;my-bucket&gt;</code> with the name of the newly created bucket, or with an existing bucket of yours.</p>



<p>Initialise Terraform:</p>



<pre class="wp-block-code"><code class="">$ terraform init

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Finding latest version of ovh/ovh...
- Installing ovh/ovh v2.5.0...
- Installed ovh/ovh v2.5.0 (signed by a HashiCorp partner, key ID F56D1A6CBDAAADA5)

...</code></pre>



<p>As you can see, Terraform is now using the &#8220;s3&#8221; backend! 💪</p>



<h3 class="wp-block-heading">Want to go further?</h3>



<p>In this blog post, we created an S3-compatible Object Storage bucket with a basic configuration, but be aware that <a href="https://registry.terraform.io/providers/ovh/ovh/latest/docs/resources/cloud_project_storage" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">you can configure an S3-compatible bucket with encryption, versioning and more</a>:</p>



<figure class="wp-block-image aligncenter size-large is-resized"><img loading="lazy" decoding="async" width="1024" height="531" src="https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-3-1024x531.png" alt="" class="wp-image-29341" style="width:469px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-3-1024x531.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-3-300x156.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-3-768x398.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/image-3.png 1326w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>💡 Terraform states are not encrypted at rest by Terraform itself, so we recommend enabling encryption on the OVHcloud S3-compatible Object Storage bucket 🙂.</p>
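<p>As an illustration, enabling encryption and versioning on the bucket could look like the sketch below. ⚠️ The <code>encryption</code> and <code>versioning</code> attribute names and values are assumptions on my side: check the exact schema in the <code>ovh_cloud_project_storage</code> resource documentation before using it.</p>



<pre class="wp-block-code"><code class="">resource "ovh_cloud_project_storage" "s3_bucket" {
  service_name = var.service_name
  region_name  = var.bucket_region
  name         = "${var.bucket_name}-${random_string.bucket_name_suffix.result}"

  # Hypothetical attribute names, to be checked against the provider docs:
  encryption = {
    sse_algorithm = "AES256" # server-side encryption of the stored objects (and states)
  }

  versioning = {
    status = "enabled" # keep previous versions of the state files
  }
}</code></pre>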
<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fusing-ovhcloud-s3-compatible-object-storage-as-terraform-backend-to-store-your-terraform-opentofu-states%2F&amp;action_name=Using%20OVHcloud%20S3-compatible%20Object%20Storage%20as%20Terraform%20Backend%20to%20store%20your%20Terraform%2FOpenTofu%20states&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Discover Kubernetes 1.33 features &#8211; Topology aware routing in multi-zones Kubernetes clusters</title>
		<link>https://blog.ovhcloud.com/discover-kubernetes-1-33-features-topology-aware-routing-in-multi-zones-kubernetes-clusters/</link>
		
		<dc:creator><![CDATA[Aurélie Vache]]></dc:creator>
		<pubDate>Tue, 17 Jun 2025 07:05:40 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[Tranches de Tech & co]]></category>
		<category><![CDATA[3AZ]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[Kubernetes 1.33]]></category>
		<category><![CDATA[MKS]]></category>
		<category><![CDATA[multi-zone cluster]]></category>
		<category><![CDATA[OVHcloud]]></category>
		<category><![CDATA[Public Cloud]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=29191</guid>

					<description><![CDATA[Kubernetes 1.33 version has just been released few days/weeks ago.As this new release contains 64 enhancements (!), it can not be easy to know what are the interesting and useful features and how to use them. In this blog post, let&#8217;s discover one of interesting and useful new feature: &#8220;Topology aware routing in multi-zones Kubernetes [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fdiscover-kubernetes-1-33-features-topology-aware-routing-in-multi-zones-kubernetes-clusters%2F&amp;action_name=Discover%20Kubernetes%201.33%20features%20%26%238211%3B%20Topology%20aware%20routing%20in%20multi-zones%20Kubernetes%20clusters&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="1014" height="1022" src="https://blog.ovhcloud.com/wp-content/uploads/2025/06/mks3az-kubernetes-1.33-small.png" alt="" class="wp-image-29240" style="width:436px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/06/mks3az-kubernetes-1.33-small.png 1014w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/mks3az-kubernetes-1.33-small-298x300.png 298w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/mks3az-kubernetes-1.33-small-150x150.png 150w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/mks3az-kubernetes-1.33-small-768x774.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/mks3az-kubernetes-1.33-small-70x70.png 70w" sizes="auto, (max-width: 1014px) 100vw, 1014px" /></figure>



<p><a href="https://kubernetes.io/blog/2025/04/23/kubernetes-v1-33-release/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Kubernetes 1.33</a> was released just a few weeks ago.<br>As this new release contains 64 enhancements (!), it can be hard to know which features are interesting and useful, and how to use them.</p>



<p>In this blog post, let&#8217;s discover one of these interesting and useful new features: &#8220;Topology aware routing in multi-zones Kubernetes clusters&#8221;.</p>



<p>⚠️ Kubernetes 1.33 should be available on OVHcloud MKS clusters at the end of June/beginning of July, but the demo also works on MKS with the Kubernetes 1.32 release 😉.</p>



<h2 class="wp-block-heading">Topology aware routing</h2>



<p>Since Kubernetes 1.33, the <a href="https://kubernetes.io/docs/concepts/services-networking/topology-aware-routing/" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">topology aware routing and traffic distribution</a> feature is in General Availability (GA).</p>



<p>This feature allows you to optimize service traffic in multi-zone clusters, reducing latency and cross-zone data transfer costs.</p>



<p>Topology Aware Routing provides a mechanism to help <strong>keep traffic within the zone</strong> it originated from.</p>



<p>In the context of multi-zone clusters, it improves reliability, <strong>reduces costs</strong> and <strong>improves network performance</strong>.</p>



<p>As OVHcloud has just launched, in Beta, <a href="https://labs.ovhcloud.com/en/managed-kubernetes-service-mks-premium-plan/" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">Managed Kubernetes clusters (MKS) on 3 AZ (Availability Zones)</a>, it&#8217;s the perfect occasion for me to test this brand new Kubernetes feature 🙂.</p>



<h2 class="wp-block-heading">Demo</h2>



<p>Prerequisite: Have a Kubernetes cluster with at least 2 nodes running in 2 different zones.</p>



<p>If you don&#8217;t already have one, you can follow <a href="https://blog.ovhcloud.com/deploy-your-workloads-on-3-availability-zones-with-our-new-managed-kubernetes-services-mks-premium-plan/" data-wpel-link="internal">this blog post</a> in order to <a href="https://blog.ovhcloud.com/deploy-your-workloads-on-3-availability-zones-with-our-new-managed-kubernetes-services-mks-premium-plan/" data-wpel-link="internal">create an OVHcloud MKS cluster with 3 node pools</a>, one per AZ.</p>



<p>On my side, I set up an MKS cluster across 3 AZs (one node pool per AZ), with 3 nodes per node pool:</p>



<pre class="wp-block-code"><code class="">$ kubectx kubernetes-admin@multi-zone-mks
Switched to context "kubernetes-admin@multi-zone-mks".

$ kubectl get np
NAME             FLAVOR   AUTOSCALED   MONTHLYBILLED   ANTIAFFINITY   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   MIN   MAX   AGE
my-pool-zone-a   b3-8     false        false           false          3         3         3            3           0     100   20d
my-pool-zone-b   b3-8     false        false           false          3         3         3            3           0     100   20d
my-pool-zone-c   b3-8     false        false           false          3         3         3            3           0     100   20d

$ kubectl get no
NAME                         STATUS   ROLES    AGE   VERSION
my-pool-zone-a-b9ztj-brgpq   Ready    &lt;none&gt;   20d   v1.32.3
my-pool-zone-a-b9ztj-gt5vd   Ready    &lt;none&gt;   20d   v1.32.3
my-pool-zone-a-b9ztj-mss8j   Ready    &lt;none&gt;   20d   v1.32.3
my-pool-zone-b-tr6wf-5wfgz   Ready    &lt;none&gt;   20d   v1.32.3
my-pool-zone-b-tr6wf-ct7fs   Ready    &lt;none&gt;   20d   v1.32.3
my-pool-zone-b-tr6wf-vlkwg   Ready    &lt;none&gt;   20d   v1.32.3
my-pool-zone-c-wgrl6-b2f9s   Ready    &lt;none&gt;   20d   v1.32.3
my-pool-zone-c-wgrl6-lp22l   Ready    &lt;none&gt;   20d   v1.32.3
my-pool-zone-c-wgrl6-slkq5   Ready    &lt;none&gt;   20d   v1.32.3</code></pre>



<p>⚠️ As you saw, the Kubernetes version installed on my cluster is not 1.33, but the <code>ServiceTrafficDistribution</code> feature gate is in Beta and already enabled:</p>



<pre class="wp-block-code"><code class="">$ kubectl get --raw /metrics | grep kubernetes_feature_enabled | grep Traffic

kubernetes_feature_enabled{name="ServiceTrafficDistribution",stage="BETA"} 1</code></pre>



<p class="has-text-align-center">A visual architecture of my MKS cluster:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="800" height="556" src="https://blog.ovhcloud.com/wp-content/uploads/2025/06/image-11.png" alt="" class="wp-image-29192" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/06/image-11.png 800w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/image-11-300x209.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/image-11-768x534.png 768w" sizes="auto, (max-width: 800px) 100vw, 800px" /></figure>



<p>⚠️ In MKS Standard clusters, don&#8217;t forget to <a href="https://help.ovhcloud.com/csm/en-gb-public-cloud-kubernetes-customizing-cilium?id=kb_article_view&amp;sysparm_article=KB0074067" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">enable topology aware routing for the 3AZ region</a>.</p>



<p>In order to test this feature, in a new namespace, we will deploy:</p>



<ul class="wp-block-list">
<li>a deployment with two pods named <code>receiver-xxx</code></li>



<li>a ClusterIP service named <code>svc-prefer-close</code> with the feature enabled</li>



<li>a Pod named <code>sender</code></li>
</ul>



<p>Let&#8217;s do that!</p>



<p>Create a <code>deploy.yaml</code> file with the following content:</p>



<pre class="wp-block-code"><code class="">apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: service-traffic-example
  name: receiver
  namespace: prefer-close
spec:
  replicas: 2
  selector:
    matchLabels:
      app: service-traffic-example
  template:
    metadata:
      labels:
        app: service-traffic-example
    spec:
      containers:
      - image: scraly/hello-pod:1.0.1
        name: receiver
        ports:
        - containerPort: 8080
        env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName</code></pre>



<p>Create a <code>svc.yaml</code> file with the following content:</p>



<pre class="wp-block-code"><code class="">apiVersion: v1
kind: Service
metadata:
  name: svc-prefer-close
  namespace: prefer-close
  annotations:
    service.kubernetes.io/topology-mode: auto
spec:
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: 8080
  selector:
    app: service-traffic-example
  type: ClusterIP
  trafficDistribution: PreferClose</code></pre>



<p>As you can see, this Service has two specific configurations.<br>First, we added the <code>service.kubernetes.io/topology-mode: auto</code> annotation to enable Topology Aware Routing for the Service.<br>Then, we set <code>trafficDistribution</code> to <code>PreferClose</code> in order to ask Kubernetes to send traffic, preferably, to a pod that is &#8220;close&#8221; to the sender.</p>



<p>Create a new namespace and apply the manifest files:</p>



<pre class="wp-block-code"><code class="">$ kubectl create ns prefer-close
$ kubectl apply -f deploy.yaml
$ kubectl apply -f svc.yaml</code></pre>



<p>Result:<br>You should have two Pods running on 2 different Nodes.</p>



<pre class="wp-block-code"><code class="">$ kubectl get po -o wide -n prefer-close

NAME                        READY   STATUS              RESTARTS   AGE   IP            NODE                         NOMINATED NODE   READINESS GATES
receiver-7cfd89d78d-dhv6z   1/1     Running             0          94s   10.240.4.91   my-pool-zone-c-wgrl6-slkq5   &lt;none&gt;           &lt;none&gt;
receiver-7cfd89d78d-hrxrt   1/1     Running             0          94s   10.240.5.63   my-pool-zone-a-b9ztj-mss8j   &lt;none&gt;           &lt;none&gt;</code></pre>



<p>OK, <code>receiver-xxxxxxxx-dhv6z</code> is running on <code>my-pool-zone-c-xxxx</code> and the other pod is running on <code>my-pool-zone-a-xxxx</code>. They are running in different Availability Zones.</p>



<p>Now we can create a <code>sender</code> Pod. It will be scheduled on a Node:</p>



<figure class="wp-block-image aligncenter size-full"><img loading="lazy" decoding="async" width="800" height="556" src="https://blog.ovhcloud.com/wp-content/uploads/2025/06/image-12.png" alt="" class="wp-image-29193" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/06/image-12.png 800w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/image-12-300x209.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/image-12-768x534.png 768w" sizes="auto, (max-width: 800px) 100vw, 800px" /></figure>



<p>Run it and execute a <code>curl</code> command to test the traffic redirection to the &#8220;svc-prefer-close&#8221; Service:</p>



<pre class="wp-block-code"><code class="">$ kubectl run sender -n prefer-close --image=curlimages/curl -it -- sh
If you don't see a command prompt, try pressing enter.
~ $ curl http://svc-prefer-close.prefer-close:8080
Version: 1.0.1
Hostname: receiver-7cfd89d78d-dhv6z
Node: my-pool-zone-c-wgrl6-slkq5</code></pre>



<p>Let&#8217;s verify where our Pods are running:</p>



<pre class="wp-block-code"><code class="">$ kubectl get po -n prefer-close -o wide
NAME                        READY   STATUS    RESTARTS     AGE   IP             NODE                         NOMINATED NODE   READINESS GATES
receiver-7cfd89d78d-dhv6z   1/1     Running   0            9d    10.240.4.91    my-pool-zone-c-wgrl6-slkq5   &lt;none&gt;           &lt;none&gt;
receiver-7cfd89d78d-hrxrt   1/1     Running   0            9d    10.240.5.63    my-pool-zone-a-b9ztj-mss8j   &lt;none&gt;           &lt;none&gt;
sender                      1/1     Running   1 (5s ago)   21s   10.240.3.134   my-pool-zone-c-wgrl6-b2f9s   &lt;none&gt;           &lt;none&gt;</code></pre>



<p>Kube-proxy sent the traffic from <code>sender</code> to a <code>receiver-xx</code> Pod in the same Availability Zone 🎉</p>



<p>⚠️ Note that because <code>PreferClose</code> means &#8220;topologically proximate&#8221;, it may vary across implementations and could encompass endpoints within the same node, rack, zone, or even region.</p>



<h2 class="wp-block-heading">How does it work?</h2>



<p>When calculating the endpoints for a Service, the EndpointSlice controller considers the topology (region and zone) of each endpoint and populates the hints field to allocate it to a zone.</p>
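<p>Concretely, an EndpointSlice with a populated hint looks like the excerpt below (an illustrative sketch: the names, IPs and zone labels are made up and will differ on your cluster):</p>



<pre class="wp-block-code"><code class="">apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: svc-prefer-close-abc12   # generated name, illustrative
  namespace: prefer-close
addressType: IPv4
endpoints:
  - addresses:
      - "10.240.4.91"
    zone: zone-c                 # zone where this endpoint runs
    hints:
      forZones:
        - name: zone-c           # kube-proxy in this zone will favour this endpoint</code></pre>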



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="800" height="598" src="https://blog.ovhcloud.com/wp-content/uploads/2025/06/image-13.png" alt="" class="wp-image-29194" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/06/image-13.png 800w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/image-13-300x224.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/image-13-768x574.png 768w" sizes="auto, (max-width: 800px) 100vw, 800px" /></figure>



<p>Cluster components such as <em>kube-proxy</em> can then consume those hints, and use them to influence how the traffic is routed (favoring topologically closer endpoints).</p>



<p>So, with <code>PreferClose</code> value for <code>trafficDistribution</code>, we ask kube-proxy to redirect traffic to the nearest available endpoints based on the network topology.</p>



<p>That&#8217;s why the option is called <code>PreferClose</code>.</p>



<h2 class="wp-block-heading">What&#8217;s next?</h2>



<p>In the future you will be able to configure the <code>trafficDistribution</code> field with other values.</p>



<p>Indeed, two new, more explicit values have been in Alpha since the Kubernetes 1.33 release: <code>PreferSameZone</code> and <code>PreferSameNode</code>.</p>



<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="800" height="917" src="https://blog.ovhcloud.com/wp-content/uploads/2025/06/image-14.png" alt="" class="wp-image-29195" style="width:527px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/06/image-14.png 800w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/image-14-262x300.png 262w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/image-14-768x880.png 768w" sizes="auto, (max-width: 800px) 100vw, 800px" /></figure>
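<p>As a sketch of what this could look like (assuming the corresponding Alpha feature gate is enabled on your cluster), the Service would simply declare:</p>



<pre class="wp-block-code"><code class="">apiVersion: v1
kind: Service
metadata:
  name: svc-prefer-same-node
  namespace: prefer-close
spec:
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: 8080
  selector:
    app: service-traffic-example
  type: ClusterIP
  trafficDistribution: PreferSameNode # Alpha in Kubernetes 1.33</code></pre>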



<p>Personally I can&#8217;t wait to test them 😇.</p>



<h2 class="wp-block-heading">Want to go further?</h2>



<p>Want to learn more on this topic? In the coming days, we will publish a blog post about the MKS Premium plan.</p>



<p>Visit the <a href="https://labs.ovhcloud.com/en/managed-kubernetes-service-mks-premium-plan/" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">Managed Kubernetes Service (MKS) Premium plan</a> page on the OVHcloud Labs website to learn more about Premium MKS.</p>



<p>Join the <strong>free</strong> Beta: <a href="https://labs.ovhcloud.com/en/managed-kubernetes-service-mks-premium-plan/" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">https://labs.ovhcloud.com/en/managed-kubernetes-service-mks-premium-plan/</a></p>



<p>Read the documentation about the new <a href="https://help.ovhcloud.com/csm/fr-public-cloud-kubernetes-premium?id=kb_article_view&amp;sysparm_article=KB0067581" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">Managed Kubernetes Service (MKS) Premium plan</a>.</p>



<p>Join us on <a href="https://discord.com/channels/850031577277792286/1366761790150541402" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">Discord</a> and give us your feedback.</p>
<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fdiscover-kubernetes-1-33-features-topology-aware-routing-in-multi-zones-kubernetes-clusters%2F&amp;action_name=Discover%20Kubernetes%201.33%20features%20%26%238211%3B%20Topology%20aware%20routing%20in%20multi-zones%20Kubernetes%20clusters&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Deploy your workloads on 3 availability zones with our new Managed Kubernetes Services (MKS) &#8216;Premium&#8217; plan</title>
		<link>https://blog.ovhcloud.com/deploy-your-workloads-on-3-availability-zones-with-our-new-managed-kubernetes-services-mks-premium-plan/</link>
		
		<dc:creator><![CDATA[Aurélie Vache]]></dc:creator>
		<pubDate>Mon, 19 May 2025 05:20:42 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[Tranches de Tech & co]]></category>
		<category><![CDATA[3AZ]]></category>
		<category><![CDATA[Beta]]></category>
		<category><![CDATA[Kubernetes]]></category>
<category><![CDATA[multi-AZ]]></category>
		<category><![CDATA[MKS]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=28796</guid>

					<description><![CDATA[This blog post will first explain briefly what is the new MKS Premium plan, for who and which use case, then you will see how to deploy a new MKS cluster in 3 availability zones and how to deploy your workloads with this new architecture of Kubernetes cluster. What&#8217;s inside the Premium MKS? The 30th [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fdeploy-your-workloads-on-3-availability-zones-with-our-new-managed-kubernetes-services-mks-premium-plan%2F&amp;action_name=Deploy%20your%20workloads%20on%203%20availability%20zones%20with%20our%20new%20Managed%20Kubernetes%20Services%20%28MKS%29%20%26%238216%3BPremium%26%238217%3B%20plan&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image aligncenter size-large is-resized"><img loading="lazy" decoding="async" width="890" height="1024" src="https://blog.ovhcloud.com/wp-content/uploads/2025/05/mks-3Apremium-ovh-890x1024.png" alt="" class="wp-image-28908" style="width:336px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/05/mks-3Apremium-ovh-890x1024.png 890w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/mks-3Apremium-ovh-261x300.png 261w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/mks-3Apremium-ovh-768x884.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/mks-3Apremium-ovh-1335x1536.png 1335w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/mks-3Apremium-ovh-1780x2048.png 1780w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/mks-3Apremium-ovh.png 2048w" sizes="auto, (max-width: 890px) 100vw, 890px" /></figure>



<p>This blog post will first briefly explain what the new MKS Premium plan is, who it is for and which use cases it addresses. Then you will see how to deploy a new MKS cluster across 3 availability zones and how to deploy your workloads on this new Kubernetes cluster architecture.</p>



<h2 class="wp-block-heading">What&#8217;s inside the Premium MKS?</h2>



<figure class="wp-block-image aligncenter size-full"><img loading="lazy" decoding="async" width="120" height="120" src="https://blog.ovhcloud.com/wp-content/uploads/2025/05/pci_product-managed-kubernetes-service.png" alt="" class="wp-image-28902" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/05/pci_product-managed-kubernetes-service.png 120w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/pci_product-managed-kubernetes-service-70x70.png 70w" sizes="auto, (max-width: 120px) 100vw, 120px" /></figure>



<p>On the 30th of April, we launched, in Beta, the brand new &#8220;Premium plan&#8221; of our Managed Kubernetes Services (MKS) 🎉</p>



<p>Concretely, with MKS Premium you will have:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="455" src="https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-19-1024x455.png" alt="" class="wp-image-28924" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-19-1024x455.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-19-300x133.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-19-768x341.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-19-1536x683.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-19.png 1570w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>💡 For the moment, only Paris is available as a 3AZ region, but several new regions, including Milan, will be available in the coming months.</p>



<p>Behind this new plan lies a complete overhaul of our MKS platform, based on several <a href="https://www.cncf.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Cloud Native</a> Open Source projects like <a href="https://cluster-api.sigs.k8s.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Cluster API</a>, <a href="https://kamaji.clastix.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Kamaji</a>, <a href="https://argo-cd.readthedocs.io/en/stable/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">ArgoCD</a> and several homemade Kubernetes operators.</p>



<h2 class="wp-block-heading">For who? For what?</h2>



<p>The new MKS Premium plan has been designed for those who want high availability and scalability for their critical applications.</p>



<p>It provides a dedicated and fully managed control plane, resilience across multiple availability zones, dedicated resources for the Kubernetes control plane, and the ability to deploy the data plane across multiple availability zones.</p>



<p>With these building blocks, you can design failure-resilient cloud-native applications and deploy them across our multi-zone region.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="485" src="https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-1024x485.png" alt="" class="wp-image-28799" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-1024x485.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-300x142.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-768x364.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image.png 1120w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>You will have full control over how to deploy your worker nodes in our <strong>new 3AZ region</strong> (EU-WEST-PAR).</p>



<p>Deploying your cloud-native applications in our new Paris 3-AZ region also means enjoying the full range of services available:</p>



<ul class="wp-block-list">
<li>Well-architected applications relying on resilient managed services (MKS + Load Balancer + Gateway + DBaaS + Object Storage &#8230;),</li>



<li>Advanced internal cluster networking with the new <a href="https://cilium.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Cilium</a> CNI</li>



<li>Better API server performance and scaling capacity</li>



<li>And much more to come!</li>
</ul>



<h2 class="wp-block-heading">Let&#8217;s deploy a MKS Premium cluster in 3 AZ at Paris!</h2>



<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="960" height="797" src="https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-14.png" alt="" class="wp-image-28906" style="width:300px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-14.png 960w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-14-300x249.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-14-768x638.png 768w" sizes="auto, (max-width: 960px) 100vw, 960px" /></figure>



<p>Like the existing Standard MKS, you can deploy MKS in a 3AZ region via the Control Panel (OVHcloud UI), the API, and also our Infrastructure as Code (IaC) providers (Terraform/OpenTofu, Pulumi&#8230;).</p>



<p>In this blog post, we will deploy a new MKS cluster, in a 3AZ region (Paris) with 3 node pools (one per availability zone).</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="547" src="https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-21-1024x547.png" alt="" class="wp-image-28933" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-21-1024x547.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-21-300x160.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-21-768x410.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-21-1536x820.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-21.png 1854w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<h3 class="wp-block-heading">With OVHcloud Control Panel</h3>



<p>Log in to the&nbsp;<a href="https://www.ovh.com/auth/?action=gotomanager&amp;from=https://www.ovh.co.uk/&amp;ovhSubsidiary=GB" data-wpel-link="exclude">OVHcloud Control Panel</a>, go to the&nbsp;<code><strong>Public Cloud</strong></code>&nbsp;section and select the <strong>Public Cloud </strong>project concerned.</p>



<p>In the left panel, go to the <strong>Containers &amp; Orchestration</strong> section, click on the <strong>Managed Kubernetes Service</strong> link and click on the <strong>Create a Kubernetes cluster</strong> button.</p>



<p>Fill in the name of the cluster, choose a 3AZ region by clicking on Paris (EU-WEST-PAR), and select the Premium plan:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="695" src="https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-3-1024x695.png" alt="" class="wp-image-28816" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-3-1024x695.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-3-300x204.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-3-768x521.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-3-1536x1043.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-3.png 1750w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Then, select the Kubernetes version and the security policy.</p>



<p>⚠️ Unlike the Standard MKS, which is public by default, the Premium MKS is private by default, so it is mandatory to create a private network, a subnet and a gateway.</p>



<p>Then, create one node pool per Availability Zone, with 3 nodes per node pool, for example:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="487" src="https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-6-1024x487.png" alt="" class="wp-image-28871" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-6-1024x487.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-6-300x143.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-6-768x365.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-6-1536x730.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-6.png 1884w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Confirm the creation of your cluster and wait for it to be created.</p>



<p>Finally, click on the newly created cluster and retrieve the kubeconfig file.</p>



<h3 class="wp-block-heading">With Terraform</h3>



<p>In a previous blog post, we showed you <a href="https://blog.ovhcloud.com/infrastructure-as-code-iac-on-ovhcloud-part-1-terraform-opentofu/" data-wpel-link="internal">how to deploy an MKS cluster with Terraform/OpenTofu</a>. Please read that post if you are not familiar with Terraform or OpenTofu.</p>



<p>Create an <strong>ovh_kube.tf</strong> file with the following content:</p>



<pre class="wp-block-code"><code class="">resource "ovh_cloud_project_network_private" "network" {
  service_name = var.service_name
  vlan_id      = 84
  name         = "terraform_mks_multiaz_private_net"
  regions      = ["EU-WEST-PAR"]
}

resource "ovh_cloud_project_network_private_subnet" "subnet" {
  service_name = ovh_cloud_project_network_private.network.service_name
  network_id   = ovh_cloud_project_network_private.network.id

  # the subnet lives in the 3AZ region
  region     = "EU-WEST-PAR"
  start      = "192.168.142.100"
  end        = "192.168.142.200"
  network    = "192.168.142.0/24"
  dhcp       = true
  no_gateway = false
}

resource "ovh_cloud_project_gateway" "gateway" {
  service_name = ovh_cloud_project_network_private.network.service_name
  name         = "gateway"
  model        = "s"
  region       = "EU-WEST-PAR"
  network_id   = tolist(ovh_cloud_project_network_private.network.regions_attributes[*].openstackid)[0]
  subnet_id    = ovh_cloud_project_network_private_subnet.subnet.id
}

resource "ovh_cloud_project_kube" "my_multizone_cluster" {
  service_name = ovh_cloud_project_network_private.network.service_name
  name         = "multi-zone-mks"
  region       = "EU-WEST-PAR"
  plan         = "premium"

  private_network_id = tolist(ovh_cloud_project_network_private.network.regions_attributes[*].openstackid)[0]
  nodes_subnet_id    = ovh_cloud_project_network_private_subnet.subnet.id

  depends_on = [ovh_cloud_project_gateway.gateway] // Gateway is mandatory for multi-zone clusters
}

resource "ovh_cloud_project_kube_nodepool" "node_pool_multi_zones_a" {
  service_name       = ovh_cloud_project_network_private.network.service_name
  kube_id            = ovh_cloud_project_kube.my_multizone_cluster.id
  name               = "my-pool-zone-a" // Warning: the "_" character is not allowed!
  flavor_name        = "b3-8"
  desired_nodes      = 3
  availability_zones = ["eu-west-par-a"] // Currently, only one zone per node pool is supported
}

resource "ovh_cloud_project_kube_nodepool" "node_pool_multi_zones_b" {
  service_name       = ovh_cloud_project_network_private.network.service_name
  kube_id            = ovh_cloud_project_kube.my_multizone_cluster.id
  name               = "my-pool-zone-b"
  flavor_name        = "b3-8"
  desired_nodes      = 3
  availability_zones = ["eu-west-par-b"]
}

resource "ovh_cloud_project_kube_nodepool" "node_pool_multi_zones_c" {
  service_name       = ovh_cloud_project_network_private.network.service_name
  kube_id            = ovh_cloud_project_kube.my_multizone_cluster.id
  name               = "my-pool-zone-c"
  flavor_name        = "b3-8"
  desired_nodes      = 3
  availability_zones = ["eu-west-par-c"]
}

output "kubeconfig_file_eu_west_par" {
  value     = ovh_cloud_project_kube.my_multizone_cluster.kubeconfig
  sensitive = true
}</code></pre>



<p>This HCL configuration will create several OVHcloud services:</p>



<ul class="wp-block-list">
<li>a private network</li>



<li>a subnet</li>



<li>a gateway (S size)</li>



<li>an MKS cluster in the EU-WEST-PAR region</li>



<li>one node pool in <strong>eu-west-par-a</strong> availability zone with 3 nodes</li>



<li>one node pool in <strong>eu-west-par-b</strong> availability zone with 3 nodes</li>



<li>one node pool in <strong>eu-west-par-c</strong> availability zone with 3 nodes</li>
</ul>



<p>Apply the configuration:</p>



<pre class="wp-block-code"><code class="">$ terraform apply

...

ovh_cloud_project_network_private.network: Creating...
ovh_cloud_project_network_private.network: Still creating... [10s elapsed]
ovh_cloud_project_network_private.network: Creation complete after 14s [id=pn-xxxxxxxx_xx]
ovh_cloud_project_network_private_subnet.subnet: Creating...
ovh_cloud_project_network_private_subnet.subnet: Creation complete after 3s [id=c14cbb87-xxxx-xxxx-xxxx-7b9d4940d857]
ovh_cloud_project_gateway.gateway: Creating...
ovh_cloud_project_gateway.gateway: Still creating... [10s elapsed]
ovh_cloud_project_gateway.gateway: Creation complete after 13s [id=7dafdcfe-xxxx-xxxx-xxxx-240df8f93af1]
ovh_cloud_project_kube.my_multizone_cluster: Creating...
ovh_cloud_project_kube.my_multizone_cluster: Still creating... [10s elapsed]
ovh_cloud_project_kube.my_multizone_cluster: Still creating... [20s elapsed]
ovh_cloud_project_kube.my_multizone_cluster: Still creating... [30s elapsed]
...
ovh_cloud_project_kube.my_multizone_cluster: Still creating... [1m40s elapsed]
ovh_cloud_project_kube.my_multizone_cluster: Still creating... [1m50s elapsed]
ovh_cloud_project_kube.my_multizone_cluster: Still creating... [2m0s elapsed]
ovh_cloud_project_kube.my_multizone_cluster: Creation complete after 2m2s [id=0196cd9a-xxxx-xxxx-xxxx-3acbb48d6dda]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_c: Creating...
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_a: Creating...
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_b: Creating...
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_c: Still creating... [10s elapsed]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_a: Still creating... [10s elapsed]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_b: Still creating... [10s elapsed]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_c: Still creating... [20s elapsed]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_a: Still creating... [20s elapsed]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_b: Still creating... [20s elapsed]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_a: Still creating... [30s elapsed]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_c: Still creating... [30s elapsed]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_b: Still creating... [30s elapsed]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_a: Still creating... [40s elapsed]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_c: Still creating... [40s elapsed]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_b: Still creating... [40s elapsed]
...
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_c: Still creating... [4m0s elapsed]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_b: Still creating... [4m0s elapsed]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_c: Still creating... [4m10s elapsed]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_a: Still creating... [4m10s elapsed]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_b: Still creating... [4m10s elapsed]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_a: Still creating... [4m20s elapsed]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_c: Still creating... [4m20s elapsed]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_b: Still creating... [4m20s elapsed]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_c: Creation complete after 4m24s [id=0196cd9c-xxxx-xxxx-xxxx-8e1925c4c18e]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_b: Creation complete after 4m24s [id=0196cd9c-xxxx-xxxx-xxxx-96a18b9202ff]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_a: Still creating... [4m30s elapsed]
ovh_cloud_project_kube_nodepool.node_pool_multi_zones_a: Creation complete after 4m35s [id=0196cd9c-xxxx-xxxx-xxxx-8a08cdc2e68d]

Apply complete! Resources: 7 added, 0 changed, 0 destroyed.

Outputs:

kubeconfig_file_eu_west_par = &lt;sensitive&gt;</code></pre>



<p>Our MKS cluster in 3 AZs has been deployed 🎉</p>



<p>To connect to it, retrieve the kubeconfig file locally:</p>



<pre class="wp-block-code"><code class="">$ terraform output -raw kubeconfig_file_eu_west_par &gt; ~/.kube/multi-zone-mks.yml</code></pre>



<h3 class="wp-block-heading">Connect and discover your MKS cluster</h3>



<p>Initialize or append the KUBECONFIG environment variable with the new kubeconfig file:</p>



<pre class="wp-block-code"><code class="">export KUBECONFIG=/Users/my-user/.kube/mks.yml:/Users/my-user/.kube/multi-zone-mks.yml</code></pre>
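<p>Once merged, you can list the available contexts and switch to the new cluster. The context name below is illustrative; run <code>kubectl config get-contexts</code> first to see the real names in your kubeconfig files:</p>



<pre class="wp-block-code"><code class=""># list the contexts from all kubeconfig files referenced in KUBECONFIG
$ kubectl config get-contexts

# switch to the multi-zone cluster (context name is an example, adapt it)
$ kubectl config use-context kubernetes-admin@multi-zone-mks</code></pre>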



<p>Display the node pools. Our cluster has 3 node pools, one per AZ:</p>



<pre class="wp-block-code"><code class="">$ kubectl get np
NAME             FLAVOR   AUTOSCALED   MONTHLYBILLED   ANTIAFFINITY   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   MIN   MAX   AGE
my-pool-zone-a   b3-8     false        false           false          3         3         3            3           0     100   7h8m
my-pool-zone-b   b3-8     false        false           false          3         3         3            3           0     100   7h8m
my-pool-zone-c   b3-8     false        false           false          3         3         3            3           0     100   7h8m</code></pre>
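<p>As a quick sanity check (optional), you can also verify that the 9 nodes really span the three availability zones by displaying the standard <code>topology.kubernetes.io/zone</code> label on each node:</p>



<pre class="wp-block-code"><code class=""># show each node with its availability zone label
$ kubectl get nodes -L topology.kubernetes.io/zone</code></pre>



<p>Each node should report <code>eu-west-par-a</code>, <code>eu-west-par-b</code> or <code>eu-west-par-c</code> in the ZONE column.</p>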



<p>You can also display the control plane&#8217;s pods in order to discover the new components of MKS Premium:</p>



<pre class="wp-block-code"><code class="">$ kubectl get po -n kube-system</code></pre>



<h2 class="wp-block-heading">How To</h2>



<h3 class="wp-block-heading">Deploy pods across several availability zones</h3>



<p>Now, let&#8217;s create a Deployment with 6 pods and ask Kubernetes to deploy them across our 3 AZs (in the three node pools).</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="713" src="https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-12-1024x713.png" alt="" class="wp-image-28897" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-12-1024x713.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-12-300x209.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-12-768x535.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-12-1536x1070.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-12.png 1588w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>To do that, create an <strong>nginx-cross-az.yaml</strong> file with the following content:</p>



<pre class="wp-block-code"><code class="">apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-cross-az
  labels:
    app: nginx-cross-az
spec:
  replicas: 6
  selector:
    matchLabels:
      app: nginx-cross-az
  template:
    metadata:
      labels:
        app: nginx-cross-az
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: "topology.kubernetes.io/zone"
                operator: In
                values:
                - eu-west-par-a
                - eu-west-par-b
                - eu-west-par-c
      containers:
      - name: nginx
        image: nginx:1.28.0
        ports:
        - containerPort: 80</code></pre>



<p>Thanks to the nodeAffinity feature of Kubernetes, we declare that our 6 replicas (pods) may only be scheduled in the 3 zones: <code>eu-west-par-a</code>, <code>eu-west-par-b</code>, <code>eu-west-par-c</code>.</p>
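<p>Note that this nodeAffinity rule only restricts scheduling to these three zones; on its own it does not guarantee an even spread of 2 pods per zone. If you want the scheduler to actively balance the replicas across zones, the usual tool is a <code>topologySpreadConstraints</code> block. A minimal sketch, to add to the same Pod template spec:</p>



<pre class="wp-block-code"><code class="">      topologySpreadConstraints:
      - maxSkew: 1                                # at most 1 pod of difference between zones
        topologyKey: topology.kubernetes.io/zone  # spread across availability zones
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: nginx-cross-az</code></pre>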



<p>Create a new namespace and apply the deployment:</p>



<pre class="wp-block-code"><code class="">$ kubectl create ns hello-app
$ kubectl apply -f nginx-cross-az.yaml -n hello-app</code></pre>



<p>As you can see, 6 pods have been created, and they are running on nodes located in the 3 AZs.</p>



<pre class="wp-block-code"><code class="">$ kubectl get po -o wide -l app=nginx-cross-az -n hello-app
NAME                             READY   STATUS    RESTARTS   AGE    IP             NODE                         NOMINATED NODE   READINESS GATES
nginx-cross-az-6ffd957c4-7528p   1/1     Running   0          6s     10.240.2.140   my-pool-zone-b-tr6wf-5wfgz   &lt;none&gt;           &lt;none&gt;
nginx-cross-az-6ffd957c4-96mnh   1/1     Running   0          6s     10.240.3.91    my-pool-zone-c-wgrl6-b2f9s   &lt;none&gt;           &lt;none&gt;
nginx-cross-az-6ffd957c4-b48cv   1/1     Running   0          115m   10.240.6.182   my-pool-zone-c-wgrl6-lp22l   &lt;none&gt;           &lt;none&gt;
nginx-cross-az-6ffd957c4-k7rwf   1/1     Running   0          115m   10.240.1.237   my-pool-zone-b-tr6wf-ct7fs   &lt;none&gt;           &lt;none&gt;
nginx-cross-az-6ffd957c4-pb7zp   1/1     Running   0          115m   10.240.8.195   my-pool-zone-a-b9ztj-gt5vd   &lt;none&gt;           &lt;none&gt;
nginx-cross-az-6ffd957c4-vhhcw   1/1     Running   0          6s     10.240.7.40    my-pool-zone-a-b9ztj-brgpq   &lt;none&gt;           &lt;none&gt;</code></pre>
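<p>If you want to quickly count how the replicas landed on each node, a small kubectl one-liner can help:</p>



<pre class="wp-block-code"><code class=""># print only the node name of each pod, then count occurrences per node
$ kubectl get po -l app=nginx-cross-az -n hello-app -o custom-columns=NODE:.spec.nodeName --no-headers | sort | uniq -c</code></pre>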



<h3 class="wp-block-heading">Deploy pods only in a desired availability zone</h3>



<p>You can also choose to deploy a Deployment with 3 replicas in a single AZ of your choice, for example only in <strong>eu-west-par-a</strong>.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="713" src="https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-13-1024x713.png" alt="" class="wp-image-28898" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-13-1024x713.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-13-300x209.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-13-768x535.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-13-1536x1070.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-13.png 1588w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Create an <strong>nginx-one-az.yaml</strong> file with the following content:</p>



<pre class="wp-block-code"><code class="">apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-one-az
  labels:
    app: nginx-one-az
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-one-az
  template:
    metadata:
      labels:
        app: nginx-one-az
    spec:
      nodeSelector:
        topology.kubernetes.io/zone: eu-west-par-a
      containers:
      - name: nginx
        image: nginx:1.28.0
        ports:
        - containerPort: 80</code></pre>



<p>Deploy the manifest file in your cluster:</p>



<pre class="wp-block-code"><code class="">$ kubectl apply -f nginx-one-az.yaml -n hello-app
deployment.apps/nginx-one-az created</code></pre>



<p>As you can see, our three pods are running in the PAR region, only on the <code><strong>zone-a</strong></code> nodes:</p>



<pre class="wp-block-code"><code class="">$ kubectl get po -o wide -l app=nginx-one-az -n hello-app
NAME                            READY   STATUS    RESTARTS   AGE    IP             NODE                         NOMINATED NODE   READINESS GATES
nginx-one-az-6b5f9bdccc-8vv9l   1/1     Running   0          98s    10.240.7.13    my-pool-zone-a-b9ztj-brgpq   &lt;none&gt;           &lt;none&gt;
nginx-one-az-6b5f9bdccc-ck99s   1/1     Running   0          100s   10.240.5.216   my-pool-zone-a-b9ztj-mss8j   &lt;none&gt;           &lt;none&gt;
nginx-one-az-6b5f9bdccc-tlg4d   1/1     Running   0          96s    10.240.8.221   my-pool-zone-a-b9ztj-gt5vd   &lt;none&gt;           &lt;none&gt;</code></pre>



<h2 class="wp-block-heading">Want to go further?</h2>



<p>Want to learn more on this topic? In the coming days, we will publish a blog post about the MKS Premium plan.</p>



<p>Visit the <a href="https://labs.ovhcloud.com/en/managed-kubernetes-service-mks-premium-plan/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Managed Kubernetes Service (MKS) Premium plan</a> page on the OVHcloud Labs website to learn more about Premium MKS.</p>



<p>Join the <strong>free</strong> Beta: <a href="https://labs.ovhcloud.com/en/managed-kubernetes-service-mks-premium-plan/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://labs.ovhcloud.com/en/managed-kubernetes-service-mks-premium-plan/</a></p>



<p>Read the documentation about the new <a href="https://help.ovhcloud.com/csm/fr-public-cloud-kubernetes-premium?id=kb_article_view&amp;sysparm_article=KB0067581" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Managed Kubernetes Service (MKS) Premium plan</a>.</p>



<p>Join us on <a href="https://discord.com/channels/850031577277792286/1366761790150541402" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Discord</a> and give us your feedback.</p>



<p></p>
<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fdeploy-your-workloads-on-3-availability-zones-with-our-new-managed-kubernetes-services-mks-premium-plan%2F&amp;action_name=Deploy%20your%20workloads%20on%203%20availability%20zones%20with%20our%20new%20Managed%20Kubernetes%20Services%20%28MKS%29%20%26%238216%3BPremium%26%238217%3B%20plan&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Solutions at OVHcloud to overcome the Docker Hub pull rate limits</title>
		<link>https://blog.ovhcloud.com/solutions-at-ovhcloud-to-overcome-the-docker-hub-pull-rate-limits/</link>
		
		<dc:creator><![CDATA[Aurélie Vache]]></dc:creator>
		<pubDate>Fri, 11 Apr 2025 06:53:38 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[Tranches de Tech & co]]></category>
		<category><![CDATA[Docker Hub]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[OVHcloud]]></category>
		<category><![CDATA[OVHcloud Managed Kubernetes]]></category>
		<category><![CDATA[OVHcloud Managed Private Registry]]></category>
		<category><![CDATA[Public Cloud]]></category>
		<category><![CDATA[registry]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=28623</guid>

					<description><![CDATA[For the past few months, Docker has been announcing the implementation of new pull rate limits for the Docker Hub. The most significant change is the 10 pulls-per-hour limit, per IP address, for unauthenticated users that can quickly lead to a &#8220;You have reached your pull rate limit&#8221; error message. Even if these changes have [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fsolutions-at-ovhcloud-to-overcome-the-docker-hub-pull-rate-limits%2F&amp;action_name=Solutions%20at%20OVHcloud%20to%20overcome%20the%20Docker%20Hub%20pull%20rate%20limits&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="960" height="540" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/ovh_solutions_overcome_docker_hub_pull_rate_limits-1.png" alt="" class="wp-image-28707" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/ovh_solutions_overcome_docker_hub_pull_rate_limits-1.png 960w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/ovh_solutions_overcome_docker_hub_pull_rate_limits-1-300x169.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/ovh_solutions_overcome_docker_hub_pull_rate_limits-1-768x432.png 768w" sizes="auto, (max-width: 960px) 100vw, 960px" /></figure>



<p>For the past few months, <a href="https://www.docker.com/blog/revisiting-docker-hub-policies-prioritizing-developer-experience/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Docker has been announcing the implementation of new pull rate limits for the Docker Hub</a>. The most significant change is the 10 pulls-per-hour limit, per IP address, for unauthenticated users that can quickly lead to a &#8220;You have reached your pull rate limit&#8221; error message.</p>



<p>Even if these changes have been implemented and rolled back as of April 1, 2025, at OVHcloud we are aware that such changes could impact your deployments and daily work.</p>



<p>In this blog post, you will find several solutions and best practices that can help you reduce Docker pull commands and avoid hitting Docker Hub&#8217;s pull rate limit.</p>



<h3 class="wp-block-heading">Use OVHcloud Managed Private Registry and activate the proxy cache</h3>



<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="800" height="800" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/managed_private_registry.png" alt="" class="wp-image-28658" style="width:181px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/managed_private_registry.png 800w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/managed_private_registry-300x300.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/managed_private_registry-150x150.png 150w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/managed_private_registry-768x768.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/managed_private_registry-70x70.png 70w" sizes="auto, (max-width: 800px) 100vw, 800px" /></figure>



<p><a href="https://www.ovhcloud.com/en/public-cloud/managed-rancher-service/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud Managed Private Registry</a> (MPR) is a container image registry based on the CNCF project Harbor. It allows you to store and manage Docker (or OCI-compliant) container images and artifacts in a private, secure, and scalable environment, hosted on OVHcloud&#8217;s infrastructure.</p>



<p>MPR provides a <strong>proxy cache</strong> feature that lets you mirror and cache images from external registries, like <strong>Docker Hub</strong>, <strong>GitHub Container Registry</strong>, <strong>Quay</strong>, <strong>JFrog Artifactory Registry</strong>, etc. External registries can be private or public. This improves performance and helps you stay under the rate limits imposed by external registries 💪.</p>
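<p>Once a proxy cache project is configured in your Harbor instance, pulling a Docker Hub image through it is just a matter of prefixing the image name with your registry URL and the proxy project name. The registry URL and the <code>dockerhub-proxy</code> project below are placeholders to adapt to your own setup; note the <code>library/</code> prefix required for Docker Hub official images:</p>



<pre class="wp-block-code"><code class=""># pull nginx through the Harbor proxy cache instead of Docker Hub directly
# (registry URL and project name are examples)
$ docker pull xxxxxxxx.c1.gra9.container-registry.ovh.net/dockerhub-proxy/library/nginx:latest</code></pre>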



<h4 class="wp-block-heading">Configure proxy cache in OVHcloud Managed Private Registry</h4>



<p>If you haven&#8217;t deployed an MPR yet, you can deploy it through the <a href="https://help.ovhcloud.com/csm/en-gb-public-cloud-private-registry-creation?id=kb_article_view&amp;sysparm_article=KB0050325" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud Control Panel</a>, the <a href="https://help.ovhcloud.com/csm/en-public-cloud-private-registry-creation-via-terraform?id=kb_article_view&amp;sysparm_article=KB0050330" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud Terraform provider</a>, the <a href="https://help.ovhcloud.com/csm/en-public-cloud-private-registry-creation-with-pulumi?id=kb_article_view&amp;sysparm_article=KB0061073" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud Pulumi provider</a> or even the API. Follow the guide that matches your needs.</p>



<p>First, log in to the <a href="https://help.ovhcloud.com/csm/en-gb-public-cloud-private-registry-connect-to-ui?id=kb_article_view&amp;sysparm_article=KB0050321" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Harbor user interface</a> of your private registry; follow the guide if you need to.</p>



<p>⚠️ In order to activate the proxy cache, you need to log in to the Harbor UI with an administrator account.</p>



<h5 class="wp-block-heading">Registry endpoint creation</h5>



<p>In the left sidebar, click on <strong>Registries</strong> (inside the Administration section).</p>



<p>Then click on the <strong>New endpoint</strong> button.</p>



<p>Select Docker Hub in the provider list, enter a name (&#8220;Docker Hub&#8221; for example), then fill in your Docker Hub login in the Access ID field and your Docker Hub password in the Access Secret field.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="674" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.21-1024x674.png" alt="" class="wp-image-28663" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.21-1024x674.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.21-300x197.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.21-768x505.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.21-1536x1010.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.21.png 1818w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>⚠️ Note that we <strong>strongly recommend</strong> using a <strong>Docker account</strong> (even a free one) to <strong>avoid the rate limits</strong> applied to unauthenticated users when pulling images. Without authentication, Docker Hub enforces strict pull limits, which may cause failures when pulling frequently used images.</p>



<p>Click on the <strong>Test connection</strong> button to test if your login and password are correct.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="620" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.39-1024x620.png" alt="" class="wp-image-28664" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.39-1024x620.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.39-300x182.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.39-768x465.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.39.png 1228w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Now click on the <strong>OK</strong> button in order to create the new endpoint.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="330" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.56-1024x330.png" alt="" class="wp-image-28665" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.56-1024x330.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.56-300x97.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.56-768x247.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.56-1536x494.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-11.16.56-2048x659.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>The Docker Hub endpoint is created 🎉</p>



<h5 class="wp-block-heading">Proxy cache project creation</h5>



<p>In the left sidebar, click on <strong>Projects</strong>, then click on the <strong>New project</strong> button.</p>



<p>Enter a project name (&#8220;docker-hub&#8221; for example), enable the Proxy Cache, click on the Docker Hub endpoint in the list and click on the <strong>OK</strong> button.</p>



<p>ℹ️ Note that a project is private by default, so you have to tick the Public checkbox if you want to change the visibility of a project.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="735" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-33-1024x735.png" alt="" class="wp-image-28669" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-33-1024x735.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-33-300x215.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-33-768x551.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-33.png 1182w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>⚠️ The name of a proxy cache project should not contain dots, as they can cause issues with external tools like Kaniko.</p>



<p>Your proxy cache project has been created 🎉</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="373" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-34-1024x373.png" alt="" class="wp-image-28670" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-34-1024x373.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-34-300x109.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-34-768x280.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-34-1536x560.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-34-2048x746.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>⚠️ A proxy cache project works similarly to a normal Harbor project, except that you are not able to push images to a proxy cache project.</p>



<p>Now, when you want to pull a Docker image hosted on Docker Hub through your proxy cache, instead of pulling directly from Docker Hub, you need to configure your docker/podman pull commands and Kubernetes Pod manifests to pull images from the OVHcloud Managed Private Registry:</p>



<pre class="wp-block-code"><code class="">$ docker pull xxxxxxxx.c1.de1.container-registry.ovh.net/docker-hub/ovhcom/ovh-platform-hello:latest
latest: Pulling from docker-hub/ovhcom/ovh-platform-hello
1f3e46996e29: Pull complete 
6aa905c35cc0: Pull complete 
Digest: sha256:fddb76f0eb92d95b3721bfa0ea87350c5d39ea262e90cd30d66f429bb40c8b07
Status: Downloaded newer image for xxxxxxxx.c1.de1.container-registry.ovh.net/docker-hub/ovhcom/ovh-platform-hello:latest
xxxxxxxx.c1.de1.container-registry.ovh.net/docker-hub/ovhcom/ovh-platform-hello:latest</code></pre>
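<p>The same applies to Kubernetes manifests: point the <code>image</code> field at the proxy cache project instead of Docker Hub. A minimal Pod sketch, reusing the registry hostname and image from the pull example above:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ovh-platform-hello
spec:
  containers:
    - name: hello
      # Pulled through the "docker-hub" proxy cache project of the private
      # registry, not directly from Docker Hub
      image: xxxxxxxx.c1.de1.container-registry.ovh.net/docker-hub/ovhcom/ovh-platform-hello:latest
```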



<h3 class="wp-block-heading">Disable the AlwaysPullImages admission plugin on your MKS cluster</h3>



<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="200" height="200" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/Managed-Kubernetes-Service.png" alt="" class="wp-image-28702" style="width:186px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/Managed-Kubernetes-Service.png 200w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Managed-Kubernetes-Service-150x150.png 150w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Managed-Kubernetes-Service-70x70.png 70w" sizes="auto, (max-width: 200px) 100vw, 200px" /></figure>



<p>By default, the <strong>AlwaysPullImages</strong> Kubernetes admission plugin is enabled in your OVHcloud Managed Kubernetes (MKS) cluster.</p>



<p>⚠️ When enabled, this plugin forces the imagePullPolicy of every container to <strong>Always</strong>, no matter what is specified when creating the resource.</p>



<p>This is useful in a multitenant cluster so that users can be assured that their private images can only be used by those who have the credentials to pull them. Without this admission controller, once an image has been pulled to a node, any pod from any user can use it by knowing the image&#8217;s name (assuming the Pod is scheduled onto the right node), without any authorization check against the image.</p>



<p>However, it can generate a lot of pulls from Docker Hub, so you can quickly reach the rate limits.</p>



<p>A solution is therefore to deactivate the AlwaysPullImages admission plugin in your MKS cluster.</p>
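<p>Once the plugin is disabled, the <code>imagePullPolicy</code> you declare is honoured again. As a sketch, setting it to <code>IfNotPresent</code> lets nodes reuse the locally cached image instead of contacting Docker Hub at every Pod start:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.27
      # No longer overridden to Always once AlwaysPullImages is disabled:
      # the node pulls only if the image is not already in its local cache.
      imagePullPolicy: IfNotPresent
```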



<p>In this blog post, we will deactivate it in the OVHcloud Control Panel.</p>
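<p>For reference, the same change can also be scripted against the OVHcloud API. The exact endpoint and payload should be double-checked in the API console for your account, but the cluster customization call looks roughly like a <code>PUT</code> on <code>/cloud/project/{serviceName}/kube/{kubeId}/customization</code> with a body such as:</p>

```json
{
  "apiServer": {
    "admissionPlugins": {
      "disabled": ["AlwaysPullImages"]
    }
  }
}
```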



<h5 class="wp-block-heading">Enable/Disable MKS admission plugins</h5>



<p>Log in to the OVHcloud Control Panel. In the left sidebar, click on <strong>Managed Kubernetes Service</strong> and then click on the desired MKS cluster.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="777" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-15.35.01-1024x777.png" alt="" class="wp-image-28687" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-15.35.01-1024x777.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-15.35.01-300x227.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-15.35.01-768x582.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-15.35.01-1536x1165.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/Capture-decran-2025-04-10-a-15.35.01.png 2044w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>In the <strong>Cluster information</strong> section, scroll down and click on <strong>Enable/disable plugin</strong>. A popup will appear.</p>



<p>Then click on <strong>Disable</strong> for the Always Pull Images plugin and click on the <strong>Save</strong> button.</p>



<figure class="wp-block-image size-large is-resized"><img loading="lazy" decoding="async" width="896" height="1024" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-36-896x1024.png" alt="" class="wp-image-28691" style="width:387px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-36-896x1024.png 896w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-36-262x300.png 262w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-36-768x878.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-36.png 936w" sizes="auto, (max-width: 896px) 100vw, 896px" /></figure>



<p>⚠️ Any change to the admission plugins requires a redeployment of the MKS cluster API server (without data loss), so the API server may be temporarily unavailable during the redeployment.</p>



<figure class="wp-block-image size-large is-resized"><img loading="lazy" decoding="async" width="541" height="1024" src="https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-37-541x1024.png" alt="" class="wp-image-28695" style="width:228px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-37-541x1024.png 541w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-37-159x300.png 159w, https://blog.ovhcloud.com/wp-content/uploads/2025/04/image-37.png 572w" sizes="auto, (max-width: 541px) 100vw, 541px" /></figure>
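<p>Once the API server is back, you can check that the policy is no longer forced. A quick way (assuming kubectl access to your cluster) is a server-side dry-run, which goes through the admission chain without creating anything:</p>

```shell
# Ask the API server (admission plugins included) what the Pod would look like,
# without actually creating it, then print the resulting imagePullPolicy.
kubectl run pullpolicy-test --image=nginx:1.27 --restart=Never \
  --dry-run=server -o jsonpath='{.spec.containers[0].imagePullPolicy}'
```

<p>With AlwaysPullImages disabled, this should print the default policy for a tagged image (<code>IfNotPresent</code>); with the plugin still enabled, it would print <code>Always</code>.</p>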



<h3 class="wp-block-heading">Conclusion</h3>



<p>To learn more about how to use and configure <a href="https://help.ovhcloud.com/csm/fr-documentation-public-cloud-containers-orchestration-managed-private-registry?id=kb_browse_cat&amp;kb_id=574a8325551974502d4c6e78b7421938&amp;kb_category=7939e6a464282d10476b3689cb0d0ed7&amp;spa=1" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud private registries</a> and <a href="https://help.ovhcloud.com/csm/world-documentation-public-cloud-containers-orchestration-managed-kubernetes-k8s?id=kb_browse_cat&amp;kb_id=574a8325551974502d4c6e78b7421938&amp;kb_category=f334d555f49801102d4ca4d466a7fdd2&amp;spa=1" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud MKS clusters</a>, don&#8217;t hesitate to follow our guides.</p>
<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fsolutions-at-ovhcloud-to-overcome-the-docker-hub-pull-rate-limits%2F&amp;action_name=Solutions%20at%20OVHcloud%20to%20overcome%20the%20Docker%20Hub%20pull%20rate%20limits&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Enhancing Kubernetes Security: Detecting Threats in OVHcloud Managed Kubernetes cluster (MKS) Audit Logs with Falco</title>
		<link>https://blog.ovhcloud.com/enhancing-kubernetes-security-detecting-threats-in-ovhcloud-managed-kubernetes-cluster-mks-audit-logs-with-falco/</link>
		
		<dc:creator><![CDATA[Aurélie Vache]]></dc:creator>
		<pubDate>Tue, 11 Feb 2025 08:58:40 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[Tranches de Tech & co]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[OVHcloud]]></category>
		<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=27886</guid>

					<description><![CDATA[Several month ago we discovered Falco, a Cloud Native near real-time threats detection tool, and we saw how to install it on an OVHcloud MKS cluster. Today we will connect our Falco instance to a MKS cluster in order to retrieve Kubernetes Audit Logs events and watch if everything is OK in our cluster. Concretely, [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fenhancing-kubernetes-security-detecting-threats-in-ovhcloud-managed-kubernetes-cluster-mks-audit-logs-with-falco%2F&amp;action_name=Enhancing%20Kubernetes%20Security%3A%20Detecting%20Threats%20in%20OVHcloud%20Managed%20Kubernetes%20cluster%20%28MKS%29%20Audit%20Logs%20with%20Falco&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="484" src="https://blog.ovhcloud.com/wp-content/uploads/2025/02/falco-blogpost-plugin-mks-1-1024x484.jpg" alt="" class="wp-image-28194" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/02/falco-blogpost-plugin-mks-1-1024x484.jpg 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/02/falco-blogpost-plugin-mks-1-300x142.jpg 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/02/falco-blogpost-plugin-mks-1-768x363.jpg 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/02/falco-blogpost-plugin-mks-1-1536x725.jpg 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/02/falco-blogpost-plugin-mks-1.jpg 1749w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Several months ago we discovered <a href="https://falco.org/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Falco</a>, a Cloud Native near real-time threat detection tool, and we saw <a href="https://blog.ovhcloud.com/near-real-time-threats-detection-with-falco-on-ovhcloud-managed-kubernetes/" data-wpel-link="internal">how to install it on an OVHcloud MKS cluster</a>.</p>



<p>Today we will connect our Falco instance to an MKS cluster in order to retrieve <strong>Kubernetes Audit Logs</strong> events and check that everything is OK in our cluster.</p>



<p>Concretely, in this blog post we will:</p>



<ul class="wp-block-list">
<li>deploy an OVHcloud LDP (Logs Data Platform)</li>



<li>create a data stream into this LDP</li>



<li>connect an OVHcloud MKS cluster to the data stream (to send Audit Logs into it)</li>



<li>use the <strong>k8saudit-ovh</strong> Falco plugin to retrieve the Audit Logs of an MKS cluster in real time</li>



<li>test a rule and detect security events based on MKS audit logs activity</li>
</ul>



<h2 class="wp-block-heading">Prerequisites</h2>



<p>This blog post presupposes that you already have a working&nbsp;<a href="https://www.ovhcloud.com/fr/public-cloud/kubernetes/" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">OVHcloud Managed Kubernetes</a>&nbsp;(MKS) cluster, and a running instance of Falco.</p>



<p>If it is not the case, follow the <a href="https://blog.ovhcloud.com/near-real-time-threats-detection-with-falco-on-ovhcloud-managed-kubernetes/" data-wpel-link="internal">Near real-time threats detection with Falco on OVHcloud Managed Kubernetes</a> blog post.</p>



<h2 class="wp-block-heading">Deploying a Logs Data Platform (LDP)</h2>



<p>LDP is OVHcloud&#8217;s managed platform for collecting, processing, analyzing and storing the logs of your OVHcloud products. To be able to access our Kubernetes cluster&#8217;s Audit Logs, we need to deploy an LDP.</p>



<p>Find more information on our dedicated&nbsp;<a href="https://www.ovhcloud.com/en/identity-security-operations/logs-data-platform/" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">LDP page</a>.</p>



<p>We can deploy an LDP through the OVHcloud Control Panel or the API. In this blog post, we will deploy it through the Control Panel.</p>



<p>First, you have to log in to the&nbsp;<a href="https://www.ovh.com/manager/#/dedicated/dbaas/logs/order" target="_blank" rel="noreferrer noopener" data-wpel-link="exclude">OVHcloud Control Panel</a>, click on the <strong>Bare Metal Cloud</strong> section located at the top in the header and then click on the <strong>Logs Data Platform</strong> in the sidebar.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="529" src="https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-1-1024x529.png" alt="" class="wp-image-27901" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-1-1024x529.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-1-300x155.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-1-768x396.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-1-1536x793.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-1-2048x1057.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Choose the LDP plan you want: <em>Standard</em> (free) or <em>Enterprise</em> one, depending on your needs.</p>



<p>Select a <strong>region</strong> (<em>North America</em> or <em>Europe</em>). We will choose &#8220;<strong>GRA</strong>&#8221; for this blog post; click on the <strong>Order</strong> button and follow the instructions.</p>



<p>After several minutes your LDP will be created. </p>



<p>Refresh the page, click on the new deployed LDP, then enter a password and click on the <strong>Save</strong> button.</p>



<h2 class="wp-block-heading">Creating a Data stream and retrieving the Websocket URL</h2>



<p>Our Kubernetes Audit Logs will be stored in a data stream so click on the <strong>Data stream</strong> tab and then click on the <strong>Add data stream</strong> button.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="466" src="https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-3-1024x466.png" alt="" class="wp-image-27905" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-3-1024x466.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-3-300x137.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-3-768x350.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-3-1536x700.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-3-2048x933.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Choose a name for the data stream. I like to name it after my MKS cluster, followed by &#8220;-audit-logs&#8221;, so it&#8217;s easy to know what the data stream is for. My MKS cluster&#8217;s name is &#8220;my-rancher-mks-cluster&#8221;, so let&#8217;s name it &#8220;my-rancher-mks-cluster-audit-logs&#8221;. Fill in the description (mandatory).</p>



<p>The Falco OVHcloud Audit Logs plugin you will use receives the audit logs through a Websocket, so you need to enable <strong>Websocket broadcasting</strong>, then click on the <strong>Save</strong> button.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="730" src="https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-5-1024x730.png" alt="" class="wp-image-27909" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-5-1024x730.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-5-300x214.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-5-768x548.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-5-1536x1095.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-5-2048x1460.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Now, to retrieve the Websocket URL of your data stream, click on the<strong> Data stream</strong> tab, then click on the<strong> &#8230;</strong> button (located at the right in the line of your data stream), and click on <strong>Monitor in real time</strong> action.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="674" src="https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-6-1024x674.png" alt="" class="wp-image-27913" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-6-1024x674.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-6-300x197.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-6-768x505.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-6-1536x1011.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-6-2048x1347.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Finally, click on the <strong>Action</strong> button, then on <strong>Copy Websocket address</strong>, and save the LDP Websocket URL somewhere ;-).</p>



<p>Note that the Websocket address has this kind of format: <code>wss://&lt;region&gt;.logs.ovh.com/tail/?tk=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx</code></p>



<h2 class="wp-block-heading">Connect a MKS cluster to a LDP data stream</h2>



<p>Now we need to send the Kubernetes Audit Logs of our MKS cluster into the data stream.</p>



<p>For that, in the OVHcloud Control Panel, click on the <strong>Public Cloud</strong> section in the header and then in <strong>Managed Kubernetes Service</strong> in the sidebar.</p>



<p>Click on your Kubernetes cluster (my-rancher-mks-cluster for example), then in the <strong>Logs</strong> tab and click on the <strong>Subscribe</strong> button.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="500" src="https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-7-1024x500.png" alt="" class="wp-image-27917" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-7-1024x500.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-7-300x146.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-7-768x375.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-7-1536x750.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-7.png 2040w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Click on the <strong>Add data stream</strong> button to visualize the Audit Logs of your cluster in real time. Then select the LDP instance and click on the <strong>Subscribe</strong> button for the data stream you created:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="544" src="https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-8-1024x544.png" alt="" class="wp-image-27918" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-8-1024x544.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-8-300x159.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-8-768x408.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-8-1536x815.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/01/image-8.png 2046w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<h2 class="wp-block-heading">Retrieve the MKS Audit Logs with Falco</h2>



<p>Falco can receive <strong>Events</strong>, compare them to a set of <strong>Rules</strong> to determine the actions to perform and generate <strong>Alerts</strong> to different endpoints. </p>



<p>Thanks to the <strong>k8saudit-ovh</strong> plugin, Falco can receive a new sort of <strong>Events</strong>: the Audit Logs of your MKS cluster. A set of <a href="https://github.com/falcosecurity/plugins/blob/main/plugins/k8saudit/rules/k8s_audit_rules.yaml" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">dedicated rules</a> also exists for these events.</p>



<p>Concretely, when a user executes <strong>kubectl</strong> commands in an OVHcloud MKS cluster, Audit Logs are generated. Falco listens to them and, depending on the configured rules, generates alerts.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="961" height="327" src="https://blog.ovhcloud.com/wp-content/uploads/2025/02/image.png" alt="" class="wp-image-28190" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/02/image.png 961w, https://blog.ovhcloud.com/wp-content/uploads/2025/02/image-300x102.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/02/image-768x261.png 768w" sizes="auto, (max-width: 961px) 100vw, 961px" /></figure>



<p>Let&#8217;s install or update a Falco configuration running in a MKS cluster and use this plugin.</p>



<p>Create a <strong>values.yaml</strong> file with the following content:</p>



<pre class="wp-block-code"><code class="">tty: true
kubernetes: false

# Just a Deployment with 1 replica (instead of a Daemonset) to have only one Pod that pulls the MKS Audit Logs from a OVHcloud LDP
controller:
  kind: deployment
  deployment:
    replicas: 1

falco:
  rule_matching: all
  rules_files:
    - /etc/falco/k8s_audit_rules.yaml
    - /etc/falco/rules.d
  plugins:
    - name: k8saudit-ovh
      library_path: libk8saudit-ovh.so
      open_params: "&lt;region&gt;.logs.ovh.com/tail/?tk=&lt;ID&gt;" # Replace with your LDP Websocket URL
    - name: json
      library_path: libjson.so
      init_config: ""
  # Plugins that Falco will load. Note: the same plugins are installed by the falcoctl-artifact-install init container.
  load_plugins: [k8saudit-ovh, json]

driver:
  enabled: false
collectors:
  enabled: false

# use falcoctl to install automatically the plugin and the rules
falcoctl:
  artifact:
    install:
      enabled: true
    follow:
      enabled: true
  config:
    indexes:
    - name: falcosecurity
      url: https://falcosecurity.github.io/falcoctl/index.yaml
    artifact:
      allowedTypes:
        - plugin
        - rulesfile
      install:
        resolveDeps: false
        refs: [k8saudit-rules:0, k8saudit-ovh:0.1, json:0]
      follow:
        refs: [k8saudit-rules:0]</code></pre>



<p>This <strong>values.yaml </strong>file will install Falco with the <strong>k8saudit-ovh</strong> and the <strong>json</strong> plugins. </p>



<p>Install the latest version of Falco with the&nbsp;<code>helm install</code>&nbsp;command. It will install Falco with the k8saudit-ovh and json plugins, and create a new&nbsp;<code>falco</code>&nbsp;namespace:</p>



<pre class="wp-block-code"><code class="">$ helm install falco --create-namespace --namespace falco --values=values.yaml falcosecurity/falco

NAME: falco
LAST DEPLOYED: Mon Feb 10 10:15:20 2025
NAMESPACE: falco
STATUS: deployed
REVISION: 1
NOTES:
No further action should be required.</code></pre>



<p>Or, if you already have Falco deployed in a Kubernetes cluster, you can use the <code>helm upgrade</code> command instead:</p>



<pre class="wp-block-code"><code class="">$ helm upgrade falco --create-namespace --namespace falco --values=values.yaml falcosecurity/falco</code></pre>



<p>You can check if the Falco pods are correctly running:</p>



<pre class="wp-block-code"><code class="">$ kubectl get pods -n falco

NAME                                      READY   STATUS    RESTARTS   AGE
falco-6b8bc77d8b-v24jr                    2/2     Running   0          96s
falco-falcosidekick-67877d6946-4hmbn      1/1     Running   0          96s
falco-falcosidekick-67877d6946-tpjk6      1/1     Running   0          96s
falco-falcosidekick-ui-78b96fd57d-4wb6q   1/1     Running   0          96s
falco-falcosidekick-ui-78b96fd57d-v7rnm   1/1     Running   0          96s
falco-falcosidekick-ui-redis-0            1/1     Running   0          96s</code></pre>



<p>Wait and execute the command again if the pods are in “Init” or “ContainerCreating” state.</p>



<p>Once the Falco pod is ready, run the following command to see the logs:</p>



<pre class="wp-block-code"><code class="">kubectl logs -l app.kubernetes.io/name=falco -n falco -c falco</code></pre>



<p>You should see logs like this:</p>



<pre class="wp-block-code"><code class="">$ kubectl logs -l app.kubernetes.io/name=falco -n falco -c falco

Mon Feb 10 09:15:35 2025:    /etc/falco/k8s_audit_rules.yaml | schema validation: ok
Mon Feb 10 09:15:35 2025: Hostname value has been overridden via environment variable to: my-pool-1-node-921b61
Mon Feb 10 09:15:35 2025: The chosen syscall buffer dimension is: 8388608 bytes (8 MBs)
Mon Feb 10 09:15:35 2025: Starting health webserver with threadiness 2, listening on 0.0.0.0:8765
Mon Feb 10 09:15:35 2025: Loaded event sources: syscall, k8s_audit
Mon Feb 10 09:15:35 2025: Enabled event sources: k8s_audit
Mon Feb 10 09:15:35 2025: Opening 'k8s_audit' source with plugin 'k8saudit-ovh'
{"hostname":"my-pool-1-node-921b61","output":"09:15:40.698757000: Warning K8s Operation performed by user not in allowed list of users (user=csi-cinder-controller target=csi-6afb06dce281b86b7bab718b5d966dc261b2b1554941ae449519a128cb2e3fb3/volumeattachments verb=patch uri=/apis/storage.k8s.io/v1/volumeattachments/csi-6afb06dce281b86b7bab718b5d966dc261b2b1554941ae449519a128cb2e3fb3/status resp=200)","output_fields":{"evt.time":1739178940698757000,"ka.response.code":"200","ka.target.name":"csi-6afb06dce281b86b7bab718b5d966dc261b2b1554941ae449519a128cb2e3fb3","ka.target.resource":"volumeattachments","ka.uri":"/apis/storage.k8s.io/v1/volumeattachments/csi-6afb06dce281b86b7bab718b5d966dc261b2b1554941ae449519a128cb2e3fb3/status","ka.user.name":"csi-cinder-controller","ka.verb":"patch"},"priority":"Warning","rule":"Disallowed K8s User","source":"k8s_audit","tags":["k8s"],"time":"2025-02-10T09:15:40.698757000Z"}
{"hostname":"my-pool-1-node-921b61","output":"09:15:57.508657000: Warning K8s Operation performed by user not in allowed list of users (user=yacht target=my-pool-1.18051c0a88716868/events verb=patch uri=/api/v1/namespaces/default/events/my-pool-1.18051c0a88716868 resp=403)","output_fields":{"evt.time":1739178957508657000,"ka.response.code":"403","ka.target.name":"my-pool-1.18051c0a88716868","ka.target.resource":"events","ka.uri":"/api/v1/namespaces/default/events/my-pool-1.18051c0a88716868","ka.user.name":"yacht","ka.verb":"patch"},"priority":"Warning","rule":"Disallowed K8s User","source":"k8s_audit","tags":["k8s"],"time":"2025-02-10T09:15:57.508657000Z"}
{"hostname":"my-pool-1-node-921b61","output":"09:15:57.807013000: Warning K8s Operation performed by user not in allowed list of users (user=yacht target=my-pool-1/nodepools verb=update uri=/apis/kube.cloud.ovh.com/v1alpha1/nodepools/my-pool-1/status resp=200)","output_fields":{"evt.time":1739178957807013000,"ka.response.code":"200","ka.target.name":"my-pool-1","ka.target.resource":"nodepools","ka.uri":"/apis/kube.cloud.ovh.com/v1alpha1/nodepools/my-pool-1/status","ka.user.name":"yacht","ka.verb":"update"},"priority":"Warning","rule":"Disallowed K8s User","source":"k8s_audit","tags":["k8s"],"time":"2025-02-10T09:15:57.807013000Z"}</code></pre>



<p>The logs confirm that Falco <strong>k8saudit-ovh</strong> plugin and the <strong>k8saudit</strong> rules have been loaded correctly 💪.</p>
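<p>Since each alert is a single JSON object, standard tooling works well on this stream. For example, assuming <code>jq</code> is installed, you can reduce the verbose output to the fields you care about (a shortened sample alert stands in for the real log line here):</p>

```shell
# Extract priority, rule name and Kubernetes user from a Falco alert line.
echo '{"priority":"Warning","rule":"Disallowed K8s User","output_fields":{"ka.user.name":"yacht","ka.verb":"patch"}}' \
  | jq -r '[.priority, .rule, .output_fields["ka.user.name"]] | @tsv'
```

<p>On a real cluster, you would pipe the <code>kubectl logs -l app.kubernetes.io/name=falco -n falco -c falco -f</code> output into the same <code>jq</code> filter.</p>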



<h2 class="wp-block-heading">Testing Falco</h2>



<p>In order to test Falco, we need to know which rules are installed by default. In our case, as defined in the values.yaml file, the <strong>k8saudit-ovh</strong> plugin follows the <a href="https://github.com/falcosecurity/plugins/blob/main/plugins/k8saudit/rules/k8s_audit_rules.yaml" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">k8s_audit_rules.yaml</a> file. You can take a look at it to discover the existing rules.</p>



<p>In this blog post we will test one of the well-known default K8s audit rules:</p>



<pre class="wp-block-code"><code class="">- rule: Attach/Exec Pod
  desc: &gt;
    Detect any attempt to attach/exec to a pod
  condition: kevt_started and pod_subresource and (kcreate or kget) and ka.target.subresource in (exec,attach) and not user_known_exec_pod_activities
  output: Attach/Exec to pod (user=%ka.user.name pod=%ka.target.name resource=%ka.target.resource ns=%ka.target.namespace action=%ka.target.subresource command=%ka.uri.param[command])
  priority: NOTICE
  source: k8s_audit
  tags: [k8s]</code></pre>



<p>This rule is interesting because an event will be generated whenever a user executes commands in a pod.</p>
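
<p>Note the <code>not user_known_exec_pod_activities</code> part of the condition: this macro is an extension point that is empty by default, and you can override it in a custom rules file (loaded after the default rules) to silence exec activity you consider legitimate. As a minimal sketch, where the <code>break-glass-admin</code> user name is a made-up example:</p>

<pre class="wp-block-code"><code class=""># Hypothetical override: treat exec/attach performed by a dedicated
# break-glass account as known activity, so the rule stays quiet for it.
- macro: user_known_exec_pod_activities
  condition: (ka.user.name = "break-glass-admin")</code></pre>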



<p>Let&#8217;s test the rule!</p>



<p>In a tab of your terminal, watch the coming logs:</p>



<pre class="wp-block-code"><code class="">$ kubectl logs -l app.kubernetes.io/name=falco -n falco -c falco -f</code></pre>



<p>In another tab of your terminal, create an Nginx pod and execute a command in it:</p>



<pre class="wp-block-code"><code class="">$ kubectl run nginx --image=nginx<br><br>$ kubectl exec -it nginx -- cat /etc/shadow</code></pre>



<p>A few seconds later, you should see this <strong>Attach/Exec to pod</strong> log entry:</p>



<pre class="wp-block-code"><code class="">...
{"hostname":"my-pool-1-node-921b61","output":"09:29:46.302906000: Notice Attach/Exec to pod (user=kubernetes-admin pod=nginx-676b6c5bbc-4xc6t resource=pods ns=hello-app action=exec command=cat)","output_fields":{"evt.time":1739179786302906000,"ka.target.name":"nginx-676b6c5bbc-4xc6t","ka.target.namespace":"hello-app","ka.target.resource":"pods","ka.target.subresource":"exec","ka.uri.param[command]":"cat","ka.user.name":"kubernetes-admin"},"priority":"Notice","rule":"Attach/Exec Pod","source":"k8s_audit","tags":["k8s"],"time":"2025-02-10T09:29:46.302906000Z"}
...</code></pre>
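
<p>Since Falco emits one JSON object per line, you can filter the stream for a given rule directly from the terminal. As a small sketch, run here against a captured sample line (on a live cluster, pipe <code>kubectl logs ... -f</code> through the same <code>grep</code>):</p>

<pre class="wp-block-code"><code class=""># A captured Falco event (abridged from the output above):
sample='{"hostname":"my-pool-1-node-921b61","priority":"Notice","rule":"Attach/Exec Pod","source":"k8s_audit"}'

# Keep only the events raised by the "Attach/Exec Pod" rule:
echo "$sample" | grep '"rule":"Attach/Exec Pod"'</code></pre>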



<p>🎉</p>



<h2 class="wp-block-heading">Conclusion</h2>



<p>Ensuring the security of Kubernetes clusters is important. The Audit Logs contain a lot of useful information, but they often go unused, so don&#8217;t hesitate to try this new plugin.</p>



<p>We installed the new k8saudit-ovh plugin in an OVHcloud MKS cluster, but note that you can also deploy it in a Kubernetes cluster hosted by another cloud provider, and even in a Falco instance running locally 💪.</p>



<p>We visualized the logs/events in the terminal, but you can also visualize them in the <a href="https://github.com/falcosecurity/falcosidekick" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Falcosidekick</a> UI, create custom rules, and even use <a href="https://github.com/falcosecurity/falco-talon" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Falco Talon</a> to execute response actions.</p>
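
<p>For the Falcosidekick UI, a minimal sketch of the corresponding Helm values could look like this (assuming the official <code>falcosecurity/falco</code> chart, which bundles Falcosidekick as a sub-chart; check the chart documentation for the exact keys of your chart version):</p>

<pre class="wp-block-code"><code class=""># values.yaml extract: enable Falcosidekick and its web UI alongside Falco
falcosidekick:
  enabled: true
  webui:
    enabled: true</code></pre>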
<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fenhancing-kubernetes-security-detecting-threats-in-ovhcloud-managed-kubernetes-cluster-mks-audit-logs-with-falco%2F&amp;action_name=Enhancing%20Kubernetes%20Security%3A%20Detecting%20Threats%20in%20OVHcloud%20Managed%20Kubernetes%20cluster%20%28MKS%29%20Audit%20Logs%20with%20Falco&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
