<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Stéphane Philippart, Author at OVHcloud Blog</title>
	<atom:link href="https://blog.ovhcloud.com/author/stephane-philippart/feed/" rel="self" type="application/rss+xml" />
	<link>https://blog.ovhcloud.com/author/stephane-philippart/</link>
	<description>Innovation for Freedom</description>
	<lastBuildDate>Wed, 01 Apr 2026 12:56:38 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://blog.ovhcloud.com/wp-content/uploads/2019/07/cropped-cropped-nouveau-logo-ovh-rebranding-32x32.gif</url>
	<title>Stéphane Philippart, Author at OVHcloud Blog</title>
	<link>https://blog.ovhcloud.com/author/stephane-philippart/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Extract Text from Images with OCR using Python and OVHcloud AI Endpoints</title>
		<link>https://blog.ovhcloud.com/extract-text-from-images-with-ocr-using-python-and-ovhcloud-ai-endpoints/</link>
		
		<dc:creator><![CDATA[Stéphane Philippart]]></dc:creator>
		<pubDate>Wed, 01 Apr 2026 12:55:19 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Endpoints]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=30992</guid>

					<description><![CDATA[If you want to have more information on&#160;AI Endpoints, please read the&#160;following blog post.&#160;You can also have a look at our&#160;previous blog posts&#160;on how to use AI Endpoints. You can find the full code example in the GitHub repository. In this article,&#160;we will explore how to perform OCR&#160;(Optical Character Recognition)&#160;on images using a vision-capable LLM,&#160;the&#160;OpenAI Python library,&#160;and [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fextract-text-from-images-with-ocr-using-python-and-ovhcloud-ai-endpoints%2F&amp;action_name=Extract%20Text%20from%20Images%20with%20OCR%20using%20Python%20and%20OVHcloud%20AI%20Endpoints&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<p><em>If you want to have more information on&nbsp;<a href="https://endpoints.ai.cloud.ovh.net/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints</a>, please read the&nbsp;<a href="https://blog.ovhcloud.com/enhance-your-applications-with-ai-endpoints/" data-wpel-link="internal">following blog post</a>.</em>&nbsp;<em>You can also have a look at our&nbsp;<a href="https://blog.ovhcloud.com/tag/ai-endpoints/" data-wpel-link="internal">previous blog posts</a>&nbsp;on how to use AI Endpoints.</em></p>



<p><em>You can find the full code example in the <a href="https://github.com/ovh/public-cloud-examples/tree/main/ai/ai-endpoints/python-ocr" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">GitHub repository</a>.</em></p>



<p>In this article,&nbsp;we will explore how to perform OCR&nbsp;(Optical Character Recognition)&nbsp;on images using a vision-capable LLM,&nbsp;the&nbsp;<a href="https://github.com/openai/openai-python" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OpenAI Python library</a>,&nbsp;and OVHcloud&nbsp;<a href="https://endpoints.ai.cloud.ovh.net/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints</a>.</p>



<h3 class="wp-block-heading">Introduction to OCR with Vision Models</h3>



<p>Optical Character Recognition has been around for decades,&nbsp;but traditional OCR engines often struggle with complex layouts,&nbsp;handwritten text,&nbsp;or noisy images.&nbsp;Vision-capable Large Language Models bring a new approach:&nbsp;instead of relying on specialized OCR pipelines,&nbsp;you can simply send an image to a model that understands both visual and textual content.</p>



<p>In this example,&nbsp;we use the&nbsp;<a href="https://github.com/openai/openai-python" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OpenAI Python library</a>&nbsp;to create a simple OCR script powered by a vision model hosted on OVHcloud&nbsp;<a href="https://endpoints.ai.cloud.ovh.net/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints</a>.</p>



<p>The whole application is a single Python file: no complex setup, just <code><strong>pip install openai</strong></code> and you&#8217;re ready to go.</p>



<h3 class="wp-block-heading">Setting up the Environment Variables</h3>



<p>Before running the script, you need to set the following environment variables:</p>



<pre title="Environment variables" class="wp-block-code"><code lang="bash" class="language-bash">export OVH_AI_ENDPOINTS_ACCESS_TOKEN="your-access-token"<br>export OVH_AI_ENDPOINTS_MODEL_URL="https://your-model-url"<br>export OVH_AI_ENDPOINTS_VLLM_MODEL="your-vision-model-name"</code></pre>



<p>You can find how to create your access token, and the model URL and model name to use, in the <a href="https://endpoints.ai.cloud.ovh.net/catalog" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints catalog</a>. Make sure to choose a <strong>vision-capable model</strong>.</p>
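<p>Before creating the client, it can help to fail fast when one of these variables is missing. The sketch below is not part of the original script; <code>missing_vars</code> is a hypothetical helper shown for illustration:</p>

```python
import os

# The three variables this article relies on.
REQUIRED_VARS = (
    "OVH_AI_ENDPOINTS_ACCESS_TOKEN",
    "OVH_AI_ENDPOINTS_MODEL_URL",
    "OVH_AI_ENDPOINTS_VLLM_MODEL",
)

def missing_vars(env=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

if missing_vars():
    # In the real script you would raise or sys.exit() here.
    print("Missing configuration:", ", ".join(missing_vars()))
```

<p>Checking up front gives a clear error message instead of an authentication failure deep inside the API call.</p>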



<h3 class="wp-block-heading">Installing Dependencies</h3>



<p>The only dependency is the OpenAI Python library:</p>



<pre title="OpenAI dependency" class="wp-block-code"><code lang="bash" class="language-bash">pip install openai</code></pre>



<h3 class="wp-block-heading">Define the System Prompt</h3>



<p>The first step is to define a system prompt that describes what our OCR service does.&nbsp;This prompt tells the model how to behave:</p>



<pre title="System prompt" class="wp-block-code"><code lang="" class=" line-numbers">SYSTEM_PROMPT = """You are an expert OCR engine.<br>Extract every piece of text visible in the provided image.<br>Preserve the original layout as faithfully as possible (line breaks, columns, tables).<br>Do NOT interpret, summarise, or translate the content.<br>Use markdown formatting to represent the layout (e.g. tables, lists).<br>If the image contains no text, reply with: "No text found."<br>"""</code></pre>



<p>We tell it to behave as an expert OCR engine, to preserve the original layout, and to use markdown formatting for structured content like tables or lists.</p>



<h3 class="wp-block-heading">Load the Image</h3>



<p>Before sending the image to the model,&nbsp;we need to encode it as a base64 string.&nbsp;Here is a simple helper function that reads a local PNG file and returns a base64-encoded string:</p>



<pre title="Image loading" class="wp-block-code"><code lang="" class=" line-numbers">import base64<br>from pathlib import Path<br><br>def load_image_as_base64(path: Path) -&gt; str:<br>    """Load a local image and encode it as base64."""<br>    with open(path, "rb") as f:<br>        return base64.b64encode(f.read()).decode("utf-8")</code></pre>



<p>The base64-encoded data is what gets sent to the vision model as part of the prompt.</p>
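<p>As a quick sanity check (not in the original post), the helper can be exercised with a round trip: write a few bytes to a temporary file, encode them, then decode and compare:</p>

```python
import base64
import tempfile
from pathlib import Path

def load_image_as_base64(path: Path) -> str:
    """Same helper as above: read a file and return its base64 string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

# Round trip with fake PNG-ish bytes; a real image works the same way.
payload = b"\x89PNG\r\n\x1a\nfake-image-bytes"
with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as tmp:
    tmp.write(payload)
encoded = load_image_as_base64(Path(tmp.name))
assert base64.b64decode(encoded) == payload
```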



<p></p>



<h3 class="wp-block-heading">Extract Text from the Image</h3>



<p>The <code><strong>extract_text</strong></code> function sends the image to the vision model and returns the extracted text:</p>



<pre title="Extract text from image" class="wp-block-code"><code lang="" class=" line-numbers">def extract_text(client: OpenAI, image_base64: str, model: str) -&gt; str:<br>    """Extract text from an image using the vision model."""<br>    response = client.chat.completions.create(<br>        model=model,<br>        temperature=0.0,<br>        messages=[<br>            {"role": "system", "content": SYSTEM_PROMPT},<br>            {<br>                "role": "user",<br>                "content": [<br>                    {<br>                        "type": "image_url",<br>                        "image_url": {<br>                            "url": f"data:image/png;base64,{image_base64}"<br>                        }<br>                    }<br>                ]<br>            }<br>        ]<br>    )<br>    return response.choices[0].message.content</code></pre>



<p>The image is passed as a data URL inside the <code><strong>image_url</strong></code> field, following the OpenAI Vision API format. The temperature is set to <code>0.0</code> because we want deterministic, faithful text extraction and not creative output.</p>
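<p>The data URL above hard-codes <code>image/png</code>. If you want the same script to accept JPEG or WebP files as well, one option is to guess the MIME type from the file name; <code>make_data_url</code> below is an illustrative helper, not part of the original code:</p>

```python
import mimetypes

def make_data_url(filename: str, image_base64: str) -> str:
    """Build a data URL whose MIME type matches the file extension,
    falling back to image/png when the type cannot be guessed."""
    mime, _ = mimetypes.guess_type(filename)
    return f"data:{mime or 'image/png'};base64,{image_base64}"
```

<p>With such a helper, the <code>url</code> field in the user message would become <code>make_data_url(image_path.name, image_base64)</code> instead of the hard-coded f-string.</p>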



<h3 class="wp-block-heading">Configure the Client</h3>



<p>This example uses a vision-capable model hosted on OVHcloud <a href="https://endpoints.ai.cloud.ovh.net/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints</a>. Since AI Endpoints exposes an OpenAI-compatible API, we use the <code>OpenAI</code> client and just point it to the OVHcloud endpoint:</p>



<pre title="Open AI client configuration" class="wp-block-code"><code lang="" class=" line-numbers">import os<br>from openai import OpenAI<br><br>client = OpenAI(<br>    api_key=os.getenv("OVH_AI_ENDPOINTS_ACCESS_TOKEN"),<br>    base_url=os.getenv("OVH_AI_ENDPOINTS_MODEL_URL"),<br>)<br><br>model_name = os.getenv("OVH_AI_ENDPOINTS_VLLM_MODEL")</code></pre>



<p>A few things to note:</p>



<ul class="wp-block-list">
<li>The <strong>API key</strong>, <strong>base URL</strong>, and <strong>model name</strong> are read from environment variables. </li>



<li>The OpenAI library can target any OpenAI-compatible API, which makes it a natural fit for AI Endpoints.</li>
</ul>



<h3 class="wp-block-heading">Assemble and Run</h3>



<p>With the client configured, extracting text from an image is straightforward:</p>



<pre title="Run the OCR" class="wp-block-code"><code lang="" class=" line-numbers">image_base64 = load_image_as_base64(Path("./doc.png"))<br>result = extract_text(client, image_base64, model_name)<br>print(result)</code></pre>



<p>And that&#8217;s it!</p>



<p>Here is the image used for this example:</p>



<figure class="wp-block-image aligncenter size-full is-resized"><img fetchpriority="high" decoding="async" width="946" height="693" src="https://blog.ovhcloud.com/wp-content/uploads/2026/03/doc-1.png" alt="Image used for the OCR example" class="wp-image-31002" style="width:600px" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/03/doc-1.png 946w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/doc-1-300x220.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/doc-1-768x563.png 768w" sizes="(max-width: 946px) 100vw, 946px" /></figure>



<p>And the result:</p>



<pre title="Run the OCR" class="wp-block-code"><code lang="" class=" line-numbers">$ python ocr_demo.py<br>📄 Loading image: doc.png<br>🔍 Running OCR with Qwen2.5-VL-72B-Instruct via OVHcloud AI Endpoints...<br><br>📝 Extracted text 📝<br>Every month, the OVHcloud Developer Advocate team creates content, shares knowledge, and connects with the tech community. Here’s a look at what we did in March 2026. 🚀<br><br>🎙️ “Tranches de Tech” – Our monthly podcast<br><br>A new episode of our French-language podcast Tranches de Tech🥑 just dropped!<br><br>🎧 Episode 102: Tranches de Tech #26 – Architecte, c’est une bonne situation ça ?<br><br>This month we sat down with Alexandre Touret, Architect at Worldline to discuss the evolving role of software architects and the growing impact of AI on development practices. From Spotify’s claim that their devs no longer code, to agentic tools like OpenClaw and Claude Code reshaping workflows. We also cover ANSSI’s revised open-source policy, IBM tripling junior hires, and the critical responsibility of mentoring the next generation of developers in an AI-driven world.<br><br>📺 Live on Twitch<br><br>We streamed live on Twitch this month! Here’s what we covered:<br><br>🎥 Rémy Vandepoel discussed with Hugo Allabert and François Loiseau about our Public VCFaaS. Catch the replay on YouTube ▶️.<br><br>🎤 Conference Talks<br><br>The team hit the road (and the stage) at several conferences this month:<br><br>🇳🇱 KubeCon Amsterdam – Amsterdam, Netherlands 🇳🇱<br><br>Aurélie Vache gave a talk: The Ultimate Kubernetes Challenge: An Interactive Trivia Game</code></pre>



<h3 class="wp-block-heading">Conclusion</h3>



<p>In this article,&nbsp;we have seen how to use a vision-capable LLM to perform OCR on images using the&nbsp;<a href="https://github.com/openai/openai-python" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OpenAI Python library</a>&nbsp;and OVHcloud&nbsp;<a href="https://endpoints.ai.cloud.ovh.net/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints</a>.&nbsp;The OpenAI library makes it very easy to send images to a vision model and extract text,&nbsp;and Python allows us to run the whole thing as a simple script.</p>



<p>We have a dedicated Discord channel&nbsp;(#<em>ai-endpoints</em>)&nbsp;on our Discord server&nbsp;(<em><a href="https://discord.gg/ovhcloud" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://discord.gg/ovhcloud</a></em>), see you there!</p>



<p></p>
<img decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fextract-text-from-images-with-ocr-using-python-and-ovhcloud-ai-endpoints%2F&amp;action_name=Extract%20Text%20from%20Images%20with%20OCR%20using%20Python%20and%20OVHcloud%20AI%20Endpoints&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>What’s new with the OVHcloud Developer Advocate team &#8211; March 2026</title>
		<link>https://blog.ovhcloud.com/whats-new-with-the-ovhcloud-developer-advocate-team-march-2026/</link>
		
		<dc:creator><![CDATA[Stéphane Philippart]]></dc:creator>
		<pubDate>Tue, 31 Mar 2026 09:42:06 +0000</pubDate>
				<category><![CDATA[Tranches de Tech & co]]></category>
		<category><![CDATA[Developer Advocate]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=30933</guid>

					<description><![CDATA[Every month, the OVHcloud Developer Advocate team creates content, shares knowledge, and connects with the tech community. Here’s a look at what we did in March 2026. 🚀 🎙️ “Tranches de Tech” – Our monthly podcast A new episode of our French-language podcast Tranches de Tech 🥑 just dropped! 🎧 Episode 26:&#160;Tranches de Tech #26 [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fwhats-new-with-the-ovhcloud-developer-advocate-team-march-2026%2F&amp;action_name=What%E2%80%99s%20new%20with%20the%20OVHcloud%20Developer%20Advocate%20team%20%26%238211%3B%20March%202026&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image aligncenter size-large"><img decoding="async" width="1024" height="576" src="https://blog.ovhcloud.com/wp-content/uploads/2026/03/talks-1024x576.jpg" alt="An advocate giving a talk" class="wp-image-30934" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/03/talks-1024x576.jpg 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/talks-300x169.jpg 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/talks-768x432.jpg 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/talks-1536x864.jpg 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/talks.jpg 1920w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<hr class="wp-block-separator has-alpha-channel-opacity" />



<p>Every month, the OVHcloud Developer Advocate team creates content, shares knowledge, and connects with the tech community. Here’s a look at what we did in March 2026. 🚀</p>



<h3 class="wp-block-heading">🎙️ “Tranches de Tech” – Our monthly podcast</h3>



<p>A new episode of our French-language podcast Tranches de Tech 🥑 just dropped!</p>



<h5 class="wp-block-heading">🎧 Episode 26:&nbsp;<a href="https://podcast.ausha.co/tranches-de-tech/tranches-de-tech-26-architecte-c-est-une-bonne-situation-ca" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Tranches de Tech #26 &#8211; Architecte, c&#8217;est une bonne situation ça ?</a></h5>



<p>This month we sat down with Alexandre Touret, Architect at&nbsp;<a href="https://worldline.com/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Worldline</a>&nbsp;to discuss the evolving role of software architects and the growing impact of AI on development practices. From Spotify&#8217;s claim that their devs no longer code, to agentic tools like OpenClaw and Claude Code reshaping workflows. We also cover ANSSI&#8217;s revised open-source policy, IBM tripling junior hires, and the critical responsibility of mentoring the next generation of developers in an AI-driven world.</p>



<h3 class="wp-block-heading">📺 Live on Twitch</h3>



<p>We streamed live on&nbsp;<a href="https://www.twitch.tv/ovhcloud_com" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Twitch</a>&nbsp;this month! Here’s what we covered:<br>🎥 Rémy Vandepoel discussed with Hugo Allabert and François Loiseau about our Public VCFaaS. Catch the replay on&nbsp;<a href="https://www.youtube.com/playlist?list=PL0DynEzr_sE4c4cAv9K_qXJNnDFtUE0v5" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">YouTube ▶️</a>.</p>



<h3 class="wp-block-heading">🎤 Conference Talks</h3>



<p>The team hit the road (and the stage) at several conferences this month:</p>



<h5 class="wp-block-heading" id="kubecon-amsterdam---amsterdam-netherlands-"><a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">KubeCon Amsterdam</a>&nbsp;&#8211; Amsterdam, Netherlands 🇳🇱</h5>



<p>Aurélie Vache gave a talk:&nbsp;<a href="https://kccnceu2026.sched.com/event/2CW4r/the-ultimate-kubernetes-challenge-an-interactive-trivia-game-aurelie-vache-ovhcloud" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">The Ultimate Kubernetes Challenge: An Interactive Trivia Game</a></p>



<figure class="wp-block-image aligncenter size-large is-resized"><img loading="lazy" decoding="async" width="1024" height="768" src="https://blog.ovhcloud.com/wp-content/uploads/2026/03/aurelie-kubecon-1024x768.jpg" alt="" class="wp-image-30965" style="width:600px" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/03/aurelie-kubecon-1024x768.jpg 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/aurelie-kubecon-300x225.jpg 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/aurelie-kubecon-768x576.jpg 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/aurelie-kubecon-1536x1152.jpg 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/aurelie-kubecon.jpg 1600w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p></p>



<h5 class="wp-block-heading" id="voxxed-days-zurich---zurich-switzerland-"><a href="https://vdz26.voxxeddays.ch/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Voxxed Days Zurich</a>&nbsp;&#8211; Zurich, Switzerland 🇨🇭</h5>



<p>Stéphane Philippart gave a talk:&nbsp;<a href="https://m.devoxx.com/events/vdz26/talks/4692/jbang,-a-java-file-to-rule-them-all%3F-%F0%9F%92%8D" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">JBang, a Java file to rule them all? 💍</a></p>



<figure class="wp-block-image aligncenter size-large is-resized"><img loading="lazy" decoding="async" width="1024" height="576" src="https://blog.ovhcloud.com/wp-content/uploads/2026/03/voxxed-days-zurich-stephane-1024x576.jpg" alt="" class="wp-image-30950" style="width:640px" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/03/voxxed-days-zurich-stephane-1024x576.jpg 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/voxxed-days-zurich-stephane-300x169.jpg 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/voxxed-days-zurich-stephane-768x432.jpg 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/voxxed-days-zurich-stephane-1536x864.jpg 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/voxxed-days-zurich-stephane-2048x1152.jpg 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p></p>



<h3 class="wp-block-heading" id="-community-engagement">🤝 Community Engagement</h3>



<p>We connected with the community through more than just conferences:</p>



<h5 class="wp-block-heading" id="-meetup-tech-speakher--gdg-toulouse---march-12---toulouse-france-">🏫 Meetup Tech Speak&#8217;Her &amp; GDG Toulouse &#8211; March, 12 &#8211; Toulouse, France 🇫🇷</h5>



<p>Aurélie Vache gave a talk: J&#8217;ai packagé mon application en image docker, et maintenant ?</p>



<figure class="wp-block-image aligncenter size-large is-resized"><img loading="lazy" decoding="async" width="1024" height="771" src="https://blog.ovhcloud.com/wp-content/uploads/2026/03/march2026-aurelie-meetup-1024x771.jpg" alt="" class="wp-image-30935" style="width:640px" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/03/march2026-aurelie-meetup-1024x771.jpg 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/march2026-aurelie-meetup-300x226.jpg 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/march2026-aurelie-meetup-768x578.jpg 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/march2026-aurelie-meetup-1536x1157.jpg 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/march2026-aurelie-meetup-2048x1542.jpg 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p></p>



<h5 class="wp-block-heading" id="-iaam-meetup---march-5----marseille-france-">🏫&nbsp;<a href="https://www.linkedin.com/company/intelligence-artificielle-aix-marseille/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">IAAM meetup</a>&nbsp;&#8211; March, 5 &#8211; Marseille, France 🇫🇷</h5>



<p>Stéphane Philippart gave a talk:&nbsp;<a href="https://www.meetup.com/fr-FR/intelligence-artificielle-aix-marseille/events/313367147/?utm_version=v2&amp;member_id=358247706" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Et si on apprenait à une IA à jouer à chifoumi ? 🪨📃✂</a></p>



<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="800" height="800" src="https://blog.ovhcloud.com/wp-content/uploads/2026/03/iaam-stephane.jpg" alt="Stephane's IAAM meetup" class="wp-image-30936" style="width:640px" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/03/iaam-stephane.jpg 800w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/iaam-stephane-300x300.jpg 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/iaam-stephane-150x150.jpg 150w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/iaam-stephane-768x768.jpg 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/iaam-stephane-70x70.jpg 70w" sizes="auto, (max-width: 800px) 100vw, 800px" /></figure>



<p></p>



<h5 class="wp-block-heading" id="-sopra-steria-code2learn---march-10-17--18---lyon-nantes-rennes-france-">🏫&nbsp;<a href="https://www.soprasteria.com/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Sopra Steria Code2Learn</a>&nbsp;&#8211; March, 10, 17 &amp; 18 &#8211; Lyon, Nantes, Rennes, France 🇫🇷</h5>



<p>Stéphane Philippart gave three tech labs: 🧩 Développer avec l&#8217;IA : et si c&#8217;était aussi simple qu&#8217;ajouter une librairie ? 🤘</p>



<figure class="wp-block-image aligncenter size-large is-resized"><img loading="lazy" decoding="async" width="1024" height="576" src="https://blog.ovhcloud.com/wp-content/uploads/2026/03/sopra-stephane-1024x576.jpg" alt="Stephane's Sopra Steria tech lab" class="wp-image-30937" style="width:640px" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/03/sopra-stephane-1024x576.jpg 1024w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/sopra-stephane-300x169.jpg 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/sopra-stephane-768x432.jpg 768w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/sopra-stephane-1536x864.jpg 1536w, https://blog.ovhcloud.com/wp-content/uploads/2026/03/sopra-stephane.jpg 1599w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p></p>



<h3 class="wp-block-heading">📝 Our latest blog posts</h3>



<p>Here are the articles our team published on the OVHcloud Blog this month.</p>



<h5 class="wp-block-heading">📝&nbsp;<a href="https://blog.ovhcloud.com/secure-your-software-supply-chain-with-ovhcloud-managed-private-registry-mpr/" data-wpel-link="internal">Secure your Software Supply Chain with OVHcloud Managed Private Registry (MPR)</a>&nbsp;— by Aurélie Vache</h5>



<p>In this blog post, we explore how OVHcloud Managed Private Registry (MPR) can help you secure your software supply chain. We cover the key features of MPR, including vulnerability scanning, SBOM generation, signature and automation, to show you how to protect your container images and ensure the integrity of your applications.</p>



<h3 class="wp-block-heading">💻 Code Samples and Open Source</h3>



<p>We regularly publish code samples and open-source projects to help you get started with OVHcloud products. Check out our&nbsp;<a href="https://github.com/ovh/public-cloud-examples" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">public-cloud-examples</a>&nbsp;repository on GitHub.</p>



<p>New this month:</p>



<ul class="wp-block-list">
<li>🆕 Worked with Cilium contributors to implement new Kubernetes traffic-routing fields:&nbsp;<a href="https://github.com/cilium/cilium/pull/44771" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">traffic distribution: support PreferSameZone and PreferSameNode</a></li>



<li>🆕 New release:&nbsp;<a href="https://github.com/ovh/pulumi-ovh/releases/tag/v2.12.0" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud Pulumi provider v2.12.0</a></li>



<li>🆕 Contributions in the new&nbsp;<a href="https://github.com/ovh/terraform-provider-ovh/releases/tag/v2.12.0" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud Terraform provider v2.12.0</a></li>
</ul>



<h3 class="wp-block-heading">🗓️ Coming up next</h3>



<p>Here’s a sneak peek at what’s coming next.</p>



<h5 class="wp-block-heading">🗓️ &#8211; April, 8 &#8211; 1h PM CET &#8211; Very Tech Talk Twitch about Managed Kubernetes Service (MKS)</h5>



<p>📺 <a href="https://www.twitch.tv/ovhcloud_com" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud Twitch channel</a></p>



<h5 class="wp-block-heading">🗓️ &#8211; April, 16 &amp; 17 &#8211; MixIT, in Lyon</h5>



<p>🎤 Aurélie Vache is giving one talk (Thursday the 16th at 2:40 PM): <a href="https://mixitconf.org/en/2026/comprendre-kubernetes-de-maniere-visuelle" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Comprendre Kubernetes de manière visuelle</a> </p>



<h5 class="wp-block-heading">🗓️ &#8211; April, 22 to 24 &#8211; Devoxx France, in Paris (several OVHcloud speakers 🎉)</h5>



<p><strong>🎁 Come and see us, OVHcloud will have a stand!</strong></p>



<p>🎤 Aurélie Vache is giving one talk (Wednesday the 22nd at 5 PM): <a href="https://m.devoxx.com/events/devoxxfr2026/talks/2723/question-pour-un-cluster-kubernetes-quiz-sur-kubernetes-ses-concepts" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Question pour un cluster Kubernetes : Quiz sur Kubernetes &amp; ses concepts</a></p>



<p>🎤 Stéphane Philippart is giving two talks:</p>



<ul class="wp-block-list">
<li><a href="https://m.devoxx.com/events/devoxxfr2026/talks/5586/-apprendre-notre-ia-apprendre-" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">🤖 Apprendre à notre IA à &#8230; apprendre 🧠</a> on Wednesday the 22nd at 10:30 AM</li>



<li><a href="https://m.devoxx.com/events/devoxxfr2026/talks/2745/dvelopper-avec-lia-et-si-ctait-aussi-simple-quajouter-une-librairie-" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Développer avec l&#8217;IA : et si c&#8217;était aussi simple qu&#8217;ajouter une librairie ?</a> on Wednesday the 22nd at 1:30 PM with Mathieu Busquet from OVHcloud</li>
</ul>



<p>Other OVHclouders are also giving talks! 🥳</p>



<ul class="wp-block-list">
<li>🎤 Benoît Masson and Sébastien Chédor are giving one talk (Thursday the 23rd at 10:30 AM): <a href="https://m.devoxx.com/events/devoxxfr2026/talks/45201/-qr-codes-suivez-les-points-sans-vous-perdre-" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">▣ QR Codes : suivez les points sans vous perdre ! ▣</a></li>



<li>🎤 Benoît Masson and Théo Bougé are giving one talk (Friday the 24th at 2:35 PM): <a href="https://m.devoxx.com/events/devoxxfr2026/talks/7857/noms-de-domaines-la-grande-histoire-des-petites-extensions" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Noms de domaines : la grande histoire des petites extensions</a></li>



<li>🎤 Fanny Bouton is giving one talk (Thursday the 23rd at 1:30 PM): <a href="https://m.devoxx.com/events/devoxxfr2026/talks/37763/informatique-quantique-ce-coupci-on-vous-dit-tout-" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Informatique quantique, ce coup-ci on vous dit tout !</a></li>



<li>🎤 Héla Ben Khalfallah is giving one talk (Friday the 24th at 3:30 PM): <a href="https://m.devoxx.com/events/devoxxfr2026/talks/4015/refactorer-sans-tout-casser-anatomie-des-patterns-de-modernisation-incrmentale" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Refactorer sans tout casser: anatomie des patterns de modernisation incrémentale</a></li>



<li>🎤 Sébastien Ferrer is giving two talks:
<ul class="wp-block-list">
<li><a href="https://m.devoxx.com/events/devoxxfr2026/talks/4019/et-si-crire-du-sql-redevenait-cool-" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Et si écrire du SQL redevenait cool ?</a> on Friday the 24th at 3:30 PM</li>



<li><a href="https://m.devoxx.com/events/devoxxfr2026/talks/4018/dtectives-de-la-prod-rsoudre-lenqute-avant-le-crash" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Détectives de la prod : résoudre l’enquête avant le crash</a> on Friday the 24th at 2:35 PM</li>
</ul>
</li>
</ul>



<h5 class="wp-block-heading">🗓️ New &#8220;Tranches de Tech&#8221; podcast episode</h5>



<p>🎧 All episodes are available on <a href="https://podcast.ausha.co/tranches-de-tech" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Ausha</a> and all your favorite podcast applications!</p>



<h3 class="wp-block-heading">💬 Stay in Touch</h3>



<p>Want to chat with us, share your thoughts, or just say hi? Here’s how to get in touch with the Developer Advocate team:</p>



<ul class="wp-block-list">
<li>🟣&nbsp;<strong>Discord</strong>:&nbsp;<a href="https://discord.gg/ovhcloud" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud Discord server</a></li>



<li>🐦&nbsp;<strong>X / Twitter</strong>:&nbsp;<a href="https://twitter.com/OVHcloud" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">@OVHcloud</a></li>



<li>💼&nbsp;<strong>LinkedIn</strong>:&nbsp;<a href="https://www.linkedin.com/company/ovhgroup" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud LinkedIn</a></li>



<li>🐙&nbsp;<strong>GitHub</strong>:&nbsp;<a href="https://github.com/ovh" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">github.com/ovh</a></li>
</ul>



<p>See you next month! 👋</p>



<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fwhats-new-with-the-ovhcloud-developer-advocate-team-march-2026%2F&amp;action_name=What%E2%80%99s%20new%20with%20the%20OVHcloud%20Developer%20Advocate%20team%20%26%238211%3B%20March%202026&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>🎙️ Tranches de Tech #26 &#8211; Architecte, c&#8217;est une bonne situation ça ?</title>
		<link>https://blog.ovhcloud.com/%f0%9f%8e%99%ef%b8%8f-tranches-de-tech-26-architecte-cest-une-bonne-situation-ca/</link>
		
		<dc:creator><![CDATA[Stéphane Philippart]]></dc:creator>
		<pubDate>Fri, 06 Mar 2026 13:13:43 +0000</pubDate>
				<category><![CDATA[Tranches de Tech & co]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[OVHcloud]]></category>
		<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Tranches de Tech]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=30841</guid>

					<description><![CDATA[👤 Présentation d’Alexandre &#8211; ⏱️ 0&#8243;37s 📰 News Techs&#160; 🤖 Intelligence Artificielle &#8211; ⏱️ 15&#8243;40s Spotify indique que ses développeurs ne codent plus depuis décembre grâce à l’IA OpenClaw OpenClaw founder Peter Steinberger is joining OpenAI IA au quotidien &#8211; Paralléliser sa production agentique de code 👩‍💻 Développement &#8211; ⏱️ 48&#8243;40s Java has evolved.Your code can [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2F%25f0%259f%258e%2599%25ef%25b8%258f-tranches-de-tech-26-architecte-cest-une-bonne-situation-ca%2F&amp;action_name=%F0%9F%8E%99%EF%B8%8F%20Tranches%20de%20Tech%20%2326%20%26%238211%3B%20Architecte%2C%20c%26%238217%3Best%20une%20bonne%20situation%20%C3%A7a%20%3F&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="759" height="757" src="https://blog.ovhcloud.com/wp-content/uploads/2026/02/Tranches-de-Tech-visuel-rond.png" alt="Tranches de Tech logo (avocado)" class="wp-image-30480" style="aspect-ratio:1;object-fit:cover;width:400px" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/02/Tranches-de-Tech-visuel-rond.png 759w, https://blog.ovhcloud.com/wp-content/uploads/2026/02/Tranches-de-Tech-visuel-rond-300x300.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/02/Tranches-de-Tech-visuel-rond-150x150.png 150w, https://blog.ovhcloud.com/wp-content/uploads/2026/02/Tranches-de-Tech-visuel-rond-70x70.png 70w" sizes="auto, (max-width: 759px) 100vw, 759px" /></figure>



<ul class="wp-block-list">
<li>👤 Invité : Alexandre Touret
<ul class="wp-block-list">
<li>Bluesky : @<a href="http://touret.info" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">touret.info</a></li>



<li>LinkedIn : <a href="https://www.linkedin.com/in/atouret/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://www.linkedin.com/in/atouret/</a></li>
</ul>
</li>



<li>🗓️ Date d&#8217;enregistrement : 27 février 2026</li>



<li>🎧 <a href="https://smartlink.ausha.co/tranches-de-tech/tranches-de-tech-26-architecte-c-est-une-bonne-situation-ca" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Lien vers l&#8217;épisode</a></li>
</ul>



<h3 class="wp-block-heading">👤 Présentation d’Alexandre &#8211; ⏱️ 0&#8243;37s</h3>



<hr class="wp-block-separator has-alpha-channel-opacity" />



<h3 class="wp-block-heading">📰 News Techs&nbsp;</h3>



<h4 class="wp-block-heading">🤖 Intelligence Artificielle &#8211; ⏱️ 15&#8243;40s</h4>



<h5 class="wp-block-heading">Spotify indique que ses développeurs ne codent plus depuis décembre grâce à l’IA</h5>



<ul class="wp-block-list">
<li><a href="https://developers.slashdot.org/story/26/02/13/1834228/spotify-says-its-best-developers-havent-written-a-line-of-code-since-december-thanks-to-ai" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://developers.slashdot.org/story/26/02/13/1834228/spotify-says-its-best-developers-havent-written-a-line-of-code-since-december-thanks-to-ai</a></li>
</ul>



<h5 class="wp-block-heading">OpenClaw</h5>



<ul class="wp-block-list">
<li><a href="https://openclaw.ai/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://openclaw.ai/</a></li>



<li><a href="https://www.infostealers.com/article/clawdbot-the-new-primary-target-for-infostealers-in-the-ai-era/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://www.infostealers.com/article/clawdbot-the-new-primary-target-for-infostealers-in-the-ai-era/</a>  </li>



<li><a href="https://www.youtube.com/watch?v=F0EammZyMaA" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://www.youtube.com/watch?v=F0EammZyMaA</a></li>
</ul>



<h5 class="wp-block-heading">OpenClaw founder Peter Steinberger is joining OpenAI</h5>



<ul class="wp-block-list">
<li><a href="https://www.theverge.com/ai-artificial-intelligence/879623/openclaw-founder-peter-steinberger-joins-openai" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://www.theverge.com/ai-artificial-intelligence/879623/openclaw-founder-peter-steinberger-joins-openai</a></li>
</ul>



<h5 class="wp-block-heading">IA au quotidien &#8211; Paralléliser sa production agentique de code</h5>



<ul class="wp-block-list">
<li><a href="https://www.linkedin.com/pulse/claude-code-au-quotidien-parall%C3%A9liser-ses-t%C3%A2ches-fr%C3%A9d%C3%A9ric-camblor-jqjpe/?trackingId=h9dz1qUyRxeD6qwWMb9vlg%3D%3D" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://www.linkedin.com/pulse/claude-code-au-quotidien-parall%C3%A9liser-ses-t%C3%A2ches-fr%C3%A9d%C3%A9ric-camblor-jqjpe/?trackingId=h9dz1qUyRxeD6qwWMb9vlg%3D%3D</a></li>
</ul>



<hr class="wp-block-separator has-alpha-channel-opacity" />



<h4 class="wp-block-heading">👩‍💻 Développement &#8211; ⏱️ 48&#8243;40s</h4>



<h5 class="wp-block-heading">Java has evolved. Your code can too</h5>



<ul class="wp-block-list">
<li><a href="https://javaevolved.github.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://javaevolved.github.io/</a></li>
</ul>



<h5 class="wp-block-heading">Claude code skills</h5>



<ul class="wp-block-list">
<li><a href="https://resources.anthropic.com/hubfs/The-Complete-Guide-to-Building-Skill-for-Claude.pdf" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://resources.anthropic.com/hubfs/The-Complete-Guide-to-Building-Skill-for-Claude.pdf</a></li>



<li><a href="https://bsky.app/profile/k33gorg.bsky.social/post/3me6zw6klkk2d" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://bsky.app/profile/k33gorg.bsky.social/post/3me6zw6klkk2d</a></li>
</ul>



<h5 class="wp-block-heading">The Augmented Developer: My Journey with Cursor CLI</h5>



<ul class="wp-block-list">
<li><a href="https://david.pilato.fr/posts/2026-02-06-the-augmented-developer/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://david.pilato.fr/posts/2026-02-06-the-augmented-developer/</a></li>
</ul>



<h5 class="wp-block-heading">L’ANSSI révise sa doctrine vis-à-vis du logiciel libre</h5>



<ul class="wp-block-list">
<li><a href="https://linuxfr.org/news/l-anssi-revise-sa-doctrine-vis-a-vis-du-logiciel-libre" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://linuxfr.org/news/l-anssi-revise-sa-doctrine-vis-a-vis-du-logiciel-libre</a></li>
</ul>



<hr class="wp-block-separator has-alpha-channel-opacity" />



<h4 class="wp-block-heading">👥 Sociétal &#8211; ⏱️ 1h02&#8243;30s</h4>



<h5 class="wp-block-heading">IBM triple le nombre de jeunes diplômés dans son recrutement</h5>



<ul class="wp-block-list">
<li><a href="https://fortune.com/2026/02/13/tech-giant-ibm-tripling-gen-z-entry-level-hiring-according-to-chro-rewriting-jobs-ai-era/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://fortune.com/2026/02/13/tech-giant-ibm-tripling-gen-z-entry-level-hiring-according-to-chro-rewriting-jobs-ai-era/</a></li>
</ul>



<h5 class="wp-block-heading">Forklifts Require Training: Sleepwalking into labor collapse</h5>



<ul class="wp-block-list">
<li><a href="https://www.zacsweers.dev/forklifts-require-training/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://www.zacsweers.dev/forklifts-require-training/</a></li>
</ul>



<h4 class="wp-block-heading">🎤 Conférences / meetup &#8211; ⏱️ 1h13&#8243;20s</h4>



<ul class="wp-block-list">
<li><a href="https://developers.events/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://developers.events/</a></li>
</ul>



<hr class="wp-block-separator has-alpha-channel-opacity" />



<p class="has-text-align-center">💡 Retrouvez l&#8217;ensemble des autres épisodes ici : <a href="https://smartlink.ausha.co/tranches-de-tech" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://smartlink.ausha.co/tranches-de-tech</a> 💡</p>



<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2F%25f0%259f%258e%2599%25ef%25b8%258f-tranches-de-tech-26-architecte-cest-une-bonne-situation-ca%2F&amp;action_name=%F0%9F%8E%99%EF%B8%8F%20Tranches%20de%20Tech%20%2326%20%26%238211%3B%20Architecte%2C%20c%26%238217%3Best%20une%20bonne%20situation%20%C3%A7a%20%3F&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>🎙️ Tranches de Tech #25 &#8211; PM et IA, mariage réussi ?</title>
		<link>https://blog.ovhcloud.com/%f0%9f%8e%99%ef%b8%8f-tranches-de-tech-25-pm-et-ia-mariage-reussi/</link>
		
		<dc:creator><![CDATA[Stéphane Philippart]]></dc:creator>
		<pubDate>Mon, 09 Feb 2026 07:19:04 +0000</pubDate>
				<category><![CDATA[Tranches de Tech & co]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[OVHcloud]]></category>
		<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Tranches de Tech]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=30479</guid>

					<description><![CDATA[👤 Présentation d’Estelle &#8211; ⏱️ 1&#8243;14 📰 News Techs&#160; 🤖 Intelligence Artificielle&#160;&#8211; ⏱️ 45&#8243;35 L&#8217;IA pour gérer les produits Understanding Spec-Driven-Development: Kiro, spec-kit, and Tessl Welcome to the BMad Method LLM prompts and AI IDE setup Matrix agents ☁️ Cloud &#8211; ⏱️ 58&#8243;25 Managed Private Registry est maintenant disponible à Mumbai (APAC) &#160;MKS est maintenant [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2F%25f0%259f%258e%2599%25ef%25b8%258f-tranches-de-tech-25-pm-et-ia-mariage-reussi%2F&amp;action_name=%F0%9F%8E%99%EF%B8%8F%20Tranches%20de%20Tech%20%2325%20%26%238211%3B%20PM%20et%20IA%2C%20mariage%20r%C3%A9ussi%20%3F&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="759" height="757" src="https://blog.ovhcloud.com/wp-content/uploads/2026/02/Tranches-de-Tech-visuel-rond.png" alt="Avocado logo for Tranches de Tech podcast" class="wp-image-30480" style="width:400px" srcset="https://blog.ovhcloud.com/wp-content/uploads/2026/02/Tranches-de-Tech-visuel-rond.png 759w, https://blog.ovhcloud.com/wp-content/uploads/2026/02/Tranches-de-Tech-visuel-rond-300x300.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2026/02/Tranches-de-Tech-visuel-rond-150x150.png 150w, https://blog.ovhcloud.com/wp-content/uploads/2026/02/Tranches-de-Tech-visuel-rond-70x70.png 70w" sizes="auto, (max-width: 759px) 100vw, 759px" /></figure>



<hr class="wp-block-separator has-alpha-channel-opacity" />



<ul class="wp-block-list">
<li>👤 Invitée&nbsp;: Estelle Landry
<ul class="wp-block-list">
<li>  X : <a href="https://x.com/estelandry" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://x.com/estelandry</a></li>



<li>  LinkedIn : <a href="https://www.linkedin.com/in/estelle-landry-61866b71/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://www.linkedin.com/in/estelle-landry-61866b71/</a></li>
</ul>
</li>



<li>🗓️ Date d&#8217;enregistrement : 30 janvier 2026</li>



<li>🎧 <a href="https://podcast.ausha.co/tranches-de-tech/tranches-de-tech-25-et-si-l-ia-aidait-nos-pm" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Lien vers l&#8217;épisode</a></li>
</ul>



<hr class="wp-block-separator has-alpha-channel-opacity" />



<h3 class="wp-block-heading">👤 Présentation d’Estelle &#8211; ⏱️ 1&#8243;14</h3>



<ul class="wp-block-list">
<li><a href="https://www.ellesbougent.com/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Elles Bougent</a></li>



<li><a href="https://femmes-et-maths.fr/reforme-du-lycee-filles-et-sciences-legalite-en-question/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://femmes-et-maths.fr/reforme-du-lycee-filles-et-sciences-legalite-en-question/</a></li>



<li><a href="https://www.tutteo.com/fr/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Tutteo</a></li>



<li><a href="https://dust.tt/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://dust.tt/</a></li>



<li><a href="https://www.aidiscipline.com/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://www.aidiscipline.com/</a></li>
</ul>



<hr class="wp-block-separator has-alpha-channel-opacity" />



<h3 class="wp-block-heading">📰 News Techs&nbsp;</h3>



<h4 class="wp-block-heading">🤖 Intelligence Artificielle&nbsp;&#8211; ⏱️ 45&#8243;35</h4>



<h5 class="wp-block-heading">L&#8217;IA pour gérer les produits</h5>



<ul class="wp-block-list">
<li>Ressources vidéo IA pour le PM &#8211; Adam Faik
<ul class="wp-block-list">
<li><a href="https://www.theaithinker.com/p/how-to-go-from-zero-to-ai-first-in" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://www.theaithinker.com/p/how-to-go-from-zero-to-ai-first-in</a></li>



<li><a href="https://substack.com/home/post/p-185455972" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://substack.com/home/post/p-185455972</a></li>
</ul>
</li>



<li>Benjamin Code vidéo
<ul class="wp-block-list">
<li><a href="https://www.youtube.com/watch?v=-aUFe2r9fpE" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://www.youtube.com/watch?v=-aUFe2r9fpE</a></li>
</ul>
</li>
</ul>



<h5 class="wp-block-heading" id="id-📝TranchesdeTech25EstelleLandry-UnderstandingSpec-Driven-Development:Kiro,spec-kit,andTessl">Understanding Spec-Driven-Development: Kiro, spec-kit, and Tessl</h5>



<ul class="wp-block-list">
<li><a href="https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html</a></li>
</ul>



<h5 class="wp-block-heading" id="id-📝TranchesdeTech25EstelleLandry-WelcometotheBMadMethod">Welcome to the BMad Method</h5>



<ul class="wp-block-list">
<li><a href="https://docs.bmad-method.org/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://docs.bmad-method.org/</a>&nbsp;</li>



<li><a href="https://www.youtube.com/@titimoby" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://www.youtube.com/@titimoby</a></li>
</ul>



<h5 class="wp-block-heading" id="id-📝TranchesdeTech25EstelleLandry-LLMpromptsandAIIDEsetup">LLM prompts and AI IDE setup</h5>



<ul class="wp-block-list">
<li><a href="https://angular.dev/ai/develop-with-ai" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://angular.dev/ai/develop-with-ai</a></li>



<li><a href="https://ngbaguette.angulardevs.fr/en/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://ngbaguette.angulardevs.fr/en/</a>&nbsp;</li>
</ul>



<h5 class="wp-block-heading" id="id-📝TranchesdeTech25EstelleLandry-Matrixagents">Matrix agents</h5>



<ul class="wp-block-list">
<li><a href="https://github.com/roryp/matrixagents/tree/quarkus?tab=readme-ov-file" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://github.com/roryp/matrixagents/tree/quarkus?tab=readme-ov-file</a></li>



<li><a href="https://ca-ha2hn25llqvmc.greenforest-2db50d0c.eastus2.azurecontainerapps.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://ca-ha2hn25llqvmc.greenforest-2db50d0c.eastus2.azurecontainerapps.io/</a></li>
</ul>



<hr class="wp-block-separator has-alpha-channel-opacity" />



<h4 class="wp-block-heading" id="id-📝TranchesdeTech25EstelleLandry-☁️Cloud">☁️ Cloud &#8211; ⏱️ 58&#8243;25</h4>



<h5 class="wp-block-heading" id="id-📝TranchesdeTech25EstelleLandry-ManagedPrivateRegistryestmaintenantdisponibleàMumbai(APAC)">Managed Private Registry est maintenant disponible à Mumbai (APAC)</h5>



<ul class="wp-block-list">
<li><a href="https://www.ovhcloud.com/fr/public-cloud/managed-private-registry/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://www.ovhcloud.com/fr/public-cloud/managed-private-registry/</a></li>
</ul>



<h5 class="wp-block-heading" id="id-📝TranchesdeTech25EstelleLandry-MKSestmaintenantdisponibleàMumbai&amp;Roubaix">MKS est maintenant disponible à Mumbai &amp; Roubaix</h5>



<ul class="wp-block-list">
<li><a href="https://www.ovhcloud.com/fr/public-cloud/kubernetes/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://www.ovhcloud.com/fr/public-cloud/kubernetes/</a></li>
</ul>



<h5 class="wp-block-heading" id="id-📝TranchesdeTech25EstelleLandry-FileStoragepourPublicCloud,RWXcapabilitiesenBétapublique">File Storage pour Public Cloud, RWX capabilities en Béta publique&nbsp;</h5>



<ul class="wp-block-list">
<li><a href="https://labs.ovhcloud.com/en/file-storage/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://labs.ovhcloud.com/en/file-storage/</a></li>



<li><a href="https://help.ovhcloud.com/csm/fr-public-cloud-kubernetes-configure-multi-attach-persistent-volumes-enterprise-file-storage?id=kb_article_view&amp;sysparm_article=KB0065980" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://help.ovhcloud.com/csm/fr-public-cloud-kubernetes-configure-multi-attach-persistent-volumes-enterprise-file-storage?id=kb_article_view&amp;sysparm_article=KB0065980</a></li>
</ul>



<h5 class="wp-block-heading" id="id-📝TranchesdeTech25EstelleLandry-Nouvellefonctionnalité:FloatingIPsdisponiblepourMKS">Nouvelle fonctionnalité: Floating IPs disponible pour MKS</h5>



<ul class="wp-block-list">
<li><a href="https://www.ovhcloud.com/fr/public-cloud/floating-ip/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://www.ovhcloud.com/fr/public-cloud/floating-ip/</a></li>
</ul>



<hr class="wp-block-separator has-alpha-channel-opacity" />



<h3 class="wp-block-heading" id="id-📝TranchesdeTech25EstelleLandry-🎤Conférences/meetup(REX,datesCFP,datesconf,…)">🎤 Conférences / meetup &#8211; ⏱️ 1h03&#8243;30</h3>



<h4 class="wp-block-heading" id="id-📝TranchesdeTech25EstelleLandry-LesconfsoùonpeutvoirEstelle?">Les conférences où on peut voir Estelle ?</h4>



<ul class="wp-block-list">
<li><a href="https://touraine.tech/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">TouraineTech</a></li>



<li><a href="https://www.devoxx.fr/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Devoxx France</a></li>



<li><a href="https://sunny-tech.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Sunnytech</a>&nbsp;</li>
</ul>



<h4 class="wp-block-heading" id="id-📝TranchesdeTech25EstelleLandry-ListedesconférencesetCFPouverts">Liste des conférences et CFP ouverts</h4>



<ul class="wp-block-list">
<li><a href="https://developers.events/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://developers.events/</a></li>
</ul>



<hr class="wp-block-separator has-alpha-channel-opacity" />



<p class="has-text-align-center">💡 Retrouvez l&#8217;ensemble des autres épisodes ici : <a href="https://smartlink.ausha.co/tranches-de-tech" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://smartlink.ausha.co/tranches-de-tech</a> 💡</p>



<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2F%25f0%259f%258e%2599%25ef%25b8%258f-tranches-de-tech-25-pm-et-ia-mariage-reussi%2F&amp;action_name=%F0%9F%8E%99%EF%B8%8F%20Tranches%20de%20Tech%20%2325%20%26%238211%3B%20PM%20et%20IA%2C%20mariage%20r%C3%A9ussi%20%3F&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Create a podcast transcript with Whisper by AI Endpoints</title>
		<link>https://blog.ovhcloud.com/create-a-podcast-transcript-with-whisper-by-ai-endpoints/</link>
		
		<dc:creator><![CDATA[Stéphane Philippart]]></dc:creator>
		<pubDate>Thu, 28 Aug 2025 07:03:04 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[Tranches de Tech & co]]></category>
		<category><![CDATA[AI Endpoints]]></category>
		<category><![CDATA[Audio]]></category>
		<category><![CDATA[OVHcloud]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=29389</guid>

					<description><![CDATA[Check out this blog post if you want to know more about AI Endpoints.You can also find more info on AI Endpoints in our previous blog posts. This blog post explains how to create a podcast transcript using Whisper, a powerful automatic speech recognition (ASR) system developed by OpenAI. Whisper integrates with AI Endpoints and [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fcreate-a-podcast-transcript-with-whisper-by-ai-endpoints%2F&amp;action_name=Create%20a%20podcast%20transcript%20with%20Whisper%20by%20AI%20Endpoints&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="1024" height="1024" src="https://blog.ovhcloud.com/wp-content/uploads/2025/07/red-cat-02.png" alt="A robot listening to a podcast" class="wp-image-29401" style="width:640px" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/07/red-cat-02.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/red-cat-02-300x300.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/red-cat-02-150x150.png 150w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/red-cat-02-768x768.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/red-cat-02-70x70.png 70w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>Check out this <a href="https://blog.ovhcloud.com/enhance-your-applications-with-ai-endpoints/" data-wpel-link="internal">blog post</a> if you want to know more about AI Endpoints.<br>You can also find more info on <a href="https://endpoints.ai.cloud.ovh.net" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints</a> in our <a href="https://blog.ovhcloud.com/tag/ai-endpoints/" data-wpel-link="internal">previous blog posts</a>.</p>



<p>This blog post explains how to create a podcast transcript using Whisper, a powerful automatic speech recognition (ASR) system developed by OpenAI. Whisper integrates with <a href="https://endpoints.ai.cloud.ovh.net/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints</a> and makes it easy to transcribe audio files and add features such as speaker diarization.</p>



<p><em>ℹ️ You can find the full code on <a href="https://github.com/ovh/public-cloud-examples/tree/main/ai/ai-endpoints/podcast-transcript-whisper/python" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">GitHub</a> ℹ️</em></p>



<h3 class="wp-block-heading">Environment Setup</h3>



<p>Define your environment variables for accessing <a href="https://endpoints.ai.cloud.ovh.net/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints</a>:</p>



<pre title="AI Endpoints environment variables" class="wp-block-code"><code lang="bash" class="language-bash line-numbers">$ export OVH_AI_ENDPOINTS_WHISPER_URL=&lt;whisper model URL&gt;
$ export OVH_AI_ENDPOINTS_ACCESS_TOKEN=&lt;your_access_token&gt;
$ export OVH_AI_ENDPOINTS_WHISPER_MODEL=whisper-large-v3</code></pre>



<p>Install dependencies:</p>



<pre title="Dependencies installation" class="wp-block-code"><code lang="bash" class="language-bash line-numbers">$ pip install -r requirements.txt</code></pre>
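
<p>The exact contents of <em>requirements.txt</em> come from the GitHub repository; for the code shown in this post, a minimal version only needs the OpenAI client (the single-line file below is an illustration, the repository may pin versions):</p>

<pre title="Minimal requirements.txt" class="wp-block-code"><code lang="bash" class="language-bash line-numbers">openai</code></pre>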



<h3 class="wp-block-heading">Audio transcription</h3>



<p>With Whisper and the OpenAI client, transcribing audio is as simple as writing a few lines of code:</p>



<pre title="Audio transcription" class="wp-block-code"><code lang="python" class="language-python line-numbers">import os
import json
from openai import OpenAI

# 🛠️ OpenAI client initialisation
client = OpenAI(base_url=os.environ.get('OVH_AI_ENDPOINTS_WHISPER_URL'), 
                api_key=os.environ.get('OVH_AI_ENDPOINTS_ACCESS_TOKEN'))

# 🎼 Audio file loading
with open("../resources/TdT20-trimed-2.mp3", "rb") as audio_file:
    # 📝 Call Whisper transcription API
    transcript = client.audio.transcriptions.create(
        model=os.environ.get('OVH_AI_ENDPOINTS_WHISPER_MODEL'),
        file=audio_file,
        temperature=0.0,
        response_format="verbose_json",
        extra_body={"diarize": True},
    )</code></pre>



<p>FYI:<br>&#8211; <em>diarize</em> is not a standard Whisper parameter: the OpenAI client lets us pass it in the request body through <em>extra_body</em>, which is how diarization is enabled here.<br>&#8211; diarization requires the <em>verbose_json</em> response format, which also returns the transcript as timed segments.</p>



<p>Once you have the full transcript, format it in a way that’s easy for humans to read.</p>



<h3 class="wp-block-heading">Create the script</h3>



<p>The JSON field ‘<em>diarization</em>’ contains all of the transcribed, diarized content.</p>



<pre title="JSON response for diarization" class="wp-block-code"><code lang="json" class="language-json line-numbers">"diarization": [
    {
      "speaker": 0,
      "text": "bla bla bla",
      "start": 16.5,
      "end": 26.38
    },
    {
      "speaker": 1,
      "text": "bla bla",
      "start": 26.38,
      "end": 32.6
    },
    {
      "speaker": 1,
      "text": "bla bla",
      "start": 32.6,
      "end": 40.6
    },
    {
      "speaker": 2,
      "text": "bla bla",
      "start": 40.6,
      "end": 42
    }
]</code></pre>



<p>Because the transcript is segmented, consecutive entries from the same speaker (speaker 1 in the example above) can be merged into a single block of dialog, as the code below does.</p>



<p>Here is sample code that creates the script of a <a href="https://smartlink.ausha.co/tranches-de-tech" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">French podcast</a> featuring three speakers:</p>



<pre title="Merge sentences for same speaker" class="wp-block-code"><code lang="python" class="language-python line-numbers"># 🔀 Merge consecutive dialog segments from the same speaker
diarizedTranscript = ''
speakers = ["Aurélie", "Guillaume", "Stéphane"]
previousSpeaker = -1
jsonTranscript = json.loads(transcript.model_dump_json())

# 💬 Only the diarization field is needed
for dialog in jsonTranscript["diarization"]:
    speaker = dialog.get("speaker")
    text = dialog.get("text")
    if previousSpeaker == speaker:
        diarizedTranscript += f" {text}"
    else:
        diarizedTranscript += f"\n\n{speakers[speaker]}: {text}"
    previousSpeaker = speaker

print(f"\n📝 Diarized Transcript 📝:\n{diarizedTranscript}")
</code></pre>



<p>Lastly, run the Python script:</p>



<pre class="wp-block-code"><code lang="" class=" line-numbers">$ python PodcastTranscriptWithWhisper.py

📝 Diarized Transcript 📝:

Stéphane: Bonjour tout le monde, ravi de vous retrouver pour l'enregistrement de ce dernier épisode de la saison avant de prendre des vacances bien méritées et de vous retrouver à la rentrée pour la troisième saison. Nous enregistrons cet épisode le 30 juin à la fraîche, enfin si on peut dire au vu des températures déjà présentes en cette matinée. Justement, elle revient chaudement de Sunnytech et c'est avec plaisir que je la retrouve pour l'enregistrement de cet épisode. Bonjour Aurélie, comment vas-tu ?

Aurélie: Salut, alors ça va très bien. Alors j'avoue, j'ai également très chaud. J'ai le ventilateur qui est juste à côté de moi donc ça va aller pour l'enregistrement du podcast.

Stéphane: Oui, c'est vrai qu'il fait un peu chaud. Et pour ce dernier épisode de la saison, c'est avec un mélange de joie mais aussi d'intimidation que je reçois notre invité. Si je fais ce métier de la façon dont je le fais, c'est grandement grâce à lui. Ce podcast, quelque part, a bien entendu des inspirations de ce que fait notre invité. Je suis donc très content de te recevoir Guillaume. Bonjour Guillaume, comment vas-tu et souhaites-tu te présenter à nos auditrices et auditeurs ? Bonjour à

Guillaume: tous et bien merci déjà de m'avoir invité. Je suis très content de rejoindre votre podcast pour cet épisode. Je m'appelle Guillaume Laforge, je suis un développeur Java depuis la première heure depuis très très longtemps. Je travaille chez Google, en particulier dans la partie Google Cloud. Je me focalise beaucoup sur tout ce qui est Generative AI vu que c'est à la mode évidemment. Les gens me connaissent peut-être ou peut-être ma voix d'ailleurs parce que je fais partie du podcast Les Cascodeurs qu'on a commencé il y a 15 ans ou quelque chose comme ça. Il y a trop longtemps. Ou alors ils me connaissent parce que je suis un des co-fondateurs du langage Groovy, Apache Groovy.</code></pre>



<p>Feel free to try out our new product, <a href="https://endpoints.ai.cloud.ovh.net/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints</a>, and share your thoughts.</p>



<p>Hang out with us on Discord at <em>#ai-endpoints</em> or <em><a href="https://discord.gg/ovhcloud" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://discord.gg/ovhcloud</a></em>. See you soon!</p>
<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fcreate-a-podcast-transcript-with-whisper-by-ai-endpoints%2F&amp;action_name=Create%20a%20podcast%20transcript%20with%20Whisper%20by%20AI%20Endpoints&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Fine tune an LLM with Axolotl and OVHcloud Machine Learning Services</title>
		<link>https://blog.ovhcloud.com/fine-tune-an-llm-with-axolotl-and-ovhcloud-machine-learning-services/</link>
		
		<dc:creator><![CDATA[Stéphane Philippart]]></dc:creator>
		<pubDate>Fri, 25 Jul 2025 13:07:40 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[Tranches de Tech & co]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Deploy]]></category>
		<category><![CDATA[AI Notebook]]></category>
		<category><![CDATA[Fine Tuning]]></category>
		<category><![CDATA[OVHcloud]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=29408</guid>

					<description><![CDATA[There are many ways to train a model,📚 using detailed instructions, system prompts, Retrieval Augmented Generation, or function calling One way is fine-tuning, which is what this blog is about! ✨ Two years back we posted a blog on fine-tuning Llama models—it’s not nearly as complicated as it was before 😉.  This time we’re using the [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Ffine-tune-an-llm-with-axolotl-and-ovhcloud-machine-learning-services%2F&amp;action_name=Fine%20tune%20an%20LLM%20with%20Axolotl%20and%20OVHcloud%20Machine%20Learning%20Services&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="1024" height="1024" src="https://blog.ovhcloud.com/wp-content/uploads/2025/07/red-cat-02-1.png" alt="A robot with a car tuning style" class="wp-image-29462" style="width:600px" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/07/red-cat-02-1.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/red-cat-02-1-300x300.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/red-cat-02-1-150x150.png 150w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/red-cat-02-1-768x768.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/red-cat-02-1-70x70.png 70w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>There are many ways to train a model 📚: using detailed instructions, system prompts, Retrieval Augmented Generation, or function calling.</p>



<p>One way is fine-tuning, which is what this blog is about! ✨</p>



<p>Two years back we posted a <a href="https://blog.ovhcloud.com/fine-tuning-llama-2-models-using-a-single-gpu-qlora-and-ai-notebooks/" data-wpel-link="internal">blog</a> on fine-tuning Llama models—it’s not nearly as complicated as it was before 😉. This time we’re using the <a href="https://docs.axolotl.ai/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Axolotl</a> framework, so hopefully there’s less to manage.</p>



<h3 class="wp-block-heading">So what’s the plan?</h3>



<p>For this blog, I’d like to fine-tune a small model, <a href="https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Llama-3.2-1B-Instruct</a>, and then test it out on a few questions about our <a href="https://endpoints.ai.cloud.ovh.net/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud AI Endpoints</a> product 📝.</p>



<p>Before we fine-tune, let’s try it out! Deploying a <a href="https://huggingface.co/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Hugging Face</a> model is super easy with <a href="https://www.ovhcloud.com/fr/public-cloud/ai-deploy/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Deploy</a> from <a href="https://www.ovhcloud.com/fr/public-cloud/ai-machine-learning/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Machine Learning Services</a> 🥳.</p>



<p>And thanks to a <a href="https://blog.ovhcloud.com/mistral-small-24b-served-with-vllm-and-ai-deploy-one-command-to-deploy-llm/" data-wpel-link="internal">previous blog post</a>, we know how to use <a href="https://docs.vllm.ai/en/v0.7.3/index.html" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">vLLM</a> and <a href="https://www.ovhcloud.com/fr/public-cloud/ai-deploy/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Deploy</a>.</p>



<pre title="Deploy a model thanks to vLLM and AI Deploy" class="wp-block-code"><code lang="bash" class="language-bash line-numbers">ovhai app run --name $1 \
	--flavor l40s-1-gpu \
	--gpu 2 \
	--default-http-port 8000 \
	--env OUTLINES_CACHE_DIR=/tmp/.outlines \
	--env HF_TOKEN=$MY_HUGGING_FACE_TOKEN \
	--env HF_HOME=/hub \
	--env HF_DATASETS_TRUST_REMOTE_CODE=1 \
	--env HF_HUB_ENABLE_HF_TRANSFER=0 \
	--volume standalone:/hub:rw \
	--volume standalone:/workspace:rw \
	vllm/vllm-openai:v0.8.2 \
	-- bash	-c "vllm serve meta-llama/Llama-3.2-1B-Instruct"</code></pre>



<p class="has-text-align-center"><strong><strong>⚠️ Make sure you’ve agreed to the terms of use for the model’s license from Hugging Face ⚠️</strong></strong></p>



<p>Check out the <a href="https://blog.ovhcloud.com/mistral-small-24b-served-with-vllm-and-ai-deploy-one-command-to-deploy-llm/" data-wpel-link="internal">blog</a> I mentioned earlier for all the details you need on the command and its parameters.</p>



<p>To test our different chatbots, we will use a simple <a href="https://github.com/ovh/public-cloud-examples/tree/main/ai/llm-fine-tune/chatbot/chatbot.py" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Gradio application</a>:</p>



<pre title="Chatbot" class="wp-block-code"><code lang="python" class="language-python line-numbers"># Application to compare answers generation from OVHcloud AI Endpoints exposed model and fine tuned model.
# ⚠️ Do not used in production!! ⚠️

import gradio as gr
import os

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# 📜 Prompts templates 📜
prompt_template = ChatPromptTemplate.from_messages(
    [
        ("system", "{system_prompt}"),
        ("human", "{user_prompt}"),
    ]
)

def chat(prompt, system_prompt, temperature, top_p, model_name, model_url, api_key):
    """
    Function to generate a chat response using the provided prompt, system prompt, temperature, top_p, model name, model URL and API key.
    """

    # ⚙️ Initialize the OpenAI model ⚙️
    llm = ChatOpenAI(api_key=api_key, 
                 model=model_name, 
                 base_url=model_url,
                 temperature=temperature,
                 top_p=top_p
                 )

    # 📜 Apply the prompt to the model 📜
    chain = prompt_template | llm
    ai_msg = chain.invoke(
        {
            "system_prompt": system_prompt,
            "user_prompt": prompt
        }
    )

    # 🤖 Return answer in a compatible format for Gradio component.
    return [{"role": "user", "content": prompt}, {"role": "assistant", "content": ai_msg.content}]

# 🖥️ Main application 🖥️
with gr.Blocks() as demo:
    with gr.Row():
        with gr.Column():
            system_prompt = gr.Textbox(value="""You are a specialist on OVHcloud products.
If you can't find any sure and relevant information about the product asked, answer with "This product doesn't exist in OVHcloud".""",
                label="🧑‍🏫 System Prompt 🧑‍🏫")
            temperature = gr.Slider(minimum=0.0, maximum=2.0, step=0.01, label="Temperature", value=0.5)
            top_p = gr.Slider(minimum=0.0, maximum=1.0, step=0.01, label="Top P", value=0.0)
            model_name = gr.Textbox(label="🧠 Model Name 🧠", value='Llama-3.1-8B-Instruct')
            model_url = gr.Textbox(label="🔗 Model URL 🔗", value='https://oai.endpoints.kepler.ai.cloud.ovh.net/v1')
            api_key = gr.Textbox(label="🔑 OVH AI Endpoints Access Token 🔑", value=os.getenv("OVH_AI_ENDPOINTS_ACCESS_TOKEN"), type="password")

        with gr.Column():
            chatbot = gr.Chatbot(type="messages", label="🤖 Response 🤖")
            prompt = gr.Textbox(label="📝 Prompt 📝", value='How many requests by minutes can I do with AI Endpoints?')
            submit = gr.Button("Submit")

    submit.click(chat, inputs=[prompt, system_prompt, temperature, top_p, model_name, model_url, api_key], outputs=chatbot)

demo.launch()</code></pre>



<p>ℹ️ You can find all resources to build and run this application in the <a href="https://github.com/ovh/public-cloud-examples/tree/main/ai/llm-fine-tune/chatbot/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">dedicated folder</a> in the GitHub repository.</p>



<p>Let&#8217;s test with a simple question: &#8220;How many requests by minutes can I do with AI Endpoints?&#8221;.<br>The first test is with <a href="https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Llama-3.2-1B-Instruct</a> from <a href="https://huggingface.co/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Hugging Face</a> deployed with <a href="https://docs.vllm.ai/en/v0.7.3/index.html" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">vLLM</a> and <a href="https://www.ovhcloud.com/fr/public-cloud/ai-deploy/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud AI Deploy</a>.</p>



<figure class="wp-block-image aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="474" src="https://blog.ovhcloud.com/wp-content/uploads/2025/07/Screenshot-2025-07-23-at-13.19.16-1024x474.png" alt="Ask for AI Endpoints rate limit with a Llama-3.2-1B-Instruct model" class="wp-image-29448" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/07/Screenshot-2025-07-23-at-13.19.16-1024x474.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/Screenshot-2025-07-23-at-13.19.16-300x139.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/Screenshot-2025-07-23-at-13.19.16-768x356.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/Screenshot-2025-07-23-at-13.19.16-1536x712.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/Screenshot-2025-07-23-at-13.19.16-2048x949.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>The response isn’t exactly what we expected. 😅</p>



<p>FYI, according to the official <a href="https://help.ovhcloud.com/csm/fr-public-cloud-ai-endpoints-capabilities?id=kb_article_view&amp;sysparm_article=KB0065424#limitations" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud guide</a>, the correct answer is:<br> &#8211; <strong>Anonymous</strong>: 2 requests per minute, per IP and per model.<br> &#8211; <strong>Authenticated with an API access key</strong>: 400 requests per minute, per Public Cloud project and per model.</p>
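<p>If you build a client that calls the authenticated endpoint in a loop, it is worth pacing requests to stay under that 400-requests-per-minute budget. Here is a minimal, illustrative sketch (the <code>MinIntervalLimiter</code> class and its names are mine, not part of any OVHcloud SDK): 400 requests per minute means at most one call every 0.15 seconds.</p>

```python
import time

class MinIntervalLimiter:
    """Space successive calls at least `60 / requests_per_minute` seconds apart."""

    def __init__(self, requests_per_minute, clock=time.monotonic, sleep=time.sleep):
        self.interval = 60.0 / requests_per_minute
        self.clock = clock      # injectable for testing
        self.sleep = sleep
        self.last = None

    def wait(self):
        """Block until enough time has passed since the previous call."""
        if self.last is not None:
            remaining = self.interval - (self.clock() - self.last)
            if remaining > 0:
                self.sleep(remaining)
        self.last = self.clock()

# 400 requests/minute -> at most one request every 0.15 s
limiter = MinIntervalLimiter(400)
limiter.wait()  # call this before each request to AI Endpoints
```

<p>The clock and sleep functions are injected so the pacing logic can be tested without real delays.</p>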



<h3 class="wp-block-heading"><strong>What’s the best way to feed the model fresh data?</strong></h3>



<p>I bet you already know this one: you can bring extra data in at inference time, using Retrieval Augmented Generation (RAG). You can learn how to set up RAG by reading our <a href="https://blog.ovhcloud.com/rag-chatbot-using-ai-endpoints-and-langchain/" data-wpel-link="internal">past blog post</a>. 📗</p>



<p>Another way to feed a model fresh data is fine-tuning. ✨</p>



<p>In a nutshell, fine-tuning is when you take a pre-trained machine learning model and train it further on additional data, so it can do a specific job. It’s quicker and easier than building a model from scratch. 😉</p>



<p>For this, I’m picking <a href="https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Llama-3.2-1B-Instruct</a> from Hugging Face as the base model.</p>



<p><em>ℹ️ The more parameters your base model has, the more computing power you need. In this case, the model needs between 3GB and 4GB of memory, which is why we’ll be using a <a href="https://www.ovhcloud.com/fr/public-cloud/prices/#5260" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">single L4 GPU</a> (we need an <a href="https://www.nvidia.com/en-us/data-center/ampere-architecture/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Ampere-compatible architecture</a>).</em></p>



<h3 class="wp-block-heading">When data is your gold</h3>



<p>To train a model, you need enough good-quality data.</p>



<p>The first part is easy; I get the official OVHcloud AI Endpoints documentation in markdown format from our <a href="https://github.com/ovh/docs/tree/develop/pages/public_cloud/ai_machine_learning" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">public cloud documentation repository</a> (by the way, would you like to contribute?). 📚</p>



<p>First, create a dataset in the right format. Axolotl offers several <a href="https://docs.axolotl.ai/docs/dataset-formats/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">dataset formats</a>; I’m going with the <a href="https://docs.axolotl.ai/docs/dataset-formats/conversation.html" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">conversation format</a> because it’s the easiest fit for my use case. 😉</p>



<pre title="Conersation format dataset" class="wp-block-code"><code lang="json" class="language-json line-numbers"><a href="https://docs.axolotl.ai/docs/dataset-formats/conversation.html#cb1-1" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer"></a>{
   "messages": [
     {"role": "...", "content": "..."}, 
     {"role": "...", "content": "..."}, 
     ...]
}</code></pre>



<p>Rather than creating it manually and adding the relevant information by hand, I use an LLM to convert the markdown data into a well-formed dataset. 🤖</p>



<p>Here we’re using this <a href="https://github.com/ovh/public-cloud-examples/tree/main/ai/llm-fine-tune/ai/llm-fine-tune/dataset/DatasetCreation.py" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Python script</a> 🐍:</p>



<pre title="Dataset creation with LLM" class="wp-block-code"><code lang="python" class="language-python line-numbers">import os
from pathlib import Path
from langchain_openai import ChatOpenAI
from langchain.schema import HumanMessage

# 🗺️ Define the JSON schema for the response 🗺️
message_schema = {
    "type": "object",
    "properties": {
        "role": {"type": "string"},
        "content": {"type": "string"}
    },
    "required": ["role", "content"]
}

response_format = {
    "type": "json_object",
    "json_schema": {
        "name": "Messages",
        "description": "A list of messages with role and content",
        "properties": {
            "messages": {
                "type": "array",
                "items": message_schema
            }
        }
    }
}

# ⚙️ Initialize the chat model with AI Endpoints configuration ⚙️
chat_model = ChatOpenAI(
    api_key=os.getenv("OVH_AI_ENDPOINTS_ACCESS_TOKEN"),
    base_url=os.getenv("OVH_AI_ENDPOINTS_MODEL_URL"),
    model_name=os.getenv("OVH_AI_ENDPOINTS_MODEL_NAME"),
    temperature=0.0
)

# 📂 Define the directory path 📂
directory_path = "docs/pages/public_cloud/ai_machine_learning"
directory = Path(directory_path)

# 🗃️ Walk through the directory and its subdirectories 🗃️
for path in directory.rglob("*"):
    # Check if the current path is a directory
    if path.is_dir():
        # Get the name of the subdirectory
        sub_directory = path.name

        # Construct the path to the "guide.en-gb.md" file in the subdirectory
        guide_file_path = path / "guide.en-gb.md"

        # Check if the "guide.en-gb.md" file exists in the subdirectory
        if "endpoints" in sub_directory and guide_file_path.exists():
            print(f"📗 Guide processed: {sub_directory}")
            with open(guide_file_path, 'r', encoding='utf-8') as file:
                raw_data = file.read()

            user_message = HumanMessage(content=f"""
With the markdown following, generate a JSON file composed as follows: a list named "messages" composed of tuples with a key "role" which can have the value "user" when it's the question and "assistant" when it's the response. To split the document, base it on the markdown chapter titles to create the question, seems like a good idea.
Keep the language English.
I don't need to know the code to do it but I want the JSON result file.
For the "user" field, don't just repeat the title but make a real question, for example "What are the requirements for OVHcloud AI Endpoints?"
Be sure to add OVHcloud with AI Endpoints so that it's clear that OVHcloud creates AI Endpoints.
Generate the entire JSON file.
An example of what it should look like: messages [{{"role":"user", "content":"What is AI Endpoints?"}}]
There must always be a question followed by an answer, never two questions or two answers in a row.
The source markdown file:
{raw_data}
""")
            chat_response = chat_model.invoke([user_message], response_format=response_format)
            
            with open(f"./generated/{sub_directory}.json", 'w', encoding='utf-8') as output_file:
                output_file.write(chat_response.content)
                print(f"✅ Dataset generated: ./generated/{sub_directory}.json")

</code></pre>



<p><em>ℹ️ You can find all resources to build and run this application in the <a href="https://github.com/ovh/public-cloud-examples/tree/main/ai/llm-fine-tune/dataset/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">dedicated folder</a> in the GitHub repository.</em></p>



<p>Here’s a sample of the file created as the dataset:</p>



<pre title="Dataset example" class="wp-block-code"><code lang="json" class="language-json line-numbers">[
  {
    "role": "user",
    "content": "What are the requirements for using OVHcloud AI Endpoints?"
  },
  {
    "role": "assistant",
    "content": "To use OVHcloud AI Endpoints, you need the following: \n1. A Public Cloud project in your OVHcloud account \n2. A payment method defined on your Public Cloud project. Access keys created from Public Cloud projects in Discovery mode (without a payment method) cannot use the service."
  },
  {
    "role": "user",
    "content": "What are the rate limits for using OVHcloud AI Endpoints?"
  },
  {
    "role": "assistant",
    "content": "The rate limits for OVHcloud AI Endpoints are as follows:\n- Anonymous: 2 requests per minute, per IP and per model.\n- Authenticated with an API access key: 400 requests per minute, per PCI project and per model."
  }, 
   ...]
}</code></pre>
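<p>Since the prompt insists that a question is always followed by an answer, a quick check on the generated files avoids training on malformed conversations. A minimal sketch, assuming each file follows the <code>messages</code> structure shown above (the <code>roles_alternate</code> helper is mine):</p>

```python
import json

def roles_alternate(messages):
    """True if roles strictly alternate user/assistant, starting with a user turn."""
    expected = ("user", "assistant")
    return bool(messages) and len(messages) % 2 == 0 and all(
        m.get("role") == expected[i % 2] for i, m in enumerate(messages)
    )

sample = json.loads("""
{
  "messages": [
    {"role": "user", "content": "What is OVHcloud AI Endpoints?"},
    {"role": "assistant", "content": "A service to call AI models through simple APIs."}
  ]
}
""")
print(roles_alternate(sample["messages"]))  # → True
```

<p>Running this over every file in <code>./generated/</code> before training quickly flags files with two questions or two answers in a row.</p>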



<p>As for quantity, it’s a bit trickier: how can we generate enough training data without lowering its quality?</p>



<p>To do this, I’ve generated synthetic data, using an LLM to derive it from the original data. The trick is to produce more data on the same topic by rephrasing it differently while keeping the same idea.</p>



<p>Here is the <a href="https://github.com/ovh/public-cloud-examples/tree/main/ai/llm-fine-tune/dataset/DatasetAugmentation.py" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Python script</a> 🐍 to do the data augmentation:</p>



<pre title="Data augmentation" class="wp-block-code"><code lang="python" class="language-python line-numbers">import os
import json
import uuid
from pathlib import Path
from langchain_openai import ChatOpenAI
from langchain.schema import HumanMessage
from jsonschema import validate, ValidationError

# 🗺️ Define the JSON schema for the response 🗺️
message_schema = {
    "type": "object",
    "properties": {
        "role": {"type": "string"},
        "content": {"type": "string"}
    },
    "required": ["role", "content"]
}

response_format = {
    "type": "json_object",
    "json_schema": {
        "name": "Messages",
        "description": "A list of messages with role and content",
        "properties": {
            "messages": {
                "type": "array",
                "items": message_schema
            }
        }
    }
}

# ✅ JSON validity verification ❌
def is_valid(json_data):
    """
    Test the validity of the JSON data against the schema.
    Argument:
        json_data (dict): The JSON data to validate.  
    Raises:
        ValidationError: If the JSON data does not conform to the specified schema.  
    """
    try:
        validate(instance=json_data, schema=response_format["json_schema"])
        return True
    except ValidationError as e:
        print(f"❌ Validation error: {e}")
        return False

# ⚙️ Initialize the chat model with AI Endpoints configuration ⚙️
chat_model = ChatOpenAI(
    api_key=os.getenv("OVH_AI_ENDPOINTS_ACCESS_TOKEN"),
    base_url=os.getenv("OVH_AI_ENDPOINTS_MODEL_URL"),
    model_name=os.getenv("OVH_AI_ENDPOINTS_MODEL_NAME"),
    temperature=0.0
)

# 📂 Define the directory path 📂
directory_path = "generated"
print(f"📂 Directory path: {directory_path}")
directory = Path(directory_path)

# 🗃️ Walk through the directory and its subdirectories 🗃️
for path in directory.rglob("*"):
    print(f"📜 Processing file: {path}")
    # Check if the current path is a valid file
    if path.is_file() and "endpoints" in path.name:
        # Read the raw data from the file
        with open(path, 'r', encoding='utf-8') as file:
            raw_data = file.read()

        try:
            json_data = json.loads(raw_data)
        except json.JSONDecodeError:
            print(f"❌ Failed to decode JSON from file: {path.name}")
            continue

        if not is_valid(json_data):
            print(f"❌ Invalid dataset: {path.name}")
            continue
        print(f"✅ Valid input dataset: {path.name}")

        user_message = HumanMessage(content=f"""
        Given the following JSON, generate a similar JSON file where you paraphrase each question in the content attribute
        (when the role attribute is user) and also paraphrase the value of the response to the question stored in the content attribute
        when the role attribute is assistant.
        The objective is to create synthetic datasets based on existing datasets.
        I do not need to know the code to do this, but I want the resulting JSON file.
        It is important that the term OVHcloud is present as much as possible, especially when the terms AI Endpoints are mentioned
        either in the question or in the response.
        There must always be a question followed by an answer, never two questions or two answers in a row.
        It is IMPERATIVE to keep the language in English.
        The source JSON file:
        {raw_data}
        """)

        chat_response = chat_model.invoke([user_message], response_format=response_format)

        output = chat_response.content

        # Replace unauthorized characters
        output = output.replace("\\t", " ")

        generated_file_name = f"{uuid.uuid4()}_{path.name}"
        with open(f"./generated/synthetic/{generated_file_name}", 'w', encoding='utf-8') as output_file:
            output_file.write(output)

        if not is_valid(json.loads(output)):
            print(f"❌ ERROR: File {generated_file_name} is not valid")
        else:
            print(f"✅ Successfully generated file: {generated_file_name}")</code></pre>



<p><em>ℹ️ Again, you can find all resources to build and run this application in the <a href="https://github.com/ovh/public-cloud-examples/tree/main/ai/llm-fine-tune/dataset/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">dedicated folder</a> in the GitHub repository.</em></p>



<h3 class="wp-block-heading">Fine-tune the model</h3>



<p>We now have enough training data, let’s fine-tune!</p>



<p><em>ℹ️ It’s hard to say exactly how much data is needed to train a model properly. It all depends on the model, the data, the topic, and so on.<br>The only option is to test and adapt. 🔁</em></p>



<p>I use <a href="https://jupyter.org/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Jupyter notebook</a>, created with <a href="https://www.ovhcloud.com/fr/public-cloud/ai-notebooks/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud AI Notebooks</a>, to fine-tune my models.</p>



<pre title="Jupyter notebook creation" class="wp-block-code"><code lang="bash" class="language-bash line-numbers">ovhai notebook run conda jupyterlab \
	--name axolotl-llm-fine-tune \
	--framework-version 25.3.1-py312-cudadevel128-gpu \
	--flavor l4-1-gpu \
	--gpu 1 \
	--envvar HF_TOKEN=$MY_HF_TOKEN \
	--envvar WANDB_TOKEN=$MY_WANDB_TOKEN \
	--unsecure-http</code></pre>



<p><em>ℹ️ For more details on how to create a Jupyter notebook with <a href="https://www.ovhcloud.com/fr/public-cloud/ai-notebooks/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Notebooks</a>, read the <a href="https://help.ovhcloud.com/csm/fr-documentation-public-cloud-ai-and-machine-learning-ai-notebooks?id=kb_browse_cat&amp;kb_id=574a8325551974502d4c6e78b7421938&amp;kb_category=c8441955f49801102d4ca4d466a7fd58&amp;spa=1" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">documentation</a>.</em></p>



<p class="has-text-align-left">⚙️ The <strong>HF_TOKEN</strong> environment variable is used to pull and push the trained model to <a href="https://huggingface.co/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Hugging Face</a> <br>⚙️ The <strong>WANDB_TOKEN</strong> environment variable helps you track training quality in <a href="https://wandb.ai" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Weight &amp; Biases</a></p>



<p>Once the notebook is set up, you can start coding the model’s training with Axolotl.</p>



<p>To start, install Axolotl CLI and its dependencies. 🧰</p>



<pre title="Axolot installation" class="wp-block-code"><code lang="bash" class="language-bash"># Axolotl need these dependencies
!pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu126

# Axolotl CLI installation
!pip install --no-build-isolation axolotl[flash-attn,deepspeed]

# Verify Axolotl version and installation
!axolotl --version</code></pre>






<p>The next step is to configure the Hugging Face CLI. 🤗</p>



<pre title="Hugging Face configurartion" class="wp-block-code"><code lang="bash" class="language-bash line-numbers">!pip install -U "huggingface_hub[cli]"

!huggingface-cli --version</code></pre>



<pre title="Hugging Face hub authentication " class="wp-block-code"><code lang="python" class="language-python line-numbers">import os
from huggingface_hub import login

login(os.getenv("HF_TOKEN"))</code></pre>






<p>Then, configure your Weights &amp; Biases access.</p>



<pre title="Weights &amp; Biases configuration" class="wp-block-code"><code lang="bash" class="language-bash line-numbers">!pip install wandb

!wandb login $WANDB_TOKEN</code></pre>






<p>Once all that’s done, it’s time to train the model.</p>



<pre title="Train the model" class="wp-block-code"><code lang="bash" class="language-bash line-numbers">!axolotl train /workspace/instruct-lora-1b-ai-endpoints.yml</code></pre>



<p>You only need to type this one line to train it, how cool is that? 😎</p>



<p><em>ℹ️ With one L4 card, 10 epochs, and roughly 2000 questions and answers in the dataset, it ran for about 90 minutes.</em></p>



<p>Basically, the command line needs just one parameter: the Axolotl config file. You can find everything you need to set up Axolotl in the <a href="https://docs.axolotl.ai/docs/config-reference.html" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">official documentation</a>.📜<br>Here’s what the model was trained on:</p>



<pre title="Axolotl configuration" class="wp-block-code"><code lang="yaml" class="language-yaml">base_model: meta-llama/Llama-3.2-1B-Instruct
# optionally might have model_type or tokenizer_type
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
# Automatically upload checkpoint and final model to HF
# hub_model_id: username/custom_model_name

load_in_8bit: true
load_in_4bit: false

datasets:
  - path: /workspace/ai-endpoints-doc/
    type: chat_template
      
    field_messages: messages
    message_property_mappings:
      role: role
      content: content
    roles:
      user:
        - user
      assistant:
        - assistant

dataset_prepared_path:
val_set_size: 0.01
output_dir: /workspace/out/llama-3.2-1b-ai-endpoints

sequence_len: 4096
sample_packing: false
pad_to_sequence_len: true

adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true

wandb_project: ai_endpoints_training
wandb_entity: &lt;user id&gt;
wandb_mode: 
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 10
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002

bf16: auto
tf32: false

gradient_checkpointing: true
resume_from_checkpoint:
logging_steps: 1
flash_attention: true

warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
weight_decay: 0.0
special_tokens:
   pad_token: &lt;|end_of_text|&gt;
</code></pre>



<p>🔎 Some key points (only the fields modified from the <a href="https://github.com/axolotl-ai-cloud/axolotl/blob/main/examples/llama-3/instruct-lora-8b.yml" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">given templates</a>):<br>&#8211; <strong>base_model: meta-llama/Llama-3.2-1B-Instruct</strong>: before you download the base model from Hugging Face, be sure to accept the licence’s terms of use<br>&#8211; <strong>path: /workspace/ai-endpoints-doc/</strong>: folder containing the generated dataset<br>&#8211; <strong>wandb_project: ai_endpoints_training</strong> &amp; <strong>wandb_entity: &lt;user id></strong>: to configure Weights &amp; Biases<br>&#8211; <strong>num_epochs: 10</strong>: number of epochs for the training</p>
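<p>As a rough sanity check on the training budget, you can derive the number of optimizer steps from the values above. Here is a small Python sketch, assuming the roughly 2000 question/answer pairs mentioned earlier and a single GPU (estimates, not actual Axolotl output):</p>



<pre title="Estimate the number of training steps" class="wp-block-code"><code lang="python" class="language-python">import math

# Approximate figures for this setup (assumptions, not exact Axolotl output)
num_examples = 2000        # question/answer pairs in the dataset
val_set_size = 0.01        # fraction held out for evaluation
micro_batch_size = 2
gradient_accumulation_steps = 4
num_epochs = 10

train_examples = num_examples - round(num_examples * val_set_size)  # 1980
effective_batch = micro_batch_size * gradient_accumulation_steps    # 8
steps_per_epoch = math.ceil(train_examples / effective_batch)       # 248
total_steps = steps_per_epoch * num_epochs                          # 2480
print(train_examples, effective_batch, steps_per_epoch, total_steps)</code></pre>



<p>At about 2,480 steps in roughly 90 minutes, that is a bit over two seconds per step, a useful baseline when you change the batch size or the number of epochs.</p>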



<p>After the training, you can test the new model 🤖:</p>



<pre title="New model testing" class="wp-block-code"><code lang="bash" class="language-bash line-numbers">!echo "What is OVHcloud AI Endpoints and how to use it?" | axolotl inference /workspace/instruct-lora-1b-ai-endpoints.yml --lora-model-dir="/workspace/out/llama-3.2-1b-ai-endpoints" </code></pre>



<p></p>



<p>When you’re satisfied with the result, merge the weights and upload the new model to Hugging Face:</p>



<pre title="Push the model" class="wp-block-code"><code lang="bash" class="language-bash line-numbers">!axolotl merge-lora /workspace/instruct-lora-1b-ai-endpoints.yml

%cd /workspace/out/llama-3.2-1b-ai-endpoints/merged

!huggingface-cli upload wildagsx/Llama-3.2-1B-Instruct-AI-Endpoints-v0.6 .</code></pre>



<p>ℹ️ <em>You can find all resources to create and run the notebook in the <a href="https://github.com/ovh/public-cloud-examples/tree/main/ai/llm-fine-tune/notebook/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">dedicated folder</a> in the GitHub repository.</em></p>



<h3 class="wp-block-heading">Test the new model</h3>



<p>Once you have pushed your model to Hugging Face, you can once again deploy it with vLLM and AI Deploy to test it ⚡️.</p>



<figure class="wp-block-image aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="474" src="https://blog.ovhcloud.com/wp-content/uploads/2025/07/Screenshot-2025-07-23-at-14.58.02-1024x474.png" alt="" class="wp-image-29459" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/07/Screenshot-2025-07-23-at-14.58.02-1024x474.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/Screenshot-2025-07-23-at-14.58.02-300x139.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/Screenshot-2025-07-23-at-14.58.02-768x356.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/Screenshot-2025-07-23-at-14.58.02-1536x712.png 1536w, https://blog.ovhcloud.com/wp-content/uploads/2025/07/Screenshot-2025-07-23-at-14.58.02-2048x949.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>
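<p>Since vLLM serves an OpenAI-compatible API, querying the deployed model boils down to a standard chat-completions request. Here is a minimal Python sketch of the request body (the URL is a placeholder for your own AI Deploy application, not a real endpoint):</p>



<pre title="Chat-completions request for the deployed model" class="wp-block-code"><code lang="python" class="language-python">import json

# Placeholder endpoint: replace with your own AI Deploy application URL.
url = "https://YOUR-AI-DEPLOY-APP/v1/chat/completions"

# Standard OpenAI-style chat-completions payload for the fine-tuned model.
payload = {
    "model": "wildagsx/Llama-3.2-1B-Instruct-AI-Endpoints-v0.6",
    "messages": [
        {"role": "user",
         "content": "What is OVHcloud AI Endpoints and how to use it?"},
    ],
}
body = json.dumps(payload)
print(body)</code></pre>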



<p>Ta-da! 🥳 Our little Llama model is now an OVHcloud AI Endpoints pro!</p>



<p></p>



<p>Feel free to try out OVHcloud Machine Learning products, and share your thoughts on our Discord server (<em><a href="https://discord.gg/ovhcloud" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">https://discord.gg/ovhcloud</a></em>), see you soon! 👋</p>
<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Ffine-tune-an-llm-with-axolotl-and-ovhcloud-machine-learning-services%2F&amp;action_name=Fine%20tune%20an%20LLM%20with%20Axolotl%20and%20OVHcloud%20Machine%20Learning%20Services&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Use Kilo Code with AI Endpoints and VSCode</title>
		<link>https://blog.ovhcloud.com/use-kilo-code-with-ai-endpoints-and-vscode/</link>
		
		<dc:creator><![CDATA[Stéphane Philippart]]></dc:creator>
		<pubDate>Mon, 30 Jun 2025 08:09:03 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[Tranches de Tech & co]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Endpoints]]></category>
		<category><![CDATA[OVHcloud]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=29289</guid>

					<description><![CDATA[If you want to have more information on AI Endpoints, please read the following blog post. You can also have a look at our previous blog posts on how to use AI Endpoints. In a previous blog post we explained how to use Continue with VSCode to create a code assistant with AI Endpoints. In this blog [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fuse-kilo-code-with-ai-endpoints-and-vscode%2F&amp;action_name=Use%20Kilo%20Code%20with%20AI%20Endpoints%20and%20VSCode&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1024" height="1024" src="https://blog.ovhcloud.com/wp-content/uploads/2025/06/kilo-code.png" alt="a robot doing development stuff" class="wp-image-29293" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/06/kilo-code.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/kilo-code-300x300.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/kilo-code-150x150.png 150w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/kilo-code-768x768.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/kilo-code-70x70.png 70w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>If you want more information on AI Endpoints, please read the following <a href="https://blog.ovhcloud.com/enhance-your-applications-with-ai-endpoints/" data-wpel-link="internal">blog post</a>.<br>You can also have a look at our <a href="https://blog.ovhcloud.com/tag/ai-endpoints/" data-wpel-link="internal">previous blog posts</a> on how to use <a href="https://endpoints.ai.cloud.ovh.net" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints</a>.</p>



<p>In a <a href="https://blog.ovhcloud.com/create-a-code-assistant-with-continue-and-ai-endpoints/" data-wpel-link="internal">previous blog post</a> we explained how to use <a href="https://www.continue.dev/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Continue</a> with <a href="https://code.visualstudio.com/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">VSCode</a> to create a code assistant with AI Endpoints.</p>



<p>In this blog post, we will explain how to use <a href="https://kilocode.ai/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Kilo Code</a> with VSCode to create a powerful coder companion! If you need more information about Kilo Code, please check out the <a href="https://kilocode.ai/docs" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">official Kilo Code documentation</a>.</p>



<h3 class="wp-block-heading">How to use AI Endpoints with Kilo Code?</h3>



<p>The first thing to do is install the extension in VSCode. See the <a href="https://kilocode.ai/install" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">official documentation</a> for how to do that.</p>



<p>Once the extension is installed, you need to configure an external provider. To do this, choose <strong>OVHcloud AI Endpoints</strong> in the Providers tab.</p>



<p>Here are the values to set for the Kilo Code parameters to use it with AI Endpoints:<br> &#8211; API Provider: OVHcloud AI Endpoints<br> &#8211; API Key: &#8230; your API Key 😇<br> &#8211; Model: one of the available models, for instance Qwen2.5-Coder-32B-Instruct (this was one of our coder models at the time of writing; feel free to pick another available model)<br></p>



<p>And that&#8217;s all, you can enjoy the power of Kilo Code with AI Endpoints! 🚀</p>



<figure class="wp-block-video aligncenter"><video height="996" style="aspect-ratio: 1866 / 996;" width="1866" autoplay controls loop src="https://blog.ovhcloud.com/wp-content/uploads/2025/06/Kilocode-demo.mov"></video></figure>



<p>Don’t hesitate to test our new product, <a href="https://endpoints.ai.cloud.ovh.net/" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">AI Endpoints</a>, and give us your feedback.</p>



<p>You have a dedicated Discord channel (#<em>ai-endpoints</em>) on our Discord server (<em><a href="https://discord.gg/ovhcloud" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">https://discord.gg/ovhcloud</a></em>), see you there!</p>
<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fuse-kilo-code-with-ai-endpoints-and-vscode%2F&amp;action_name=Use%20Kilo%20Code%20with%20AI%20Endpoints%20and%20VSCode&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		<enclosure url="https://blog.ovhcloud.com/wp-content/uploads/2025/06/Kilocode-demo.mov" length="3725430" type="video/quicktime" />

			</item>
		<item>
		<title>Model Context Protocol (MCP) with OVHcloud AI Endpoints</title>
		<link>https://blog.ovhcloud.com/model-context-protocol-mcp-with-ovhcloud-ai-endpoints/</link>
		
		<dc:creator><![CDATA[Stéphane Philippart]]></dc:creator>
		<pubDate>Fri, 27 Jun 2025 08:01:19 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[Tranches de Tech & co]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Endpoints]]></category>
		<category><![CDATA[OVHcloud]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=29158</guid>

					<description><![CDATA[If you want to have more information on AI Endpoints, please read the following blog post. You can also have a look at our previous blog posts on how to use AI Endpoints. OVHcloud AI Endpoints allows developers to easily add AI features to their day-to-day developments. In this article, we will explore how to [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fmodel-context-protocol-mcp-with-ovhcloud-ai-endpoints%2F&amp;action_name=Model%20Context%20Protocol%20%28MCP%29%20with%20OVHcloud%20AI%20Endpoints&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="1024" height="1024" src="https://blog.ovhcloud.com/wp-content/uploads/2025/06/generated-image-3.png" alt="A robot using a laptop" class="wp-image-29168" style="width:640px" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/06/generated-image-3.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/generated-image-3-300x300.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/generated-image-3-150x150.png 150w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/generated-image-3-768x768.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/generated-image-3-70x70.png 70w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>If you want more information on AI Endpoints, please read the following <a href="https://blog.ovhcloud.com/enhance-your-applications-with-ai-endpoints/" data-wpel-link="internal">blog post</a>.<br>You can also have a look at our <a href="https://blog.ovhcloud.com/tag/ai-endpoints/" data-wpel-link="internal">previous blog posts</a> on how to use AI Endpoints.</p>



<p>OVHcloud <a href="https://endpoints.ai.cloud.ovh.net/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints</a> allows developers to easily add AI features to their day-to-day developments.</p>



<p>In this article, we will explore how to create a Model Context Protocol (MCP) server and client using <a href="https://quarkus.io/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Quarkus</a> and <a href="https://docs.langchain4j.dev/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">LangChain4J</a> to interact with OVHcloud <a href="https://endpoints.ai.cloud.ovh.net/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints</a>.</p>



<p><em>ℹ️ You can find the full code on <a href="https://github.com/ovh/public-cloud-examples/tree/main/ai/ai-endpoints/mcp-quarkus-langchain4j" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Github</a> ℹ️</em></p>



<h3 class="wp-block-heading"><em>Introduction to Model Context Protocol (MCP)</em></h3>



<p>In a few words, MCP is a protocol that allows your LLM to ask for additional context or data from external sources during the generation process.<br><strong>⚠️ It is not the LLM that calls the external source; the client handles the call and returns the result to the LLM. ⚠️</strong></p>
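<p>To make the flow concrete, here is a minimal sketch in Python (hypothetical names, not a real MCP library): the server exposes tools, the model only emits a request to call one of them, and the client performs the call and hands the result back to the LLM.</p>



<pre title="MCP flow sketch (Python)" class="wp-block-code"><code lang="python" class="language-python"># Tools exposed by the (hypothetical) MCP server, as discovered by the client.
SERVER_TOOLS = {
    "get_user_details": lambda: {"name": "Stéphane", "country": "FR"},
}

def handle_model_turn(model_output):
    """Dispatch one conversation turn on the client side."""
    if model_output.get("type") == "tool_call":
        # The client, not the LLM, executes the tool...
        result = SERVER_TOOLS[model_output["name"]]()
        # ...and hands the result back to the LLM as extra context.
        return {"role": "tool", "content": result}
    return {"role": "assistant", "content": model_output["text"]}

# The model asked for a tool call; the client performs it:
answer = handle_model_turn({"type": "tool_call", "name": "get_user_details"})
print(answer)</code></pre>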



<p>If you want more information about MCP, please refer to the <a href="https://modelcontextprotocol.io/introduction" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">official documentation</a>.</p>



<p>In this blog post, we&#8217;ll explore how to easily create, in Java, an MCP server using Quarkus and a client using LangChain4J.</p>



<h3 class="wp-block-heading"><em>Creating a Server with Quarkus</em></h3>



<p>The goal of this MCP server is to allow the LLM to ask for information about OVHcloud public cloud projects.</p>



<p><em>ℹ️ The code used to call the <a href="https://eu.api.ovh.com/" data-wpel-link="exclude">OVHcloud API</a> is in the GitHub repository and will not be detailed here.</em></p>



<p>Thanks to Quarkus, the only thing you need to do to create an MCP server is define the tools that you want to expose to the LLM.</p>



<pre title="Quarkus code for MCP Server" class="wp-block-code"><code lang="java" class="language-java line-numbers">public class PublicCloudUserTool {

    @RestClient
    OVHcloudMe ovhcloudMe;

    @Tool(description = "Tool to manage the OVHcloud public cloud user.")
    ToolResponse getUserDetails() {
        Long ovhTimestamp = System.currentTimeMillis() / 1000;
        return ToolResponse.success(
                new TextContent(ovhcloudMe.getMe(OVHcloudSignatureHelper.signature("me", ovhTimestamp),
                        Long.toString(ovhTimestamp)).toString()));
    }
}</code></pre>



<p><strong>⚠️ The description is very important as it will be used by the LLM to choose the right tool for the task. ⚠️</strong></p>



<p>At the time of writing, there are two types of MCP transports: <a href="https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#stdio" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">stdio</a> and <a href="https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Streamable HTTP</a>.<br>This blog post uses the Streamable HTTP mode, provided by Quarkus through the <em>quarkus-mcp-server-sse</em> extension.</p>



<p>Run your server with the <em>quarkus dev</em> command. Your MCP server will be available at <em>http://localhost:8080</em>.</p>



<h3 class="wp-block-heading">Using the MCP server with LangChain4J</h3>



<p>You can now use the MCP server with LangChain4J to create a powerful chatbot that interacts with your OVHcloud account!</p>



<pre title="MCP client with LangChain4J" class="wp-block-code"><code lang="java" class="language-java line-numbers">///usr/bin/env jbang "$0" "$@" ; exit $?
//JAVA 24+
//PREVIEW
//DEPS dev.langchain4j:langchain4j-mcp:1.0.1-beta6 dev.langchain4j:langchain4j:1.0.1 dev.langchain4j:langchain4j-mistral-ai:1.0.1-beta6 


import dev.langchain4j.mcp.McpToolProvider;
import dev.langchain4j.mcp.client.DefaultMcpClient;
import dev.langchain4j.mcp.client.McpClient;
import dev.langchain4j.mcp.client.transport.McpTransport;
import dev.langchain4j.mcp.client.transport.http.HttpMcpTransport;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.mistralai.MistralAiChatModel;
import dev.langchain4j.service.AiServices;

// Simple chatbot definition with AI Services from LangChain4J
public interface Bot {
    String chat(String prompt);
}

void main() {
    // Mistral model from OVHcloud AI Endpoints
    ChatModel chatModel = MistralAiChatModel.builder()
            .apiKey(System.getenv("OVH_AI_ENDPOINTS_ACCESS_TOKEN"))
            .baseUrl(System.getenv("OVH_AI_ENDPOINTS_MODEL_URL"))
            .modelName(System.getenv("OVH_AI_ENDPOINTS_MODEL_NAME"))
            .logRequests(false)
            .logResponses(false)
            .build();

    // Configure the MCP server to use
    McpTransport transport = new HttpMcpTransport.Builder()
            // https://xxxx/mcp/sse
            .sseUrl(System.getenv("MCP_SERVER_URL"))
            .logRequests(false)
            .logResponses(false)
            .build();

    // Create the MCP client for the given MCP server
    McpClient mcpClient = new DefaultMcpClient.Builder()
            .transport(transport)
            .build();

    // Configure the tools list for the LLM
    McpToolProvider toolProvider = McpToolProvider.builder()
            .mcpClients(mcpClient)
            .build();

    // Create the chatbot with the given LLM and tools list
    Bot bot = AiServices.builder(Bot.class)
            .chatModel(chatModel)
            .toolProvider(toolProvider)
            .build();

    // Play with the chatbot 🤖
    String response = bot.chat("Can I have some details about my OVHcloud account?");
    System.out.println("RESPONSE: " + response);

}
</code></pre>



<p>If you run the code, you can see your MCP server and client in action:</p>



<pre title="Call model with MCP Server tools" class="wp-block-code"><code lang="bash" class="language-bash line-numbers">$ jbang SimpleMCPClient.java

DEBUG -- Connected to SSE channel at http://127.0.0.1:8080/mcp/sse
DEBUG -- Received the server's POST URL: http://127.0.0.1:8080/mcp/messages/ZDdkZTEyYWMtNzczMC00NDVkLWFhMjktZWI1MGI0YjVjNzFh
DEBUG -- MCP server capabilities: 
{"capabilities":
  {"resources":
    {"listChanged":true},
    "completions":{},
    "logging":{},
    "tools":
      {"listChanged":true},
      "prompts":
        {"listChanged":true}
  },
  "serverInfo":
    {"version":"1.0.0-SNAPSHOT",
    "name":"ovh-mcp"
    },
  "protocolVersion":"2024-11-05"
}

RESPONSE:  Here are the details for your OVHcloud account:
- First name: Stéphane
- Last name: Philippart
- City: XXX
- Country: FR
- Language: fr_FR

You can refer to these details when interacting with the OVHcloud platform or support.
</code></pre>



<p></p>



<p>You have a dedicated Discord channel (#<em>ai-endpoints</em>) on our Discord server (<em><a href="https://discord.gg/ovhcloud" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">https://discord.gg/ovhcloud</a></em>), see you there!</p>
<img loading="lazy" decoding="async" src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fmodel-context-protocol-mcp-with-ovhcloud-ai-endpoints%2F&amp;action_name=Model%20Context%20Protocol%20%28MCP%29%20with%20OVHcloud%20AI%20Endpoints&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Using Function Calling with OVHcloud AI Endpoints</title>
		<link>https://blog.ovhcloud.com/using-function-calling-with-ovhcloud-ai-endpoints/</link>
		
		<dc:creator><![CDATA[Stéphane Philippart]]></dc:creator>
		<pubDate>Tue, 24 Jun 2025 07:03:45 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[Tranches de Tech & co]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Endpoints]]></category>
		<category><![CDATA[OVHcloud]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=29145</guid>

					<description><![CDATA[If you want to have more information on AI Endpoints, please read the following blog post. You can also have a look at our previous blog posts on how to use AI Endpoints. OVHcloud AI Endpoints allows developers to easily add AI features to their day-to-day developments. Stable Diffusion is a powerful artificial intelligence model [&#8230;]<img src="//blog.ovhcloud.com/wp-content/plugins/matomo/app/matomo.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.ovhcloud.com%2Fusing-function-calling-with-ovhcloud-ai-endpoints%2F&amp;action_name=Using%20Function%20Calling%20with%20OVHcloud%20AI%20Endpoints&amp;urlref=https%3A%2F%2Fblog.ovhcloud.com%2Ffeed%2F" style="border:0;width:0;height:0" width="0" height="0" alt="" />]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="1024" height="1024" src="https://blog.ovhcloud.com/wp-content/uploads/2025/06/generated-image-2.png" alt="" class="wp-image-29156" style="width:640px" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/06/generated-image-2.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/generated-image-2-300x300.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/generated-image-2-150x150.png 150w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/generated-image-2-768x768.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/generated-image-2-70x70.png 70w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>If you want more information on AI Endpoints, please read the following <a href="https://blog.ovhcloud.com/enhance-your-applications-with-ai-endpoints/" data-wpel-link="internal">blog post</a>.<br>You can also have a look at our <a href="https://blog.ovhcloud.com/tag/ai-endpoints/" data-wpel-link="internal">previous blog posts</a> on how to use AI Endpoints.</p>



<p>OVHcloud <a href="https://endpoints.ai.cloud.ovh.net/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints</a> allows developers to easily add AI features to their day-to-day developments.</p>



<p>Stable Diffusion is a powerful artificial intelligence model to generate images from text descriptions.<br>You can use it, thanks to AI Endpoints, simply by calling <a href="https://endpoints.ai.cloud.ovh.net/models/a363a190-ff7b-4c38-a1b9-147f9aae9328" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">the endpoint</a> with a prompt.</p>



<p>However, creating a good prompt for Stable Diffusion can be challenging.</p>



<p>In this blog post, we will show you how to optimize your prompts using Function Calling and AI Endpoints.</p>



<p>OVHcloud <a href="https://endpoints.ai.cloud.ovh.net/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints</a> provides a <a href="https://endpoints.ai.cloud.ovh.net/catalog" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">lot of models</a>, but for this example we will use models from the Large Language Models (LLM) and Image Generation families.</p>



<p>The following examples use <a href="https://docs.langchain4j.dev/intro/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">LangChain4J</a> as the framework for the LLM calls.</p>



<p><em>ℹ️ You can find the full code on <a href="https://github.com/ovh/public-cloud-examples/tree/main/ai/ai-endpoints/function-calling-langchain4j" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Github</a> ℹ️</em></p>



<h3 class="wp-block-heading"><em>Introduction to Function Calling</em></h3>



<p>Function calling refers to the ability of a language model or AI system to request the invocation of pre-defined functions or tasks, such as data processing, calculations, or external API calls, in response to user input or prompts.<br>This enables the AI system to perform more complex and dynamic tasks, and to leverage external knowledge and services to generate more accurate and informative responses.</p>
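<p>Schematically, a function-calling round trip looks like this (a Python sketch with hypothetical names, rather than the LangChain4J code used later): the client advertises a tool schema whose description guides the model, the model answers with the function name and JSON-encoded arguments, and the client executes the function.</p>



<pre title="Function-calling round trip sketch (Python)" class="wp-block-code"><code lang="python" class="language-python">import json

# OpenAI-style tool schema; the description is what the model relies on
# to decide when this tool should be called.
TOOL_SPEC = {
    "type": "function",
    "function": {
        "name": "generate_image",
        "description": "Create an image given a prompt and a negative prompt.",
        "parameters": {
            "type": "object",
            "properties": {
                "prompt": {"type": "string"},
                "negative_prompt": {"type": "string"},
            },
            "required": ["prompt", "negative_prompt"],
        },
    },
}

def generate_image(prompt, negative_prompt):
    # Placeholder for the real image-generation API call.
    return f"image for {prompt!r}, avoiding {negative_prompt!r}"

# The model never executes anything: it only returns the function name and
# JSON-encoded arguments, and the client performs the call.
model_answer = {
    "name": "generate_image",
    "arguments": '{"prompt": "a cute red cat", "negative_prompt": "blurry"}',
}
result = generate_image(**json.loads(model_answer["arguments"]))
print(result)</code></pre>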



<p>In the context of image generation, function calling can be used to enhance the quality of the prompts by optimizing them thanks to an external tool based on an LLM.</p>



<p>To create our application we will use <a href="https://docs.langchain4j.dev/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">LangChain4J</a> to simplify the integration of the AI models and the function calling mechanism.</p>



<h3 class="wp-block-heading"><em><em>Tool creation</em></em></h3>



<p>To use the function calling mechanism, we need to define a tool.<br>In our example, the goal of the tool is to call the Stable Diffusion API to generate an image.</p>



<p><strong>⚠️ It is not the model itself that calls the tool, but the client that invokes it on the model&#8217;s behalf. ⚠️</strong></p>



<pre title="Image generation tool" class="wp-block-code"><code lang="java" class="language-java line-numbers">    @Tool("""
    Tool to create an image with Stable Diffusion XL given a prompt and a negative prompt.
    """)
    void generateImage(@P("Prompt that explains the image") String prompt, @P("Negative prompt that explains what the image must not contain") String negativePrompt) throws IOException, InterruptedException {
        System.out.println("Prompt: " + prompt);
        System.out.println("Negative prompt: " + negativePrompt);

        HttpRequest httpRequest = HttpRequest.newBuilder()
                .uri(URI.create(System.getenv("OVH_AI_ENDPOINTS_SD_URL")))
                .POST(HttpRequest.BodyPublishers.ofString("""
                        {"prompt": "%s", 
                         "negative_prompt": "%s"}
                        """.formatted(prompt, negativePrompt)))
                .header("accept", "application/octet-stream")
                .header("Content-Type", "application/json")
                .header("Authorization", "Bearer " + System.getenv("OVH_AI_ENDPOINTS_SDXL_ACCESS_TOKEN"))
                .build();

        HttpResponse&lt;byte[]&gt; response = HttpClient.newHttpClient()
                .send(httpRequest, HttpResponse.BodyHandlers.ofByteArray());

        System.out.println("SDXL status code: " + response.statusCode());
        Files.write(Path.of("generated-image.jpeg"), response.body());
    }</code></pre>



<p>⚠️ One of the main points in helping the LLM choose the right tool is to provide a clear and comprehensive description. ⚠️</p>



<p>Once the tool is ready, let&#8217;s tell the model that it can use it!</p>



<h3 class="wp-block-heading"><em>Optimizing the model with a tool</em></h3>



<p>First we create a simple chatbot.</p>



<pre title="Chatbot with AI Services" class="wp-block-code"><code lang="java" class="language-java line-numbers">/// Chatbot definition.
/// The goal of the chatbot is to build a powerful prompt for Stable Diffusion XL.
interface ChatBot {
    @SystemMessage("""
            You are an expert in using the Stable Diffusion XL model.
            The user explains in natural language what kind of image he wants.
            You must do the following steps:
              - Understand the user's request.
              - Generate the two kinds of prompts for Stable Diffusion: the prompt and the negative prompt.
              - The prompts must be in English, detailed, and optimized for the Stable Diffusion XL model.
              - Once and only once you have these two prompts, call the tool with the two prompts.
            If asked to create an image, you MUST call the `generateImage` function.
            """)
    @UserMessage("Create an image with Stable Diffusion XL following this description: {{userMessage}}")
    String chat(String userMessage);
}</code></pre>



<p>It&#8217;s not mandatory to create such a detailed system message, but it helps the model choose the tool when needed.</p>



<p>After this, we assemble all the pieces.</p>



<pre title="Chatbot with tool calling" class="wp-block-code"><code lang="java" class="language-java line-numbers">void main() throws Exception {

    // Main chatbot configuration, choose one of the available models from the AI Endpoints catalog (https://endpoints.ai.cloud.ovh.net/catalog)
    ChatModel chatModel = MistralAiChatModel.builder()
            .apiKey(System.getenv("OVH_AI_ENDPOINTS_ACCESS_TOKEN"))
            .baseUrl(System.getenv("OVH_AI_ENDPOINTS_MODEL_URL"))
            .modelName(System.getenv("OVH_AI_ENDPOINTS_MODEL_NAME"))
            .logRequests(false)
            .logResponses(false)
            // To have more deterministic outputs, set temperature to 0.
            .temperature(0.0)
            .build();

    // Add memory to fine-tune the SDXL prompt.
    ChatMemory chatMemory = MessageWindowChatMemory.withMaxMessages(10);

    // Build the chatbot thanks to the LangChain4J AI Services mode
    ChatBot chatBot = AiServices.builder(ChatBot.class)
            .chatModel(chatModel)
            .tools(new ImageGenTools())
            .chatMemory(chatMemory)
            .build();

    // Start the conversation loop (enter "exit" to quit)
    String userInput = "";
    Scanner scanner = new Scanner(System.in);
    while (true) {
        System.out.print("Enter your message: ");
        userInput = scanner.nextLine();
        if (userInput.equalsIgnoreCase("exit")) break;
        System.out.println("Response: " + chatBot.chat(userInput));
    }
    scanner.close();
}</code></pre>



<p>ℹ️ We use a loop so we can ask the model to refine the image generation parameters based on the previous response. ℹ️</p>
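<p>The chat memory behaves like a sliding window over the most recent messages, in the spirit of LangChain4J&#8217;s <em>MessageWindowChatMemory</em>. Here is a minimal Python sketch of the idea (not the actual implementation):</p>



<pre title="Windowed chat memory sketch (Python)" class="wp-block-code"><code lang="python" class="language-python">from collections import deque

class WindowChatMemory:
    """Keep only the last max_messages messages, in the spirit of
    MessageWindowChatMemory.withMaxMessages(10) in LangChain4J."""
    def __init__(self, max_messages=10):
        # deque with maxlen silently drops the oldest entry when full
        self.messages = deque(maxlen=max_messages)

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})

    def history(self):
        return list(self.messages)

memory = WindowChatMemory(max_messages=3)
for i in range(5):
    memory.add("user", f"message {i}")
print([m["content"] for m in memory.history()])  # only the last 3 remain</code></pre>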



<p>And that&#8217;s it!<br>It&#8217;s time to test our Stable Diffusion optimizer.</p>



<pre title="Output for chatbot calling" class="wp-block-code"><code lang="bash" class="language-bash">$ jbang ImageGeneration.java


Enter your message: Un chat roux mignon photo réaliste

Prompt: A high-quality, realistic image of a cute red cat, with expressive eyes, soft fur, and a playful pose. 
The cat should be well-lit, with a warm and inviting atmosphere.

Negative prompt: No text, no watermarks, no low-quality images, no cartoon-style, no blurry or pixelated images, 
no cats with missing body parts, no cats with unnatural colors, no cats in unrealistic settings, no cats with human features, 
no cats with inappropriate content.

Response: I have successfully generated the image for you. The image should be a high-quality, 
realistic image of a cute red cat, with expressive eyes, soft fur, and a playful pose. The cat should be well-lit, 
with a warm and inviting atmosphere. If you have any issues or need further assistance, please let me know.

Enter your message: exit</code></pre>



<p>ℹ️ As you can see, the model translates the prompt 😊</p>



<p>Here is the result of the prompt:</p>



<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="1024" height="1024" src="https://blog.ovhcloud.com/wp-content/uploads/2025/06/generated-image.png" alt="A cute red cat generated by Stable Diffusion" class="wp-image-29149" style="width:640px" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/06/generated-image.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/generated-image-300x300.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/generated-image-150x150.png 150w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/generated-image-768x768.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/06/generated-image-70x70.png 70w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>






<p>You have a dedicated Discord channel (#<em>ai-endpoints</em>) on our Discord server (<em><a href="https://discord.gg/ovhcloud" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">https://discord.gg/ovhcloud</a></em>). See you there!</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Using Structured Output with OVHcloud AI Endpoints</title>
		<link>https://blog.ovhcloud.com/using-structured-output-with-ovhcloud-ai-endpoints/</link>
		
		<dc:creator><![CDATA[Stéphane Philippart]]></dc:creator>
		<pubDate>Fri, 23 May 2025 12:14:54 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[Tranches de Tech & co]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Endpoints]]></category>
		<category><![CDATA[OVHcloud]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=28985</guid>

					<description><![CDATA[If you want to have more information on AI Endpoints, please read the following blog post. You can also have a look at our previous blog posts on how to use AI Endpoints. You can find the full code example in the GitHub repository. In this article, we will explore how to use structured output with OVHcloud [&#8230;]]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="1024" height="1024" src="https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-2.webp" alt="A parrot on a computer screen
" class="wp-image-28999" style="width:750px;height:auto" srcset="https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-2.webp 1024w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-2-300x300.webp 300w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-2-150x150.webp 150w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-2-768x768.webp 768w, https://blog.ovhcloud.com/wp-content/uploads/2025/05/image-2-70x70.webp 70w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>If you want to have more information on <a href="https://endpoints.ai.cloud.ovh.net/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints</a>, please read the <a href="https://blog.ovhcloud.com/enhance-your-applications-with-ai-endpoints/" data-wpel-link="internal">following blog post</a>.<br>You can also have a look at our <a href="https://blog.ovhcloud.com/tag/ai-endpoints/" data-wpel-link="internal">previous blog posts</a> on how to use AI Endpoints.</em></p>



<p><em>You can find the full code example in the <a href="https://github.com/ovh/public-cloud-examples/tree/main/ai/ai-endpoints/structured-output-langchain4j" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">GitHub repository</a>.</em></p>



<p>In this article, we will explore how to use structured output with OVHcloud <a href="https://endpoints.ai.cloud.ovh.net/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints</a>.</p>



<h3 class="wp-block-heading"><em>Introduction to Structured Output</em></h3>



<p>Structured output allows you to constrain the model&#8217;s responses to a format that is easy for machines to interpret and process.<br>We use the LangChain4J library to interact with OVHcloud <a href="https://endpoints.ai.cloud.ovh.net/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints</a>.<br>Here is a code excerpt that shows how to define a structured output format for the language model&#8217;s responses:</p>



<pre title="Json schema definition" class="wp-block-code"><code lang="java" class="language-java line-numbers">ResponseFormat responseFormat = ResponseFormat.builder()
         .type(ResponseFormatType.JSON)
         .jsonSchema(JsonSchema.builder()
            .name("Person")
            .rootElement(JsonObjectSchema.builder()
               .addStringProperty("name")
               .addIntegerProperty("age")
               .addNumberProperty("height")
               .addBooleanProperty("married")
               .required("name", "age", "height", "married")
            .build())
         .build())
.build();</code></pre>



<p>In this example, we define a JSON output format with a schema that specifies the name, age, height, and married properties as required.</p>



<h3 class="wp-block-heading"><em>Configure the model to use</em></h3>



<p>This example uses the Mistral AI model hosted on OVHcloud <a href="https://endpoints.ai.cloud.ovh.net/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints</a>.<br>To configure the model, you need to set the API key, base URL, and model name as environment variables.<br>Feel free to use another model; see the <a href="https://endpoints.ai.cloud.ovh.net/catalog" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints catalog</a>.</p>



<p><em>You can find your access token, model URL, and model name in the OVHcloud <a href="https://endpoints.ai.cloud.ovh.net/models/8b5793fb-89a1-484f-b691-ae45793d6ade" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints model dashboard</a>.</em></p>
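<p>As a reminder, here is one way to set these environment variables in a Linux or macOS shell. The values below are placeholders; replace them with your own credentials from the dashboard:</p>



<pre title="Setting the environment variables" class="wp-block-code"><code lang="bash" class="language-bash"># Replace the placeholder values with your own credentials,
# taken from the AI Endpoints model dashboard.
export OVH_AI_ENDPOINTS_ACCESS_TOKEN="&lt;your-access-token&gt;"
export OVH_AI_ENDPOINTS_MODEL_URL="&lt;your-model-base-url&gt;"
export OVH_AI_ENDPOINTS_MODEL_NAME="&lt;your-model-name&gt;"</code></pre>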



<pre title="Model definition" class="wp-block-code"><code lang="java" class="language-java line-numbers">ChatModel chatModel = MistralAiChatModel.builder()
        .apiKey(System.getenv("OVH_AI_ENDPOINTS_ACCESS_TOKEN"))
        .baseUrl(System.getenv("OVH_AI_ENDPOINTS_MODEL_URL"))
        .modelName(System.getenv("OVH_AI_ENDPOINTS_MODEL_NAME"))
        .logRequests(false)
        .logResponses(false)
.build();
</code></pre>






<h3 class="wp-block-heading"><em>Calling the language model</em></h3>



<p>Thanks to the LLM&#8217;s JSON mode, the response from the language model is received as a JSON string:</p>



<pre title="Model call with JSON mode" class="wp-block-code"><code lang="java" class="language-java line-numbers">UserMessage userMessage = UserMessage.from("""
        John is 42 years old.
        He stands 1.75 meters tall.
        Currently unmarried.
        """);

ChatRequest chatRequest = ChatRequest.builder()
        .responseFormat(responseFormat)
        .messages(userMessage)
        .build();

ChatResponse chatResponse = chatModel.chat(chatRequest);

String output = chatResponse.aiMessage().text();
System.out.println("Response: \n" + output); 


// Person is a simple record: record Person(String name, int age, double height, boolean married) {}
Person person = new ObjectMapper().readValue(output, Person.class);
System.out.println(person); 
</code></pre>






<h3 class="wp-block-heading">The full source code</h3>



<pre title="Full code source" class="wp-block-code"><code lang="java" class="language-java line-numbers">///usr/bin/env jbang "$0" "$@" ; exit $?
//JAVA 21+
//PREVIEW
//DEPS dev.langchain4j:langchain4j:1.0.1 dev.langchain4j:langchain4j-mistral-ai:1.0.1-beta6

import com.fasterxml.jackson.databind.ObjectMapper;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.request.ResponseFormat;
import dev.langchain4j.model.chat.request.ResponseFormatType;
import dev.langchain4j.model.chat.request.json.JsonObjectSchema;
import dev.langchain4j.model.chat.request.json.JsonSchema;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.mistralai.MistralAiChatModel;
import dev.langchain4j.model.chat.ChatModel;

record Person(String name, int age, double height, boolean married) {
}

void main() throws Exception {
    ResponseFormat responseFormat = ResponseFormat.builder()
            .type(ResponseFormatType.JSON)
            .jsonSchema(JsonSchema.builder()
                    .name("Person")
                    .rootElement(JsonObjectSchema.builder()
                            .addStringProperty("name")
                            .addIntegerProperty("age")
                            .addNumberProperty("height")
                            .addBooleanProperty("married")
                            .required("name", "age", "height", "married")
                            .build())
                    .build())
            .build();

    UserMessage userMessage = UserMessage.from("""
            John is 42 years old.
            He stands 1.75 meters tall.
            Currently unmarried.
            """);

    ChatRequest chatRequest = ChatRequest.builder()
            .responseFormat(responseFormat)
            .messages(userMessage)
            .build();

    ChatModel chatModel = MistralAiChatModel.builder()
            .apiKey(System.getenv("OVH_AI_ENDPOINTS_ACCESS_TOKEN"))
            .baseUrl(System.getenv("OVH_AI_ENDPOINTS_MODEL_URL"))
            .modelName(System.getenv("OVH_AI_ENDPOINTS_MODEL_NAME"))
            .logRequests(false)
            .logResponses(false)
            .build();

    ChatResponse chatResponse = chatModel.chat(chatRequest);

    System.out.println("Prompt: \n" + userMessage.singleText());
    String output = chatResponse.aiMessage().text();
    System.out.println("Response: \n" + output); 

    Person person = new ObjectMapper().readValue(output, Person.class);
    System.out.println(person); 
}</code></pre>






<h3 class="wp-block-heading"><em>Running the application</em></h3>



<pre title="Running the application" class="wp-block-code"><code lang="bash" class="language-bash line-numbers">jbang HelloWorld.java
[jbang] Building jar for HelloWorld.java...

Prompt: 
John is 42 years old.
He stands 1.75 meters tall.
Currently unmarried.

Response: 
{"age": 42, "height": 1.75, "married": false, "name": "John"}
Person[name=John, age=42, height=1.75, married=false]</code></pre>



<p><em>This code example uses <a href="https://www.jbang.dev/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">JBang</a>, a Java-based tool for creating and running Java programs as scripts.<br>For more information on JBang, please refer to the <a href="https://www.jbang.dev/documentation/guide/latest/index.html" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">JBang documentation</a>.</em></p>



<p>In this article, we have seen how to use structured output with OVHcloud <a href="https://endpoints.ai.cloud.ovh.net/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">AI Endpoints</a> with <a href="https://docs.langchain4j.dev/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">LangChain4J</a>.</p>



<p>You have a <a href="https://discord.gg/ovhcloud" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">dedicated Discord</a> channel (#<em>ai-endpoints</em>) on our Discord server. See you there!</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
