<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>tensorflow Archives - OVHcloud Blog</title>
	<atom:link href="https://blog.ovhcloud.com/tag/tensorflow/feed/" rel="self" type="application/rss+xml" />
	<link>https://blog.ovhcloud.com/tag/tensorflow/</link>
	<description>Innovation for Freedom</description>
	<lastBuildDate>Fri, 03 Mar 2023 16:40:33 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://blog.ovhcloud.com/wp-content/uploads/2019/07/cropped-cropped-nouveau-logo-ovh-rebranding-32x32.gif</url>
	<title>tensorflow Archives - OVHcloud Blog</title>
	<link>https://blog.ovhcloud.com/tag/tensorflow/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Deploy a custom Docker image for Data Science project &#8211; Gradio sketch recognition app (Part 1)</title>
		<link>https://blog.ovhcloud.com/deploy-a-custom-docker-image-for-data-science-project-gradio-sketch-recognition-app-part-1/</link>
		
		<dc:creator><![CDATA[Eléa Petton]]></dc:creator>
		<pubDate>Tue, 20 Sep 2022 14:30:11 +0000</pubDate>
				<category><![CDATA[OVHcloud Engineering]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Apps]]></category>
		<category><![CDATA[AI Solutions]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Docker]]></category>
		<category><![CDATA[gradio]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[OVHcloud]]></category>
		<category><![CDATA[Public Cloud]]></category>
		<category><![CDATA[tensorflow]]></category>
		<guid isPermaLink="false">https://blog.ovhcloud.com/?p=23056</guid>

					<description><![CDATA[A guide to deploy a custom Docker image for a Gradio app with AI Deploy. When creating code for a Data Science project, you probably want it to be as portable as possible. In other words, it can be run as many times as you like, even on different machines. Unfortunately, it is often the [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>A guide to deploying a custom Docker image for a <a href="https://gradio.app/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Gradio</a> app with <strong>AI Deploy</strong>.</em></p>



<figure class="wp-block-image alignfull size-large"><img fetchpriority="high" decoding="async" width="1024" height="814" src="https://blog.ovhcloud.com/wp-content/uploads/2022/07/gradio-blog-2-1024x814.jpeg" alt="Deploy a custom Docker image for Data Science project - Gradio sketch recognition app (Part 1)" class="wp-image-23192" srcset="https://blog.ovhcloud.com/wp-content/uploads/2022/07/gradio-blog-2-1024x814.jpeg 1024w, https://blog.ovhcloud.com/wp-content/uploads/2022/07/gradio-blog-2-300x238.jpeg 300w, https://blog.ovhcloud.com/wp-content/uploads/2022/07/gradio-blog-2-768x610.jpeg 768w, https://blog.ovhcloud.com/wp-content/uploads/2022/07/gradio-blog-2-1536x1221.jpeg 1536w, https://blog.ovhcloud.com/wp-content/uploads/2022/07/gradio-blog-2.jpeg 1573w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>When creating code for a <strong>Data Science project</strong>, you probably want it to be as portable as possible. In other words, it should run as many times as you like, even on different machines.</p>



<p>Unfortunately, it often happens that Data Science code works fine locally on one machine but throws errors at runtime on another. This can be due to different versions of the libraries installed on the host machine.</p>



<p>To deal with this problem, you can use <a href="https://www.docker.com/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Docker</a>.</p>



<p><strong>The article is organized as follows:</strong></p>



<ul class="wp-block-list"><li>Objectives</li><li>Concepts</li><li>Build the Gradio app with Python</li><li>Containerize your app with Docker</li><li>Launch the app with AI Deploy</li></ul>



<p><em>All the code for this blogpost is available in our dedicated <a href="https://github.com/ovh/ai-training-examples/tree/main/apps/gradio/sketch-recognition" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">GitHub repository</a>. You can test it with OVHcloud <strong>AI Deploy</strong> tool, please refer to the <a href="https://docs.ovh.com/gb/en/publiccloud/ai/deploy/tuto-gradio-sketch-recognition/" data-wpel-link="exclude">documentation</a> to boot it up.</em></p>



<h2 class="wp-block-heading">Objectives</h2>



<p>In this article, you will learn how to develop your first Gradio sketch recognition app based on an existing ML model.</p>



<p>Once your app is up and running locally, it will be a matter of containerizing it, then deploying the custom Docker image with AI Deploy.</p>



<figure class="wp-block-image aligncenter size-large"><img decoding="async" width="1024" height="513" src="https://blog.ovhcloud.com/wp-content/uploads/2022/07/gradio-objectives-1024x513.jpeg" alt="Objectives to create a Gradio app" class="wp-image-23189" srcset="https://blog.ovhcloud.com/wp-content/uploads/2022/07/gradio-objectives-1024x513.jpeg 1024w, https://blog.ovhcloud.com/wp-content/uploads/2022/07/gradio-objectives-300x150.jpeg 300w, https://blog.ovhcloud.com/wp-content/uploads/2022/07/gradio-objectives-768x385.jpeg 768w, https://blog.ovhcloud.com/wp-content/uploads/2022/07/gradio-objectives-1536x770.jpeg 1536w, https://blog.ovhcloud.com/wp-content/uploads/2022/07/gradio-objectives.jpeg 1620w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<h2 class="wp-block-heading">Concepts</h2>



<p><strong>In Artificial Intelligence, you probably hear about Computer Vision, but do you know what it is?</strong></p>



<p>Computer Vision is a branch of AI that aims to enable computers to interpret visual data (images for example) to extract information.</p>



<p>There are different tasks in computer vision:</p>



<ul class="wp-block-list"><li>Image classification</li><li>Object detection</li><li>Instance Segmentation</li></ul>



<figure class="wp-block-image aligncenter size-large is-resized"><img decoding="async" src="https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-computer-vision-1024x576.jpeg" alt="Computer vision principle" class="wp-image-23126" width="848" height="477" srcset="https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-computer-vision-1024x576.jpeg 1024w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-computer-vision-300x169.jpeg 300w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-computer-vision-768x432.jpeg 768w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-computer-vision-1536x864.jpeg 1536w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-computer-vision.jpeg 1620w" sizes="(max-width: 848px) 100vw, 848px" /></figure>



<p>Today we are interested in <strong>image recognition</strong> and more specifically in <strong>sketch recognition</strong> using a dataset of handwritten digits.</p>



<h3 class="wp-block-heading">MNIST dataset</h3>



<p>MNIST is a dataset developed by <a href="https://en.wikipedia.org/wiki/Yann_LeCun" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Yann LeCun</a>, <a href="https://en.wikipedia.org/wiki/Corinna_Cortes" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Corinna Cortes</a> and <a href="https://chrisburges.net/bio/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Christopher Burges</a> to evaluate <strong>Machine Learning</strong> models for <strong>handwritten digits</strong> classification.</p>



<p>The dataset was constructed from a number of digitized document datasets available from the <strong>National Institute of Standards and Technology </strong>(NIST).</p>



<p>The images of numbers are <strong>digitized</strong>, <strong>normalized</strong> and <strong>centered</strong>. This allows the developer to focus on machine learning with very little data cleaning.</p>



<p>Each image is a square of <strong>28</strong> by <strong>28</strong> pixels. The dataset is split in two with <strong>60,000 images</strong> for model training and <strong>10,000 images</strong> for testing it.</p>



<p>This is a digit recognition task to recognize <strong>10 digits</strong>, from 0 to 9.</p>
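<p><em>As a quick sanity check, the expected input shape can be sketched in plain NumPy. This is a hypothetical snippet, not part of the original notebook: it only illustrates the <strong>(1, 28, 28, 1)</strong> tensor the model will expect.</em></p>

```python
import numpy as np

# Simulate one 28x28 greyscale sketch with pixel values in [0, 255]
img = np.random.randint(0, 256, size=(28, 28), dtype=np.uint8)

# Scale to [0, 1] and add the batch and channel dimensions,
# matching the (1, 28, 28, 1) input shape the model expects
x = (img.astype("float32") / 255.0).reshape(1, 28, 28, 1)

print(x.shape)
```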



<figure class="wp-block-image aligncenter size-full"><img loading="lazy" decoding="async" width="266" height="264" src="https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-mnist.jpeg" alt="MNIST dataset" class="wp-image-23125" srcset="https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-mnist.jpeg 266w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-mnist-150x150.jpeg 150w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-mnist-70x70.jpeg 70w" sizes="auto, (max-width: 266px) 100vw, 266px" /></figure>



<p>❗<code><strong>A model to classify images of handwritten digits was trained in a previous tutorial, in notebook form, which you can find and test <a href="https://github.com/ovh/ai-training-examples/blob/main/notebooks/computer-vision/image-classification/tensorflow/weights-and-biases/notebook_Weights_and_Biases_MNIST.ipynb" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">here</a>.</strong></code></p>



<p>This model is registered in an OVHcloud <a href="https://docs.ovh.com/gb/en/publiccloud/ai/cli/data-cli/" data-wpel-link="exclude">Object Storage container</a>. </p>



<h3 class="wp-block-heading">Sketch recognition</h3>



<p><strong>Have you ever heard of sketch recognition in AI?</strong></p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p><strong>Sketch recognition</strong>&nbsp;is the automated recognition of <strong>hand-drawn&nbsp;diagrams</strong>&nbsp;by a&nbsp;computer.&nbsp;Research in sketch recognition lies at the crossroads of&nbsp;<a href="https://en.wikipedia.org/wiki/Artificial_intelligence" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer"><strong>artificial intelligence</strong></a>&nbsp;and&nbsp;<strong>human–computer</strong> interaction. Recognition algorithms usually are&nbsp;gesture-based, appearance-based,&nbsp;geometry-based, or a combination thereof.</p><cite>Wikipedia</cite></blockquote>



<figure class="wp-block-image aligncenter size-large is-resized"><img loading="lazy" decoding="async" src="https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-sketch-recognition-1-1024x673.jpeg" alt="AI for Sketch Recognition " class="wp-image-23138" width="648" height="424"/></figure>



<p>In this article, <strong>Gradio</strong> will allow you to create your first sketch recognition app.</p>



<h3 class="wp-block-heading">Gradio</h3>



<p><strong>What is Gradio?</strong></p>



<p><a href="https://gradio.app/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Gradio</a> allows you to create and share <strong>Machine Learning apps</strong>.</p>



<p>It&#8217;s a quick way to demonstrate your Machine Learning model with a user-friendly web interface so that anyone can use it.</p>



<p>Gradio offers the ability to quickly create a <strong>sketch recognition interface</strong> by specifying &#8220;<code>sketchpad</code>&#8221; as an entry.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="303" src="https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-interface-1024x303.png" alt="Gradio app drawing space " class="wp-image-23114" srcset="https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-interface-1024x303.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-interface-300x89.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-interface-768x227.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-interface.png 1118w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>To make this app accessible, you need to containerize it using <strong>Docker</strong>.</p>



<h3 class="wp-block-heading">Docker</h3>



<p>The <a href="https://www.docker.com/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Docker</a> platform allows you to build, run and manage isolated applications. The principle is to build an application that contains not only the written code but also all the context needed to run it: the libraries and their versions, for example.</p>



<p>When you wrap your application with all its context, you build a Docker image, which can be saved in your local repository or in the Docker Hub.</p>



<p>To get started with Docker, please, check this <a href="https://www.docker.com/get-started" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">documentation</a>.</p>



<p>To build a Docker image, you will define 2 elements:</p>



<ul class="wp-block-list"><li>the application code (<em>Gradio sketch recognition app</em>)</li><li>the <a href="https://docs.docker.com/engine/reference/builder/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Dockerfile</a></li></ul>



<p>In the next steps, you will see how to develop the Python code for your app, but also how to write the Dockerfile.</p>



<p>Finally, you will see how to deploy your custom docker image with <strong>OVHcloud AI Deploy</strong> tool.</p>



<h3 class="wp-block-heading">AI Deploy</h3>



<p><strong>AI Deploy</strong> enables AI models and managed applications to be started via Docker containers. </p>



<p>To know more about AI Deploy, please refer to this <a href="https://docs.ovh.com/gb/en/publiccloud/ai/deploy/getting-started/" data-wpel-link="exclude">documentation</a>.</p>



<h2 class="wp-block-heading">Build the Gradio app with Python</h2>



<h3 class="wp-block-heading">Import Python dependencies </h3>



<p>The first step is to import the <strong>Python libraries</strong> needed to run the Gradio app.</p>



<ul class="wp-block-list"><li>Gradio</li><li>TensorFlow</li><li>OpenCV</li></ul>



<pre class="wp-block-code"><code class="">import gradio as gr
import tensorflow as tf
import cv2</code></pre>



<h3 class="wp-block-heading">Define fixed elements of the app</h3>



<p>With <strong>Gradio</strong>, it is possible to add a title to your app to give information on its purpose.</p>



<pre class="wp-block-code"><code class="">title = "Welcome on your first sketch recognition app!"</code></pre>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="71" src="https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-title-1024x71.png" alt="Gradio app title" class="wp-image-23109" srcset="https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-title-1024x71.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-title-300x21.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-title-768x53.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-title.png 1118w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Then, you can describe your app by adding an image and a &#8220;<strong>description</strong>&#8221;.<br><br>To display and centre an image or text, an <strong>HTML tag</strong> is ideal 💡!</p>



<pre class="wp-block-code"><code class="">head = (
  "&lt;center&gt;"
  "&lt;img src='file/mnist-classes.png' width=400&gt;"
  "The robot was trained to classify numbers (from 0 to 9). To test it, write your number in the space provided."
  "&lt;/center&gt;"
)</code></pre>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="235" src="https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-description-1024x235.png" alt="Gradio app description" class="wp-image-23111" srcset="https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-description-1024x235.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-description-300x69.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-description-768x177.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-description.png 1118w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>It is also possible to share a useful link (source code, documentation, …). You can do it with the Gradio attribute named &#8220;<strong>article</strong>&#8220;.</p>



<pre class="wp-block-code"><code class="">ref = "Find the whole code [here](https://github.com/ovh/ai-training-examples/tree/main/apps/gradio/sketch-recognition)."</code></pre>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="54" src="https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-reference-1024x54.png" alt="Gradio app references" class="wp-image-23112" srcset="https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-reference-1024x54.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-reference-300x16.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-reference-768x41.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-reference.png 1118w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>For this application, you have to set some variables.</p>



<ul class="wp-block-list"><li><strong>The image size</strong></li></ul>



<p>The image size is set to <strong>28</strong>. Indeed, the model expects a <strong>28&#215;28</strong> image as input.</p>



<ul class="wp-block-list"><li><strong>The classes list</strong></li></ul>



<p>The classes list is composed of ten strings corresponding to the numbers 0 to 9 written in full.</p>



<pre class="wp-block-code"><code class="">img_size = 28
labels = ["zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"]</code></pre>
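<p><em>Given this list, mapping a model output back to a digit name is a one-liner. A minimal sketch with a made-up probability vector (the real one comes from <code>model.predict</code>):</em></p>

```python
labels = ["zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"]

# Hypothetical probability vector, standing in for model.predict(img)[0]
preds = [0.01, 0.02, 0.05, 0.80, 0.03, 0.02, 0.02, 0.02, 0.02, 0.01]

# The predicted class is the label at the index of the highest probability
best = labels[preds.index(max(preds))]
print(best)  # three
```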



<p>Once the image size has been set and the list of classes defined, the next step is to load the AI model.</p>



<h3 class="wp-block-heading">Load TensorFlow model</h3>



<p>This is a <strong>TensorFlow model</strong> saved and exported beforehand as a <code>model.h5</code> file.</p>



<p>Indeed, Keras provides a basic saving format using the <strong>HDF5</strong> standard.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p><strong>Hierarchical Data Format</strong>&nbsp;(HDF) is a set of&nbsp;file formats&nbsp;(HDF4,&nbsp;<strong>HDF5</strong>) designed to store and organize large amounts of data.</p><cite>Wikipedia</cite></blockquote>



<p>In a previous notebook, you exported the model to an <a href="https://www.ovhcloud.com/fr/public-cloud/object-storage/" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">OVHcloud Object Storage</a> container. If you want to test the notebook, please refer to the <a href="https://github.com/ovh/ai-training-examples/blob/main/notebooks/computer-vision/image-classification/tensorflow/weights-and-biases/notebook_Weights_and_Biases_MNIST.ipynb" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">GitHub repository</a>.</p>



<p><code>model.save('model/sketch_recognition_numbers_model.h5')</code></p>



<p>To load this model again and use it for inference, without having to re-train it, you have to use the <code>load_model</code> function from Keras.</p>



<pre class="wp-block-code"><code class="">model = tf.keras.models.load_model("model/sketch_recognition_numbers_model.h5")</code></pre>



<p>After defining the different parameters and loading the model, you can define the function that will predict what you have drawn.</p>



<h3 class="wp-block-heading">Define the prediction function</h3>



<p>This function consists of several steps.</p>



<figure class="wp-block-image aligncenter size-large is-resized"><img loading="lazy" decoding="async" src="https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-predict-2-1024x365.jpeg" alt="Gradio app return the results top 3" class="wp-image-23139" width="892" height="318" srcset="https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-predict-2-1024x365.jpeg 1024w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-predict-2-300x107.jpeg 300w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-predict-2-768x274.jpeg 768w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-predict-2-1536x548.jpeg 1536w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-predict-2.jpeg 1620w" sizes="auto, (max-width: 892px) 100vw, 892px" /></figure>



<pre class="wp-block-code"><code class="">def predict(img):

  img = cv2.resize(img, (img_size, img_size))
  img = img.reshape(1, img_size, img_size, 1)

  preds = model.predict(img)[0]

  return {label: float(pred) for label, pred in zip(labels, preds)}


label = gr.outputs.Label(num_top_classes=3)</code></pre>
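<p><em>The dictionary returned by <code>predict</code> is what the <code>Label</code> output consumes: Gradio sorts the scores and displays the top <code>num_top_classes</code> entries. That selection can be sketched without Gradio, using hypothetical scores:</em></p>

```python
labels = ["zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"]
preds = [0.01, 0.02, 0.05, 0.62, 0.03, 0.02, 0.18, 0.02, 0.04, 0.01]

# Same structure as the predict() return value
scores = {label: float(pred) for label, pred in zip(labels, preds)}

# Keep the 3 highest-scoring labels, best first (what num_top_classes=3 shows)
top3 = sorted(scores, key=scores.get, reverse=True)[:3]
print(top3)
```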



<h3 class="wp-block-heading">Launch Gradio interface</h3>



<p>Now you need to build the interface using a Python class, named <a href="https://gradio.app/docs/#interface" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Interface</a>, previously defined by Gradio.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>The <strong>Interface class</strong> is a high-level abstraction that allows you to create a web-based demo around a machine learning model or arbitrary Python function by specifying: <br>(1) the function <br>(2) the desired input components<br>(3) desired output components.</p><cite>Gradio</cite></blockquote>



<pre class="wp-block-code"><code class="">interface = gr.Interface(fn=predict, inputs="sketchpad", outputs=label, title=title, description=head, article=ref)</code></pre>



<p>Finally, you have to launch the Gradio app with the &#8220;<strong>launch</strong>&#8221; method. It starts a simple web server that serves the demo.</p>



<pre class="wp-block-code"><code class="">interface.launch(server_name="0.0.0.0", server_port=8080)</code></pre>



<p>Then, you can test your app locally at the following address: <strong>http://localhost:8080/</strong></p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="663" src="https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-overview-1024x663.png" alt="Global overview of Gradio app" class="wp-image-23113" srcset="https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-overview-1024x663.png 1024w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-overview-300x194.png 300w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-overview-768x497.png 768w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-overview.png 1118w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Your app works locally? Congratulations 🎉!<br><br>Now it&#8217;s time to move on to containerization!</p>



<figure class="wp-block-image aligncenter size-large is-resized"><img loading="lazy" decoding="async" src="https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-docker-1024x615.jpeg" alt="Docker - Gradio sketch recognition app
" class="wp-image-23120" width="671" height="403" srcset="https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-docker-1024x615.jpeg 1024w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-docker-300x180.jpeg 300w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-docker-768x461.jpeg 768w, https://blog.ovhcloud.com/wp-content/uploads/2022/06/gradio-docker.jpeg 1168w" sizes="auto, (max-width: 671px) 100vw, 671px" /></figure>



<h2 class="wp-block-heading">Containerize your app with Docker</h2>



<p>First of all, you have to create the file listing the different Python modules to be installed, with their corresponding versions.</p>



<h3 class="wp-block-heading">Create the requirements.txt file</h3>



<p>The <code>requirements.txt</code> file will allow us to write all the modules needed to make our application work.</p>



<pre class="wp-block-code"><code class="">gradio==3.0.10
tensorflow==2.9.1
opencv-python-headless==4.6.0.66</code></pre>



<p>This file will be useful when writing the <code>Dockerfile</code>.</p>



<h3 class="wp-block-heading">Write the Dockerfile</h3>



<p>Your <code>Dockerfile</code> should start with the <code>FROM</code> instruction indicating the parent image to use. In our case we choose to start from a classic Python image.<br><br>For this Gradio app, you can use version <strong>3.7</strong> of Python.</p>



<pre class="wp-block-code"><code class="">FROM python:3.7</code></pre>



<p>Next, you have to set the working directory and add the <code>requirements.txt</code> file.</p>



<p><code><strong>❗ Here you must be in the /workspace directory. This is the base directory for launching an OVHcloud AI Deploy app.</strong></code></p>



<pre class="wp-block-code"><code class="">WORKDIR /workspace
ADD requirements.txt /workspace/requirements.txt</code></pre>



<p>Install the Python modules listed in the <code>requirements.txt</code> file with a <code>pip install</code> command:</p>



<pre class="wp-block-code"><code class="">RUN pip install -r requirements.txt</code></pre>



<p>Now, you have to add your <code>Python file</code>, as well as the image present in the description of your app, in the <code>workspace</code>.</p>



<pre class="wp-block-code"><code class="">ADD app.py mnist-classes.png /workspace/</code></pre>



<p>Then, you can give the correct access rights to the OVHcloud user (<code>42420:42420</code>).</p>



<pre class="wp-block-code"><code class="">RUN chown -R 42420:42420 /workspace
ENV HOME=/workspace</code></pre>



<p>Finally, you have to define your default launching command to start the application.</p>



<pre class="wp-block-code"><code class="">CMD [ "python3" , "/workspace/app.py" ]</code></pre>



<p>Once your <code>Dockerfile</code> is defined, you will be able to build your custom docker image.</p>
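<p><em>Put end to end, the snippets above give the following complete <code>Dockerfile</code>:</em></p>

```dockerfile
FROM python:3.7

WORKDIR /workspace
ADD requirements.txt /workspace/requirements.txt
RUN pip install -r requirements.txt

ADD app.py mnist-classes.png /workspace/

RUN chown -R 42420:42420 /workspace
ENV HOME=/workspace

CMD [ "python3" , "/workspace/app.py" ]
```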



<h3 class="wp-block-heading">Build the Docker image from the Dockerfile</h3>



<p>First, you can launch the following command from the <code>Dockerfile</code> directory to build your application image.</p>



<pre class="wp-block-code"><code class="">docker build . -t gradio_app:latest</code></pre>



<p><code>⚠️</code> <strong><code>The dot . argument indicates that your build context (the location of the Dockerfile and other required files) is the current directory.</code></strong></p>



<p><code>⚠️</code> <code><strong>The -t argument allows you to choose the identifier to give to your image. Usually image identifiers are composed of a name and a version tag &lt;name&gt;:&lt;version&gt;. For this example we chose gradio_app:latest.</strong></code></p>



<h3 class="wp-block-heading">Test it locally</h3>



<p><code><strong>❗ If you are testing your app locally, you can download your model (sketch_recognition_numbers_model.h5), then add it to the /workspace directory.</strong></code></p>



<p>You can do it via the Dockerfile with the following line:</p>



<p><code><strong>ADD sketch_recognition_numbers_model.h5 /workspace/</strong></code></p>



<p>Now, you can run the following <strong>Docker command</strong> to launch your application locally on your computer.</p>



<pre class="wp-block-code"><code class="">docker run --rm -it -p 8080:8080 --user=42420:42420 gradio_app:latest</code></pre>



<p><code>⚠️</code> <code><strong>The -p 8080:8080 argument indicates that you want to execute a port redirection from the port 8080 of your local machine into the port 8080 of the Docker container.</strong></code></p>



<p><code><strong>⚠️ Don't forget the --user=42420:42420 argument if you want to simulate the exact same behaviour that will occur on AI Deploy. It executes the Docker container as the specific OVHcloud user (user 42420:42420).</strong></code></p>



<p>Once started, your application should be available on <strong>http://localhost:8080</strong>.<br><br>Your Docker image seems to work? Good job 👍!<br><br>It&#8217;s time to push it and deploy it!</p>



<h3 class="wp-block-heading">Push the image into the shared registry</h3>



<p>❗ The shared registry of AI Deploy should only be used for testing purposes. Please consider attaching your own Docker registry. More information about this can be found <a href="https://docs.ovh.com/asia/en/publiccloud/ai/training/add-private-registry/" data-wpel-link="exclude">here</a>.</p>



<p>Then, you have to find the address of your <code>shared registry</code> by launching this command.</p>



<pre class="wp-block-code"><code class="">ovhai registry list</code></pre>



<p>Next, log in on the shared registry with your usual <code>OpenStack</code> credentials.</p>



<pre class="wp-block-code"><code class="">docker login -u &lt;user&gt; -p &lt;password&gt; &lt;shared-registry-address&gt;</code></pre>



<p>To finish, you need to push the created image into the shared registry.</p>



<pre class="wp-block-code"><code class="">docker tag gradio_app:latest &lt;shared-registry-address&gt;/gradio_app:latest
docker push &lt;shared-registry-address&gt;/gradio_app:latest</code></pre>



<p>Once you have pushed your custom docker image into the shared registry, you are ready to launch your app 🚀!</p>



<h2 class="wp-block-heading">Launch the app with AI Deploy</h2>



<p>The following command starts a new job running your Gradio application.</p>



<pre class="wp-block-code"><code class="">ovhai app run \
      --cpu 1 \
      --volume &lt;my_saved_model&gt;@&lt;region&gt;/:/workspace/model:RO \
      &lt;shared-registry-address&gt;/gradio_app:latest</code></pre>



<h3 class="wp-block-heading">Choose the compute resources</h3>



<p>First, you can choose the number of CPUs or GPUs for your app.</p>



<p><code><strong>--cpu 1</strong></code> indicates that we request 1 CPU for that app.</p>



<p>If you want, you can also launch this app with one or more <strong>GPUs</strong>.</p>



<h3 class="wp-block-heading">Attach Object Storage container</h3>



<p>Then, you need to attach <strong>1 volume</strong> to this app. It contains the model that you trained before in part <em>&#8220;Save and export the model for future inference&#8221;</em> of the <a href="https://github.com/ovh/ai-training-examples/blob/main/notebooks/computer-vision/image-classification/tensorflow/weights-and-biases/notebook_Weights_and_Biases_MNIST.ipynb" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">notebook</a>.</p>



<p><code><strong>--volume &lt;my_saved_model&gt;@&lt;region&gt;/:/workspace/model:RO</strong></code>&nbsp;is the volume attached for using your&nbsp;<strong>pretrained model</strong>.</p>



<p>This volume is read-only (<code>RO</code>) because you just need to use the model and not make any changes to this Object Storage container.</p>



<h3 class="wp-block-heading">Make the app public</h3>



<p>Finally, if you want your app to be accessible without authentication, add the&nbsp;<code><strong>--unsecure-http</strong></code>&nbsp;flag to the run command.</p>
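<p>Putting it all together, the full command with public access could look like this (the <code>--unsecure-http</code> flag is simply added to the run command shown earlier).</p>

<pre class="wp-block-code"><code class="">ovhai app run \
      --cpu 1 \
      --unsecure-http \
      --volume &lt;my_saved_model&gt;@&lt;region&gt;/:/workspace/model:RO \
      &lt;shared-registry-address&gt;/gradio_app:latest</code></pre>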



<figure class="wp-block-video aligncenter"><video height="970" style="aspect-ratio: 1914 / 970;" width="1914" controls src="https://blog.ovhcloud.com/wp-content/uploads/2022/07/gradio-video-final-app.mov"></video></figure>



<h2 class="wp-block-heading">Conclusion</h2>



<p>Well done 🎉! You have learned how to build your <strong>own Docker image</strong> for a dedicated <strong>sketch recognition app</strong>! </p>



<p>You have also been able to deploy this app thanks to <strong>OVHcloud&#8217;s AI Deploy</strong> tool.</p>



<p><em>In a second article, you will see how it is possible to deploy a Data Science project for <strong>interactive data visualization&nbsp;and prediction</strong>.</em></p>



<h3 class="wp-block-heading" id="want-to-find-out-more">Want to find out more?</h3>



<h5 class="wp-block-heading"><strong>Notebook</strong></h5>



<p>You want to access the notebook? Refer to the <a href="https://github.com/ovh/ai-training-examples/blob/main/notebooks/computer-vision/image-classification/tensorflow/weights-and-biases/notebook_Weights_and_Biases_MNIST.ipynb" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">GitHub repository</a>.<br><br>To launch and test this notebook with&nbsp;<strong>AI Notebooks</strong>, please refer to our <a href="https://docs.ovh.com/gb/en/publiccloud/ai/notebooks/tuto-weights-and-biases/" data-wpel-link="exclude">documentation</a>.</p>



<h5 class="wp-block-heading"><strong>App</strong></h5>



<p>You want to access the full code to create the Gradio app? Refer to the&nbsp;<a href="https://github.com/ovh/ai-training-examples/tree/main/apps/gradio/sketch-recognition" target="_blank" rel="noreferrer noopener nofollow external" data-wpel-link="external">GitHub repository</a>.<br><br>To launch and test this app with&nbsp;<strong>AI Deploy</strong>, please refer to&nbsp;our <a href="https://docs.ovh.com/gb/en/publiccloud/ai/deploy/tuto-gradio-sketch-recognition/" data-wpel-link="exclude">documentation</a>.</p>



<h2 class="wp-block-heading">References</h2>



<ul class="wp-block-list"><li><a href="https://towardsdatascience.com/how-to-run-a-data-science-project-in-a-docker-container-2ab1a3baa889" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">How to Run a Data Science Project in a Docker Container</a></li><li><a href="https://github.com/gradio-app/hub-sketch-recognition" data-wpel-link="external" target="_blank" rel="nofollow external noopener noreferrer">Sketch Recognition on Gradio</a></li></ul>
]]></content:encoded>
					
		
		<enclosure url="https://blog.ovhcloud.com/wp-content/uploads/2022/07/gradio-video-final-app.mov" length="2582723" type="video/quicktime" />

			</item>
	</channel>
</rss>
