Deploy a model in production (TensorFlow only)

In this tutorial, we will see how to deploy and monitor your model using Picsell.ia, from your saved model files to an inference-ready production model.

We will assume that you have already trained and exported a TensorFlow model. If not, you can follow this short tutorial to learn how to do it with Picsell.ia.

When you export a trained model using TensorFlow, you end up with a folder that should look a bit like this:
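For reference, a TensorFlow SavedModel export typically produces a layout like the following (the folder name may differ in your setup):

```
saved_model/
├── saved_model.pb
└── variables/
    ├── variables.data-00000-of-00001
    └── variables.index
```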

Create the model

There are two ways to export a model to Picsell.ia:

  • From an Experiment

  • From raw files

We will cover both cases during this tutorial.

From an Experiment

If you have performed your training in the scope of an experiment, you can store assets that will be linked to that Experiment. We will see how to properly store your trained model so you can deploy it later.

Start by initializing your Client (replace the tokens and the experiment name with yours):

from picsellia.client import Client

api_token = ""
project_token = ""

experiment = Client.Experiment(api_token, project_token)
exp = experiment.checkout('test-experiment')

Then, the only command you have to run is the following:

exp.store('model-latest', 'saved_model', zip=True)

The first argument is the name given to your file, used to retrieve the asset later or display it on the platform. It HAS TO be named model-latest to be recognized as a trained-model file by Picsell.ia (see the documentation about namespace for more information).

The second argument is the path to the folder containing the .pb file and the variables folder.

Here we set zip, the third argument, to True (which compresses your folder into a .zip file) because that is the format we need to run inference later using our engine.
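Under the hood, compressing a SavedModel folder amounts to something like the sketch below, using Python's standard library (the SDK does this for you when `zip=True`; the folder contents here are dummy placeholders for illustration):

```python
import os
import shutil

# Create a dummy export folder mimicking a SavedModel layout (illustrative only).
os.makedirs("saved_model/variables", exist_ok=True)
open("saved_model/saved_model.pb", "wb").close()

# Compress the folder into saved_model.zip, as zip=True does before uploading.
archive_path = shutil.make_archive("saved_model", "zip", root_dir="saved_model")
```

`shutil.make_archive` returns the path of the archive it created, so you can check what was produced before uploading.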

We also need to know which classes your model has been trained on. To send the labelmap, you can proceed with the following method:

labels = {
    '1': 'car',
    '2': 'person',
    '3': 'bus'
}

exp.log(name='labelmap', data=labels, type='labelmap')
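If your classes live in a TensorFlow Object Detection `label_map.pbtxt` file, a minimal conversion to the dict format shown above could look like this (a regex-based sketch that assumes the common layout with `id` before `name`; `pbtxt_to_labelmap` is our helper, not part of the SDK):

```python
import re

# Example label_map.pbtxt content in the common TF Object Detection layout.
pbtxt = """
item {
  id: 1
  name: 'car'
}
item {
  id: 2
  name: 'person'
}
item {
  id: 3
  name: 'bus'
}
"""

def pbtxt_to_labelmap(text):
    """Return a labelmap dict with string keys, as in the example above."""
    items = re.findall(r"item\s*{[^}]*?id:\s*(\d+)[^}]*?name:\s*'([^']+)'", text)
    return {item_id: name for item_id, name in items}

labels = pbtxt_to_labelmap(pbtxt)
```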

To convert your experiment into a model instance, we added a step that we call publishing. This way, you don't confuse the outputs of intermediate experiments, which might not be good, with the experiment that contains your final assets.

From the SDK

To do this, run the following command:

exp.publish('my-new-model')

The first argument is the name under which we want to create our model (it will be used to retrieve or display it on Picsell.ia).

From Picsell.ia

You can also publish your model directly from the platform. If you have an experiment ready to be published (meaning it has a file named 'model-latest'), go to your experiment on the platform and you should see something like this:

If you click on Export as model, it will publish your experiment as a model, just like the publish method above.

From raw files

If you are not working in the scope of an experiment, you can create a model instance directly by using the Network object of the SDK.

Start by initializing your Client (use your own token) and then create a Network:

from picsellia.client import Client

api_token = ""

network = Client.Network(api_token)
net = network.create('my-new-model', type='detection')

Remember that you have to set the type of your model, it must be one of the following:

  • detection

  • segmentation

  • classification

We will add more model types in the future, but these are the ones currently supported for inference.
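Since create will only accept these three values, you can guard against typos before calling it; a small sketch (the `SUPPORTED_TYPES` set and `check_model_type` helper are ours, not part of the SDK):

```python
# Model types currently supported for inference, per the list above.
SUPPORTED_TYPES = {"detection", "segmentation", "classification"}

def check_model_type(model_type):
    """Fail early with a clear message instead of at create() time."""
    if model_type not in SUPPORTED_TYPES:
        raise ValueError(
            f"Unsupported model type '{model_type}'; "
            f"expected one of {sorted(SUPPORTED_TYPES)}"
        )
    return model_type

check_model_type("detection")  # passes silently
```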

Then run the following command to upload your files:

net.store(name='model-latest', file_name="saved_model", zip=True)

The first argument is the name given to your file, used to retrieve the asset later or display it on the platform. It HAS TO be named model-latest to be recognized as a trained-model file by Picsell.ia (see the documentation about namespace for more information).

The second argument is the path to the folder containing the .pb file and the variables folder.

Here we set zip, the third argument, to True (which compresses your folder into a .zip file) because that is the format we need to run inference later using our engine.

We also need to know which classes your model has been trained on. To send the labelmap, you can proceed with the following method:

labels = {
    '1': 'car',
    '2': 'person',
    '3': 'bus'
}
net.labels(labels)

And that's it! Your model has been created; now let's move on to the next steps.

Deploy the model

Now, if we go to the All models page in Picsell.ia, we should see our brand new model.

And you should see it on the Deployment page too.

Now click on the deploy icon on the right.

You will see a message telling you that we are deploying your model, it should not take long before you see it succeed.

OK, now we have a fully functional model in production, congratulations! 🥳

You can now click on the code icon to see a code snippet telling you how to perform inference with your model.

You can copy/paste this snippet, replacing the token variable with your own API token and the file path with the file you want to run inference on.
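The exact snippet is provided on the platform; as a rough idea of what the client side involves, here is a sketch that base64-encodes an image for an HTTP request. Note that the payload keys, helper name, and endpoint URL below are hypothetical illustrations, not the real Picsell.ia API — always copy the real snippet from the platform.

```python
import base64

def build_inference_payload(api_token, image_path):
    """Base64-encode an image and bundle it with the API token.

    Hypothetical payload shape for illustration only; the platform's
    code snippet shows the actual request format.
    """
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    return {"api_token": api_token, "image": image_b64}

# payload = build_inference_payload("<your-api-token>", "test.jpg")
# requests.post("https://<inference-endpoint>", json=payload)  # hypothetical URL
```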

If you click on the zoom icon, you will have access to the details about your hosted model such as its latency, number of API calls and other stats.

That's it, you now have a fully functional model deployed with Picsell.ia! See you in another tutorial 😃
