Train a custom Object Detection model

Here, we will see how to perform transfer learning from a pre-trained model on your own dataset in the easiest way possible!

TL;DR

If you want to skip the explanations and start training your model right away, we have prepared a notebook just for you that you can open in Colab.

Click here to access the notebook.

Installation

First, we need to install the picsellia package to get started.

pip install picsellia

Then install one of the following packages, depending on whether you want to work with Tensorflow 1 or Tensorflow 2.

pip install picsellia_tf1 # Tensorflow 1
pip install picsellia_tf2 # Tensorflow 2

For this tutorial, we will use the picsellia_tf2 package, but the methods are nearly the same in the picsellia_tf1 package. You can check out the reference to learn more about how each module works.
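If you are unsure which Tensorflow major version your environment runs, a quick check tells you which package to pick. This is a plain-Python sketch for illustration (`pick_picsellia_package` is a hypothetical helper, not part of the Picsell.ia API):

```python
def pick_picsellia_package(tf_version: str) -> str:
    """Map a Tensorflow version string to the matching Picsell.ia training package."""
    major = int(tf_version.split(".")[0])
    return "picsellia_tf1" if major < 2 else "picsellia_tf2"

# With tensorflow installed you would pass tf.__version__ here.
print(pick_picsellia_package("2.4.1"))   # picsellia_tf2
print(pick_picsellia_package("1.15.0"))  # picsellia_tf1
```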

Now that we are all set, let's get started!

Initialize our client

In order for your code to interact with the platform, you need to start with these lines.

from picsellia.client import Client
from picsellia_tf2 import pxl_utils
from picsellia_tf2 import pxl_tf

api_token = '4d388e237d10b8a19a93517ffbe7ea32ee7f4787'
project_token = '9c68b4ae-691d-4c3a-9972-8fe49ffb2799'

experiment = Client.Experiment(api_token=api_token, project_token=project_token)

Ok, those first lines might need some explanation.

The Client is the main Picsell.ia class; it will be used in every tutorial. You can check the reference here. The Client is for general use, which is why we initialize the Experiment subclass here so we can focus on the experiment part of Picsell.ia.

pxl_utils and pxl_tf are the only two modules we need to perform training with Tensorflow.

You can find your api_token in your profile on the platform, see this.

This tutorial assumes that you have created your first project, please refer to this tutorial if it's not the case.

You should see a greeting message in your console.

Hi Pierre-Nicolas, welcome back.

Now that we are connected to Picsell.ia, let's get down to business.

Initialize an experiment

Checkout an existing experiment

If you have already created an experiment on the platform and chosen a base model, you might want to retrieve it to launch your training instead of creating a new experiment. To do this, call the following method:

exp = experiment.checkout('my-new-model', tree=True, with_file=True)

Let's explain the parameters:

  • 'my-new-model' is obviously the name of the experiment you created earlier

  • tree, setting this parameter to True will create the folder structure needed to store and organize all the files produced by training (records, checkpoints, config, saved_model ...)

  • with_file, set this parameter to True to fetch to your machine all the files stored under your experiment (checkpoints, config...)

Create a new experiment

If you want to log and store everything you create or observe during training, you have to create what we call an experiment.

Check out this page to learn more about the experiment system.

exp = experiment.create(
    name='my_new_model',
    description='Transfer learning with an efficientdet d0 network',
    source='Pierre-Nicolas/efficientdet-d0-coco17-tpu-32'
    )

Now we can see that our experiment has been created and that we have retrieved all the assets from the efficientdet network to start our training:

{
 'id': '3fb04130-1718-4b73-a850-e58dda3d9cfe',
 'date_created': '2020-12-14T18:18:38.010144Z',
 'last_update': '2020-12-14T18:18:38.009873Z',
 'owner': {'username': 'Pierre-Nicolas'},
 'project': {'project_id': '9c68b4ae-691d-4c3a-9972-8fe49ffb2799',
  'project_name': 'project 21'},
 'name': 'my_new_model',
 'description': 'Transfer learning with an efficientdet d0 network',
 'status': 'started',
 'logging': None,
 'files': [
  {'id': 31,
   'date_created': '2020-12-14T18:18:38.017601Z',
   'last_update': '2020-12-14T18:18:38.017435Z',
   'large': False,
   'name': 'config',
   'object_name': 'efficientdet_d0_coco17_tpu-32/checkpoint/pipeline.config'},
  ...
  {'id': 34,
   'date_created': '2020-12-14T18:18:38.052955Z',
   'last_update': '2020-12-14T18:18:38.052806Z',
   'large': False,
   'name': 'checkpoint-index-latest',
   'object_name': 'efficientdet_d0_coco17_tpu-32/checkpoint/ckpt-0.index'}
 ],
 'data': [
   {
    'id': 8,
    'date_created': '2020-12-14T18:18:38.064439Z',
    'last_update': '2020-12-14T18:18:38.064268Z',
    'name': 'labelmap',
    'data': {
     '1': 'person',
     '2': 'bicycle',
     ...
     '89': 'hair drier',
     '90': 'toothbrush'
    }
   }
  ]
}
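For illustration, here is a minimal sketch of pulling the labelmap out of the `data` list in the payload shown above (`get_labelmap` is a hypothetical helper, and `response_data` abridges the real payload to two classes):

```python
# The 'data' list from the experiment payload above, abridged to two classes.
response_data = [
    {'name': 'labelmap', 'data': {'1': 'person', '2': 'bicycle'}},
]

def get_labelmap(data_entries):
    """Return the labelmap from the experiment's 'data' list, keyed by int class id."""
    for entry in data_entries:
        if entry['name'] == 'labelmap':
            return {int(k): v for k, v in entry['data'].items()}
    return {}

print(get_labelmap(response_data))  # {1: 'person', 2: 'bicycle'}
```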

Prepare for training

Now that we have created our experiment, there are a few more steps before we can train. I'm sure you've guessed them:

  • Download annotations

  • Download images

  • Perform a train/test split

  • Create tfrecords (data placeholder optimized for training)

  • Edit the config file (needed to tune our experiment)
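Before using the built-in helpers, it may help to see what the train/test split step boils down to. This is a minimal plain-Python sketch of the idea, not the Picsell.ia implementation:

```python
import random

def split_train_eval(image_ids, eval_ratio=0.2, seed=42):
    """Shuffle a list of image ids and split it into train and eval subsets."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = list(image_ids)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - eval_ratio))
    return shuffled[:cut], shuffled[cut:]

train_list, eval_list = split_train_eval(range(100))
print(len(train_list), len(eval_list))  # 80 20
```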

Here are the functions that will perform those actions; if you need more information, please check the reference here.

experiment.dl_annotations()
experiment.dl_pictures()
experiment.generate_labelmap()
experiment.log('labelmap', experiment.label_map, 'labelmap', replace=True)
experiment.train_test_split()
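To give an idea of what `generate_labelmap()` produces, here is a sketch of writing a Tensorflow Object Detection labelmap (`.pbtxt`) from an `{id: name}` dict. `write_labelmap` is an illustration of the file format, not a Picsell.ia function:

```python
def write_labelmap(label_map, path):
    """Write an {id: name} dict as a TF Object Detection labelmap .pbtxt file."""
    with open(path, "w") as f:
        for idx, name in sorted(label_map.items(), key=lambda kv: int(kv[0])):
            f.write("item {\n  id: %d\n  name: '%s'\n}\n" % (int(idx), name))

# Produces two 'item { ... }' entries, one per class.
write_labelmap({'1': 'person', '2': 'bicycle'}, 'label_map.pbtxt')
```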

Now let's fetch the default parameters for our model. If you want to change some, don't forget to log them back afterward!

parameters = experiment.get_data('parameters')
print(parameters)
{ 
    'steps': 10000,
    'learning_rate': 1e-3,
    'annotation_type': 'rectangle',
    'batch_size': 8
}
parameters['steps'] = 50000
experiment.log('parameters', parameters, 'table', replace=True)
pxl_utils.create_record_files(
        dict_annotations=experiment.dict_annotations, 
        train_list=experiment.train_list, 
        train_list_id=experiment.train_list_id, 
        eval_list=experiment.eval_list, 
        eval_list_id=experiment.eval_list_id,
        label_path=experiment.label_path, 
        record_dir=experiment.record_dir, 
        tfExample_generator=pxl_tf.tf_vars_generator, 
        annotation_type=parameters['annotation_type']
        )
pxl_utils.edit_config(
        model_selected=experiment.model_selected, 
        config_dir=experiment.config_dir,
        record_dir=experiment.record_dir, 
        label_map_path=experiment.label_path, 
        num_steps=parameters['steps'],
        batch_size=parameters['batch_size'],
        learning_rate=parameters['learning_rate'],
        annotation_type=parameters['annotation_type'],
        eval_number=5,
        incremental_or_transfer='transfer'  # 'transfer' to fine-tune from the base checkpoint
        )

Train

Now we can call this simple method to run the training:

pxl_utils.train(
        ckpt_dir=experiment.checkpoint_dir, 
        config_dir=experiment.config_dir
    )

Evaluate

Call this method to run an evaluation on all our test images:

pxl_utils.evaluate(
    experiment.metrics_dir, 
    experiment.config_dir, 
    experiment.checkpoint_dir
    )        
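Detection evaluation is typically built on intersection-over-union (IoU) between predicted and ground-truth boxes. Here is a minimal sketch of that computation for intuition (not the evaluation code itself):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as [ymin, xmin, ymax, xmax]."""
    ymin = max(box_a[0], box_b[0])
    xmin = max(box_a[1], box_b[1])
    ymax = min(box_a[2], box_b[2])
    xmax = min(box_a[3], box_b[3])
    inter = max(0.0, ymax - ymin) * max(0.0, xmax - xmin)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

print(iou([0, 0, 10, 10], [0, 0, 10, 10]))  # 1.0
print(iou([0, 0, 10, 10], [5, 5, 15, 15]))  # 25 / 175, roughly 0.143
```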

Export the model

Now we can export our model so we can load it later to perform inference:

pxl_utils.export_graph(
                       ckpt_dir=experiment.checkpoint_dir, 
                       exported_model_dir=experiment.exported_model_dir, 
                       config_dir=experiment.config_dir
                       )

Run inference

Let's try our model on a few images to check the results:

pxl_utils.infer(
     experiment.record_dir, 
     exported_model_dir=experiment.exported_model_dir, 
     label_map_path=experiment.label_path, 
     results_dir=experiment.results_dir, 
     min_score_thresh=0.5,  # discard detections scoring below this confidence
     num_infer=5, 
     from_tfrecords=True, 
     disp=False
     )
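The `min_score_thresh` argument drops low-confidence detections before the results are drawn. Conceptually it amounts to this sketch (`filter_detections` is illustrative, not the library code):

```python
def filter_detections(boxes, scores, min_score_thresh=0.5):
    """Keep only the (box, score) pairs whose confidence reaches the threshold."""
    return [(box, score)
            for box, score in zip(boxes, scores)
            if score >= min_score_thresh]

kept = filter_detections([[0, 0, 1, 1], [0, 0, 2, 2]], [0.9, 0.3])
print(len(kept))  # 1
```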

Log and store

Nice job! We have performed a complete training in just a few steps. Now we will log our metrics and logs to the platform and store our file assets so we can restore them in our next iteration:

metrics = pxl_utils.tf_events_to_dict(experiment.metrics_dir, 'eval')
logs = pxl_utils.tf_events_to_dict(experiment.checkpoint_dir, 'train')

experiment.store('model-latest')
experiment.store('config')
experiment.store('checkpoint-data-latest')
experiment.store('checkpoint-index-latest')
experiment.log('logs', logs)
experiment.log('metrics', metrics)
