Experiment Tracking

The second main feature of Picsellia is the ability to track all the experiments that lead you to the best version of your models. Let's discover it!

When training AI models, you will run a lot of different experiments with different pre-trained models, dataset versions, sets of hyperparameters, evaluation techniques, and so on.

This is a very iterative process that takes a lot of time: once your training script is ready, you will spend a long time playing with all the different variables your final model depends on, launching your script over and over until you are satisfied with its performance.

One big challenge you will face as a data scientist, AI researcher, or engineer is that you must be able to store all the important metrics and files for each experiment, and then compare your experiments to find the best one.

This whole process is called Experiment Tracking, and you can do it seamlessly on Picsellia! In this section, you will learn how 😊

TL;DR

If you want to start experimenting right now, you can go right to the next tutorial 👇

Initialize an experiment

The experiment system

Once you have created a project, it is time to start experimenting with your datasets and training models.

The three main aspects of experiment tracking are:

Logs

To log something means to save the value of a parameter, metric, or image, so that you and your team can visualize it later in the platform in a practical way, such as an interactive plot or chart.

This allows you to analyze your experiments automatically, without having to worry about saving anything to your local drive.
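As an illustration, here is a minimal sketch of logging with the Picsellia Python SDK. The API token, experiment ID, metric names, and loss values are placeholders, and the exact `log()` signature may differ slightly, so check the SDK reference.

```python
from picsellia import Client
from picsellia.types.enums import LogType

# Hypothetical credentials -- replace with your own token and organization
client = Client(api_token="YOUR_API_TOKEN", organization_name="my-org")
experiment = client.get_experiment_by_id("YOUR_EXPERIMENT_ID")

# Log the training parameters once; TABLE renders them as a key/value table
experiment.log("parameters", {"learning_rate": 1e-3, "batch_size": 8}, LogType.TABLE)

# Log a metric at every epoch; LINE values accumulate into an interactive chart
for epoch in range(10):
    train_loss = 1.0 / (epoch + 1)  # stand-in for your real training loop
    experiment.log("train/loss", train_loss, LogType.LINE)
```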

Artifacts

Artifacts are all the files you generate when you run an experiment, such as model weights, checkpoints, configuration files, and so on.

If you don't use a proper experiment tracking system, at some point you will have trouble organizing the different versions of your files and sharing them with your team.

That's why we allow you to store any file on our platform at any time during your experiment, so you are sure to find it later and can retrieve the files of your best experiment.
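For instance, here is a hedged sketch of storing and retrieving artifacts with the Python SDK. The file paths and artifact names are hypothetical, and `store()`, `get_artifact()`, and `download()` should be checked against the SDK reference for exact signatures.

```python
from picsellia import Client

client = Client(api_token="YOUR_API_TOKEN", organization_name="my-org")
experiment = client.get_experiment_by_id("YOUR_EXPERIMENT_ID")

# Attach files produced by the run to this experiment
experiment.store("config", "./config.yaml")
experiment.store("checkpoints", "./checkpoints", do_zip=True)  # zip a whole directory

# Later, possibly from another machine, retrieve an artifact by name
artifact = experiment.get_artifact("checkpoints")
artifact.download(target_path="./retrieved")
```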

Features

Even if Logs and Artifacts could be enough for your team to organize its work and reproduce results, we have developed other features that allow you to go further and become extremely proficient in your model development tasks.

Evaluation

You have access to a dedicated interface to evaluate and understand the results of your trained models.

Evaluate your models

Hyperparameter tuning

Hyperparameter tuning is the process of adjusting your training parameters in order to achieve the best performance.

You may do it 'by hand', using only experiments whose parameters you set manually (as in the sketch below), but it is a tedious process, and you are never sure that you have found the best parameter combination in the end.
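A hand-rolled grid search typically looks like this minimal sketch: one experiment per parameter combination, with the parameters logged so the runs can be compared later. The project name, grid values, and training result below are placeholders.

```python
from itertools import product

from picsellia import Client
from picsellia.types.enums import LogType

client = Client(api_token="YOUR_API_TOKEN", organization_name="my-org")
project = client.get_project("my-project")  # hypothetical project name

# A small manual grid over two hyperparameters
learning_rates = [1e-2, 1e-3]
batch_sizes = [8, 16]

for lr, bs in product(learning_rates, batch_sizes):
    params = {"learning_rate": lr, "batch_size": bs}
    experiment = project.create_experiment(name=f"grid-lr{lr}-bs{bs}")
    experiment.log("parameters", params, LogType.TABLE)

    final_loss = lr * bs / 100  # stand-in for the real training result
    experiment.log("final/loss", final_loss, LogType.LINE)
```

As it is a whole discipline in itself, we offer a fully functional Hyperparameter tuning system that allows you to elevate your workflow and finally achieve top performance 👇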

Hyperparameter tuning
