
Launch a Scan

First, let's initialize our client:

```python
api_token = ""
project_token = ""
client = Client.Experiment(api_token=api_token, project_token=project_token)
```
Here's an example of how to configure a Scan:

```python
config = {
    'script': 'script.py',
    'execution': {
        'type': 'agents'
    },
    'strategy': 'grid',
    'metric': {
        'name': 'Loss-total_loss',
        'goal': 'minimize'
    },
    'parameters': {
        'batch_size': {
            'values': [2, 4, 8],
        },
        'learning_rate': {
            'values': [1e-3, 1e-4, 1e-5]
        },
        'steps': {
            'value': 1000
        },
        'annotation_type': {
            'value': 'rectangle'
        }
    },
    'base_model': 'picsell/ssd-mobilenet-v2-640-fpnlite',
    'dataset': 'SampleDataset/first'
}
client.init_scan('test-scan-1', config)
```

Configuration

| Top-level key | Description |
| --- | --- |
| execution | How you want to run the Scan (manually, remotely, or using agents) |
| image | Name of the Docker image executing your training script (optional) |
| script | Filename of the script you want to execute (optional) |
| requirements | List of packages needed if you use our base Docker image (optional) |
| strategy | The search strategy for the Scan (required) |
| max_run | Maximum number of runs for this Scan (optional, default = 100) |
| early_stopping | The chosen early-stopping or pruning algorithm (optional) |
| metric | The metric to optimize (required) |
| parameters | The parameter space used for the search (required) |
| base_model | A model used to start each run (as in experiment init) (optional) |
| dataset | A dataset used to start each run (as in experiment init) (optional) |
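The required/optional split in the table above can be checked locally before calling `init_scan`. Below is a minimal sketch of such a check; it is not part of the Picsellia SDK, only the key names from the table are taken as given:

```python
# Key names taken from the configuration table above.
REQUIRED_KEYS = {'strategy', 'metric', 'parameters'}
OPTIONAL_KEYS = {'execution', 'image', 'script', 'requirements',
                 'max_run', 'early_stopping', 'base_model', 'dataset'}

def validate_scan_config(config):
    """Return a list of problems found in a Scan config dict."""
    problems = []
    # Every required key must be present.
    for key in sorted(REQUIRED_KEYS - config.keys()):
        problems.append(f"missing required key: {key}")
    # Anything outside the documented keys is probably a typo.
    for key in sorted(config.keys() - REQUIRED_KEYS - OPTIONAL_KEYS):
        problems.append(f"unknown key: {key}")
    return problems

print(validate_scan_config({'strategy': 'grid'}))
# ['missing required key: metric', 'missing required key: parameters']
```

Running this before submitting the Scan catches misspelled keys (e.g. `max_runs` instead of `max_run`) early, instead of at launch time.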

execution

Specify how you want to run the Scan.

| execution | Description |
| --- | --- |
| manual | We only define the grid of parameters; you can then use our Python SDK to access each run with its set of parameters and execute it on your own, either in a script or in a Jupyter notebook for example |
| remote | We automatically launch remote runs for you on servers equipped with NVIDIA V100S GPUs. You must set max_worker as the limit of parallel runs. |
| agents | You can launch our agents on any machine, and runs will be automatically dispatched across your agents (see our CLI for more information) |

You need to subscribe to a paid plan if you want to launch remote Scans.

manual:

```python
'execution': {
    'type': 'manual'
}
```

agents:

```python
'execution': {
    'type': 'agents'
}
```

remote:

```python
'execution': {
    'type': 'remote',
    'max_worker': 4,
}
```

image

Our Scan engine is based on Docker images that we schedule and launch on distributed machines, which can be your own computer or a cloud server hosted by Picsellia.
If you do not specify any image parameter, we use our base image (called custom-run:1.0), which encapsulates the script you provided, installs the specified requirements, and then launches your script.
Specifying a custom image is compulsory if you do not provide a script parameter, since our base image needs a script to launch.
To save time on package installation, or to make sure your script runs 100% of the time, we encourage you to build your own custom image and push it to Docker Hub, so that we can run it remotely, or to have it available on every machine where you want to launch our agents.
To specify a custom image, just give its name like below:

```python
'image': 'picsellpn/custom-run:1.0'
```

script

If you want to automatically launch your training script without having it on every machine, you can specify the path to the file; it will be saved on Picsellia and used for each run.
Providing a script is mandatory if you do not want to define custom Docker images but instead use our base images.

```python
'script': 'my_training_script.py'
```

requirements

Specify this parameter if you need specific Python packages installed for your script to run when using our base images.
For example, as our image only has the picsellia package installed, if you need tensorflow 2.3.1 to run your script you would set the requirements as below:

```python
'requirements': {
    'package': 'tensorflow',
    'version': '2.3.1'
}
```

Alternatively, you can set requirements to the path of a requirements.txt file like this:

```python
'requirements': 'path/to/requirements.txt'
```

With the requirements.txt file looking like this:

```
tensorflow==2.3.1
numpy==1.20.2
```
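If you keep your dependencies in a requirements.txt file, converting its pinned lines into the dict form shown above is straightforward. A small sketch (only handles `package==version` lines, skipping comments and blanks):

```python
def parse_requirements(text):
    """Parse pinned 'package==version' lines into package/version dicts."""
    reqs = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blank lines and comments
        package, _, version = line.partition('==')
        reqs.append({'package': package, 'version': version})
    return reqs

print(parse_requirements("tensorflow==2.3.1\nnumpy==1.20.2"))
# [{'package': 'tensorflow', 'version': '2.3.1'},
#  {'package': 'numpy', 'version': '1.20.2'}]
```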

strategy

Allows you to choose a search strategy from the following options:

| strategy | Description |
| --- | --- |
| grid | Grid search: tries out every combination of parameters |
| optuna | Hyperparameter search optimization using the Optuna library. To use this strategy, you must set up a distribution or specific values for every parameter. |
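To get a feel for what the grid strategy does, here is how the parameter space from the example config at the top of this page expands into runs. This is plain Python using itertools.product, shown only to illustrate the combinatorics (the actual expansion happens on Picsellia's side):

```python
import itertools

# The parameter space from the example config above.
parameters = {
    'batch_size': {'values': [2, 4, 8]},
    'learning_rate': {'values': [1e-3, 1e-4, 1e-5]},
    'steps': {'value': 1000},
}

# A single 'value' behaves like a one-element list of candidates.
grids = {name: spec['values'] if 'values' in spec else [spec['value']]
         for name, spec in parameters.items()}

# Grid search launches one run per element of the Cartesian product.
runs = [dict(zip(grids, combo)) for combo in itertools.product(*grids.values())]

print(len(runs))  # 9 (3 batch sizes x 3 learning rates x 1 step count)
print(runs[0])    # {'batch_size': 2, 'learning_rate': 0.001, 'steps': 1000}
```

This is also why max_run (below) matters: adding one more list of 4 values to this space would multiply the run count to 36.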

max_run

When you perform hyperparameter search, you never really know in advance how many runs will be needed to find the best combination. For example, if you choose the optuna strategy, the parameters for future runs are computed according to the results of past runs.
That's why you can set a max_run parameter: it guarantees that your Scan stops before consuming unbounded resources, and you can always create a new Scan with a reduced search space later.
If you do not specify this parameter, the default value is 100 runs.

```python
'max_run': 256
```

early_stopping (coming soon)

Early stopping is an optional feature that can drastically speed up your hyperparameter search by deciding whether some runs should be stopped early or given a chance to continue.
If a run is not promising, it is automatically stopped and the agent gets a new set of parameters to try, so you do not waste time on unnecessary experiments.

| method | description |
| --- | --- |
| hyperband | Implementation of the Hyperband algorithm by Optuna |

Parameters:

| parameter | description |
| --- | --- |
| min_iter | The minimum number of iterations (e.g. training epochs or steps) to wait before deciding whether to prune the run |
| max_iter | The maximum number of iterations to wait before you either prune the run or let it finish |
| reduction_factor | At the completion point of each rung, about 1/reduction_factor of the trials will be promoted. |

```python
'early_stopping': {
    'hyperband': {
        'min_iter': 100,
        'max_iter': 1000,
        'reduction_factor': 3,
    }
}
```
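With Hyperband, trials are evaluated for pruning at geometrically spaced checkpoints ("rungs") between min_iter and max_iter, and roughly 1/reduction_factor of the survivors are promoted at each rung. A back-of-the-envelope sketch of where those rungs fall for the settings above (an illustration of the spacing rule, not Optuna's internal code):

```python
def hyperband_rungs(min_iter, max_iter, reduction_factor):
    """Iteration counts at which trials are considered for pruning."""
    rungs, rung = [], min_iter
    while rung <= max_iter:
        rungs.append(rung)
        rung *= reduction_factor  # rungs are geometrically spaced
    return rungs

print(hyperband_rungs(100, 1000, 3))  # [100, 300, 900]
```

So with reduction_factor=3, a run that survives the checks at 100 and 300 iterations faces one last pruning decision at 900 iterations before running to completion.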

metric

The name of the metric you want to optimize, and the direction in which to optimize it.

Maximize:

```python
'metric': {
    'name': 'loss',
    'goal': 'maximize'
}
```

Minimize:

```python
'metric': {
    'name': 'loss',
    'goal': 'minimize'
}
```

For the Scan to run properly, you must explicitly log the metric somewhere in the script you use. This means you should have a line looking like this:

```python
experiment.log('loss', value, 'line')
```

The name of what you log must correspond to the metric you set up during configuration.
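The name match is exact. The snippet below illustrates the contract with a stand-in object; FakeExperiment is written here purely for illustration and is not the Picsellia SDK class:

```python
class FakeExperiment:
    """Stand-in for the Picsellia experiment object (illustration only)."""
    def __init__(self):
        self.logged = {}

    def log(self, name, value, kind):
        # Record values under the exact name they were logged with.
        self.logged.setdefault(name, []).append(value)

config = {'metric': {'name': 'loss', 'goal': 'minimize'}}
experiment = FakeExperiment()

# Inside your training loop, log the metric at each step:
for loss_value in [0.9, 0.5, 0.3]:
    experiment.log('loss', loss_value, 'line')

# The Scan can only read the metric if the names line up exactly:
assert config['metric']['name'] in experiment.logged
```

Logging the value under 'Loss' or 'total_loss' when the config says 'loss' would leave the Scan with nothing to optimize.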

parameters

Specify the hyperparameter space to explore. You can either set up a list of constant values for each parameter, or choose a distribution and its bounds (for optuna).

| Key | Description |
| --- | --- |
| value (int, float, str) | Single value for the hyperparameter |
| values (list[int, float, str]) | List of all values for the hyperparameter |
| distribution (str) | An available distribution from the list below (optuna strategy only) |
| min (int, float) | Minimum value for the hyperparameter; the lower bound of the chosen distribution |
| max (int, float) | Maximum value for the hyperparameter; the upper bound of the chosen distribution |
| q (float) | Quantization step size for discrete hyperparameters |
| step (int) | Step size between values (for the int_uniform distribution) |

grid - value:

```python
'parameter_name': {
    'value': 0.0001
}
```

grid - values:

```python
'parameter_name': {
    'values': ['relu', 'elu', 'selu']
}
```

optuna - log_uniform:

```python
'parameter_name': {
    'distribution': 'log_uniform',
    'min': 1e-5,
    'max': 1e-3
}
```

distributions

Here is the list of all the distributions you can use:

| Name | Information |
| --- | --- |
| constant | Constant value for the hyperparameter, equal to value. |
| categorical | Categorical distribution; the hyperparameter value will be chosen from values |
| uniform | Continuous uniform distribution. You must set the bounds min and max. |
| int_uniform | Discrete uniform distribution for integers. You must set the bounds min and max. You can also set step to a value higher than 1 if you want to. |
| discrete_uniform | Discrete uniform distribution. You must set the bounds min and max, and the q parameter (discretization step) |
| log_uniform | Continuous uniform distribution in the log domain. You must set the bounds min and max. |

Examples

constant:

```python
'parameter_name': {
    'value': 0.3546
}
```

categorical:

```python
'parameter_name': {
    'values': ['relu', 'elu', 'selu']
}
```

uniform:

```python
'parameter_name': {
    'distribution': 'uniform',
    'min': 1e-5,
    'max': 1e-2,
}
```

int_uniform:

```python
'parameter_name': {
    'distribution': 'int_uniform',
    'min': 2,
    'max': 16,
    'step': 2
}
```

discrete_uniform:

```python
'parameter_name': {
    'distribution': 'discrete_uniform',
    'min': 0.1,
    'max': 1,
    'q': 0.1
}
```

log_uniform:

```python
'parameter_name': {
    'distribution': 'log_uniform',
    'min': 1e-2,
    'max': 1e3,
}
```
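To make the distribution semantics concrete, here is how samples could be drawn for three of these distributions using only the standard library. This is a sketch of the semantics, not Picsellia's or Optuna's actual sampler:

```python
import math
import random

def sample(spec, rng=random):
    """Draw one value from a parameter spec like the examples above."""
    dist = spec['distribution']
    if dist == 'uniform':
        return rng.uniform(spec['min'], spec['max'])
    if dist == 'log_uniform':
        # Uniform in log space: each decade between min and max is equally likely.
        return math.exp(rng.uniform(math.log(spec['min']), math.log(spec['max'])))
    if dist == 'discrete_uniform':
        # Uniform over {min, min+q, min+2q, ..., max} (up to float rounding).
        steps = int(round((spec['max'] - spec['min']) / spec['q']))
        return spec['min'] + rng.randint(0, steps) * spec['q']
    raise ValueError(f'unsupported distribution: {dist}')

lr = sample({'distribution': 'log_uniform', 'min': 1e-5, 'max': 1e-3})
assert 1e-5 <= lr <= 1e-3
```

The log_uniform case shows why it suits learning rates: with uniform sampling on [1e-5, 1e-3], about 90% of samples would land above 1e-4, while log_uniform spreads them evenly across the decades.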

base_model

As when you create an experiment, you can choose a model whose files, labelmap, and other assets will be duplicated into each run's experiment.
To choose a model, specify the username of the author and the model name this way: <username>/<model_name>

```python
'base_model': 'picsell/faster-rcnn-resnet-640'
```

dataset

As when you create an experiment, you can choose a dataset to attach to each run. To do this, the dataset must first have been attached to the project. Then specify the chosen dataset this way: <dataset_name>/<dataset_version>

```python
'dataset': 'SampleDataset/first'
```