VARISTA's experiment management feature is a simple way to manage the combinations of data, hyperparameters, and trained models that tend to become cumbersome.
Models can be sorted by score and other metrics, so you can pick out several high-scoring models for a detailed comparison.
The Model List screen displays the models you have created by varying the data, algorithm, hyperparameters, cross-validation, and other settings.
The list can be sorted and filtered, and displays any scores you choose alongside the settings used for training.
Each row in the list is one model created by VARISTA.
Select display columns
The column selection feature allows you to hide columns that are not needed for model comparison.
In model comparison, you can select up to eight models and compare their performance.
The following items can be compared.
The Performance tab offers List, Grid, and Confusion Matrix views.
The List view displays the performance of the selected models in a table.
You can also select metrics for each model and compare them in a chart (figure below).
Selecting the Grid view displays the metrics you have recorded as a grid of charts.
(Binary classification only)
The Confusion Matrix view displays the confusion matrix for each model as a chart.
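For reference, this is what a binary confusion matrix summarizes. The sketch below is illustrative only; the label ordering and chart layout VARISTA uses may differ, and the function name is not part of any VARISTA API.

```python
def confusion_matrix(y_true, y_pred):
    """Return counts as (tn, fp, fn, tp) for binary labels 0/1."""
    tn = fp = fn = tp = 0
    for t, p in zip(y_true, y_pred):
        if t == 1 and p == 1:
            tp += 1        # predicted positive, actually positive
        elif t == 1 and p == 0:
            fn += 1        # predicted negative, actually positive
        elif t == 0 and p == 1:
            fp += 1        # predicted positive, actually negative
        else:
            tn += 1        # predicted negative, actually negative
    return tn, fp, fn, tp

y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 0, 1, 0]
print(confusion_matrix(y_true, y_pred))  # → (2, 1, 1, 2)
```

Comparing these four counts side by side across models is exactly what the chart view makes visual.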
Plotting is supported only for regression and binary classification, and the content displayed differs by task.
For regression, Prediction and Observation, Absolute Errors, and Errors charts are displayed.
![VARISTA Document Experiments 011]()
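The quantities behind these regression charts can be sketched as follows. This is a minimal illustration of the signed errors and absolute errors plotted against the observed values; the function name is hypothetical, not a VARISTA API.

```python
def regression_errors(observed, predicted):
    """Return (signed errors, absolute errors) for paired values."""
    errors = [p - o for o, p in zip(observed, predicted)]   # Errors chart
    abs_errors = [abs(e) for e in errors]                   # Absolute Errors chart
    return errors, abs_errors

# Prediction And Observation plots these two lists against each other.
obs = [10.0, 12.0, 9.0]
pred = [11.0, 11.5, 9.5]
errors, abs_errors = regression_errors(obs, pred)
print(errors)      # → [1.0, -0.5, 0.5]
print(abs_errors)  # → [1.0, 0.5, 0.5]
```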
For binary classification, Precision-Recall and ROC charts are displayed.
![VARISTA Document Experiments 007](//images.ctfassets.net/8qlu80sl3ynp/6EobLT6ZX3VFDJUMJ9BsON/cb3a40b1e9913f1d410f106b29483afb/VARISTA_Document_Experiments_007.png)
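Both charts are built by sweeping a decision threshold over the model's predicted scores. The sketch below shows the points being plotted under that assumption; it is not VARISTA's internal implementation.

```python
def pr_roc_points(y_true, scores, thresholds):
    """For each threshold, return (threshold, precision, recall, fpr)."""
    points = []
    for th in thresholds:
        preds = [1 if s >= th else 0 for s in scores]
        tp = sum(1 for t, p in zip(y_true, preds) if t == 1 and p == 1)
        fp = sum(1 for t, p in zip(y_true, preds) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, preds) if t == 1 and p == 0)
        tn = sum(1 for t, p in zip(y_true, preds) if t == 0 and p == 0)
        precision = tp / (tp + fp) if tp + fp else 1.0
        recall = tp / (tp + fn) if tp + fn else 0.0   # also the ROC TPR
        fpr = fp / (fp + tn) if fp + tn else 0.0      # ROC x-axis
        points.append((th, precision, recall, fpr))
    return points

# Precision-Recall plots (recall, precision); ROC plots (fpr, recall).
for point in pr_roc_points([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8], [0.3, 0.5]):
    print(point)
```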
Speed vs Accuracy
Displays a chart that plots a model's inference speed against its accuracy.
This is useful when deciding whether to prioritize inference speed or accuracy in the model you actually adopt.
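The two axes of this chart can be sketched like so: time the model's predictions and score them on held-out data. Here `predict` is a stand-in for any trained model's prediction function, not a VARISTA API.

```python
import time

def speed_and_accuracy(predict, X, y_true):
    """Return (inference time in seconds, accuracy) — the chart's axes."""
    start = time.perf_counter()
    preds = [predict(x) for x in X]
    elapsed = time.perf_counter() - start
    accuracy = sum(p == t for p, t in zip(preds, y_true)) / len(y_true)
    return elapsed, accuracy

# Toy "model": classify positive numbers as class 1.
predict = lambda x: 1 if x > 0 else 0
X = [-2, -1, 1, 2]
y = [0, 0, 1, 1]
elapsed, acc = speed_and_accuracy(predict, X, y)
print(f"inference time: {elapsed:.6f}s, accuracy: {acc:.2f}")
```

A fast but slightly less accurate model may sit in a more useful region of this plot than a slow, marginally more accurate one, depending on your deployment constraints.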