Enhanced Experiment Management Functionality
VARISTA's experiment management feature offers a simple way to manage the combinations of data, hyperparameters, and trained models that tend to become complicated.
Updated the Model List screen.
The Model List now shows the scores and parameters of the models you have created.
The list can be sorted and filtered by score, hyperparameter, algorithm, and more, giving you a bird's-eye view of your models' performance and making experiment management more convenient than ever.
Added a model comparison feature.
The model comparison feature lets you compare the metrics of multiple models side by side.
This is useful when deciding which model to adopt, for example whether to choose the model with the best accuracy or the one with the fastest inference speed.
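The accuracy-versus-speed trade-off above can be sketched in a few lines; the model names and metric fields below are illustrative, not VARISTA's actual export format.

```python
# Hypothetical metrics for three trained models (names and values are
# made up for illustration; this is not VARISTA's data format).
models = [
    {"name": "xgboost_v1", "accuracy": 0.87, "inference_ms": 12.0},
    {"name": "ensemble_v2", "accuracy": 0.91, "inference_ms": 95.0},
    {"name": "logreg_v1", "accuracy": 0.83, "inference_ms": 1.5},
]

# Two reasonable adoption criteria lead to different choices:
best_accuracy = max(models, key=lambda m: m["accuracy"])
fastest = min(models, key=lambda m: m["inference_ms"])

print(best_accuracy["name"])  # the most accurate model
print(fastest["name"])        # the fastest model at inference time
```

Which criterion wins depends on the deployment context, which is exactly the decision the comparison view is meant to support.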
For more information, please check the following document.
🔗 Documentation›Experiment Management
We have added sample projects so that you can try the model-creation process immediately after signing up for VARISTA.
There are currently two sample projects: one predicts the survival of Titanic passengers (the well-known dataset also used on Kaggle), and the other predicts housing prices.
More sample projects will be added in the future.
We have also added a tutorial that walks you through everything from creating a project to creating a model after you sign up for VARISTA.
- Learning Templates: Fixed an issue where the Ensemble cross-validation setting had an empty default value.
- Training Template: Fixed an issue where Auto Selection would show a Single model when saved with only one model.
- Model Reports: Fixed an issue where some metric values were displayed incorrectly.
Show hyperparameter optimization history
You can now see the history of your hyperparameter optimization attempts.
All methods available in VARISTA (Grid Search, Randomized Search, Hyperopt, and Optuna) are supported.
You can see the hyperparameters and scores for each round of optimization in a chart, which is useful for interpreting the model.
For more information, please refer to this document.
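The kind of history such a chart visualizes can be sketched with a tiny randomized search; the parameter name and the score function here are illustrative stand-ins, not VARISTA internals.

```python
import random

# A minimal randomized-search loop that records one history entry per
# round: the hyperparameters tried and the resulting score. (The score
# function is a stand-in for a real cross-validation score.)
def score(params):
    return -(params["learning_rate"] - 0.1) ** 2

history = []
for round_no in range(20):
    params = {"learning_rate": random.uniform(0.01, 0.3)}
    history.append({"round": round_no, "params": params, "score": score(params)})

# An optimization-history chart plots these (round, params, score) triples;
# the best round is simply the one with the highest score.
best = max(history, key=lambda h: h["score"])
print(best["round"], best["params"], best["score"])
```

Grid Search, Hyperopt, and Optuna produce the same shape of history, differing only in how the next candidate parameters are chosen.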
- Fixed an issue where parameter auto-tuning ran on only a single thread.
- Fixed an issue where metrics could not be specified for a Single model during training.
- Fixed a bug in the data upload screen of the project creation wizard where analysis would not complete if there were errors in the data.
- Minor UI adjustments
While the previous AutoML features did not allow fine-grained model tuning, the newly added modeling feature introduces the concept of templates.
Templates can be configured for single models, ensemble learning, autopilot learning, and more, and hyperparameters can be set at the same level of detail as in Python code.
Previously, VARISTA always performed a parameter search when creating a model with AutoML, so training took a long time to complete.
With the new modeling feature, the modeling process has been reviewed and restructured so that training on a single model, such as XGBoost, can now be completed quickly.
In addition, ensemble learning and autopilot learning compare up to 32 models and select the best-performing one, including fused (ensembled) models, which is ideal if you want to build a pipeline.
For more details, please see the template documentation.
Model Evaluation Report
The UI has been revamped with a new layout, and the following charts can now be viewed for regression models:
- Prediction And Observation
- Residual vs Fitted
- Absolute Errors
- Errors Chart
- Residual Histogram
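The quantities behind these regression charts are simple to compute; the observed and predicted values below are made-up illustrative numbers.

```python
# Residuals underlie the Residual vs Fitted chart and the Residual
# Histogram; absolute errors underlie the Absolute Errors chart.
observed = [3.0, 5.0, 7.5, 10.0]
predicted = [2.8, 5.4, 7.0, 10.5]   # illustrative model outputs

residuals = [o - p for o, p in zip(observed, predicted)]
abs_errors = [abs(r) for r in residuals]
mae = sum(abs_errors) / len(abs_errors)   # mean absolute error

print(residuals)   # plotted against fitted values, or binned into a histogram
print(mae)
```

A healthy model's residuals scatter around zero with no obvious pattern; trends or funnels in the Residual vs Fitted chart suggest bias or heteroscedasticity.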
In Classification, you can now check the Threshold, Precision-Recall, ROC, and Confusion Matrix charts.
Data Distribution and Correlation charts, as well as the Partial Dependency Plot, can now also be checked.
Previously, VARISTA only allowed you to run inference in the browser, but with the new deployment feature you can now deploy your model with a single click and run inference via an API.
You can also easily turn on/off the deployed API from the GUI.
For details, please see the Deploy documentation.
A dashboard has been added to make it easier to keep track of the project status.
- Form Inference
It is now possible to perform inference using forms from the browser.
The form automatically displays the model's features, so you can quickly run inference even without preparing test data as a CSV file.
A powerful processing tool has been added to VARISTA so that data preprocessing can also be done through the GUI.
The VARISTA Data Editor is a tool for processing data by applying prepared filters to the data uploaded to VARISTA.
As you add filters, each one is executed in turn, processing the data step by step.
The provided filters will be expanded in the future as needed.
Outlier removal can also be done visually.
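The filter-pipeline idea can be sketched as plain functions applied in sequence; the filter implementations below (including an IQR-based outlier rule) are illustrative, not VARISTA's actual filters.

```python
# Each "filter" takes a list of row dicts and returns a processed list,
# mirroring the step-by-step execution of the Data Editor.
def drop_missing(rows):
    return [r for r in rows if all(v is not None for v in r.values())]

def remove_outliers(rows, column, k=1.5):
    # Simple IQR rule: keep values within [Q1 - k*IQR, Q3 + k*IQR].
    values = sorted(r[column] for r in rows)
    q1 = values[len(values) // 4]
    q3 = values[(3 * len(values)) // 4]
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [r for r in rows if lo <= r[column] <= hi]

rows = [
    {"price": 100}, {"price": 110}, {"price": 95},
    {"price": 105}, {"price": None}, {"price": 10_000},  # missing + outlier
]
for step in (drop_missing, lambda rs: remove_outliers(rs, "price")):
    rows = step(rows)

print(len(rows))  # the missing row and the extreme outlier are filtered out
```

Applying filters as an ordered pipeline keeps each step small and inspectable, which is what makes visual, step-by-step preprocessing practical.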
Enhanced data management
A right-click context menu has been introduced in data management, making each action easier to access.
Uploading multiple files at once is now also supported.
Workspace and Team Features
Previously, the project was the top-level concept; we have now added the workspace as a level above projects.
On the workspace, you can create shared projects and private projects.
You can also invite team members to the workspace, making it easier to share information.