Track
Cubyc's core functionality revolves around tracking experiment runs.
This guide will teach you how to set up, track, and manage them with Cubyc's powerful Run class.
Before getting started, make sure you have completed steps 1 through 3 from the quickstart.
Runs
Cubyc offers three ways to define experiment runs: you can explicitly specify the start and end of the run, utilize a context manager, or define it as a function. All three approaches are equally effective and capture the same information.
Create a Run and call its start and end methods to mark where your run begins and ends.
Use Python's with statement to define a context manager for your experiment.
Define your experiment as a function and use the @run.track decorator to track it.
Warning
Currently, we only support function syntax for tracking runs in notebooks.
Remote Only
To automatically save your runs in the cloud, either set the remote repository URL by running cubyc remote add <URL> or pass the remote to the Run constructor.
Tags
You can add tags to your runs to categorize them and make them easier to search and analyze.
Hyperparameters
Tracking hyperparameters like learning rates, batch sizes, or model architectures allows you to compare and analyze different configurations.
Pass a dictionary of hyperparameters to the Run's constructor.
Tip
Cubyc can track serializable hyperparameters like strings, numbers, and lists. Use Pydantic for custom objects.
Dependencies
Cubyc automatically saves the dependencies of your experiment, such as the Python version and libraries used, along with their versions. There's no need for manual recording.
Code
Cubyc stores your experiment code, enabling you to version it and compare changes over time.
Add your code between calls to the Run's start and end methods to track it.
Wrap your code in the context manager to track it.
Logs
Logs can be used to track metrics such as loss, accuracy, or any other output of interest.
Pass a dictionary of metrics to the Run's log method.
Artifacts
Cubyc automatically tracks and commits files created during your runs, such as model weights and plots.
In the example above, model.pkl and plot.png are saved as artifacts.
By leveraging Cubyc's tracking capabilities, you can easily record and reproduce all your experiment runs. In the next section, we'll show you how Cubyc saves and organizes the outputs of your runs to a Git repository, and how you can query and analyze them with SQL.