Track

Cubyc's core functionality revolves around tracking experiment runs. This guide will teach you how to set up, track, and manage them with Cubyc's powerful Run class. Before getting started, make sure you have completed steps 1 through 3 from the quickstart.


Runs

Cubyc offers three ways to define experiment runs: you can explicitly specify the start and end of the run, utilize a context manager, or define it as a function. All three approaches are equally effective and capture the same information.

Create a Run and call its start and end methods to mark the beginning and end of your run.

experiment.py
from cubyc import Run
import matplotlib.pyplot as plt

# MLP and Adam stand in for your own model and optimizer classes
model = MLP(hidden_layers=2)
optimizer = Adam(lr=0.001)

run = Run(params={"model": model, "opt": optimizer}, tags=["tutorial"])
run.start()
for epoch in range(10):
    ...
    run.log({"loss": ..., "acc": ...})

model.save("models/model.pkl")
plt.savefig("plot.png")
run.end()

Use Python's with statement to run your experiment inside a Run context manager.

experiment.py
from cubyc import Run
import matplotlib.pyplot as plt

# MLP and Adam stand in for your own model and optimizer classes
model = MLP(hidden_layers=2)
optimizer = Adam(lr=0.001)

with Run(params={"model": model, "opt": optimizer}, tags=["tutorial"]) as run:
    for epoch in range(10):
        ...
        run.log({"loss": ..., "acc": ...})

    model.save("models/model.pkl")
    plt.savefig("plot.png")

Define your experiment as a function and decorate it with a Run to track it.

experiment.py
from cubyc import Run
import matplotlib.pyplot as plt

# MLP and Adam stand in for your own model and optimizer classes
model = MLP(hidden_layers=2)
optimizer = Adam(lr=0.001)

@Run(tags=["tutorial"])
def experiment_func(model, optimizer):
    for epoch in range(10):
        ...
        yield {"loss": ..., "acc": ...}

    model.save("model.pkl")
    plt.savefig("plot.png")

experiment_func(model=model, optimizer=optimizer)

Warning

Currently, we only support the function syntax for tracking runs in notebooks.

Remote Only

To automatically save your runs in the cloud, either set the remote repository URL by running cubyc remote add <URL> or pass the remote URL to the Run constructor.

run = Run(remote="htps://github.com/owner/project.git")
with Run(remote="htps://github.com/owner/project.git") as run:
@Run(remote="htps://github.com/owner/project.git")

Tags

You can add tags to your runs to categorize them and make them easier to search and analyze.

Pass a list or set of tags to the Run constructor.

run = Run(tags=["tutorial"])

Pass a list or set of tags to the context manager.

with Run(tags=["tutorial"]) as run:

Pass a list or set of tags to the Run used as a decorator.

@Run(tags=["tutorial"])

Hyperparameters

Tracking hyperparameters like learning rates, batch sizes, or model architectures allows you to compare and analyze different configurations.

Pass a dictionary of hyperparameters to the Run's constructor.

model = MLP(hidden_layers=2)
optimizer = Adam(lr=0.001)

run = Run(params={"model": model, "opt": optimizer}, tags=["tutorial"])
...

Pass a dictionary of hyperparameters to the context manager.

model = MLP(hidden_layers=2)
optimizer = Adam(lr=0.001)

with Run(params={"model": model, "opt": optimizer}, tags=["tutorial"]) as run:
    ...

Define your hyperparameters as function arguments.

model = MLP(hidden_layers=2)
optimizer = Adam(lr=0.001)

@Run(tags=["tutorial"])
def experiment_func(model, optimizer):
    ...

Tip

Cubyc can track serializable hyperparameters like strings, numbers, and lists. Use Pydantic for custom objects.
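
As a minimal sketch of the Pydantic approach (the ModelConfig class below is a hypothetical stand-in for your own configuration object):

from pydantic import BaseModel
from cubyc import Run

class ModelConfig(BaseModel):
    # Declaring the custom object as a Pydantic model makes it serializable
    hidden_layers: int = 2
    learning_rate: float = 0.001

config = ModelConfig()

# model_dump() converts the config into a plain dictionary of
# serializable values (strings and numbers) that Cubyc can record
run = Run(params=config.model_dump(), tags=["tutorial"])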


Dependencies

Cubyc automatically saves the dependencies of your experiment, such as the Python version and libraries used, along with their versions. There's no need for manual recording.


Code

Cubyc stores your experiment code, enabling you to version it and compare changes over time.

Add your code between calls to the Run's start and end methods to track it.

run = Run(params={"model": model, "opt": optimizer}, tags=["tutorial"])
run.start()
for epoch in range(10):
    ...
    run.log({"loss": ..., "acc": ...})

model.save("models/model.pkl")
plt.savefig("plot.png")
run.end()

Wrap your code in the context manager to track it.

with Run(params={"model": model, "opt": optimizer}, tags=["tutorial"]) as run:
    for epoch in range(10):
        ...
        run.log({"loss": ..., "acc": ...})

    model.save("models/model.pkl")
    plt.savefig("plot.png")

Wrap your code in a function and decorate it with a Run to track it.

@Run(tags=["tutorial"])
def experiment_func(model, optimizer):
    for epoch in range(10):
        ...
        yield {"loss": ..., "acc": ...}

    model.save("model.pkl")
    plt.savefig("plot.png")

Logs

Logs can be used to track metrics such as loss, accuracy, or any other output of interest.

Pass a dictionary of metrics to the Run's log method.

run = Run(params={"model": model, "opt": optimizer}, tags=["tutorial"])
run.start()
for epoch in range(10):
    ...
    run.log({"loss": ..., "acc": ...})

model.save("models/model.pkl")
plt.savefig("plot.png")
run.end()

Inside the context manager, pass a dictionary of metrics to the Run's log method.

with Run(params={"model": model, "opt": optimizer}, tags=["tutorial"]) as run:
    for epoch in range(10):
        ...
        run.log({"loss": ..., "acc": ...})

    model.save("models/model.pkl")
    plt.savefig("plot.png")

Yield a dictionary with the desired metrics.

@Run(tags=["tutorial"])
def experiment_func(model, optimizer):
    for epoch in range(10):
        ...
        yield {"loss": ..., "acc": ...}

    model.save("model.pkl")
    plt.savefig("plot.png")

Artifacts

Cubyc automatically tracks and commits files created during your runs, such as model weights and plots. In the examples above, model.pkl and plot.png are saved as artifacts.


By leveraging Cubyc's tracking capabilities, you can easily record and reproduce all your experiment runs. In the next section, we'll show you how Cubyc saves and organizes the outputs of your runs to a Git repository, and how you can query and analyze them with SQL.