Logging

Histogram logging lets you track how distributions of values (such as weight distributions, activation values, or gradient norms) change over training steps. To log a histogram, instantiate the pluto.Histogram class and pass it to pluto.log:
histogram = pluto.Histogram(
    data=values,
    bins=64,
)
pluto.log({"layers/layer_0/weights": histogram}, step=epoch)
Parameter  Type                                   Description
data       Union[list, np.ndarray, torch.Tensor]  The values to build the histogram from.
bins       int                                    Number of bins for the histogram. Defaults to 64.
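To build intuition for what `bins` controls, the sketch below uses NumPy's `np.histogram` to bucket values the way a 64-bin histogram conceptually does (pluto's internal binning may differ; this is an illustration, not pluto's implementation):

```python
import numpy as np

# Simulated layer weights: 10,000 draws from a narrow normal distribution
values = np.random.default_rng(0).normal(loc=0.0, scale=0.02, size=10_000)

# bins=64 divides the observed value range into 64 equal-width buckets
counts, edges = np.histogram(values, bins=64)

print(len(counts))  # 64 bucket counts
print(len(edges))   # 65 bucket boundaries (one more than the bucket count)
```

A larger `bins` value gives finer resolution at the cost of noisier per-bucket counts; 64 is a reasonable default for tensors with thousands of elements.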

Examples

Logging Weight Distributions

import pluto
import torch

run = pluto.init(project="my-project")

model = MyModel()  # any torch.nn.Module with named parameters
for epoch in range(num_epochs):
    # ... training step ...

    # Log weight distributions for each layer
    for name, param in model.named_parameters():
        if "weight" in name:
            pluto.log({f"histograms/{name}": pluto.Histogram(param.data.cpu())}, step=epoch)
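Activation values (mentioned above) can be captured the same way by attaching PyTorch forward hooks and logging one histogram per captured tensor. A minimal sketch, assuming the `pluto.Histogram` and `pluto.log` calls behave as documented (the hook helper and `activations/` key prefix are illustrative, not part of the pluto API):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# Capture each submodule's output during the forward pass
activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach().cpu()
    return hook

for name, module in model.named_modules():
    if name:  # skip the root container module
        module.register_forward_hook(make_hook(name))

x = torch.randn(8, 16)
model(x)

# Each captured tensor can then be logged as in the examples above:
#   pluto.log({f"activations/{name}": pluto.Histogram(act)}, step=epoch)
```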

Logging Gradient Distributions

# Run after loss.backward() so param.grad is populated
for name, param in model.named_parameters():
    if param.grad is not None:
        pluto.log({
            f"gradients/{name}": pluto.Histogram(param.grad.data.cpu(), bins=32)
        }, step=epoch)
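For very large parameter tensors, logging every value each step can be wasteful. One option is to randomly subsample a flat view of the tensor before building the histogram; the distribution shape is preserved while the logged payload stays bounded. The helper below is illustrative and not part of the pluto API:

```python
import numpy as np

def subsample(values, max_points=100_000, seed=0):
    """Randomly subsample a flat array so huge tensors don't inflate log size.
    (Illustrative helper, not part of the pluto API.)"""
    flat = np.asarray(values).ravel()
    if flat.size <= max_points:
        return flat
    rng = np.random.default_rng(seed)
    idx = rng.choice(flat.size, size=max_points, replace=False)
    return flat[idx]

grads = np.random.default_rng(1).normal(size=(512, 4096))  # ~2M values
sample = subsample(grads)
print(sample.shape)  # (100000,)
```

The subsampled array can then be passed to `pluto.Histogram` in place of the full tensor.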

Viewing

Histograms appear as cards alongside your other metrics. Each histogram widget displays the distribution at a given training step.

[Screenshot: Histogram view in the dashboard]

Step Navigation

Use the step slider below the histogram to browse distributions at different training steps. When multiple histogram groups are displayed in the same section, their step sliders can be linked so that changing the step on one group changes all of them simultaneously. Click the lock icon on the step navigator to toggle sync on or off.

Fullscreen View

Click the expand button on any histogram card’s toolbar to open it in fullscreen. The fullscreen view renders the histogram at full viewport size, including any multi-run comparison. Use the arrow keys to navigate between steps.