speechbrain.utils.train_logger module

Loggers for experiment monitoring.

Authors
  • Peter Plantinga 2020

Summary

Classes:

FileTrainLogger

Text logger of training information.

ProgressSampleLogger

A logger that outputs samples during training progress. Used primarily in speech synthesis, but customizable and reusable for any other generative task.

TensorboardLogger

Logs training information in the format required by TensorBoard.

TrainLogger

Abstract class defining an interface for training loggers.

WandBLogger

Logger for wandb.

Functions:

detach

Detaches the specified object from the graph, which can be a single tensor or a dictionary of tensors.

plot_spectrogram

Returns a matplotlib spectrogram plot if matplotlib is available, or None if it is not (optional dependency).

Reference

class speechbrain.utils.train_logger.TrainLogger[source]

Bases: object

Abstract class defining an interface for training loggers.

log_stats(stats_meta, train_stats=None, valid_stats=None, test_stats=None, verbose=False)[source]

Log the stats for one epoch.

Parameters
  • stats_meta (dict of str:scalar pairs) – Meta information about the stats (e.g., epoch, learning-rate, etc.).

  • train_stats (dict of str:list pairs) – Each loss type is represented with a str : list pair including all the values for the training pass.

  • valid_stats (dict of str:list pairs) – Each loss type is represented with a str : list pair including all the values for the validation pass.

  • test_stats (dict of str:list pairs) – Each loss type is represented with a str : list pair including all the values for the test pass.

  • verbose (bool) – Whether to also put logging information to the standard logger.
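The expected shapes of these arguments can be sketched as plain dictionaries; the values below are made up, and the mean is just one typical summary a concrete logger might apply:

```python
# Illustrative shapes only: stats_meta maps names to scalars, while the
# per-split stats map each loss name to the list of per-batch values.
stats_meta = {"epoch": 1, "lr": 1e-3}
train_stats = {"loss": [0.91, 0.85, 0.79]}  # one value per training batch
valid_stats = {"loss": [0.88, 0.84]}

# A concrete logger summarizes each list to a single scalar before
# writing it out; the arithmetic mean is a typical choice.
summary = {k: sum(v) / len(v) for k, v in train_stats.items()}
print(summary)
```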

class speechbrain.utils.train_logger.FileTrainLogger(save_file, precision=2)[source]

Bases: TrainLogger

Text logger of training information.

Parameters
  • save_file (str) – The file to use for logging train information.

  • precision (int) – Number of decimal places to display. Default 2, example: 1.35e-5.

  • summary_fns (dict of str:function pairs) – Each summary function should take a list produced as output from a training/validation pass and summarize it to a single scalar.

log_stats(stats_meta, train_stats=None, valid_stats=None, test_stats=None, verbose=True)[source]

See TrainLogger.log_stats()
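The kind of one-line-per-epoch text output such a logger produces can be sketched as follows; the exact format and the `format_line` helper here are illustrative, not SpeechBrain's actual implementation:

```python
def format_line(stats_meta, summaries, precision=2):
    # Join meta information and summarized stats into one text line,
    # rendering each stat with the requested number of decimal places.
    meta = ", ".join(f"{k}: {v}" for k, v in stats_meta.items())
    stats = ", ".join(f"{k}: {v:.{precision}e}" for k, v in summaries.items())
    return f"{meta} - {stats}"

line = format_line({"epoch": 3}, {"train loss": 0.8472, "valid loss": 0.9013})
print(line)  # epoch: 3 - train loss: 8.47e-01, valid loss: 9.01e-01
```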

class speechbrain.utils.train_logger.TensorboardLogger(save_dir)[source]

Bases: TrainLogger

Logs training information in the format required by TensorBoard.

Parameters

save_dir (str) – A directory for storing all the relevant logs.

Raises

ImportError – if TensorBoard is not installed.

log_stats(stats_meta, train_stats=None, valid_stats=None, test_stats=None, verbose=False)[source]

See TrainLogger.log_stats()

log_audio(name, value, sample_rate)[source]

Add audio signal in the logs.

log_figure(name, value)[source]

Add a figure in the logs.

class speechbrain.utils.train_logger.WandBLogger(*args, **kwargs)[source]

Bases: TrainLogger

Logger for wandb. Used in the same way as TrainLogger; nested dicts are handled as well. An example of how to use this can be found in recipes/Voicebank/MTL/CoopNet/.

log_stats(stats_meta, train_stats=None, valid_stats=None, test_stats=None, verbose=False)[source]

See TrainLogger.log_stats()
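The nested-dict handling can be pictured as a recursive flattening of stat names before they are sent to the backend; this is an illustrative sketch, not the logger's actual wandb calls:

```python
def flatten_stats(stats, prefix=""):
    # Recursively flatten {"train": {"loss": 0.5}} into {"train/loss": 0.5},
    # the flat key/value form a metrics backend such as wandb can ingest.
    flat = {}
    for key, value in stats.items():
        name = f"{prefix}/{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten_stats(value, name))
        else:
            flat[name] = value
    return flat

print(flatten_stats({"train": {"loss": 0.5, "acc": 0.9}, "epoch": 2}))
# {'train/loss': 0.5, 'train/acc': 0.9, 'epoch': 2}
```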

class speechbrain.utils.train_logger.ProgressSampleLogger(output_path, formats=None, format_defs=None, batch_sample_size=1)[source]

Bases: object

A logger that outputs samples during training progress. Used primarily in speech synthesis, but customizable and reusable for any other generative task.

Natively, this logger supports images and raw PyTorch output. Other custom formats can be added as needed.

Example:

In hparams.yaml:

    progress_sample_logger: !new:speechbrain.utils.progress_samples.ProgressSampleLogger
        output_path: output/samples
        progress_batch_sample_size: 3
        format_defs:
            foo:
                extension: bar
                saver: !speechbrain.dataio.mystuff.save_my_format
                kwargs:
                    baz: qux
        formats:
            foobar: foo

In the brain, run the following to "remember" a sample (e.g. from compute_objectives):

    self.hparams.progress_sample_logger.remember(
        target=spectrogram_target,
        output=spectrogram_output,
        alignments=alignments_output,
        raw_batch={
            "inputs": inputs,
            "spectrogram_target": spectrogram_target,
            "spectrogram_output": spectrogram_output,
            "alignments": alignments_output,
        },
    )

Run the following at the end of the epoch (e.g. from on_stage_end):

    self.hparams.progress_sample_logger.save(epoch)

Parameters
  • output_path (str) – the filesystem path to which samples will be saved

  • formats (dict) – a dictionary with format identifiers as keys and dictionaries with handler callables and extensions as values. The signature of the handler should be similar to torch.save.

    Example:

        {
            "myformat": {
                "extension": "myf",
                "saver": somemodule.save_my_format
            }
        }

  • batch_sample_size (int) – The number of items to retrieve when extracting a batch sample

DEFAULT_FORMAT = 'image'
reset()[source]

Initializes the collection of progress samples

remember(**kwargs)[source]

Updates the internal dictionary of snapshots with the provided values

Parameters

kwargs (dict) – the parameters to be saved with

get_batch_sample(value)[source]

Obtains a sample of a batch for saving. This can be useful to monitor raw data (both samples and predictions) over the course of training

Parameters

value (dict|torch.Tensor|list) – the raw values from the batch

Returns

result – the same type of object as the provided value

Return type

object
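The sampling behavior can be sketched as slicing the leading dimension of each value and recursing into dicts; this illustration uses plain lists so it runs without torch, but the same slice works on tensors:

```python
def get_batch_sample(value, batch_sample_size=1):
    # Keep only the first batch_sample_size items; recurse into dicts so a
    # raw_batch of several tensors/lists is sampled consistently.
    if isinstance(value, dict):
        return {k: get_batch_sample(v, batch_sample_size) for k, v in value.items()}
    return value[:batch_sample_size]  # slicing works for lists and tensors alike

sample = get_batch_sample({"inputs": [1, 2, 3], "targets": [4, 5, 6]}, 2)
print(sample)  # {'inputs': [1, 2], 'targets': [4, 5]}
```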

save(epoch)[source]

Saves all items previously saved with remember() calls

Parameters

epoch (int) – The epoch number

save_item(key, data, epoch)[source]

Saves a single sample item

Parameters
  • key (str) – the key/identifier of the item

  • data (torch.Tensor) – the data to save

  • epoch (int) – the epoch number (used in file path calculations)

speechbrain.utils.train_logger.plot_spectrogram(spectrogram, ap=None, fig_size=(16, 10), output_fig=False)[source]

Returns a matplotlib spectrogram plot if matplotlib is available, or None if it is not (optional dependency).

speechbrain.utils.train_logger.detach(value)[source]

Detaches the specified object from the graph. The object can be a single tensor or a dictionary of tensors; dictionaries of tensors are converted recursively.

Parameters

value (torch.Tensor|dict) – a tensor or a dictionary of tensors

Returns

result – a tensor or a dictionary of tensors

Return type

torch.Tensor|dict
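The recursive conversion follows a simple walk-and-rebuild pattern, sketched here with a generic per-leaf operation standing in for Tensor.detach so the example runs without torch:

```python
def detach(value, op=lambda t: t):
    # Apply op (standing in for Tensor.detach) to a single value, or walk
    # a dictionary recursively and rebuild it with converted leaves.
    if isinstance(value, dict):
        return {k: detach(v, op) for k, v in value.items()}
    return op(value)

# With real tensors, op would be `lambda t: t.detach()`; here we negate
# each leaf just to make the recursion visible.
print(detach({"a": 1, "b": {"c": 2}}, op=lambda x: -x))
# {'a': -1, 'b': {'c': -2}}
```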