speechbrain.utils.optimizers module
Implements functions to exclude certain parameters from weight decay
- Authors
Titouan Parcollet 2023
Summary
Functions:
| rm_vector_weight_decay | Put vectors in a parameter group without weight decay |
Reference
- speechbrain.utils.optimizers.rm_vector_weight_decay(modules)[source]
Put vectors in a parameter group without weight decay
Takes in a list of modules and separates their parameters into two parameter groups, which can be passed to a PyTorch Optimizer class. Vector parameters get weight_decay overridden to zero. This is particularly useful for biases and norms, which we expect to deviate from zero; other vector parameters are likewise unlikely to be meant to be pushed toward zero. A usage sketch is given after the reference entry.
- Parameters:
modules (torch.nn.ModuleList or torch.nn.Module) – Torch modules to operate on
- Returns:
The parameter groups in the PyTorch Optimizer specification format.
- Return type:
list
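A minimal usage sketch, assuming a small feed-forward model; the model layout and hyperparameter values below are illustrative and not taken from the SpeechBrain documentation:

```python
import torch

from speechbrain.utils.optimizers import rm_vector_weight_decay

# Illustrative model: the Linear weight matrices should be regularized,
# while the biases and the LayerNorm parameters (1-D vectors) should not.
model = torch.nn.Sequential(
    torch.nn.Linear(80, 256),
    torch.nn.LayerNorm(256),
    torch.nn.Linear(256, 40),
)

# Split the parameters into two groups; vector parameters get
# weight_decay overridden to zero.
param_groups = rm_vector_weight_decay(model)

# The groups plug directly into any PyTorch optimizer. The optimizer-level
# weight_decay applies only to the group that does not override it.
optimizer = torch.optim.AdamW(param_groups, lr=1e-3, weight_decay=0.01)
```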