speechbrain.nnet.containers module¶
Library for implementing cascade (sequences) of different neural modules.
- Authors
Peter Plantinga 2020
Summary¶
Classes:
- ConnectBlocks – Connect a sequence of blocks with shortcut connections.
- LengthsCapableSequential – Sequential model that can take lengths in the forward method.
- ModuleList – This class implements a wrapper to torch.nn.ModuleList with a forward() method to forward all the layers sequentially.
- Sequential – A sequence of modules with potentially inferring shape on construction.
Reference¶
- class speechbrain.nnet.containers.Sequential(*layers, input_shape=None, **named_layers)[source]¶
Bases:
torch.nn.modules.container.ModuleDict
A sequence of modules with potentially inferring shape on construction.
If layers are passed with names, these can be referenced with dot notation.
- Parameters
input_shape (iterable) – A list or tuple of ints or None, representing the expected shape of an input tensor. None represents a variable-length dimension. If no input_shape is passed, no shape inference is performed.
*layers – The inputs are treated as a list of layers to be applied in sequence. The output shape of each layer is used to infer the shape of the following layer. If a layer returns a tuple, only the shape of the first element is used to determine the input shape of the next layer (e.g. an RNN returns (output, hidden)).
**named_layers – Same as *layers, but each layer is registered under the given keyword name, so that it can later be referenced with dot notation.
Example
>>> inputs = torch.rand(10, 40, 50)
>>> model = Sequential(input_shape=inputs.shape)
>>> model.append(Linear, n_neurons=100, layer_name="layer1")
>>> model.append(Linear, n_neurons=200, layer_name="layer2")
>>> outputs = model(inputs)
>>> outputs.shape
torch.Size([10, 40, 200])
>>> outputs = model.layer1(inputs)
>>> outputs.shape
torch.Size([10, 40, 100])
- append(layer, *args, layer_name=None, **kwargs)[source]¶
Add a layer to the list of layers, inferring shape if necessary.
- Parameters
layer (a torch.nn.Module class or object) – If the layer is a class, it should accept an argument called input_shape, which will be inferred and passed. If the layer is a module object, it is added as-is.
layer_name (str) – The name of the layer, for reference. If the name is already in use, _{count} will be appended.
*args – These are passed to the layer if it is constructed.
**kwargs – These are passed to the layer if it is constructed.
- get_output_shape()[source]¶
Returns the expected shape of the output, computed by passing a dummy input constructed from the self.input_shape attribute.
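The dummy-input trick behind get_output_shape() can be illustrated in plain PyTorch. The sketch below is a simplified re-implementation of the idea, not SpeechBrain's actual code; infer_output_shape is a hypothetical helper name.

```python
import torch
import torch.nn as nn

def infer_output_shape(module, input_shape):
    """Infer a module's output shape by running a dummy input through it.

    Variable-length (None) dimensions are replaced with a placeholder of 1,
    mirroring how a shape-inferring Sequential might handle them.
    """
    dummy_shape = [d if d is not None else 1 for d in input_shape]
    dummy = torch.zeros(dummy_shape)
    with torch.no_grad():
        out = module(dummy)
    if isinstance(out, tuple):  # e.g. RNNs return (output, hidden)
        out = out[0]
    return tuple(out.shape)

shape = infer_output_shape(nn.Linear(50, 100), (10, 40, 50))
# shape == (10, 40, 100)
```

This is also why layer classes passed to append() must accept an input_shape argument: the inferred shape of one layer's output becomes the construction-time input shape of the next.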
- forward(x)[source]¶
Applies layers in sequence, passing only the first element of tuples.
- Parameters
x (torch.Tensor) – The input tensor to run through the network.
- class speechbrain.nnet.containers.LengthsCapableSequential(*args, **kwargs)[source]¶
Bases:
speechbrain.nnet.containers.Sequential
Sequential model that can take lengths in the forward method.
This is useful for Sequential models that include RNNs, where it is important to avoid padding, or for some feature normalization layers.
Unfortunately, this module is not jit-able, because the compiler doesn't know ahead of time whether the length will be passed, and some layers don't accept the length parameter.
- forward(x, lengths=None)[source]¶
Applies layers in sequence, passing only the first element of tuples.
In addition, forwards the lengths argument to all layers that accept a lengths argument in their forward() method (e.g. RNNs).
- Parameters
x (torch.Tensor) – The input tensor to run through the network.
lengths (torch.Tensor) – The relative lengths of each signal in the tensor.
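The dispatch idea — pass lengths only to layers whose forward() accepts it — can be sketched with signature inspection. This is a simplified stand-in using plain callables, not SpeechBrain's implementation; forward_with_lengths, double, and trim are hypothetical names.

```python
import inspect

def forward_with_lengths(layers, x, lengths=None):
    """Apply layers in order, passing `lengths` only to those
    whose call signature accepts a `lengths` parameter."""
    for layer in layers:
        params = inspect.signature(layer).parameters
        if lengths is not None and "lengths" in params:
            x = layer(x, lengths=lengths)
        else:
            x = layer(x)
        if isinstance(x, tuple):  # keep only the first element, as Sequential does
            x = x[0]
    return x

double = lambda x: x * 2          # does not accept lengths

def trim(x, lengths=None):        # does accept lengths
    return x[:lengths] if lengths is not None else x

result = forward_with_lengths([double, trim], [1, 2, 3, 4], lengths=2)
# result == [1, 2]
```

Because the decision is made at runtime by inspecting each layer, the control flow cannot be resolved statically, which is essentially why the real module is not jit-able.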
- class speechbrain.nnet.containers.ModuleList(*layers)[source]¶
Bases:
torch.nn.modules.module.Module
This class implements a wrapper around torch.nn.ModuleList with a forward() method that applies all the layers sequentially. For models pretrained with an older SpeechBrain implementation of the Sequential class, users can use this class to load those pretrained models.
- Parameters
*layers (torch class) – Torch objects to be put in a ModuleList.
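A minimal sketch of such a wrapper in plain PyTorch (a hypothetical re-implementation for illustration; SequentialModuleList is not a SpeechBrain class):

```python
import torch
import torch.nn as nn

class SequentialModuleList(nn.Module):
    """Minimal wrapper around nn.ModuleList that applies layers in order."""

    def __init__(self, *layers):
        super().__init__()
        self.layers = nn.ModuleList(layers)

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
            if isinstance(x, tuple):  # e.g. RNNs return (output, hidden)
                x = x[0]
        return x

model = SequentialModuleList(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
out = model(torch.rand(8, 20))
# out.shape == torch.Size([8, 10])
```

Because the layers live in an nn.ModuleList, their parameters are registered on the wrapper, so a state dict saved from an older sequential container with matching layer order can be mapped onto it.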
- class speechbrain.nnet.containers.ConnectBlocks(input_shape, shortcut_type='residual', shortcut_projection=False, shortcut_combine_fn=torch.add)[source]¶
Bases:
torch.nn.modules.module.Module
Connect a sequence of blocks with shortcut connections.
Note: all shortcuts start from the output of the first block, since the first block may change the shape significantly.
- Parameters
input_shape (tuple) – The expected shape of the input to the first block.
shortcut_type (str) – One of:
- "residual" – first block output passed to final output,
- "dense" – input of each block is from all previous blocks,
- "skip" – output of each block is passed to final output.
shortcut_projection (bool) – Only has an effect if shortcut_type is passed. Whether to add a linear projection layer to the shortcut connection before combining with the output, to handle different sizes.
shortcut_combine_fn (str or function) – Either a pre-defined function (one of “add”, “sub”, “mul”, “div”, “avg”, “cat”) or a user-defined function that takes the shortcut and next input, and combines them, as well as init_params in case parameters need to be initialized inside of the function.
Example
>>> inputs = torch.rand(10, 100, 20)
>>> model = ConnectBlocks(
...     input_shape=inputs.shape, shortcut_projection=True
... )
>>> model.append(Linear, n_neurons=10)
>>> model.append(Linear, n_neurons=10, end_of_block=True)
>>> model.append(Linear, n_neurons=10)
>>> model.append(Linear, n_neurons=10, end_of_block=True)
>>> outputs = model(inputs)
>>> outputs.shape
torch.Size([10, 100, 10])
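A user-defined shortcut_combine_fn takes the shortcut and the current block output (plus init_params, in case parameters must be initialized inside the function). The sketch below shows what such a function might look like; weighted_sum and its fixed 50/50 blend are hypothetical, not part of SpeechBrain.

```python
import torch

def weighted_sum(shortcut, x, init_params=False):
    # Hypothetical combine function: a fixed 50/50 blend of the shortcut
    # and the current block output. `init_params` is accepted because the
    # container may pass it on the first call; this function ignores it.
    return 0.5 * shortcut + 0.5 * x

shortcut = torch.ones(2, 3)
block_out = torch.full((2, 3), 3.0)
combined = weighted_sum(shortcut, block_out)
# combined is a 2x3 tensor filled with 2.0
```

Such a function would be passed at construction time, e.g. ConnectBlocks(input_shape=..., shortcut_combine_fn=weighted_sum), in place of the default torch.add.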
- append(layer, *args, **kwargs)[source]¶
Appends the specified module to the shortcut model.
- Parameters
layer (torch.nn.Module class) – This layer will get initialized with *args and **kwargs. Also, the argument input_shape will be passed if the layer takes it.
*args – Passed unchanged to the layer.
**kwargs – Passed unchanged to the layer, EXCEPT the kwarg end_of_block, which is used to indicate that the shortcut should be added in.
- forward(x)[source]¶
Applies the blocks in sequence, combining each block's output with the shortcut connection as configured.
- Parameters
x (torch.Tensor) – The inputs to the replicated modules.