speechbrain.lobes.models.RNNLM module
Implementation of a Recurrent Language Model.
Authors
- Mirco Ravanelli 2020
- Peter Plantinga 2020
- Ju-Chieh Chou 2020
- Titouan Parcollet 2020
- Abdel 2020
Reference
- class speechbrain.lobes.models.RNNLM.RNNLM(output_neurons, embedding_dim=128, activation=<class 'torch.nn.modules.activation.LeakyReLU'>, dropout=0.15, rnn_class=<class 'speechbrain.nnet.RNN.LSTM'>, rnn_layers=2, rnn_neurons=1024, rnn_re_init=False, return_hidden=False, dnn_blocks=1, dnn_neurons=512)
Bases: torch.nn.modules.module.Module
This model combines an embedding layer, an RNN, and a DNN, and can be used as an RNN language model (RNNLM).
- Parameters
output_neurons (int) – Number of entries in embedding table, also the number of neurons in output layer.
embedding_dim (int) – Size of embedding vectors (default 128).
activation (torch class) – A class used to construct the activation layers of the DNN.
dropout (float) – Neuron dropout rate applied to embedding, RNN, and DNN.
rnn_class (torch class) – The type of RNN to use in the RNNLM network (LiGRU, LSTM, GRU, RNN); see the sketch after this parameter list.
rnn_layers (int) – The number of recurrent layers to include.
rnn_neurons (int) – Number of neurons in each layer of the RNN.
rnn_re_init (bool) – Whether to initialize the RNN with orthogonal initialization.
return_hidden (bool) – Whether to also return the RNN hidden states (default False).
dnn_blocks (int) – The number of linear neural blocks to include.
dnn_neurons (int) – The number of neurons in the linear layers.
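Both rnn_class and activation expect classes rather than instances. As a minimal sketch (the layer sizes below are illustrative, and the GRU import path from speechbrain.nnet.RNN is assumed), a GRU-based model can be built by passing the class directly:

>>> import torch
>>> from speechbrain.lobes.models.RNNLM import RNNLM
>>> from speechbrain.nnet.RNN import GRU
>>> model = RNNLM(output_neurons=1000, embedding_dim=256, rnn_class=GRU, rnn_layers=3, rnn_neurons=512, dnn_blocks=2, dnn_neurons=256)
>>> tokens = torch.randint(0, 1000, (4, 20))  # (batch, time) token ids
>>> logits = model(tokens)  # one score per vocabulary entry at each step
>>> logits.shape
torch.Size([4, 20, 1000])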
Example
>>> model = RNNLM(output_neurons=5)
>>> inputs = torch.Tensor([[1, 2, 3]])
>>> outputs = model(inputs)
>>> outputs.shape
torch.Size([1, 3, 5])
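When return_hidden=True, forward additionally returns the RNN hidden state, which can be passed back in to continue scoring where the previous call stopped (useful for step-wise decoding). A minimal sketch, assuming the optional state argument is named hx and the return order is (output, hidden):

>>> model = RNNLM(output_neurons=5, return_hidden=True)
>>> outputs, hidden = model(torch.Tensor([[1, 2]]))  # prime the state on two tokens
>>> outputs, hidden = model(torch.Tensor([[3]]), hx=hidden)  # continue from the stored state
>>> outputs.shape
torch.Size([1, 1, 5])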