speechbrain.lobes.models.transformer.TransformerLM module

An implementation of a Transformer language model.

Authors: Jianyuan Zhong, Samuele Cornell

This is an implementation of a Transformer-based language model.


class speechbrain.lobes.models.transformer.TransformerLM.TransformerLM(vocab, d_model=512, nhead=8, num_encoder_layers=12, num_decoder_layers=0, d_ffn=2048, dropout=0.1, activation=<class 'torch.nn.modules.activation.ReLU'>, positional_encoding='fixed_abs_sine', normalize_before=False, d_embedding=None, max_length=2500, causal=True, attention_type='regularMHA', decoder_use_memory=False)[source]

Bases: TransformerInterface

This is an implementation of a Transformer-based language model.

The architecture is based on the paper “Attention Is All You Need”: https://arxiv.org/pdf/1706.03762.pdf

  • vocab (int) – The size of the vocabulary; the output logits have this dimension.

  • d_model (int) – The number of expected features in the encoder/decoder inputs (default=512).

  • nhead (int) – The number of heads in the multi-head attention models (default=8).

  • num_encoder_layers (int) – The number of sub-encoder-layers in the encoder (default=12).

  • num_decoder_layers (int) – The number of sub-decoder-layers in the decoder (default=0).

  • d_ffn (int) – The dimension of the feedforward network model (default=2048).

  • dropout (float) – The dropout value (default=0.1).

  • activation (torch class) – The activation function of the encoder/decoder intermediate layers, relu or gelu (default=relu).

  • decoder_use_memory (bool) – Whether to use the encoder hidden state in the decoder (default=False).


>>> import torch
>>> src = torch.randint(0, 720, [8, 120])
>>> net = TransformerLM(720, 512, 8, 1, 0, 1024, activation=torch.nn.GELU)
>>> enc_out = net.forward(src)
>>> print(enc_out.shape)
torch.Size([8, 120, 720])
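The output above has shape (batch, sequence length, vocabulary size), i.e. one logit vector per position. A minimal, model-independent sketch of how such logits are typically turned into a next-token prediction (plain PyTorch; the random logits stand in for a real model's output):

```python
import torch

# Stand-in for LM output logits: (batch=8, seq_len=120, vocab=720).
logits = torch.randn(8, 120, 720)

# Greedy next-token prediction: take the logits at the last time step
# and pick the highest-scoring vocabulary entry per batch element.
next_token = logits[:, -1, :].argmax(dim=-1)
print(next_token.shape)  # torch.Size([8])
```

In practice the logits would usually be passed through a softmax (or sampled with temperature/top-k) rather than taken greedily; the shape logic is the same.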
forward(src, hx=None)[source]

src (tensor) – The sequence to the encoder (required).

training: bool
make_masks(src, pad_idx=0, look_ahead_mask=True, padding_mask=True)[source]
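A minimal sketch of the two masks this helper is named after, using plain PyTorch (the function name and exact return conventions here are illustrative assumptions, not SpeechBrain's internal implementation): a look-ahead (causal) mask that stops each position from attending to future positions, and a key-padding mask marking positions equal to `pad_idx`.

```python
import torch

def make_masks_sketch(src, pad_idx=0, look_ahead_mask=True, padding_mask=True):
    """Illustrative sketch, not the actual SpeechBrain helper."""
    src_mask = None
    if look_ahead_mask:
        seq_len = src.shape[1]
        # Upper-triangular boolean mask: True means "may not attend",
        # so position i is blocked from attending to positions > i.
        src_mask = torch.triu(torch.ones(seq_len, seq_len), diagonal=1).bool()
    src_key_padding_mask = None
    if padding_mask:
        # True wherever the input token equals the padding index.
        src_key_padding_mask = src == pad_idx
    return src_mask, src_key_padding_mask

tokens = torch.tensor([[5, 9, 2, 0, 0]])  # last two tokens are padding
mask, pad = make_masks_sketch(tokens)
print(mask.shape)  # torch.Size([5, 5])
print(pad)         # tensor([[False, False, False,  True,  True]])
```

These boolean conventions match what `torch.nn.Transformer`-style layers expect for `mask` and `src_key_padding_mask`.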