speechbrain.lobes.models.transformer.TransformerLM module

An implementation of a Transformer language model.

Authors * Jianyuan Zhong

Summary

Classes:

TransformerLM

This is an implementation of a transformer language model.

Reference

class speechbrain.lobes.models.transformer.TransformerLM.TransformerLM(vocab, d_model=512, nhead=8, num_encoder_layers=12, num_decoder_layers=0, d_ffn=2048, dropout=0.1, activation=<class 'torch.nn.modules.activation.ReLU'>, positional_encoding=True, normalize_before=False, d_embedding=None)[source]

Bases: speechbrain.lobes.models.transformer.Transformer.TransformerInterface

This is an implementation of a transformer language model.

The architecture is based on the paper “Attention Is All You Need”: https://arxiv.org/pdf/1706.03762.pdf

Parameters
  • vocab (int) – The size of the vocabulary (number of output classes).

  • d_model (int) – The number of expected features in the encoder/decoder inputs (default=512).

  • nhead (int) – The number of heads in the multi-head attention models (default=8).

  • num_encoder_layers (int) – The number of sub-encoder-layers in the encoder (default=12).

  • num_decoder_layers (int) – The number of sub-decoder-layers in the decoder (default=0).

  • d_ffn (int) – The dimension of the feedforward network model (default=2048).

  • dropout (float) – The dropout value (default=0.1).

  • activation (torch class) – The activation function of the encoder/decoder intermediate layers, e.g. relu or gelu (default=relu).

Example

>>> import torch
>>> from speechbrain.lobes.models.transformer.TransformerLM import TransformerLM
>>> src = torch.randint(0, 720, [8, 120])
>>> net = TransformerLM(720, 512, 8, 1, 0, 1024, activation=torch.nn.GELU)
>>> enc_out = net.forward(src)
>>> print(enc_out.shape)
torch.Size([8, 120, 720])
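
The output above contains one score per vocabulary entry and time step, so it can be trained with a standard next-token objective. The following is a minimal sketch (not part of the library API), assuming the output is used as unnormalized scores and that targets are simply the inputs shifted by one position:

>>> # Hypothetical next-token targets: shift inputs left by one (sketch only; the
>>> # wrapped-around last position and padded tokens would normally be excluded).
>>> targets = torch.roll(src, shifts=-1, dims=1)
>>> loss = torch.nn.functional.cross_entropy(enc_out.reshape(-1, 720), targets.reshape(-1))
>>> loss.shape
torch.Size([])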
forward(src, hx=None)[source]
Parameters
  • src (tensor) – The sequence to the encoder (required).

  • hx – Not used by the Transformer LM; accepted for interface compatibility.

Returns
  The language-model output for each position, of shape (batch, time, vocab).

training: bool
make_masks(src, pad_idx=0, look_ahead_mask=True, padding_mask=True)[source]

Creates the attention masks for the source sequence: a look-ahead (causal) mask so that each position attends only to earlier positions, and a key-padding mask that hides positions equal to pad_idx.

Parameters
  • src (tensor) – The sequence to the encoder (required).

  • pad_idx (int) – The index of the padding token (default=0).

  • look_ahead_mask (bool) – Whether to build the look-ahead (causal) mask (default=True).

  • padding_mask (bool) – Whether to build the padding mask (default=True).
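
For intuition, the two masking conventions can be visualized directly with torch; this is only an illustration (True marks positions that attention should ignore), not the exact tensors returned by make_masks:

>>> import torch
>>> # Look-ahead (causal) mask: position i may not attend to positions > i.
>>> look_ahead = torch.triu(torch.ones(4, 4, dtype=torch.bool), diagonal=1)
>>> look_ahead
tensor([[False,  True,  True,  True],
        [False, False,  True,  True],
        [False, False, False,  True],
        [False, False, False, False]])
>>> # Key-padding mask: True marks padded positions (pad_idx=0 here).
>>> seq = torch.tensor([[5, 7, 0, 0]])
>>> seq.eq(0)
tensor([[False, False,  True,  True]])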