speechbrain.lobes.models.huggingface_transformers.textencoder module
This lobe enables the integration of generic huggingface pretrained text encoders (e.g. BERT).
Transformer from HuggingFace needs to be installed: https://huggingface.co/transformers/installation.html
- Authors
Sylvain de Langen 2024
Summary
Classes:
TextEncoder – This lobe enables the integration of a generic HuggingFace text encoder (e.g. BERT).
Reference
- class speechbrain.lobes.models.huggingface_transformers.textencoder.TextEncoder(source, save_path, freeze=True, num_layers: int | None = None, **kwargs)[source]
Bases: HFTransformersInterface
This lobe enables the integration of a generic HuggingFace text encoder (e.g. BERT). Requires the AutoModel found from the source to have a last_hidden_state key in the output dict.
- Parameters:
source (str) – HuggingFace hub name, e.g. "google-bert/bert-base"
save_path (str) – Path (dir) of the downloaded model.
freeze (bool (default: True)) – If True, the model is frozen. If False, the model will be trained alongside the rest of the pipeline.
num_layers (int, optional) – When specified, and assuming the passed LM can be truncated that way, the encoder of the passed model will be truncated to the specified layer (mutating it). This means that the embeddings will be those of the N-th layer rather than the last layer. The last layer is not necessarily the best for certain tasks.
**kwargs – Extra keyword arguments passed to the from_pretrained function.
Example
>>> inputs = ["La vie est belle"]
>>> model_hub = "google-bert/bert-base-multilingual-cased"
>>> save_path = "savedir"
>>> model = TextEncoder(model_hub, save_path)
>>> outputs = model(inputs)
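As a further illustration of the num_layers argument, a minimal sketch (assuming the same hub name as above and that the underlying BERT encoder can be truncated this way) that takes embeddings from the 4th layer instead of the last one:

>>> model = TextEncoder(model_hub, save_path, num_layers=4)
>>> outputs = model(inputs)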
- truncate(keep_layers: int)[source]
Truncates the encoder to a specific layer so that output embeddings are the hidden state of the n-th layer.
- Parameters:
keep_layers (int) – Number of layers to keep, e.g. 4 would keep layers [0, 1, 2, 3].
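A minimal sketch of calling truncate on an already constructed encoder, reusing the hub name from the example above; this should have the same effect as passing num_layers at construction time:

>>> model = TextEncoder("google-bert/bert-base-multilingual-cased", "savedir")
>>> model.truncate(4)  # keep layers [0, 1, 2, 3]
>>> outputs = model(["La vie est belle"])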
- forward(input_texts, return_tokens: bool = False)[source]
This method implements a forward pass of the encoder model, generating batches of embeddings from the input text.
- Parameters:
input_texts (list of str) – The texts to tokenize and encode.
return_tokens (bool, default: False) – If True, also return the tokenized inputs alongside the embeddings.
- Returns:
(any, torch.Tensor) if return_tokens == True – Respectively:
  - The tokenized sentences in the form of a padded batch tensor, in the HF format as returned by the tokenizer.
  - The output embeddings of the model (i.e. the last hidden state).
torch.Tensor if return_tokens == False – The output embeddings of the model (i.e. the last hidden state).
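A minimal sketch of the two return modes, assuming a model constructed as in the example above:

>>> embeddings = model(["La vie est belle"])
>>> tokens, embeddings = model(["La vie est belle"], return_tokens=True)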