speechbrain.lobes.models.huggingface_transformers
High-level processing blocks.
This subpackage gathers higher-level blocks, or “lobes”, for HuggingFace Transformers.
- This lobe enables the integration of pretrained discrete SSL models (hubert, wavlm, wav2vec) for extracting semantic tokens from the output of SSL layers.
- This lobe enables the integration of huggingface pretrained EnCodec.
- This lobe enables the integration of the huggingface pretrained GPT2LMHeadModel.
- This lobe enables the integration of huggingface pretrained hubert models.
- This lobe is the interface for huggingface transformers models. It enables loading config and model via AutoConfig & AutoModel.
- This lobe enables the integration of huggingface pretrained LaBSE models.
- This lobe enables the integration of huggingface pretrained mBART models.
- This lobe enables the integration of huggingface pretrained NLLB models.
- This lobe enables the integration of generic huggingface pretrained text encoders (e.g. BERT).
- This lobe enables the integration of huggingface pretrained wav2vec2 models.
- This lobe enables the integration of huggingface pretrained wavlm models.
- This lobe enables the integration of huggingface pretrained wav2vec2 models.
- This lobe enables the integration of huggingface pretrained whisper models.
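These lobes share a common usage pattern: construct the wrapper from a HuggingFace checkpoint identifier and a local save path, then call it like any other SpeechBrain/PyTorch module. The following is a minimal sketch for the wav2vec2 lobe; it assumes a Wav2Vec2 class in speechbrain.lobes.models.huggingface_transformers.wav2vec2 with source, save_path, and freeze arguments, and the checkpoint name, cache directory, and tensor shapes are only illustrative:

    # Minimal usage sketch. Assumptions: the Wav2Vec2 wrapper class and its
    # source/save_path/freeze arguments; the checkpoint, cache directory,
    # and shapes below are illustrative, not taken from this page.
    import torch
    from speechbrain.lobes.models.huggingface_transformers.wav2vec2 import Wav2Vec2

    model = Wav2Vec2(
        source="facebook/wav2vec2-base-960h",    # HuggingFace checkpoint to wrap
        save_path="pretrained_models/wav2vec2",  # local cache for downloaded weights
        freeze=True,                             # use the encoder as a frozen feature extractor
    )

    wavs = torch.randn(4, 16000)  # batch of 4 one-second waveforms at 16 kHz
    feats = model(wavs)           # frame-level features, roughly (batch, frames, hidden_dim)
    print(feats.shape)

The text-oriented lobes (e.g. GPT2, mBART, NLLB, LaBSE, the generic text encoder) are generally constructed the same way but consume token ids rather than waveforms.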