speechbrain.pretrained.interfaces module¶
Defines interfaces for simple inference with pretrained models
- Authors:
Aku Rouhe 2021
Peter Plantinga 2021
Loren Lugosch 2020
Mirco Ravanelli 2020
Titouan Parcollet 2021
Summary¶
Classes:
- EncoderClassifier: A ready-to-use class for utterance-level classification (e.g., speaker-id, language-id, emotion recognition, keyword spotting, etc.).
- EncoderDecoderASR: A ready-to-use Encoder-Decoder ASR model.
- EndToEndSLU: An end-to-end SLU model.
- Pretrained: Takes a trained model and makes predictions on new data.
- SepformerSeparation: A ready-to-use speech separation model.
- SpeakerRecognition: A ready-to-use model for speaker recognition.
- SpectralMaskEnhancement: A ready-to-use model for speech enhancement.
Reference¶
- class speechbrain.pretrained.interfaces.Pretrained(modules=None, hparams=None, run_opts=None, freeze_params=True)[source]¶
Bases:
object
Takes a trained model and makes predictions on new data.
This is a base class which handles some common boilerplate. It intentionally has an interface similar to Brain, since these base classes handle similar things. Subclasses of Pretrained should implement the actual logic of how the pretrained system runs, and add methods with descriptive names (e.g. transcribe_file() for ASR).
- Parameters
modules (dict of str:torch.nn.Module pairs) – The Torch modules that make up the learned system. These can be treated in special ways (put on the right device, frozen, etc.).
hparams (dict) – Each key:value pair should consist of a string key and a hyperparameter that is used within the overridden methods. These will be accessible via an hparams attribute, using "dot" notation: e.g., self.hparams.model(x).
run_opts (dict) – Options parsed from the command line. See speechbrain.parse_arguments(). The options supported here are: device, data_parallel_count, data_parallel_backend, distributed_launch, distributed_backend, jit_module_keys. A device-placement sketch follows this parameter list.
freeze_params (bool) – Whether to freeze parameters (requires_grad=False) or not. Normally in inference you want to freeze the params. Also calls .eval() on all modules.
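As a minimal sketch of the run_opts mechanism (this assumes keyword arguments passed to from_hparams reach the constructor, uses the EncoderDecoderASR class documented below, and assumes a CUDA-capable machine):
>>> # Hypothetical: place the pretrained modules on GPU at load time
>>> asr = EncoderDecoderASR.from_hparams(
...     source="speechbrain/asr-crdnn-rnnlm-librispeech",
...     run_opts={"device": "cuda"},
... )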
- HPARAMS_NEEDED = []¶
- MODULES_NEEDED = []¶
- load_audio(path, savedir='.')[source]¶
Load an audio file with this model's input spec
When using a speech model, it is important to use the same type of data as was used to train the model. This means, for example, using the same sampling rate and number of channels. It is, however, possible to convert a file from a higher sampling rate to a lower one (downsampling). Similarly, it is simple to downmix a stereo file to mono. The path can be a local path, a web URL, or a link to a huggingface repo.
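For instance, a minimal sketch of preparing a file whose sampling rate differs from the model's (this assumes torchaudio is installed and that the model expects 16 kHz audio, which is not guaranteed for every model):
>>> import torchaudio
>>> signal, fs = torchaudio.load("samples/audio_samples/example1.wav")
>>> if fs != 16000:  # downsample to the assumed model rate
...     signal = torchaudio.transforms.Resample(orig_freq=fs, new_freq=16000)(signal)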
- classmethod from_hparams(source, hparams_file='hyperparams.yaml', overrides={}, savedir=None, **kwargs)[source]¶
Fetch and load from an outside source based on a HyperPyYAML file
The source can be a location on the filesystem or online/huggingface
The hyperparams file should contain a “modules” key, which is a dictionary of torch modules used for computation.
The hyperparams file should contain a “pretrainer” key, which is a speechbrain.utils.parameter_transfer.Pretrainer
- Parameters
source (str) – The location to use for finding the model. See speechbrain.pretrained.fetching.fetch for details.
hparams_file (str) – The name of the hyperparameters file to use for constructing the modules necessary for inference. Must contain two keys: "modules" and "pretrainer", as described.
overrides (dict) – Any changes to make to the hparams file when it is loaded.
savedir (str or Path) – Where to put the pretraining material. If not given, will use ./pretrained_models/<class-name>-hash(source).
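A minimal usage sketch of from_hparams (the override key beam_size is hypothetical; whether it exists depends on the hparams file of the chosen model):
>>> from speechbrain.pretrained import EncoderDecoderASR
>>> asr_model = EncoderDecoderASR.from_hparams(
...     source="speechbrain/asr-crdnn-rnnlm-librispeech",
...     savedir="pretrained_models/asr-crdnn-rnnlm-librispeech",
...     overrides={"beam_size": 10},  # hypothetical override
... )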
- class speechbrain.pretrained.interfaces.EndToEndSLU(*args, **kwargs)[source]¶
Bases:
speechbrain.pretrained.interfaces.Pretrained
An end-to-end SLU model.
The class can be used either to run only the encoder (encode()) to extract features or to run the entire model (decode()) to map the speech to its semantics.
Example
>>> from speechbrain.pretrained import EndToEndSLU
>>> tmpdir = getfixture("tmpdir")
>>> slu_model = EndToEndSLU.from_hparams(
...     source="speechbrain/slu-timers-and-such-direct-librispeech-asr",
...     savedir=tmpdir,
... )
>>> slu_model.decode_file("samples/audio_samples/example6.wav")
"{'intent': 'SimpleMath', 'slots': {'number1': 37.67, 'number2': 75.7, 'op': ' minus '}}"
- HPARAMS_NEEDED = ['tokenizer', 'asr_model_source']¶
- MODULES_NEEDED = ['slu_enc', 'beam_searcher']¶
- decode_file(path)[source]¶
Maps the given audio file to a string representing the semantic dictionary for the utterance.
- encode_batch(wavs, wav_lens)[source]¶
Encodes the input audio into a sequence of hidden states
- Parameters
wavs (torch.tensor) – Batch of waveforms [batch, time, channels] or [batch, time] depending on the model.
wav_lens (torch.tensor) – Lengths of the waveforms relative to the longest one in the batch, tensor of shape [batch]. The longest one should have relative length 1.0 and others len(waveform) / max_length. Used for ignoring padding.
- Returns
The encoded batch
- Return type
torch.tensor
- decode_batch(wavs, wav_lens)[source]¶
Maps the input audio to its semantics
- Parameters
wavs (torch.tensor) – Batch of waveforms [batch, time, channels] or [batch, time] depending on the model.
wav_lens (torch.tensor) – Lengths of the waveforms relative to the longest one in the batch, tensor of shape [batch]. The longest one should have relative length 1.0 and others len(waveform) / max_length. Used for ignoring padding.
- Returns
list – Each waveform in the batch decoded.
tensor – Each predicted token id.
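Since the wav_lens convention recurs across all the batch methods in this module, here is a minimal sketch of building a padded batch and its relative lengths (the tensor sizes are illustrative only):
>>> import torch
>>> wav1, wav2 = torch.randn(16000), torch.randn(12000)
>>> max_len = max(wav1.shape[0], wav2.shape[0])
>>> wavs = torch.zeros(2, max_len)  # zero-padded batch [batch, time]
>>> wavs[0, : wav1.shape[0]] = wav1
>>> wavs[1, : wav2.shape[0]] = wav2
>>> # Relative lengths: the longest waveform gets 1.0
>>> wav_lens = torch.tensor([wav1.shape[0], wav2.shape[0]]) / max_len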
- class speechbrain.pretrained.interfaces.EncoderDecoderASR(*args, **kwargs)[source]¶
Bases:
speechbrain.pretrained.interfaces.Pretrained
A ready-to-use Encoder-Decoder ASR model
The class can be used either to run only the encoder (encode()) to extract features or to run the entire encoder-decoder model (transcribe()) to transcribe speech. The given YAML must contain the fields specified in the *_NEEDED[] lists.
Example
>>> from speechbrain.pretrained import EncoderDecoderASR
>>> tmpdir = getfixture("tmpdir")
>>> asr_model = EncoderDecoderASR.from_hparams(
...     source="speechbrain/asr-crdnn-rnnlm-librispeech",
...     savedir=tmpdir,
... )
>>> asr_model.transcribe_file("samples/audio_samples/example2.flac")
"MY FATHER HAS REVEALED THE CULPRIT'S NAME"
- HPARAMS_NEEDED = ['tokenizer']¶
- MODULES_NEEDED = ['encoder', 'decoder']¶
- encode_batch(wavs, wav_lens)[source]¶
Encodes the input audio into a sequence of hidden states
The waveforms should already be in the model’s desired format. You can call:
normalized = EncoderDecoderASR.normalizer(signal, sample_rate)
to get a correctly converted signal in most cases.
- Parameters
wavs (torch.tensor) – Batch of waveforms [batch, time, channels] or [batch, time] depending on the model.
wav_lens (torch.tensor) – Lengths of the waveforms relative to the longest one in the batch, tensor of shape [batch]. The longest one should have relative length 1.0 and others len(waveform) / max_length. Used for ignoring padding.
- Returns
The encoded batch
- Return type
torch.tensor
- transcribe_batch(wavs, wav_lens)[source]¶
Transcribes the input audio into a sequence of words
The waveforms should already be in the model’s desired format. You can call:
normalized = EncoderDecoderASR.normalizer(signal, sample_rate)
to get a correctly converted signal in most cases.
- Parameters
wavs (torch.tensor) – Batch of waveforms [batch, time, channels] or [batch, time] depending on the model.
wav_lens (torch.tensor) – Lengths of the waveforms relative to the longest one in the batch, tensor of shape [batch]. The longest one should have relative length 1.0 and others len(waveform) / max_length. Used for ignoring padding.
- Returns
list – Each waveform in the batch transcribed.
tensor – Each predicted token id.
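A minimal sketch of transcribing a single pre-loaded signal (reusing the asr_model from the example above; a batch of one gets relative length 1.0):
>>> import torch, torchaudio
>>> signal, fs = torchaudio.load("samples/audio_samples/example2.flac")
>>> words, tokens = asr_model.transcribe_batch(signal, torch.tensor([1.0]))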
- class speechbrain.pretrained.interfaces.EncoderClassifier(*args, **kwargs)[source]¶
Bases:
speechbrain.pretrained.interfaces.Pretrained
A ready-to-use class for utterance-level classification (e.g., speaker-id, language-id, emotion recognition, keyword spotting, etc.).
The class assumes that an encoder called "embedding_model" and a model called "classifier" are defined in the yaml file. If you want to convert the predicted index into a corresponding text label, please provide the path of the label_encoder in a variable called "lab_encoder_file" within the yaml.
The class can be used either to run only the encoder (encode_batch()) to extract embeddings or to run a classification step (classify_batch()).
Example
>>> import torchaudio
>>> from speechbrain.pretrained import EncoderClassifier
>>> # Model is downloaded from the speechbrain HuggingFace repo
>>> tmpdir = getfixture("tmpdir")
>>> classifier = EncoderClassifier.from_hparams(
...     source="speechbrain/spkrec-ecapa-voxceleb",
...     savedir=tmpdir,
... )
>>> # Compute embeddings
>>> signal, fs = torchaudio.load("samples/audio_samples/example1.wav")
>>> embeddings = classifier.encode_batch(signal)
>>> # Classification
>>> prediction = classifier.classify_batch(signal)
- MODULES_NEEDED = ['compute_features', 'mean_var_norm', 'embedding_model', 'classifier']¶
- encode_batch(wavs, wav_lens=None, normalize=False)[source]¶
Encodes the input audio into a single vector embedding.
The waveforms should already be in the model’s desired format. You can call:
normalized = <this>.normalizer(signal, sample_rate)
to get a correctly converted signal in most cases.
- Parameters
wavs (torch.tensor) – Batch of waveforms [batch, time, channels] or [batch, time] depending on the model. Make sure the sample rate is fs=16000 Hz.
wav_lens (torch.tensor) – Lengths of the waveforms relative to the longest one in the batch, tensor of shape [batch]. The longest one should have relative length 1.0 and others len(waveform) / max_length. Used for ignoring padding.
normalize (bool) – If True, it normalizes the embeddings with the statistics contained in mean_var_norm_emb.
- Returns
The encoded batch
- Return type
torch.tensor
- classify_batch(wavs, wav_lens=None)[source]¶
Performs classification on top of the encoded features.
It returns the posterior probabilities, the index and, if the label encoder is specified, also the text label.
- Parameters
wavs (torch.tensor) – Batch of waveforms [batch, time, channels] or [batch, time] depending on the model. Make sure the sample rate is fs=16000 Hz.
wav_lens (torch.tensor) – Lengths of the waveforms relative to the longest one in the batch, tensor of shape [batch]. The longest one should have relative length 1.0 and others len(waveform) / max_length. Used for ignoring padding.
- Returns
out_prob – The log posterior probabilities of each class ([batch, N_class])
score – It is the value of the log-posterior for the best class ([batch,])
index – The indexes of the best class ([batch,])
text_lab – List with the text labels corresponding to the indexes. (label encoder should be provided).
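A minimal sketch of unpacking these four outputs (reusing the classifier and signal from the class example above; text_lab is only meaningful when a label encoder is provided):
>>> out_prob, score, index, text_lab = classifier.classify_batch(signal)
>>> best = index[0].item()  # integer id of the most likely class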
- classify_file(path)[source]¶
Classifies the given audio file into the given set of labels.
- Parameters
path (str) – Path to audio file to classify.
- Returns
out_prob – The log posterior probabilities of each class ([batch, N_class])
score – It is the value of the log-posterior for the best class ([batch,])
index – The indexes of the best class ([batch,])
text_lab – List with the text labels corresponding to the indexes. (label encoder should be provided).
- class speechbrain.pretrained.interfaces.SpeakerRecognition(*args, **kwargs)[source]¶
Bases:
speechbrain.pretrained.interfaces.EncoderClassifier
A ready-to-use model for speaker recognition. It can be used to perform speaker verification with verify_batch().
Example
>>> import torchaudio
>>> from speechbrain.pretrained import SpeakerRecognition
>>> # Model is downloaded from the speechbrain HuggingFace repo
>>> tmpdir = getfixture("tmpdir")
>>> verification = SpeakerRecognition.from_hparams(
...     source="speechbrain/spkrec-ecapa-voxceleb",
...     savedir=tmpdir,
... )
>>> # Perform verification
>>> signal, fs = torchaudio.load("samples/audio_samples/example1.wav")
>>> signal2, fs = torchaudio.load("samples/audio_samples/example2.flac")
>>> score, prediction = verification.verify_batch(signal, signal2)
- MODULES_NEEDED = ['compute_features', 'mean_var_norm', 'embedding_model', 'mean_var_norm_emb']¶
- verify_batch(wavs1, wavs2, wav1_lens=None, wav2_lens=None, threshold=0.25)[source]¶
Performs speaker verification with cosine distance.
It returns the score and the decision (0 = different speakers, 1 = same speaker).
- Parameters
wavs1 (Torch.Tensor) – Tensor containing the speech waveform1 (batch, time). Make sure the sample rate is fs=16000 Hz.
wavs2 (Torch.Tensor) – Tensor containing the speech waveform2 (batch, time). Make sure the sample rate is fs=16000 Hz.
wav1_lens (Torch.Tensor) – Tensor containing the relative length for each sentence in the batch (e.g., [0.8, 0.6, 1.0]).
wav2_lens (Torch.Tensor) – Tensor containing the relative length for each sentence in the batch (e.g., [0.8, 0.6, 1.0]).
threshold (Float) – Threshold applied to the cosine distance to decide if the speaker is different (0) or the same (1).
- Returns
score – The score associated to the binary verification output (cosine distance).
prediction – The prediction is 1 if the two signals in input are from the same speaker and 0 otherwise.
- verify_files(path_x, path_y)[source]¶
Speaker verification with cosine distance
Returns the score and the decision (0 = different speakers, 1 = same speaker).
- Returns
score – The score associated to the binary verification output (cosine distance).
prediction – The prediction is 1 if the two signals in input are from the same speaker and 0 otherwise.
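A minimal file-based sketch (reusing the verification model from the class example above; the sample paths are the ones used elsewhere in this module):
>>> score, prediction = verification.verify_files(
...     "samples/audio_samples/example1.wav",
...     "samples/audio_samples/example2.flac",
... )
>>> same_speaker = bool(prediction)  # True if both files share a speaker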
- class speechbrain.pretrained.interfaces.SepformerSeparation(modules=None, hparams=None, run_opts=None, freeze_params=True)[source]¶
Bases:
speechbrain.pretrained.interfaces.Pretrained
A “ready-to-use” speech separation model.
Uses the Sepformer architecture.
Example
>>> tmpdir = getfixture("tmpdir")
>>> model = SepformerSeparation.from_hparams(
...     source="speechbrain/sepformer-wsj02mix",
...     savedir=tmpdir)
>>> mix = torch.randn(1, 400)
>>> est_sources = model.separate_batch(mix)
>>> print(est_sources.shape)
torch.Size([1, 400, 2])
- MODULES_NEEDED = ['encoder', 'masknet', 'decoder']¶
- separate_batch(mix)[source]¶
Run source separation on batch of audio.
- Parameters
mix (torch.tensor) – The mixture of sources.
- Returns
Separated sources
- Return type
tensor
- separate_file(path, savedir='.')[source]¶
Separate sources from file.
- Parameters
path (str) – Path to file which has a mixture of sources. It can be a local path, a web url, or a huggingface repo.
savedir (path) – Path where to store the wav signals (when downloaded from the web).
- Returns
Separated sources
- Return type
tensor
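A minimal sketch of separating a file and writing each estimated source to disk (reusing the model from the class example above; the mixture path is hypothetical, and the 8 kHz rate assumes a WSJ0-2mix-style model):
>>> import torchaudio
>>> est_sources = model.separate_file("path/to/mixture.wav")  # hypothetical path
>>> # est_sources has shape [batch, time, n_sources]
>>> for i in range(est_sources.shape[-1]):
...     torchaudio.save(f"source{i}.wav", est_sources[0, :, i].unsqueeze(0).cpu(), 8000)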
- class speechbrain.pretrained.interfaces.SpectralMaskEnhancement(modules=None, hparams=None, run_opts=None, freeze_params=True)[source]¶
Bases:
speechbrain.pretrained.interfaces.Pretrained
A ready-to-use model for speech enhancement.
- Parameters
See Pretrained.
Example
>>> import torchaudio
>>> from speechbrain.pretrained import SpectralMaskEnhancement
>>> # Model is downloaded from the speechbrain HuggingFace repo
>>> tmpdir = getfixture("tmpdir")
>>> enhancer = SpectralMaskEnhancement.from_hparams(
...     source="speechbrain/mtl-mimic-voicebank",
...     savedir=tmpdir,
... )
>>> noisy, fs = torchaudio.load("samples/audio_samples/example_noisy.wav")
>>> # Channel dimension is interpreted as batch dimension here
>>> enhanced = enhancer.enhance_batch(noisy)
- HPARAMS_NEEDED = ['compute_stft', 'spectral_magnitude', 'resynth']¶
- MODULES_NEEDED = ['enhance_model']¶
- compute_features(wavs)[source]¶
Compute the log spectral magnitude features for masking.
- Parameters
wavs (torch.tensor) – A batch of waveforms to convert to log spectral mags.
- enhance_batch(noisy, lengths=None)[source]¶
Enhance a batch of noisy waveforms.
- Parameters
noisy (torch.tensor) – A batch of waveforms to perform enhancement on.
lengths (torch.tensor) – The lengths of the waveforms if the enhancement model handles them.
- Returns
A batch of enhanced waveforms of the same shape as input.
- Return type
torch.tensor
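A minimal end-to-end sketch that enhances a file and writes the result (reusing the enhancer, noisy signal, and fs from the class example above; a single utterance gets relative length 1.0):
>>> import torch, torchaudio
>>> enhanced = enhancer.enhance_batch(noisy, lengths=torch.tensor([1.0]))
>>> torchaudio.save("enhanced.wav", enhanced.cpu(), fs)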