speechbrain.pretrained.interfaces module

Defines interfaces for simple inference with pretrained models

Authors:
  • Aku Rouhe 2021

  • Peter Plantinga 2021

  • Loren Lugosch 2020

  • Mirco Ravanelli 2020

  • Titouan Parcollet 2021

  • Abdel Heba 2021

  • Andreas Nautsch 2022, 2023

  • Pooneh Mousavi 2023

Summary

Classes:

AudioClassifier

A ready-to-use class for utterance-level classification (e.g., speaker-id, language-id, emotion recognition, keyword spotting, etc.).

EncodeDecodePipelineMixin

A mixin for pretrained models that makes it possible to specify an encoding pipeline and a decoding pipeline

EncoderASR

A ready-to-use Encoder ASR model

EncoderClassifier

A ready-to-use class for utterance-level classification (e.g., speaker-id, language-id, emotion recognition, keyword spotting, etc.).

EncoderDecoderASR

A ready-to-use Encoder-Decoder ASR model

EndToEndSLU

An end-to-end SLU model.

FastSpeech2

A ready-to-use wrapper for FastSpeech2 (text -> mel_spec).

GraphemeToPhoneme

A pretrained model implementation for Grapheme-to-Phoneme (G2P) models that take raw natural language text as input and return the corresponding phoneme sequence.

HIFIGAN

A ready-to-use wrapper for HiFiGAN (mel_spec -> waveform).

PIQAudioInterpreter

This class implements the interface for the PIQ posthoc interpreter for an audio classifier.

Pretrained

Takes a trained model and makes predictions on new data.

SNREstimator

A "ready-to-use" SNR estimator.

SepformerSeparation

A "ready-to-use" speech separation model.

SpeakerRecognition

A ready-to-use model for speaker recognition.

SpectralMaskEnhancement

A ready-to-use model for speech enhancement.

Speech_Emotion_Diarization

A ready-to-use SED interface (audio -> emotions and their durations)

Tacotron2

A ready-to-use wrapper for Tacotron2 (text -> mel_spec).

VAD

A ready-to-use class for Voice Activity Detection (VAD) using a pre-trained model.

WaveformEncoder

A ready-to-use waveformEncoder model

WaveformEnhancement

A ready-to-use model for speech enhancement.

WhisperASR

A ready-to-use Whisper ASR model

Functions:

foreign_class

Fetch and load an interface from an outside source

Reference

speechbrain.pretrained.interfaces.foreign_class(source, hparams_file='hyperparams.yaml', pymodule_file='custom.py', classname='CustomInterface', overrides={}, savedir=None, use_auth_token=False, download_only=False, **kwargs)[source]

Fetch and load an interface from an outside source

The source can be a location on the filesystem or online/huggingface

The pymodule file should contain a class with the given classname. An instance of that class is returned. The idea is to have a custom Pretrained subclass in the file. The pymodule file is also added to the python path before the Hyperparams YAML file is loaded, so it can contain any custom implementations that are needed.

The hyperparams file should contain a “modules” key, which is a dictionary of torch modules used for computation.

The hyperparams file should contain a “pretrainer” key, which is a speechbrain.utils.parameter_transfer.Pretrainer

Parameters:
  • source (str or Path or FetchSource) – The location to use for finding the model. See speechbrain.pretrained.fetching.fetch for details.

  • hparams_file (str) – The name of the hyperparameters file to use for constructing the modules necessary for inference. Must contain two keys: “modules” and “pretrainer”, as described.

  • pymodule_file (str) – The name of the Python file that should be fetched.

  • classname (str) – The name of the class of which an instance is created and returned.

  • overrides (dict) – Any changes to make to the hparams file when it is loaded.

  • savedir (str or Path) – Where to put the pretraining material. If not given, will use ./pretrained_models/<class-name>-hash(source).

  • use_auth_token (bool (default: False)) – If True, Hugging Face's auth_token will be used to load private models from the HuggingFace Hub; the default is False because the majority of models are public.

  • download_only (bool (default: False)) – If true, class and instance creation is skipped.

Returns:

An instance of a class with the given classname from the given pymodule file.

Return type:

object
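
A minimal usage sketch. The source, pymodule_file and classname values below are placeholders for your own interface, not names from this module.

>>> from speechbrain.pretrained.interfaces import foreign_class
>>> # Placeholders: point source/pymodule_file/classname at a repo that ships
>>> # a custom Pretrained subclass in the given Python file.
>>> custom_model = foreign_class(
...     source="your-org/your-model",
...     pymodule_file="custom_interface.py",
...     classname="CustomInterface",
... )
>>> # custom_model is an instance of CustomInterface; call whatever inference
>>> # methods that class defines.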

class speechbrain.pretrained.interfaces.Pretrained(modules=None, hparams=None, run_opts=None, freeze_params=True)[source]

Bases: Module

Takes a trained model and makes predictions on new data.

This is a base class which handles some common boilerplate. It intentionally has an interface similar to Brain - these base classes handle similar things.

Subclasses of Pretrained should implement the actual logic of how the pretrained system runs, and add methods with descriptive names (e.g. transcribe_file() for ASR).

Pretrained is a torch.nn.Module so that methods like .to() or .eval() can work. Subclasses should provide a suitable forward() implementation: by convention, it should be a method that takes a batch of audio signals and runs the full model (as applicable).

Parameters:
  • modules (dict of str:torch.nn.Module pairs) – The Torch modules that make up the learned system. These can be treated in special ways (put on the right device, frozen, etc.). These are available as attributes under self.mods, like self.mods.model(x)

  • hparams (dict) – Each key:value pair should consist of a string key and a hyperparameter that is used within the overridden methods. These will be accessible via an hparams attribute, using “dot” notation: e.g., self.hparams.model(x).

  • run_opts (dict) –

    Options parsed from the command line. See speechbrain.parse_arguments(). The options supported here are:

    • device

    • data_parallel_count

    • data_parallel_backend

    • distributed_launch

    • distributed_backend

    • jit_module_keys

  • freeze_params (bool) – To freeze (requires_grad=False) parameters or not. Normally in inference you want to freeze the params. Also calls .eval() on all modules.

HPARAMS_NEEDED = []
MODULES_NEEDED = []
load_audio(path, savedir='audio_cache', **kwargs)[source]

Load an audio file with this model’s input spec

When using a speech model, it is important to use the same type of data, as was used to train the model. This means for example using the same sampling rate and number of channels. It is, however, possible to convert a file from a higher sampling rate to a lower one (downsampling). Similarly, it is simple to downmix a stereo file to mono. The path can be a local path, a web url, or a link to a huggingface repo.
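
A minimal sketch of this conversion, assuming an EncoderDecoderASR instance (a subclass defined further below); any Pretrained subclass exposes load_audio in the same way.

>>> from speechbrain.pretrained import EncoderDecoderASR
>>> model = EncoderDecoderASR.from_hparams(
...     source="speechbrain/asr-crdnn-rnnlm-librispeech",
...     savedir="pretrained_models/asr-crdnn-rnnlm-librispeech",
... )
>>> # Resampled/downmixed to the model's input spec; works for local paths,
>>> # web URLs, and files inside HuggingFace repos.
>>> waveform = model.load_audio("tests/samples/single-mic/example2.flac")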

classmethod from_hparams(source, hparams_file='hyperparams.yaml', pymodule_file='custom.py', overrides={}, savedir=None, use_auth_token=False, revision=None, download_only=False, **kwargs)[source]

Fetch and load from an outside source based on a HyperPyYAML file

The source can be a location on the filesystem or online/huggingface

You can use the pymodule_file to include any custom implementations that are needed: if that file exists, then its location is added to sys.path before Hyperparams YAML is loaded, so it can be referenced in the YAML.

The hyperparams file should contain a “modules” key, which is a dictionary of torch modules used for computation.

The hyperparams file should contain a “pretrainer” key, which is a speechbrain.utils.parameter_transfer.Pretrainer

Parameters:
  • source (str or Path or FetchSource) – The location to use for finding the model. See speechbrain.pretrained.fetching.fetch for details.

  • hparams_file (str) – The name of the hyperparameters file to use for constructing the modules necessary for inference. Must contain two keys: “modules” and “pretrainer”, as described.

  • pymodule_file (str) – A Python file can be fetched. This allows any custom implementations to be included. The file’s location is added to sys.path before the hyperparams YAML file is loaded, so it can be referenced in YAML. This is optional, but has a default: “custom.py”. If the default file is not found, this is simply ignored, but if you give a different filename, then this will raise in case the file is not found.

  • overrides (dict) – Any changes to make to the hparams file when it is loaded.

  • savedir (str or Path) – Where to put the pretraining material. If not given, will use ./pretrained_models/<class-name>-hash(source).

  • use_auth_token (bool (default: False)) – If True, Hugging Face's auth_token will be used to load private models from the HuggingFace Hub; the default is False because the majority of models are public.

  • revision (str) – The model revision corresponding to the HuggingFace Hub model revision. This is particularly useful if you wish to pin your code to a particular version of a model hosted at HuggingFace.

  • download_only (bool (default: False)) – If true, class and instance creation is skipped.

training: bool
class speechbrain.pretrained.interfaces.EndToEndSLU(*args, **kwargs)[source]

Bases: Pretrained

An end-to-end SLU model.

The class can be used either to run only the encoder (encode()) to extract features or to run the entire model (decode()) to map the speech to its semantics.

Example

>>> from speechbrain.pretrained import EndToEndSLU
>>> tmpdir = getfixture("tmpdir")
>>> slu_model = EndToEndSLU.from_hparams(
...     source="speechbrain/slu-timers-and-such-direct-librispeech-asr",
...     savedir=tmpdir,
... )
>>> slu_model.decode_file("tests/samples/single-mic/example6.wav")
"{'intent': 'SimpleMath', 'slots': {'number1': 37.67, 'number2': 75.7, 'op': ' minus '}}"
HPARAMS_NEEDED = ['tokenizer', 'asr_model_source']
MODULES_NEEDED = ['slu_enc', 'beam_searcher']
decode_file(path, **kwargs)[source]

Maps the given audio file to a string representing the semantic dictionary for the utterance.

Parameters:

path (str) – Path to audio file to decode.

Returns:

The predicted semantics.

Return type:

str

encode_batch(wavs, wav_lens)[source]

Encodes the input audio into a sequence of hidden states

Parameters:
  • wavs (torch.Tensor) – Batch of waveforms [batch, time, channels] or [batch, time] depending on the model.

  • wav_lens (torch.Tensor) – Lengths of the waveforms relative to the longest one in the batch, tensor of shape [batch]. The longest one should have relative length 1.0 and others len(waveform) / max_length. Used for ignoring padding.

Returns:

The encoded batch

Return type:

torch.Tensor

decode_batch(wavs, wav_lens)[source]

Maps the input audio to its semantics

Parameters:
  • wavs (torch.Tensor) – Batch of waveforms [batch, time, channels] or [batch, time] depending on the model.

  • wav_lens (torch.Tensor) – Lengths of the waveforms relative to the longest one in the batch, tensor of shape [batch]. The longest one should have relative length 1.0 and others len(waveform) / max_length. Used for ignoring padding.

Returns:

  • list – Each waveform in the batch decoded.

  • tensor – Each predicted token id.
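
A minimal sketch of batch decoding, reusing the slu_model instance from the class example above; the relative length of 1.0 marks the single waveform as unpadded.

>>> import torch
>>> waveform = slu_model.load_audio("tests/samples/single-mic/example6.wav")
>>> semantics, tokens = slu_model.decode_batch(waveform.unsqueeze(0), torch.tensor([1.0]))
>>> # semantics[0] holds the semantics string for the first (only) utterance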

forward(wavs, wav_lens)[source]

Runs full decoding - note: no gradients through decoding

training: bool
class speechbrain.pretrained.interfaces.EncoderDecoderASR(*args, **kwargs)[source]

Bases: Pretrained

A ready-to-use Encoder-Decoder ASR model

The class can be used either to run only the encoder (encode()) to extract features or to run the entire encoder-decoder model (transcribe()) to transcribe speech. The given YAML must contain the fields specified in the *_NEEDED[] lists.

Example

>>> from speechbrain.pretrained import EncoderDecoderASR
>>> tmpdir = getfixture("tmpdir")
>>> asr_model = EncoderDecoderASR.from_hparams(
...     source="speechbrain/asr-crdnn-rnnlm-librispeech",
...     savedir=tmpdir,
... )
>>> asr_model.transcribe_file("tests/samples/single-mic/example2.flac")
"MY FATHER HAS REVEALED THE CULPRIT'S NAME"
HPARAMS_NEEDED = ['tokenizer']
MODULES_NEEDED = ['encoder', 'decoder']
transcribe_file(path, **kwargs)[source]

Transcribes the given audiofile into a sequence of words.

Parameters:

path (str) – Path to audio file which to transcribe.

Returns:

The audiofile transcription produced by this ASR system.

Return type:

str

encode_batch(wavs, wav_lens)[source]

Encodes the input audio into a sequence of hidden states

The waveforms should already be in the model’s desired format. You can call: normalized = EncoderDecoderASR.normalizer(signal, sample_rate) to get a correctly converted signal in most cases.

Parameters:
  • wavs (torch.Tensor) – Batch of waveforms [batch, time, channels] or [batch, time] depending on the model.

  • wav_lens (torch.Tensor) – Lengths of the waveforms relative to the longest one in the batch, tensor of shape [batch]. The longest one should have relative length 1.0 and others len(waveform) / max_length. Used for ignoring padding.

Returns:

The encoded batch

Return type:

torch.Tensor

transcribe_batch(wavs, wav_lens)[source]

Transcribes the input audio into a sequence of words

The waveforms should already be in the model’s desired format. You can call: normalized = EncoderDecoderASR.normalizer(signal, sample_rate) to get a correctly converted signal in most cases.

Parameters:
  • wavs (torch.Tensor) – Batch of waveforms [batch, time, channels] or [batch, time] depending on the model.

  • wav_lens (torch.Tensor) – Lengths of the waveforms relative to the longest one in the batch, tensor of shape [batch]. The longest one should have relative length 1.0 and others len(waveform) / max_length. Used for ignoring padding.

Returns:

  • list – Each waveform in the batch transcribed.

  • tensor – Each predicted token id.
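
A minimal sketch of batch transcription, reusing the asr_model instance from the class example above; load_audio handles resampling, and the relative length of 1.0 marks the single waveform as unpadded.

>>> import torch
>>> waveform = asr_model.load_audio("tests/samples/single-mic/example2.flac")
>>> batch = waveform.unsqueeze(0)  # [batch, time]
>>> transcripts, tokens = asr_model.transcribe_batch(batch, torch.tensor([1.0]))
>>> # transcripts[0] holds the text for the first (only) utterance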

forward(wavs, wav_lens)[source]

Runs full transcription - note: no gradients through decoding

training: bool
class speechbrain.pretrained.interfaces.WaveformEncoder(*args, **kwargs)[source]

Bases: Pretrained

A ready-to-use waveformEncoder model

It can be used to wrap different embedding models such as SSL ones (wav2vec2) or speaker ones (Xvector) etc. Two functions are available: encode_batch and encode_file. They can be used to obtain the embeddings directly from an audio file or from a batch of audio tensors respectively.

The given YAML must contain the fields specified in the *_NEEDED[] lists.

Example

>>> from speechbrain.pretrained import WaveformEncoder
>>> tmpdir = getfixture("tmpdir")
>>> ssl_model = WaveformEncoder.from_hparams(
...     source="speechbrain/ssl-wav2vec2-base-libri",
...     savedir=tmpdir,
... ) 
>>> ssl_model.encode_file("samples/audio_samples/example_fr.wav") 
MODULES_NEEDED = ['encoder']
encode_file(path, **kwargs)[source]

Encode the given audiofile into a sequence of embeddings.

Parameters:

path (str) – Path to audio file which to encode.

Returns:

The audiofile embeddings produced by this system.

Return type:

torch.Tensor

encode_batch(wavs, wav_lens)[source]

Encodes the input audio into a sequence of hidden states

The waveforms should already be in the model’s desired format.

Parameters:
  • wavs (torch.Tensor) – Batch of waveforms [batch, time, channels] or [batch, time] depending on the model.

  • wav_lens (torch.Tensor) – Lengths of the waveforms relative to the longest one in the batch, tensor of shape [batch]. The longest one should have relative length 1.0 and others len(waveform) / max_length. Used for ignoring padding.

Returns:

The encoded batch

Return type:

torch.Tensor

forward(wavs, wav_lens)[source]

Runs the encoder

training: bool
class speechbrain.pretrained.interfaces.EncoderASR(*args, **kwargs)[source]

Bases: Pretrained

A ready-to-use Encoder ASR model

The class can be used either to run only the encoder (encode()) to extract features or to run the entire encoder + decoder function model (transcribe()) to transcribe speech. The given YAML must contain the fields specified in the *_NEEDED[] lists.

Example

>>> from speechbrain.pretrained import EncoderASR
>>> tmpdir = getfixture("tmpdir")
>>> asr_model = EncoderASR.from_hparams(
...     source="speechbrain/asr-wav2vec2-commonvoice-fr",
...     savedir=tmpdir,
... ) 
>>> asr_model.transcribe_file("samples/audio_samples/example_fr.wav") 
HPARAMS_NEEDED = ['tokenizer', 'decoding_function']
MODULES_NEEDED = ['encoder']
transcribe_file(path, **kwargs)[source]

Transcribes the given audiofile into a sequence of words.

Parameters:

path (str) – Path to audio file which to transcribe.

Returns:

The audiofile transcription produced by this ASR system.

Return type:

str

encode_batch(wavs, wav_lens)[source]

Encodes the input audio into a sequence of hidden states

The waveforms should already be in the model’s desired format. You can call: normalized = EncoderASR.normalizer(signal, sample_rate) to get a correctly converted signal in most cases.

Parameters:
  • wavs (torch.Tensor) – Batch of waveforms [batch, time, channels] or [batch, time] depending on the model.

  • wav_lens (torch.Tensor) – Lengths of the waveforms relative to the longest one in the batch, tensor of shape [batch]. The longest one should have relative length 1.0 and others len(waveform) / max_length. Used for ignoring padding.

Returns:

The encoded batch

Return type:

torch.Tensor

transcribe_batch(wavs, wav_lens)[source]

Transcribes the input audio into a sequence of words

The waveforms should already be in the model’s desired format. You can call: normalized = EncoderASR.normalizer(signal, sample_rate) to get a correctly converted signal in most cases.

Parameters:
  • wavs (torch.Tensor) – Batch of waveforms [batch, time, channels] or [batch, time] depending on the model.

  • wav_lens (torch.Tensor) – Lengths of the waveforms relative to the longest one in the batch, tensor of shape [batch]. The longest one should have relative length 1.0 and others len(waveform) / max_length. Used for ignoring padding.

Returns:

  • list – Each waveform in the batch transcribed.

  • tensor – Each predicted token id.

forward(wavs, wav_lens)[source]

Runs the encoder

training: bool
class speechbrain.pretrained.interfaces.EncoderClassifier(*args, **kwargs)[source]

Bases: Pretrained

A ready-to-use class for utterance-level classification (e.g., speaker-id, language-id, emotion recognition, keyword spotting, etc.).

The class assumes that an encoder called “embedding_model” and a model called “classifier” are defined in the yaml file. If you want to convert the predicted index into a corresponding text label, please provide the path of the label_encoder in a variable called ‘lab_encoder_file’ within the yaml.

The class can be used either to run only the encoder (encode_batch()) to extract embeddings or to run a classification step (classify_batch()).

Example

>>> import torchaudio
>>> from speechbrain.pretrained import EncoderClassifier
>>> # Model is downloaded from the speechbrain HuggingFace repo
>>> tmpdir = getfixture("tmpdir")
>>> classifier = EncoderClassifier.from_hparams(
...     source="speechbrain/spkrec-ecapa-voxceleb",
...     savedir=tmpdir,
... )
>>> classifier.hparams.label_encoder.ignore_len()
>>> # Compute embeddings
>>> signal, fs = torchaudio.load("tests/samples/single-mic/example1.wav")
>>> embeddings = classifier.encode_batch(signal)
>>> # Classification
>>> prediction = classifier.classify_batch(signal)
MODULES_NEEDED = ['compute_features', 'mean_var_norm', 'embedding_model', 'classifier']
encode_batch(wavs, wav_lens=None, normalize=False)[source]

Encodes the input audio into a single vector embedding.

The waveforms should already be in the model’s desired format. You can call: normalized = <this>.normalizer(signal, sample_rate) to get a correctly converted signal in most cases.

Parameters:
  • wavs (torch.Tensor) – Batch of waveforms [batch, time, channels] or [batch, time] depending on the model. Make sure the sample rate is fs=16000 Hz.

  • wav_lens (torch.Tensor) – Lengths of the waveforms relative to the longest one in the batch, tensor of shape [batch]. The longest one should have relative length 1.0 and others len(waveform) / max_length. Used for ignoring padding.

  • normalize (bool) – If True, it normalizes the embeddings with the statistics contained in mean_var_norm_emb.

Returns:

The encoded batch

Return type:

torch.Tensor

classify_batch(wavs, wav_lens=None)[source]

Performs classification on the top of the encoded features.

It returns the posterior probabilities, the index and, if the label encoder is specified, also the text label.

Parameters:
  • wavs (torch.Tensor) – Batch of waveforms [batch, time, channels] or [batch, time] depending on the model. Make sure the sample rate is fs=16000 Hz.

  • wav_lens (torch.Tensor) – Lengths of the waveforms relative to the longest one in the batch, tensor of shape [batch]. The longest one should have relative length 1.0 and others len(waveform) / max_length. Used for ignoring padding.

Returns:

  • out_prob – The log posterior probabilities of each class ([batch, N_class])

  • score – It is the value of the log-posterior for the best class ([batch,])

  • index – The indexes of the best class ([batch,])

  • text_lab – List with the text labels corresponding to the indexes. (label encoder should be provided).

classify_file(path, **kwargs)[source]

Classifies the given audiofile into the given set of labels.

Parameters:

path (str) – Path to audio file to classify.

Returns:

  • out_prob – The log posterior probabilities of each class ([batch, N_class])

  • score – It is the value of the log-posterior for the best class ([batch,])

  • index – The indexes of the best class ([batch,])

  • text_lab – List with the text labels corresponding to the indexes. (label encoder should be provided).
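
A minimal sketch that unpacks the four outputs, reusing the classifier instance from the class example above.

>>> out_prob, score, index, text_lab = classifier.classify_file(
...     "tests/samples/single-mic/example1.wav"
... )
>>> # text_lab contains the predicted label(s) as text, provided the label
>>> # encoder is available in the hparams.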

forward(wavs, wav_lens=None)[source]

Runs the classification

training: bool
class speechbrain.pretrained.interfaces.SpeakerRecognition(*args, **kwargs)[source]

Bases: EncoderClassifier

A ready-to-use model for speaker recognition. It can be used to perform speaker verification with verify_batch().

Example

>>> import torchaudio
>>> from speechbrain.pretrained import SpeakerRecognition
>>> # Model is downloaded from the speechbrain HuggingFace repo
>>> tmpdir = getfixture("tmpdir")
>>> verification = SpeakerRecognition.from_hparams(
...     source="speechbrain/spkrec-ecapa-voxceleb",
...     savedir=tmpdir,
... )
>>> # Perform verification
>>> signal, fs = torchaudio.load("tests/samples/single-mic/example1.wav")
>>> signal2, fs = torchaudio.load("tests/samples/single-mic/example2.flac")
>>> score, prediction = verification.verify_batch(signal, signal2)
MODULES_NEEDED = ['compute_features', 'mean_var_norm', 'embedding_model', 'mean_var_norm_emb']
verify_batch(wavs1, wavs2, wav1_lens=None, wav2_lens=None, threshold=0.25)[source]

Performs speaker verification with cosine distance.

It returns the score and the decision (0 different speakers, 1 same speakers).

Parameters:
  • wavs1 (Torch.Tensor) – Tensor containing the speech waveform1 (batch, time). Make sure the sample rate is fs=16000 Hz.

  • wavs2 (Torch.Tensor) – Tensor containing the speech waveform2 (batch, time). Make sure the sample rate is fs=16000 Hz.

  • wav1_lens (Torch.Tensor) – Tensor containing the relative length for each sentence in the batch (e.g., [0.8 0.6 1.0]).

  • wav2_lens (Torch.Tensor) – Tensor containing the relative length for each sentence in the batch (e.g., [0.8 0.6 1.0]).

  • threshold (Float) – Threshold applied to the cosine distance to decide if the speaker is different (0) or the same (1).

Returns:

  • score – The score associated to the binary verification output (cosine distance).

  • prediction – The prediction is 1 if the two signals in input are from the same speaker and 0 otherwise.

verify_files(path_x, path_y, **kwargs)[source]

Speaker verification with cosine distance

Returns the score and the decision (0 different speakers, 1 same speakers).

Returns:

  • score – The score associated to the binary verification output (cosine distance).

  • prediction – The prediction is 1 if the two signals in input are from the same speaker and 0 otherwise.
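
A minimal sketch of file-based verification, reusing the verification instance from the class example above; extra keyword arguments are forwarded to load_audio.

>>> score, prediction = verification.verify_files(
...     "tests/samples/single-mic/example1.wav",
...     "tests/samples/single-mic/example2.flac",
... )
>>> # prediction is 1/True when the cosine score exceeds the threshold,
>>> # i.e. the two files are attributed to the same speaker.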

training: bool
class speechbrain.pretrained.interfaces.VAD(*args, **kwargs)[source]

Bases: Pretrained

A ready-to-use class for Voice Activity Detection (VAD) using a pre-trained model.

Example

>>> import torchaudio
>>> from speechbrain.pretrained import VAD
>>> # Model is downloaded from the speechbrain HuggingFace repo
>>> tmpdir = getfixture("tmpdir")
>>> VAD = VAD.from_hparams(
...     source="speechbrain/vad-crdnn-libriparty",
...     savedir=tmpdir,
... )
>>> # Perform VAD
>>> boundaries = VAD.get_speech_segments("tests/samples/single-mic/example1.wav")
HPARAMS_NEEDED = ['sample_rate', 'time_resolution', 'device']
MODULES_NEEDED = ['compute_features', 'mean_var_norm', 'model']
get_speech_prob_file(audio_file, large_chunk_size=30, small_chunk_size=10, overlap_small_chunk=False)[source]

Outputs the frame-level speech probability of the input audio file using the neural model specified in the hparam file. To make this code both parallelizable and scalable to long sequences, it uses a double-windowing approach. First, we sequentially read non-overlapping large chunks of the input signal. We then split the large chunks into smaller chunks and we process them in parallel.

Parameters:
  • audio_file (path) – Path of the audio file containing the recording. The file is read with torchaudio.

  • large_chunk_size (float) – Size (in seconds) of the large chunks that are read sequentially from the input audio file.

  • small_chunk_size (float) – Size (in seconds) of the small chunks extracted from the large ones. The audio signal is processed in parallel within the small chunks. Note that large_chunk_size/small_chunk_size must be an integer.

  • overlap_small_chunk (bool) – If True, it creates overlapped small chunks. The probabilities of the overlapped chunks are combined using Hamming windows.

Returns:

prob_vad – Tensor containing the frame-level speech probabilities for the input audio file.

Return type:

torch.Tensor

get_speech_prob_chunk(wavs, wav_lens=None)[source]

Outputs the frame-level posterior probability for the input audio chunks. Outputs close to zero refer to time steps with a low probability of speech activity, while outputs closer to one likely contain speech.

Parameters:
  • wavs (torch.Tensor) – Batch of waveforms [batch, time, channels] or [batch, time] depending on the model. Make sure the sample rate is fs=16000 Hz.

  • wav_lens (torch.Tensor) – Lengths of the waveforms relative to the longest one in the batch, tensor of shape [batch]. The longest one should have relative length 1.0 and others len(waveform) / max_length. Used for ignoring padding.

Returns:

The encoded batch

Return type:

torch.Tensor

apply_threshold(vad_prob, activation_th=0.5, deactivation_th=0.25)[source]

Scans the frame-level speech probabilities and applies a threshold on them. Speech starts when a value larger than activation_th is detected, while it ends when observing a value lower than the deactivation_th.

Parameters:
  • vad_prob (torch.Tensor) – Frame-level speech probabilities.

  • activation_th (float) – Threshold for starting a speech segment.

  • deactivation_th (float) – Threshold for ending a speech segment.

Returns:

vad_th – Tensor containing 1 for speech regions and 0 for non-speech regions.

Return type:

torch.Tensor

get_boundaries(prob_th, output_value='seconds')[source]

Computes the time boundaries where speech activity is detected. It takes in input frame-level binary decisions (1 for speech, 0 for non-speech) and outputs the begin/end second (or sample) of each detected speech region.

Parameters:
  • prob_th (torch.Tensor) – Frame-level binary decisions (1 for speech frame, 0 for a non-speech one). The tensor can be obtained from apply_threshold.

  • output_value ('seconds' or 'samples') – When the option ‘seconds’ is set, the returned boundaries are in seconds, otherwise, it reports them in samples.

Returns:

boundaries – Tensor containing the start second (or sample) of speech segments in even positions and their corresponding end in odd positions (e.g., [1.0, 1.5, 5.0, 6.0] means that we have two speech segments: one from 1.0 to 1.5 seconds and another from 5.0 to 6.0 seconds).

Return type:

torch.Tensor

merge_close_segments(boundaries, close_th=0.25)[source]

Merges segments that are closer to each other than the given threshold.

Parameters:
  • boundaries (torch.Tensor) – Tensor containing the speech boundaries. It can be derived using the get_boundaries method.

  • close_th (float) – If the distance between boundaries is smaller than close_th, the segments will be merged.

Returns:

The new boundaries with the merged segments.

Return type:

new_boundaries

remove_short_segments(boundaries, len_th=0.25)[source]

Removes segments that are too short.

Parameters:
  • boundaries (torch.Tensor) – Tensor containing the speech boundaries. It can be derived using the get_boundaries method.

  • len_th (float) – If the length of the segment is smaller than len_th, the segment is removed.

Returns:

The new boundaries without the short segments.

Return type:

new_boundaries

save_boundaries(boundaries, save_path=None, print_boundaries=True, audio_file=None)[source]

Saves the boundaries on a file (and/or prints them) in a readable format.

Parameters:
  • boundaries (torch.Tensor) – Tensor containing the speech boundaries. It can be derived using the get_boundaries method.

  • save_path (path) – Where to store the text file containing the speech/non-speech intervals.

  • print_boundaries (Bool) – If True, prints the speech/non-speech intervals to the standard output.

  • audio_file (path) – Path of the audio file containing the recording. The file is read with torchaudio. It is used here to detect the length of the signal.

energy_VAD(audio_file, boundaries, activation_th=0.5, deactivation_th=0.0, eps=1e-06)[source]

Applies energy-based VAD within the detected speech segments. The neural network VAD often creates longer segments and tends to merge segments that are close to each other.

The energy-based VAD post-processing can be useful for obtaining a fine-grained voice activity detection.

The energy VAD computes the energy within small chunks. The energy is normalized within the segment to have a mean of 0.5 and a standard deviation of 0.5, which helps in setting the energy threshold.

Parameters:
  • audio_file (path) – Path of the audio file containing the recording. The file is read with torchaudio.

  • boundaries (torch.Tensor) – Tensor containing the speech boundaries. It can be derived using the get_boundaries method.

  • activation_th (float) – A new speech segment is started if the energy is above activation_th.

  • deactivation_th (float) – The segment is considered ended when the energy is <= deactivation_th.

  • eps (float) – Small constant for numerical stability.

Returns:

The new boundaries that are post-processed by the energy VAD.

Return type:

new_boundaries

create_chunks(x, chunk_size=16384, chunk_stride=16384)[source]

Splits the input into smaller chunks of size chunk_size with stride chunk_stride. The chunks are concatenated over the batch axis.

Parameters:
  • x (torch.Tensor) – Signal to split into chunks.

  • chunk_size (int) – The size of each chunk.

  • chunk_stride (int) – The stride (hop) of each chunk.

Returns:

x – A new tensor with the chunks derived from the input signal.

Return type:

torch.Tensor

upsample_VAD(vad_out, audio_file, time_resolution=0.01)[source]

Upsamples the output of the VAD to help visualization. It creates a signal that is 1 when there is speech and 0 when there is no speech. The VAD signal has the same resolution as the input one and can be opened with it (e.g., using Audacity) to visually inspect the VAD regions.

Parameters:
  • vad_out (torch.Tensor) – Tensor containing 1 for each frame of speech and 0 for each non-speech frame.

  • audio_file (path) – The original audio file used to compute vad_out

  • time_resolution (float) – Time resolution of the vad_out signal.

Returns:

The upsampled version of the vad_out tensor.

Return type:

vad_signal

upsample_boundaries(boundaries, audio_file)[source]

Based on the input boundaries, this method creates a signal that is 1 when there is speech and 0 when there is no speech. The VAD signal has the same resolution as the input one and can be opened with it (e.g., using Audacity) to visually inspect the VAD regions.

Parameters:
  • boundaries (torch.Tensor) – Tensor containing the boundaries of the speech segments.

  • audio_file (path) – The original audio file used to compute vad_out

Returns:

The output vad signal with the same resolution of the input one.

Return type:

vad_signal

double_check_speech_segments(boundaries, audio_file, speech_th=0.5)[source]

Takes in input the boundaries of the detected speech segments and double checks (using the neural VAD) that they actually contain speech.

Parameters:
  • boundaries (torch.Tensor) – Tensor containing the boundaries of the speech segments.

  • audio_file (path) – The original audio file used to compute vad_out.

  • speech_th (float) – Threshold on the mean posterior probability over which speech is confirmed. Below that threshold, the segment is re-assigned to a non-speech region.

Returns:

The boundaries of the segments where speech activity is confirmed.

Return type:

new_boundaries

get_segments(boundaries, audio_file, before_margin=0.1, after_margin=0.1)[source]

Returns a list containing all the detected speech segments.

Parameters:
  • boundaries (torch.Tensor) – Tensor containing the boundaries of the speech segments.

  • audio_file (path) – The original audio file used to compute vad_out.

  • before_margin (float) – Used to cut the segment samples a bit before the detected boundary.

  • after_margin (float) – Used to cut the segment samples a bit after the detected boundary.

Returns:

segments – List containing the detected speech segments

Return type:

list

get_speech_segments(audio_file, large_chunk_size=30, small_chunk_size=10, overlap_small_chunk=False, apply_energy_VAD=False, double_check=True, close_th=0.25, len_th=0.25, activation_th=0.5, deactivation_th=0.25, en_activation_th=0.5, en_deactivation_th=0.0, speech_th=0.5)[source]

Detects speech segments within the input file. The input signal can be either a short or a long recording. The function computes the posterior probabilities on large chunks (e.g., 30 sec) that are read sequentially (to avoid storing big signals in memory). Each large chunk is, in turn, split into smaller chunks (e.g., 10 seconds) that are processed in parallel. The pipeline for detecting the speech segments is the following:

1. Compute posterior probabilities at the frame level.
2. Apply a threshold on the posterior probability.
3. Derive candidate speech segments on top of that.
4. Apply energy VAD within each candidate segment (optional).
5. Merge segments that are too close.
6. Remove segments that are too short.
7. Double-check speech segments (optional).

(A step-by-step sketch of this pipeline using the lower-level methods above is shown below, after the return description.)

Parameters:
  • audio_file (str) – Path to audio file.

  • large_chunk_size (float) – Size (in seconds) of the large chunks that are read sequentially from the input audio file.

  • small_chunk_size (float) – Size (in seconds) of the small chunks extracted from the large ones. The audio signal is processed in parallel within the small chunks. Note that large_chunk_size/small_chunk_size must be an integer.

  • overlap_small_chunk (bool) – If True, it creates overlapped small chunks (with 50% overlap). The probabilities of the overlapped chunks are combined using hamming windows.

  • apply_energy_VAD (bool) – If True, an energy-based VAD is applied to the detected speech segments. The neural network VAD often creates longer segments and tends to merge close segments together. The energy-based post-processing can be useful for obtaining a fine-grained voice activity detection. The energy thresholds are managed by en_activation_th and en_deactivation_th (see below).

  • double_check (bool) – If True, double checks (using the neural VAD) that the candidate speech segments actually contain speech. A threshold on the mean posterior probabilities provided by the neural network is applied based on the speech_th parameter (see below).

  • activation_th (float) – Threshold on the neural posteriors above which a speech segment starts.

  • deactivation_th (float) – Threshold on the neural posteriors below which a speech segment ends.

  • en_activation_th (float) – A new speech segment is started if the energy is above en_activation_th. This is active only if apply_energy_VAD is True.

  • en_deactivation_th (float) – The segment is considered ended when the energy is <= en_deactivation_th. This is active only if apply_energy_VAD is True.

  • speech_th (float) – Threshold on the mean posterior probability within the candidate speech segment. Below that threshold, the segment is re-assigned to a non-speech region. This is active only if double_check is True.

  • close_th (float) – If the distance between boundaries is smaller than close_th, the segments will be merged.

  • len_th (float) – If the length of the segment is smaller than len_th, the segment is removed.

Returns:

boundaries – Tensor containing the start second of speech segments in even positions and their corresponding end in odd positions (e.g., [1.0, 1.5, 5.0, 6.0] means that we have two speech segments: one from 1.0 to 1.5 seconds and another from 5.0 to 6.0 seconds).

Return type:

torch.Tensor
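
A minimal sketch of the same pipeline built from the lower-level methods documented above, reusing the VAD instance from the class example; the thresholds shown are simply the defaults, and the optional energy-VAD and double-check steps are omitted.

>>> audio_file = "tests/samples/single-mic/example1.wav"
>>> prob_chunks = VAD.get_speech_prob_file(audio_file)                 # frame-level posteriors
>>> prob_th = VAD.apply_threshold(prob_chunks)                         # thresholding
>>> boundaries = VAD.get_boundaries(prob_th)                           # candidate segments
>>> boundaries = VAD.merge_close_segments(boundaries, close_th=0.25)   # merge close segments
>>> boundaries = VAD.remove_short_segments(boundaries, len_th=0.25)    # drop short segments
>>> VAD.save_boundaries(boundaries, audio_file=audio_file)             # print readable intervals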

forward(wavs, wav_lens=None)[source]

Gets frame-level speech-activity predictions

training: bool
class speechbrain.pretrained.interfaces.SepformerSeparation(modules=None, hparams=None, run_opts=None, freeze_params=True)[source]

Bases: Pretrained

A “ready-to-use” speech separation model.

Uses Sepformer architecture.

Example

>>> tmpdir = getfixture("tmpdir")
>>> model = SepformerSeparation.from_hparams(
...     source="speechbrain/sepformer-wsj02mix",
...     savedir=tmpdir)
>>> mix = torch.randn(1, 400)
>>> est_sources = model.separate_batch(mix)
>>> print(est_sources.shape)
torch.Size([1, 400, 2])
MODULES_NEEDED = ['encoder', 'masknet', 'decoder']
separate_batch(mix)[source]

Run source separation on batch of audio.

Parameters:

mix (torch.Tensor) – The mixture of sources.

Returns:

Separated sources

Return type:

tensor

separate_file(path, savedir='audio_cache')[source]

Separate sources from file.

Parameters:
  • path (str) – Path to file which has a mixture of sources. It can be a local path, a web url, or a huggingface repo.

  • savedir (path) – Path where to store the wav signals (when downloaded from the web).

Returns:

Separated sources

Return type:

tensor
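
A minimal sketch of separating a file and writing each estimated source to disk, reusing the model instance from the class example above; the 8000 Hz sample rate matches the wsj0-2mix training data and is an assumption to adapt to your model's hparams.

>>> import torchaudio
>>> est_sources = model.separate_file("tests/samples/single-mic/example1.wav")
>>> # est_sources: [batch, time, n_sources]
>>> for i in range(est_sources.shape[2]):
...     torchaudio.save(f"source{i + 1}.wav", est_sources[:, :, i].detach().cpu(), 8000)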

forward(mix)[source]

Runs separation on the input mix

training: bool
class speechbrain.pretrained.interfaces.SpectralMaskEnhancement(modules=None, hparams=None, run_opts=None, freeze_params=True)[source]

Bases: Pretrained

A ready-to-use model for speech enhancement.

Parameters:

See Pretrained.

Example

>>> import torch
>>> from speechbrain.pretrained import SpectralMaskEnhancement
>>> # Model is downloaded from the speechbrain HuggingFace repo
>>> tmpdir = getfixture("tmpdir")
>>> enhancer = SpectralMaskEnhancement.from_hparams(
...     source="speechbrain/metricgan-plus-voicebank",
...     savedir=tmpdir,
... )
>>> enhanced = enhancer.enhance_file(
...     "speechbrain/metricgan-plus-voicebank/example.wav"
... )
HPARAMS_NEEDED = ['compute_stft', 'spectral_magnitude', 'resynth']
MODULES_NEEDED = ['enhance_model']
compute_features(wavs)[source]

Compute the log spectral magnitude features for masking.

Parameters:

wavs (torch.Tensor) – A batch of waveforms to convert to log spectral mags.

enhance_batch(noisy, lengths=None)[source]

Enhance a batch of noisy waveforms.

Parameters:
  • noisy (torch.Tensor) – A batch of waveforms to perform enhancement on.

  • lengths (torch.Tensor) – The lengths of the waveforms if the enhancement model handles them.

Returns:

A batch of enhanced waveforms of the same shape as input.

Return type:

torch.Tensor
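
A minimal sketch of batch enhancement, reusing the enhancer instance from the class example above; the length of 1.0 marks the single waveform as unpadded.

>>> import torch
>>> noisy = enhancer.load_audio("speechbrain/metricgan-plus-voicebank/example.wav")
>>> enhanced = enhancer.enhance_batch(noisy.unsqueeze(0), lengths=torch.tensor([1.0]))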

enhance_file(filename, output_filename=None, **kwargs)[source]

Enhance a wav file.

Parameters:
  • filename (str) – Location on disk to load file for enhancement.

  • output_filename (str) – If provided, writes enhanced data to this file.

training: bool
class speechbrain.pretrained.interfaces.EncodeDecodePipelineMixin[source]

Bases: object

A mixin for pretrained models that makes it possible to specify an encoding pipeline and a decoding pipeline

create_pipelines()[source]

Initializes the encode and decode pipeline

to_dict(data)[source]

Converts padded batches to dictionaries, leaves other data types as is

Parameters:

data (object) – a dictionary or a padded batch

Returns:

results – the dictionary

Return type:

dict

property batch_inputs

Determines whether the input pipeline operates on batches or individual examples (true means batched)

Returns:

batch_inputs

Return type:

bool

property input_use_padded_data

If turned on, raw PaddedData instances will be passed to the model. If turned off, only .data will be used

Returns:

result – whether padded data is used as is

Return type:

bool

property batch_outputs

Determines whether the output pipeline operates on batches or individual examples (true means batched)

Returns:

batch_outputs

Return type:

bool

encode_input(input)[source]

Encodes the inputs using the pipeline

Parameters:

input (dict) – the raw inputs

Returns:

results

Return type:

object

decode_output(output)[source]

Decodes the raw model outputs

Parameters:

output (tuple) – raw model outputs

Returns:

result – the output of the pipeline

Return type:

dict or list

class speechbrain.pretrained.interfaces.GraphemeToPhoneme(*args, **kwargs)[source]

Bases: Pretrained, EncodeDecodePipelineMixin

A pretrained model implementation for Grapheme-to-Phoneme (G2P) models that take raw natural language text as input and return the corresponding phoneme sequence.

Example

>>> text = ("English is tough. It can be understood "
...         "through thorough thought though")
>>> from speechbrain.pretrained import GraphemeToPhoneme
>>> tmpdir = getfixture('tmpdir')
>>> g2p = GraphemeToPhoneme.from_hparams('path/to/model', savedir=tmpdir) 
>>> phonemes = g2p.g2p(text) 
INPUT_STATIC_KEYS = ['txt']
OUTPUT_KEYS = ['phonemes']
property phonemes

Returns the available phonemes

property language

Returns the language for which this model is available

g2p(text)[source]

Performs the Grapheme-to-Phoneme conversion

Parameters:

text (str or list[str]) – a single string to be encoded to phonemes - or a sequence of strings

Returns:

result – if a single example was provided, the return value is a single list of phonemes

Return type:

list
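
A minimal sketch, reusing the g2p instance from the class example above (loaded from a placeholder model path).

>>> phonemes = g2p.g2p("A short sentence")                      # one list of phonemes
>>> phoneme_lists = g2p.g2p(["First sentence", "Second one"])   # one list per input string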

load_dependencies()[source]

Loads any relevant model dependencies

__call__(text)[source]

A convenience callable wrapper - same as g2p

Parameters:

text (str or list[str]) – a single string to be encoded to phonemes - or a sequence of strings

Returns:

result – if a single example was provided, the return value is a single list of phonemes

Return type:

list

forward(noisy, lengths=None)[source]

Runs enhancement on the noisy input

training: bool
class speechbrain.pretrained.interfaces.WaveformEnhancement(modules=None, hparams=None, run_opts=None, freeze_params=True)[source]

Bases: Pretrained

A ready-to-use model for speech enhancement.

Parameters:

See Pretrained.

Example

>>> from speechbrain.pretrained import WaveformEnhancement
>>> # Model is downloaded from the speechbrain HuggingFace repo
>>> tmpdir = getfixture("tmpdir")
>>> enhancer = WaveformEnhancement.from_hparams(
...     source="speechbrain/mtl-mimic-voicebank",
...     savedir=tmpdir,
... )
>>> enhanced = enhancer.enhance_file(
...     "speechbrain/mtl-mimic-voicebank/example.wav"
... )
MODULES_NEEDED = ['enhance_model']
enhance_batch(noisy, lengths=None)[source]

Enhance a batch of noisy waveforms.

Parameters:
  • noisy (torch.Tensor) – A batch of waveforms to perform enhancement on.

  • lengths (torch.Tensor) – The lengths of the waveforms if the enhancement model handles them.

Returns:

A batch of enhanced waveforms of the same shape as input.

Return type:

torch.Tensor

enhance_file(filename, output_filename=None, **kwargs)[source]

Enhance a wav file.

Parameters:
  • filename (str) – Location on disk to load file for enhancement.

  • output_filename (str) – If provided, writes enhanced data to this file.

forward(noisy, lengths=None)[source]

Runs enhancement on the noisy input

training: bool
class speechbrain.pretrained.interfaces.SNREstimator(modules=None, hparams=None, run_opts=None, freeze_params=True)[source]

Bases: Pretrained

A “ready-to-use” SNR estimator.

MODULES_NEEDED = ['encoder', 'encoder_out']
HPARAMS_NEEDED = ['stat_pooling', 'snrmax', 'snrmin']
estimate_batch(mix, predictions)[source]

Run SI-SNR estimation on the estimated sources and the mixture.

Parameters:
  • mix (torch.Tensor) – The mixture of sources of shape B X T

  • predictions (torch.Tensor) – The estimated sources, of size (B x T x C), where B is the batch size, T is the number of time points, and C is the number of sources.

Returns:

Estimate of SNR

Return type:

tensor
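
A minimal sketch with random tensors shaped as described above; "path/to/snr-estimator" is a placeholder for an actual SNR-estimator checkpoint, not a name from this module.

>>> import torch
>>> from speechbrain.pretrained import SNREstimator
>>> snr_estimator = SNREstimator.from_hparams(
...     source="path/to/snr-estimator", savedir="tmp_snr_estimator"
... )
>>> mix = torch.randn(1, 400)             # B x T mixture
>>> predictions = torch.randn(1, 400, 2)  # B x T x C estimated sources
>>> snr_estimate = snr_estimator.estimate_batch(mix, predictions)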

forward(mix, predictions)[source]

Just run the batch estimate

gettrue_snrrange(inp)[source]

Convert from the 0-1 range to the true SNR range

training: bool
class speechbrain.pretrained.interfaces.Tacotron2(*args, **kwargs)[source]

Bases: Pretrained

A ready-to-use wrapper for Tacotron2 (text -> mel_spec).

Parameters:

hparams – Hyperparameters (from HyperPyYAML)

Example

>>> tmpdir_tts = getfixture('tmpdir') / "tts"
>>> tacotron2 = Tacotron2.from_hparams(source="speechbrain/tts-tacotron2-ljspeech", savedir=tmpdir_tts)
>>> mel_output, mel_length, alignment = tacotron2.encode_text("Mary had a little lamb")
>>> items = [
...   "A quick brown fox jumped over the lazy dog",
...   "How much wood would a woodchuck chuck?",
...   "Never odd or even"
... ]
>>> mel_outputs, mel_lengths, alignments = tacotron2.encode_batch(items)
>>> # One can combine the TTS model with a vocoder (that generates the final waveform)
>>> # Initialize the vocoder (HiFiGAN)
>>> tmpdir_vocoder = getfixture('tmpdir') / "vocoder"
>>> hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-ljspeech", savedir=tmpdir_vocoder)
>>> # Running the TTS
>>> mel_output, mel_length, alignment = tacotron2.encode_text("Mary had a little lamb")
>>> # Running Vocoder (spectrogram-to-waveform)
>>> waveforms = hifi_gan.decode_batch(mel_output)
HPARAMS_NEEDED = ['model', 'text_to_sequence']
text_to_seq(txt)[source]

Encodes raw text into a tensor with a custom text-to-sequence function

encode_batch(texts)[source]

Computes mel-spectrogram for a list of texts

Texts must be sorted in decreasing order on their lengths

Parameters:

texts (List[str]) – texts to be encoded into spectrogram

Return type:

tensors of output spectrograms, output lengths and alignments

encode_text(text)[source]

Runs inference for a single text str

forward(texts)[source]

Encodes the input texts.

training: bool
class speechbrain.pretrained.interfaces.FastSpeech2(*args, **kwargs)[source]

Bases: Pretrained

A ready-to-use wrapper for FastSpeech2 (text -> mel_spec).

Parameters:

hparams – Hyperparameters (from HyperPyYAML)

Example

>>> tmpdir_tts = getfixture('tmpdir') / "tts"
>>> fastspeech2 = FastSpeech2.from_hparams(source="speechbrain/tts-fastspeech2-ljspeech", savedir=tmpdir_tts)
>>> mel_outputs, durations, pitch, energy = fastspeech2.encode_text(["Mary had a little lamb"])
>>> items = [
...   "A quick brown fox jumped over the lazy dog",
...   "How much wood would a woodchuck chuck?",
...   "Never odd or even"
... ]
>>> mel_outputs, durations, pitch, energy = fastspeech2.encode_text(items)
>>>
>>> # One can combine the TTS model with a vocoder (that generates the final waveform)
>>> # Initialize the vocoder (HiFiGAN)
>>> tmpdir_vocoder = getfixture('tmpdir') / "vocoder"
>>> hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-ljspeech", savedir=tmpdir_vocoder)
>>> # Running the TTS
>>> mel_outputs, durations, pitch, energy = fastspeech2.encode_text(["Mary had a little lamb"])
>>> # Running Vocoder (spectrogram-to-waveform)
>>> waveforms = hifi_gan.decode_batch(mel_outputs)
HPARAMS_NEEDED = ['spn_predictor', 'model', 'input_encoder']
encode_text(texts, pace=1.0, pitch_rate=1.0, energy_rate=1.0)[source]

Computes mel-spectrogram for a list of texts

Parameters:
  • texts (List[str]) – texts to be converted to spectrogram

  • pace (float) – pace for the speech synthesis

  • pitch_rate (float) – scaling factor for phoneme pitches

  • energy_rate (float) – scaling factor for phoneme energies

Return type:

tensors of output spectrograms, output lengths and alignments

encode_phoneme(phonemes, pace=1.0, pitch_rate=1.0, energy_rate=1.0)[source]

Computes mel-spectrogram for a list of phoneme sequences

Parameters:
  • phonemes (List[List[str]]) – phonemes to be converted to spectrogram

  • pace (float) – pace for the speech synthesis

  • pitch_rate (float) – scaling factor for phoneme pitches

  • energy_rate (float) – scaling factor for phoneme energies

Return type:

tensors of output spectrograms, output lengths and alignments

encode_batch(tokens_padded, pace=1.0, pitch_rate=1.0, energy_rate=1.0)[source]

Batch inference for a tensor of phoneme sequences.

Parameters:
  • tokens_padded (torch.Tensor) – A sequence of encoded phonemes to be converted to spectrogram

  • pace (float) – pace for the speech synthesis

  • pitch_rate (float) – scaling factor for phoneme pitches

  • energy_rate (float) – scaling factor for phoneme energies

forward(text, pace=1.0, pitch_rate=1.0, energy_rate=1.0)[source]

Batch inference for a text string.

Parameters:
  • text (str) – A text to be converted to spectrogram

  • pace (float) – pace for the speech synthesis

  • pitch_rate (float) – scaling factor for phoneme pitches

  • energy_rate (float) – scaling factor for phoneme energies

training: bool
class speechbrain.pretrained.interfaces.HIFIGAN(*args, **kwargs)[source]

Bases: Pretrained

A ready-to-use wrapper for HiFiGAN (mel_spec -> waveform).

Parameters:

hparams – Hyperparameters (from HyperPyYAML)

Example

>>> tmpdir_vocoder = getfixture('tmpdir') / "vocoder"
>>> hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-ljspeech", savedir=tmpdir_vocoder)
>>> mel_specs = torch.rand(2, 80,298)
>>> waveforms = hifi_gan.decode_batch(mel_specs)
>>> # You can use the vocoder coupled with a TTS system
>>> # Initialize TTS (tacotron2)
>>> tmpdir_tts = getfixture('tmpdir') / "tts"
>>> tacotron2 = Tacotron2.from_hparams(source="speechbrain/tts-tacotron2-ljspeech", savedir=tmpdir_tts)
>>> # Running the TTS
>>> mel_output, mel_length, alignment = tacotron2.encode_text("Mary had a little lamb")
>>> # Running Vocoder (spectrogram-to-waveform)
>>> waveforms = hifi_gan.decode_batch(mel_output)
HPARAMS_NEEDED = ['generator']
decode_batch(spectrogram, mel_lens=None, hop_len=None)[source]

Computes waveforms from a batch of mel-spectrograms.

Parameters:
  • spectrogram (torch.Tensor) – Batch of mel-spectrograms [batch, mels, time]

  • mel_lens (torch.Tensor) – A list of lengths of mel-spectrograms for the batch. Can be obtained from the output of Tacotron/FastSpeech.

  • hop_len (int) – Hop length used for mel-spectrogram extraction. Should be the same value as in the .yaml file.

Returns:

waveforms – Batch of mel-waveforms [batch, 1, time]

Return type:

torch.Tensor

mask_noise(waveform, mel_lens, hop_len)[source]

Mask the noise caused by padding during batch inference.

Parameters:
  • waveform (torch.Tensor) – Batch of generated waveforms [batch, 1, time]

  • mel_lens (torch.Tensor) – A list of lengths of mel-spectrograms for the batch. Can be obtained from the output of Tacotron/FastSpeech.

  • hop_len (int) – Hop length used for mel-spectrogram extraction. Same value as in the .yaml file.

Returns:

waveform – Batch of waveforms without padded noise [batch, 1, time]

Return type:

torch.tensor

decode_spectrogram(spectrogram)[source]

Computes waveforms from a single mel-spectrogram.

Parameters:

spectrogram (torch.Tensor) – mel-spectrogram [mels, time]

Returns:

waveform (torch.Tensor) – waveform [1, time]

The audio can be saved by:

>>> waveform = torch.rand(1, 666666)
>>> sample_rate = 22050
>>> torchaudio.save(str(getfixture('tmpdir') / "test.wav"), waveform, sample_rate)

forward(spectrogram)[source]

Decodes the input spectrograms

training: bool
class speechbrain.pretrained.interfaces.WhisperASR(*args, **kwargs)[source]

Bases: Pretrained

A ready-to-use Whisper ASR model

The class can be used to run the entire encoder-decoder whisper model (transcribe()) to transcribe speech. The given YAML must contain the fields specified in the *_NEEDED[] lists.

Example

>>> from speechbrain.pretrained import WhisperASR
>>> tmpdir = getfixture("tmpdir")
>>> asr_model = WhisperASR.from_hparams(source="speechbrain/asr-whisper-large-v2-commonvoice-fr", savedir=tmpdir,) 
>>> asr_model.transcribe_file("speechbrain/asr-whisper-large-v2-commonvoice-fr/example-fr.mp3") 
HPARAMS_NEEDED = ['language']
MODULES_NEEDED = ['whisper', 'decoder']
transcribe_file(path)[source]

Transcribes the given audiofile into a sequence of words.

Parameters:

path (str) – Path to audio file which to transcribe.

Returns:

The audiofile transcription produced by this ASR system.

Return type:

str

encode_batch(wavs, wav_lens)[source]

Encodes the input audio into a sequence of hidden states

The waveforms should already be in the model’s desired format. You can call: normalized = EncoderDecoderASR.normalizer(signal, sample_rate) to get a correctly converted signal in most cases.

Parameters:
  • wavs (torch.tensor) – Batch of waveforms [batch, time, channels].

  • wav_lens (torch.tensor) – Lengths of the waveforms relative to the longest one in the batch, tensor of shape [batch]. The longest one should have relative length 1.0 and others len(waveform) / max_length. Used for ignoring padding.

Returns:

The encoded batch

Return type:

torch.tensor

transcribe_batch(wavs, wav_lens)[source]

Transcribes the input audio into a sequence of words

The waveforms should already be in the model’s desired format. You can call: normalized = EncoderDecoderASR.normalizer(signal, sample_rate) to get a correctly converted signal in most cases.

Parameters:
  • wavs (torch.tensor) – Batch of waveforms [batch, time, channels].

  • wav_lens (torch.tensor) – Lengths of the waveforms relative to the longest one in the batch, tensor of shape [batch]. The longest one should have relative length 1.0 and others len(waveform) / max_length. Used for ignoring padding.

Returns:

  • list – Each waveform in the batch transcribed.

  • tensor – Each predicted token id.
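
A minimal sketch of batch transcription, reusing the asr_model instance from the class example above; the relative length of 1.0 marks the single waveform as unpadded.

>>> import torch
>>> waveform = asr_model.load_audio(
...     "speechbrain/asr-whisper-large-v2-commonvoice-fr/example-fr.mp3"
... )
>>> transcripts, tokens = asr_model.transcribe_batch(waveform.unsqueeze(0), torch.tensor([1.0]))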

forward(wavs, wav_lens)[source]

Runs full transcription - note: no gradients through decoding

training: bool
class speechbrain.pretrained.interfaces.Speech_Emotion_Diarization(*args, **kwargs)[source]

Bases: Pretrained

A ready-to-use SED interface (audio -> emotions and their durations)

Parameters:

hparams – Hyperparameters (from HyperPyYAML)

Example

>>> from speechbrain.pretrained import Speech_Emotion_Diarization
>>> tmpdir = getfixture("tmpdir")
>>> sed_model = Speech_Emotion_Diarization.from_hparams(source="speechbrain/emotion-diarization-wavlm-large", savedir=tmpdir,) 
>>> sed_model.diarize_file("speechbrain/emotion-diarization-wavlm-large/example.wav") 
MODULES_NEEDED = ['input_norm', 'wav2vec', 'output_mlp']
diarize_file(path)[source]

Get emotion diarization of a spoken utterance.

Parameters:

path (str) – Path to audio file which to diarize.

Returns:

The emotions and their boundaries.

Return type:

dict

encode_batch(wavs, wav_lens)[source]

Encodes audios into fine-grained emotional embeddings

Parameters:
  • wavs (torch.tensor) – Batch of waveforms [batch, time, channels].

  • wav_lens (torch.tensor) – Lengths of the waveforms relative to the longest one in the batch, tensor of shape [batch]. The longest one should have relative length 1.0 and others len(waveform) / max_length. Used for ignoring padding.

Returns:

The encoded batch

Return type:

torch.tensor

diarize_batch(wavs, wav_lens, batch_id)[source]

Get emotion diarization of a batch of waveforms.

The waveforms should already be in the model’s desired format. You can call: normalized = EncoderDecoderASR.normalizer(signal, sample_rate) to get a correctly converted signal in most cases.

Parameters:
  • wavs (torch.tensor) – Batch of waveforms [batch, time, channels].

  • wav_lens (torch.tensor) – Lengths of the waveforms relative to the longest one in the batch, tensor of shape [batch]. The longest one should have relative length 1.0 and others len(waveform) / max_length. Used for ignoring padding.

  • batch_id (torch.tensor) – id of each batch (file names etc.)

Returns:

The frame-wise predictions

Return type:

torch.tensor

preds_to_diarization(prediction, batch_id)[source]

Convert frame-wise predictions into a dictionary of diarization results.

Returns:

A dictionary with the start/end of each emotion

Return type:

dictionary

forward(wavs, wav_lens)[source]

Runs full transcription - note: no gradients through decoding

is_overlapped(end1, start2)[source]

Returns True if segments are overlapping.

Parameters:
  • end1 (float) – End time of the first segment.

  • start2 (float) – Start time of the second segment.

Returns:

overlapped – True if the segments overlap, otherwise False.

Return type:

bool

Example

>>> from speechbrain.processing import diarization as diar
>>> diar.is_overlapped(5.5, 3.4)
True
>>> diar.is_overlapped(5.5, 6.4)
False
merge_ssegs_same_emotion_adjacent(lol)[source]

Merge adjacent sub-segs if they are the same emotion.

Parameters:

lol (list of list) – Each list contains [utt_id, sseg_start, sseg_end, emo_label].

Returns:

new_lol – new_lol contains adjacent segments merged from the same emotion ID.

Return type:

list of list

Example

>>> from speechbrain.utils.EDER import merge_ssegs_same_emotion_adjacent
>>> lol=[['u1', 0.0, 7.0, 'a'],
... ['u1', 7.0, 9.0, 'a'],
... ['u1', 9.0, 11.0, 'n'],
... ['u1', 11.0, 13.0, 'n'],
... ['u1', 13.0, 15.0, 'n'],
... ['u1', 15.0, 16.0, 'a']]
>>> merge_ssegs_same_emotion_adjacent(lol)
[['u1', 0.0, 9.0, 'a'], ['u1', 9.0, 15.0, 'n'], ['u1', 15.0, 16.0, 'a']]
training: bool
class speechbrain.pretrained.interfaces.AudioClassifier(*args, **kwargs)[source]

Bases: Pretrained

A ready-to-use class for utterance-level classification (e.g., speaker-id, language-id, emotion recognition, keyword spotting, etc.).

The class assumes that an encoder called “embedding_model” and a model called “classifier” are defined in the yaml file. If you want to convert the predicted index into a corresponding text label, please provide the path of the label_encoder in a variable called ‘lab_encoder_file’ within the yaml.

The class can be used either to run only the encoder (encode_batch()) to extract embeddings or to run a classification step (classify_batch()).

Example

>>> import torchaudio
>>> from speechbrain.pretrained import AudioClassifier
>>> tmpdir = getfixture("tmpdir")
>>> classifier = AudioClassifier.from_hparams(
...     source="speechbrain/cnn14-esc50",
...     savedir=tmpdir,
... )
>>> signal = torch.randn(1, 16000)
>>> prediction, _, _, text_lab = classifier.classify_batch(signal)
>>> print(prediction.shape)
torch.Size([1, 1, 50])
classify_batch(wavs, wav_lens=None)[source]

Performs classification on the top of the encoded features.

It returns the posterior probabilities, the index and, if the label encoder is specified, also the text label.

Parameters:
  • wavs (torch.Tensor) – Batch of waveforms [batch, time, channels] or [batch, time] depending on the model. Make sure the sample rate is fs=16000 Hz.

  • wav_lens (torch.Tensor) – Lengths of the waveforms relative to the longest one in the batch, tensor of shape [batch]. The longest one should have relative length 1.0 and others len(waveform) / max_length. Used for ignoring padding.

Returns:

  • out_prob – The log posterior probabilities of each class ([batch, N_class])

  • score – It is the value of the log-posterior for the best class ([batch,])

  • index – The indexes of the best class ([batch,])

  • text_lab – List with the text labels corresponding to the indexes. (label encoder should be provided).

classify_file(path, savedir='audio_cache')[source]

Classifies the given audiofile into the given set of labels.

Parameters:

path (str) – Path to audio file to classify.

Returns:

  • out_prob – The log posterior probabilities of each class ([batch, N_class])

  • score – It is the value of the log-posterior for the best class ([batch,])

  • index – The indexes of the best class ([batch,])

  • text_lab – List with the text labels corresponding to the indexes. (label encoder should be provided).

forward(wavs, wav_lens=None)[source]

Runs the classification

training: bool
class speechbrain.pretrained.interfaces.PIQAudioInterpreter(*args, **kwargs)[source]

Bases: Pretrained

This class implements the interface for the PIQ posthoc interpreter for an audio classifier.

Example

>>> from speechbrain.pretrained import PIQAudioInterpreter
>>> tmpdir = getfixture("tmpdir")
>>> interpreter = PIQAudioInterpreter.from_hparams(
...     source="speechbrain/PIQ-ESC50",
...     savedir=tmpdir,
... )
>>> signal = torch.randn(1, 16000)
>>> interpretation, _ = interpreter.interpret_batch(signal)
preprocess(wavs)[source]

Pre-process wavs to calculate STFTs

classifier_forward(X_stft_logpower)[source]

The forward pass for the classifier.

invert_stft_with_phase(X_int, X_stft_phase)[source]

Inverts STFT spectra given phase.

interpret_batch(wavs)[source]

Classifies the given audio into the given set of labels. It also provides the interpretation in the audio domain.

Parameters:

wavs (torch.Tensor) – Batch of waveforms [batch, time, channels] or [batch, time] depending on the model. Make sure the sample rate is fs=16000 Hz.

Returns:

  • x_int_sound_domain – The interpretation in the waveform domain

  • text_lab – The text label for the classification

  • fs_model – The sampling frequency of the model. Useful to save the audio.

interpret_file(path, savedir='audio_cache')[source]

Classifies the given audiofile into the given set of labels. It also provides the interpretation in the audio domain.

Parameters:

path (str) – Path to audio file to classify.

Returns:

  • x_int_sound_domain – The interpretation in the waveform domain

  • text_lab – The text label for the classification

  • fs_model – The sampling frequency of the model. Useful to save the audio.

forward(wavs, wav_lens=None)[source]

Runs the classification

training: bool