Getting started

  • Quick installation
    • Install via PyPI
    • Install locally
    • Test installation
    • Operating Systems
    • Setting up a Conda environment/virtualenv
      • Conda
    • venv setup
    • Test your GPU installation
  • Running an experiment
    • YAML basics
      • Security warning
      • Features
    • Running arguments
    • Tensor format
    • Modified PyTorch globals and GPU quirks
    • Reproducibility
  • Project Structure & Ecosystem
    • Directory Structure
    • External Platforms
    • Testing Infrastructure
  • Contributing

Tutorial notebooks

  • SpeechBrain Basics
    • Introduction to SpeechBrain
      • Motivation
      • Supported Technologies
      • Installation
        • Local Installation (Git clone)
      • Running an Experiment
      • Hyperparameter specification with YAML
      • Experiment File
        • Data Specification
        • Data processing pipeline
        • Custom forward and cost computation methods
        • Brain Class
      • Pretrain and use
      • Folder Organization
      • Tensor Format
      • Citing SpeechBrain
    • What can I do with SpeechBrain?
      • Speech Recognition on Different Languages
        • English
        • French
        • Italian
        • Mandarin
        • Kinyarwanda
      • Speech Separation
      • Speech Enhancement
      • Speaker Verification
      • Speech Synthesis (Text-to-Speech)
      • Citing SpeechBrain
    • The Brain Class
      • Arguments to Brain class
        • modules argument
        • opt_class argument
        • hparams argument
        • run_opts argument
        • checkpointer argument
      • The fit() method
        • Fit structure
        • make_dataloader
        • on_fit_start
        • on_stage_start
        • Training loop
        • on_stage_end
        • Validation loop
        • on_stage_end
      • The evaluate() method
      • Conclusion
      • Citing SpeechBrain
    • HyperPyYAML Tutorial
      • Basic YAML syntax
      • Tags !new: and !name:
      • Tags !ref and !copy
      • Other tags
      • Overrides
      • Conclusion
      • Citing SpeechBrain
    • Data Loading
      • Install dependencies
      • Preface: PyTorch data loading pipeline
        • Overview
        • Dataset
        • Collation function
        • Sampler
        • DataLoader
      • SpeechBrain Basic dataIO
        • Dataset Annotation
        • DynamicItemDataset
        • Dynamic Item Pipelines (DIPs)
        • CategoricalEncoder
        • PaddedBatch and SaveableDataLoader
      • Full Example: Training a simple Speaker Recognition System
      • Acknowledgements
      • Citing SpeechBrain
    • Checkpointing
      • The role of the SpeechBrain checkpointer
      • Installing dependencies
      • The SpeechBrain Checkpointer in a nutshell
      • What does a checkpoint look like?
        • What goes in each file?
        • Meta info
      • Keeping a limited amount of checkpoints
        • Pretraining / parameter transfer
        • Finding the best checkpoint
        • Transferring parameters
        • Orchestrating transfer
      • Citing SpeechBrain
  • SpeechBrain Advanced
    • Performance Profiling
      • Installation
      • Calling the profiler
      • Visualizing the logs with tensorboard
      • Citing SpeechBrain
    • Dynamic Batching: What is it and why it is sometimes necessary
      • SpeechBrain DynamicBatchSampler class
        • Using speechbrain.dataio.samplers.DynamicBatchSampler
        • Advanced Parameters: Full control over randomness, training speed, and VRAM consumption
      • How to find good hyperparameters and speed up training with DynamicBatchSampler
        • Dynamic Batching with WebDataset
      • Acknowledgements
      • Citing SpeechBrain
    • Hyperparameter Optimization
      • Prerequisites
        • Imports
        • Install SpeechBrain
        • Dependency Fixes
        • Install Oríon
      • Update the Recipe to Support Hyperparameter Optimization
      • Perform the Hyperparameter Search
        • Choose and Prepare Hyperparameters
        • Configure Oríon
        • Define the Search Space
      • Inspecting Results
      • Hyperparameter Optimization at Scale
        • Multiple GPUs
        • Parallel or Distributed Oríon
      • Citing SpeechBrain
    • Federated Speech Model Training via Flower and SpeechBrain
      • Installation
      • What steps are needed for your experiments?
      • Integration details — coupling SpeechBrain to Flower
        • Define a Brain class
        • Initialise Brain class and dataset
        • Define a SpeechBrain client
        • Define a Flower Strategy on the server side
      • Run an experiment
      • Citing SpeechBrain
    • Inferring on your trained SpeechBrain model
      • Prerequisites
      • Context
      • Different options available
        • 1. Custom function in the training script
        • 2. Using the EncoderDecoderASR interface
        • 3. Developing your own inference interface
      • General Pretraining Inference
      • Citing SpeechBrain
    • Pretrained Models and Fine-Tuning with HuggingFace
      • Prerequisites
      • Installing Dependencies
      • Using PreTrained models to perform inference on your data
        • Automatic Speech Recognition
        • Speaker Verification, Recognition and Diarization
        • Source Separation
      • Fine-tuning or using pretrained models as components of a new pipeline
        • Setting up the data pipeline
        • Fine-Tuning the ASR model
      • Pretrainer Class
      • Acknowledgements
      • Citing SpeechBrain
    • Data Loading for Big Datasets and Shared Filesystems
      • What is WebDataset?
      • Installing dependencies
      • Creating TAR shards
        • Download some data
        • Iterate over the data
      • WebDataset with SpeechBrain
        • Some changes in the train data loading pipeline
        • More complex data loading pipelines
        • How to handle the DataLoader
      • Citing SpeechBrain
    • Text Tokenization
      • Why do we need tokenization?
      • Train a SentencePiece tokenizer within SpeechBrain
        • Advanced parameters
      • Loading a pre-trained SentencePiece tokenizer within SpeechBrain
      • How to use the SentencePiece tokenizer
      • Use SpeechBrain SentencePiece with PyTorch
        • Example for option 1
        • Example for option 2
      • Citing SpeechBrain
    • Applying Quantization to a Speech Recognition Model
      • Introduction to Quantization
      • Quantization Approaches
        • Dynamic Quantization
        • Static Quantization
        • Comparing Dynamic and Static Quantization
      • Purpose of Tutorial
      • Prerequisites
        • Install SpeechBrain
        • Install Other Dependencies
        • Imports
        • Model Selection
        • Data Download and Preprocessing
      • Quantization Set-Up
        • Utility Functions
        • Static Quantization Wrapper
        • Quantization Function
      • Benchmarking Set-Up
        • WER
        • Modify EncoderASR transcribe_batch
        • Benchmark Model Performance
      • Quantization and Benchmarking
        • Select Data
        • Original Model
        • Quantized Model
      • Citing SpeechBrain
  • Speech Preprocessing
    • Speech Augmentation
      • 1. Speed Perturbation
      • 2. Time Dropout
      • 3. Frequency Dropout
      • 4. Clipping
      • 5. Augmentation Combination
      • References
      • Citing SpeechBrain
    • Fourier Transforms and Spectrograms
      • 1. Fourier Transform
        • What is the intuition?
      • 2. Short-Term Fourier Transform (STFT)
      • 3. Spectrogram
      • References
      • Citing SpeechBrain
    • Speech Features
      • 1. Filter Banks (FBANKs)
      • 2. Mel-Frequency Cepstral Coefficients (MFCCs)
      • 3. Context Information
        • 3.1 Derivatives
        • 3.2 Context Windows
      • 4. Other Features
      • References
      • Citing SpeechBrain
    • Environmental Corruption
      • 1. Additive Noise
      • 2. Reverberation
      • References
      • Citing SpeechBrain
    • Multi-microphone Beamforming
      • Introduction
        • Propagation model
        • Covariance matrices
        • Time Difference of Arrival
        • Direction of Arrival
        • Beamforming
      • Install SpeechBrain
      • Prepare audio
      • Processing
        • Steered-Response Power with Phase Transform
        • Multiple Signal Classification
        • Delay-and-Sum Beamforming
        • Minimum Variance Distortionless Response
        • Generalized Eigenvalue Beamforming
      • Citing SpeechBrain
    • Analyzing Vocal Features for Pathology
      • Introduction
      • Compute autocorrelation and related features
      • Compute period-based features
      • Compute GNE step-by-step
      • PRAAT-Parselmouth
      • Comparison with OpenSMILE
      • Citing SpeechBrain
  • Speech Processing Tasks
    • Speech Recognition From Scratch
      • Overview of Speech Recognition
        • Connectionist Temporal Classification (CTC)
        • Transducers
        • Encoder-Decoder with Attention 👂
        • Beamsearch
      • Installation
      • Which steps are needed?
        • 1. Prepare Your Data
        • 2. Train a Tokenizer
        • 3. Train a Language Model
        • 4. Train the Speech Recognizer
        • 5. Use the Speech Recognizer (Inference)
      • Step 1: Prepare Your Data
        • Data Manifest Files
        • Preparation Script
        • Copy Your Data Locally
      • Step 2: Tokenizer
        • Using Characters as Tokens
        • Using Words as Tokens
        • Byte Pair Encoding (BPE) Tokens
        • Train a Tokenizer
      • Step 3: Train a Language Model
        • Text Corpus
        • Train a LM
        • Hyperparameters
        • Experiment file
      • Step 4: Training the Attention-Based End-to-End Speech Recognizer
        • Train the speech recognizer
        • Hyperparameters
        • Experiment file
        • Why relative lengths instead of absolute lengths?
        • Other Methods
      • Pretrain and Fine-tune
      • Step 5: Inference
        • Utilizing Your Custom Speech Recognizer
      • Customize your speech recognizer
        • Train with your data
        • Train with your own model
      • Conclusion
      • Related Tutorials
      • Citing SpeechBrain
    • Metrics for Speech Recognition
      • Word Error Rate (WER)
      • Character Error Rate (CER)
      • Part-of-speech Error Rate (POSER)
        • dPOSER
        • uPOSER
      • Lemma Error Rate (LER)
      • Embedding Error Rate (EmbER)
      • BERTScore
      • Sentence Semantic Distance: SemDist
      • Some comparisons
      • Citing SpeechBrain
    • Source Separation
      • Introduction
      • A toy example
      • Exercises
      • A sound source separation example with a pre-existing model from SpeechBrain
      • Citing SpeechBrain
    • Speech Enhancement From Scratch
      • Recipe overview in train.py
      • Citing SpeechBrain
    • Speech Classification From Scratch
      • Models
      • Data
      • Code
      • Installation
      • Which steps are needed?
      • Step 1: Prepare your data
        • Data manifest files
        • Preparation Script
        • Copy your data locally
      • Step 2: Train the classifier
        • Train a speaker-id model
        • Hyperparameters
        • Experiment file
      • Step 3: Inference
        • Use the EncoderClassifier interface on your model
      • Extension to different tasks
        • Train with your data on your task
        • Train with your own model
      • Conclusion
      • Related Tutorials
      • Citing SpeechBrain
    • Voice Activity Detection
      • What is VAD useful for?
    • Why is it challenging?
      • Pipeline description
      • Training
      • Inference
      • Inference Pipeline (Details)
        • 1- Posterior Computation
        • 2- Apply a Threshold
        • 3- Get the Boundaries
        • 4- Energy-based VAD (optional)
        • 5- Merge close segments
        • 6- Remove short segments
        • 7- Double check speech segments (optional)
      • Visualization
      • Appendix: on using energy-based VAD
      • Citing SpeechBrain
  • Neural Architectures
    • Fine-tuning or using Whisper, wav2vec2, HuBERT and others with SpeechBrain and HuggingFace
      • Prerequisites
      • Wav2Vec 2.0 and Whisper from HuggingFace
      • Wav2Vec 2.0 and Whisper encoders as a block of your pipeline (ASR, TIMIT)
        • Understanding the YAML parameters
      • Using Whisper as a fully pre-trained encoder-decoder
      • Citing SpeechBrain
    • Neural Network Adapters for faster low-memory fine-tuning
      • Prerequisites
      • Introduction and Background
        • Relevant bibliography
      • Too Long; Didn’t Read
      • Detailed Tutorial
      • Inference
      • Adding adapters
      • Training the adapted model
      • Custom adapter
      • Conclusion
      • Citing SpeechBrain
    • Complex and Quaternion Neural Networks
      • Prerequisites
      • Introduction and Background
        • Connection to Neural Networks
        • Relevant bibliography
      • SpeechBrain Representation of Complex and Quaternions
      • Complex and quaternion products
      • Complex-valued Neural Networks
        • Convolution layers
        • Linear layer
        • Normalization layers
        • Recurrent Neural Networks
      • Quaternion Neural Networks
        • Quaternion Spinor Neural Networks
        • Turning a quaternion layer into a spinor layer
      • Putting everything together!
      • Citing SpeechBrain
    • Recurrent Neural Networks
      • 1. Vanilla RNN
      • 2. Long-Short Term Memory (LSTM)
      • 3. Gated Recurrent Units (GRUs)
      • 4. Light Gated Recurrent Units (LiGRU)
      • Citing SpeechBrain
    • Streaming Speech Recognition with Conformers
      • Recommended prerequisites
        • Installing SpeechBrain
      • What a streaming model needs to achieve
        • Summary of the tutorial
      • Architectural Changes to the Conformer
        • Chunked Attention
        • Dynamic Chunk Convolutions
        • What we aren’t changing
      • Training strategies and Dynamic Chunk Training
        • What metrics do the chunk size and left context size impact?
        • How to pick the chunk size?
        • Loss function(s)
      • Training: Piecing it all together with SpeechBrain
        • Automatic masking by passing a Dynamic Chunk Training configuration
        • Changes to the .yaml
        • Changes to the train.py
      • Debugging Streaming architectures
        • Detecting future dependencies in NN layers
      • Inference: The gory details
        • Wrapping the feature extractor for inference
        • Streaming context objects
        • Streaming forward methods
        • Streaming tokenizers
        • Streaming transducer Greedy Search
      • Inference: Practical example with StreamingASR
        • From trained model to StreamingASR hyperparameters
        • Inference with StreamingASR
        • ffmpeg live-stream functionality
        • Manually transcribing chunks
      • Alternatives and Further Reading
      • Citing SpeechBrain

Tips & tricks

  • Audio loading troubleshooting
    • Introduction
    • Recommended install steps
    • Note for developers & breaking torchaudio 2.x changes
    • Installing/troubleshooting backends
      • ffmpeg
      • SoundFile
      • SoX
  • Basics of multi-GPU
    • Multi-GPU training using Distributed Data Parallel (DDP)
      • Writing DDP-safe code in SpeechBrain
      • Single-node setup
      • Multi-node setup
        • Basics & manual multi-node setup
        • Multi-node setup with Slurm
    • (DEPRECATED) Single-node multi-GPU training using Data Parallel

API

  • Core library (speechbrain)
    • speechbrain.core module
      • Summary
      • Reference
        • create_experiment_directory()
        • parse_arguments()
        • Stage
        • Brain
    • speechbrain.alignment
      • speechbrain.alignment.aligner module
        • Summary
        • Reference
      • speechbrain.alignment.ctc_segmentation module
    • speechbrain.augment
      • speechbrain.augment.augmenter module
        • Summary
        • Reference
      • speechbrain.augment.codec module
        • Summary
        • Reference
      • speechbrain.augment.freq_domain module
        • Summary
        • Reference
      • speechbrain.augment.preparation module
        • Summary
        • Reference
      • speechbrain.augment.time_domain module
        • Summary
        • Reference
    • speechbrain.dataio
      • speechbrain.dataio.batch module
        • Summary
        • Reference
      • speechbrain.dataio.dataio module
        • Summary
        • Reference
      • speechbrain.dataio.dataloader module
        • Summary
        • Reference
      • speechbrain.dataio.dataset module
        • Summary
        • Reference
      • speechbrain.dataio.encoder module
        • Summary
        • Reference
      • speechbrain.dataio.iterators module
        • Summary
        • Reference
      • speechbrain.dataio.legacy module
        • Summary
        • Reference
      • speechbrain.dataio.preprocess module
        • Summary
        • Reference
      • speechbrain.dataio.sampler module
        • Summary
        • Reference
      • speechbrain.dataio.wer module
        • Summary
        • Reference
    • speechbrain.decoders
      • speechbrain.decoders.ctc module
        • Summary
        • Reference
      • speechbrain.decoders.language_model module
      • speechbrain.decoders.scorer module
        • Summary
        • Reference
      • speechbrain.decoders.seq2seq module
        • Summary
        • Reference
      • speechbrain.decoders.transducer module
        • Summary
        • Reference
      • speechbrain.decoders.utils module
        • Summary
        • Reference
    • speechbrain.inference
      • speechbrain.inference.ASR module
        • Summary
        • Reference
      • speechbrain.inference.SLU module
        • Summary
        • Reference
      • speechbrain.inference.ST module
        • Summary
        • Reference
      • speechbrain.inference.TTS module
        • Summary
        • Reference
      • speechbrain.inference.VAD module
        • Summary
        • Reference
      • speechbrain.inference.classifiers module
        • Summary
        • Reference
      • speechbrain.inference.diarization module
        • Summary
        • Reference
      • speechbrain.inference.encoders module
        • Summary
        • Reference
      • speechbrain.inference.enhancement module
        • Summary
        • Reference
      • speechbrain.inference.interfaces module
        • Summary
        • Reference
      • speechbrain.inference.interpretability module
        • Summary
        • Reference
      • speechbrain.inference.metrics module
        • Summary
        • Reference
      • speechbrain.inference.separation module
        • Summary
        • Reference
      • speechbrain.inference.speaker module
        • Summary
        • Reference
      • speechbrain.inference.text module
        • Summary
        • Reference
      • speechbrain.inference.vocoders module
        • Summary
        • Reference
    • speechbrain.lm
      • speechbrain.lm.arpa module
        • Summary
        • Reference
      • speechbrain.lm.counting module
        • Summary
        • Reference
      • speechbrain.lm.ngram module
        • Summary
        • Reference
    • speechbrain.lobes
      • speechbrain.lobes.beamform_multimic module
        • Summary
        • Reference
      • speechbrain.lobes.downsampling module
        • Summary
        • Reference
      • speechbrain.lobes.features module
        • Summary
        • Reference
      • speechbrain.lobes.models
        • speechbrain.lobes.models.BESTRQ module
        • speechbrain.lobes.models.CRDNN module
        • speechbrain.lobes.models.Cnn14 module
        • speechbrain.lobes.models.ContextNet module
        • speechbrain.lobes.models.DiffWave module
        • speechbrain.lobes.models.ECAPA_TDNN module
        • speechbrain.lobes.models.ESPnetVGG module
        • speechbrain.lobes.models.EnhanceResnet module
        • speechbrain.lobes.models.FastSpeech2 module
        • speechbrain.lobes.models.HifiGAN module
        • speechbrain.lobes.models.L2I module
        • speechbrain.lobes.models.MSTacotron2 module
        • speechbrain.lobes.models.MetricGAN module
        • speechbrain.lobes.models.MetricGAN_U module
        • speechbrain.lobes.models.PIQ module
        • speechbrain.lobes.models.RNNLM module
        • speechbrain.lobes.models.ResNet module
        • speechbrain.lobes.models.Tacotron2 module
        • speechbrain.lobes.models.VanillaNN module
        • speechbrain.lobes.models.Xvector module
        • speechbrain.lobes.models.beats module
        • speechbrain.lobes.models.conv_tasnet module
        • speechbrain.lobes.models.convolution module
        • speechbrain.lobes.models.dual_path module
        • speechbrain.lobes.models.fairseq_wav2vec module
        • speechbrain.lobes.models.kmeans module
        • speechbrain.lobes.models.resepformer module
        • speechbrain.lobes.models.segan_model module
        • speechbrain.lobes.models.wav2vec module
        • speechbrain.lobes.models.discrete
        • speechbrain.lobes.models.g2p
        • speechbrain.lobes.models.transformer
    • speechbrain.nnet
      • speechbrain.nnet.CNN module
        • Summary
        • Reference
      • speechbrain.nnet.RNN module
        • Summary
        • Reference
      • speechbrain.nnet.activations module
        • Summary
        • Reference
      • speechbrain.nnet.adapters module
        • Summary
        • Reference
      • speechbrain.nnet.attention module
        • Summary
        • Reference
      • speechbrain.nnet.autoencoders module
        • Summary
        • Reference
      • speechbrain.nnet.containers module
        • Summary
        • Reference
      • speechbrain.nnet.diffusion module
        • Summary
        • Reference
      • speechbrain.nnet.dropout module
        • Summary
        • Reference
      • speechbrain.nnet.embedding module
        • Summary
        • Reference
      • speechbrain.nnet.hypermixing module
        • Summary
        • Reference
      • speechbrain.nnet.linear module
        • Summary
        • Reference
      • speechbrain.nnet.losses module
        • Summary
        • Reference
      • speechbrain.nnet.normalization module
        • Summary
        • Reference
      • speechbrain.nnet.pooling module
        • Summary
        • Reference
      • speechbrain.nnet.quantisers module
        • Summary
        • Reference
      • speechbrain.nnet.schedulers module
        • Summary
        • Reference
      • speechbrain.nnet.unet module
        • Summary
        • Reference
      • speechbrain.nnet.utils module
        • Summary
        • Reference
      • speechbrain.nnet.complex_networks
        • speechbrain.nnet.complex_networks.c_CNN module
        • speechbrain.nnet.complex_networks.c_RNN module
        • speechbrain.nnet.complex_networks.c_linear module
        • speechbrain.nnet.complex_networks.c_normalization module
        • speechbrain.nnet.complex_networks.c_ops module
      • speechbrain.nnet.loss
        • speechbrain.nnet.loss.guidedattn_loss module
        • speechbrain.nnet.loss.si_snr_loss module
        • speechbrain.nnet.loss.stoi_loss module
        • speechbrain.nnet.loss.transducer_loss module
      • speechbrain.nnet.quaternion_networks
        • speechbrain.nnet.quaternion_networks.q_CNN module
        • speechbrain.nnet.quaternion_networks.q_RNN module
        • speechbrain.nnet.quaternion_networks.q_linear module
        • speechbrain.nnet.quaternion_networks.q_normalization module
        • speechbrain.nnet.quaternion_networks.q_ops module
        • speechbrain.nnet.quaternion_networks.q_pooling module
      • speechbrain.nnet.transducer
        • speechbrain.nnet.transducer.transducer_joint module
    • speechbrain.processing
      • speechbrain.processing.NMF module
        • Summary
        • Reference
      • speechbrain.processing.PLDA_LDA module
        • Summary
        • Reference
      • speechbrain.processing.decomposition module
        • Summary
        • Reference
      • speechbrain.processing.diarization module
      • speechbrain.processing.features module
        • Summary
        • Reference
      • speechbrain.processing.multi_mic module
        • Summary
        • Reference
      • speechbrain.processing.signal_processing module
        • Summary
        • Reference
      • speechbrain.processing.vocal_features module
        • Summary
        • Reference
    • speechbrain.tokenizers
      • speechbrain.tokenizers.SentencePiece module
        • Summary
        • Reference
      • speechbrain.tokenizers.discrete_SSL_tokenizer module
        • Summary
        • Reference
    • speechbrain.utils
      • speechbrain.utils.Accuracy module
        • Summary
        • Reference
      • speechbrain.utils.DER module
        • Summary
        • Reference
      • speechbrain.utils.EDER module
        • Summary
        • Reference
      • speechbrain.utils.autocast module
        • Summary
        • Reference
      • speechbrain.utils.bertscore module
        • Summary
        • Reference
      • speechbrain.utils.bleu module
      • speechbrain.utils.callchains module
        • Summary
        • Reference
      • speechbrain.utils.checkpoints module
        • Summary
        • Reference
      • speechbrain.utils.data_pipeline module
        • Summary
        • Reference
      • speechbrain.utils.data_utils module
        • Summary
        • Reference
      • speechbrain.utils.depgraph module
        • Summary
        • Reference
      • speechbrain.utils.dictionaries module
        • Summary
        • Reference
      • speechbrain.utils.distances module
        • Summary
        • Reference
      • speechbrain.utils.distributed module
        • Summary
        • Reference
      • speechbrain.utils.dynamic_chunk_training module
        • Summary
        • Reference
      • speechbrain.utils.edit_distance module
        • Summary
        • Reference
      • speechbrain.utils.epoch_loop module
        • Summary
        • Reference
      • speechbrain.utils.fetching module
        • Summary
        • Reference
      • speechbrain.utils.filter_analysis module
        • Summary
        • Reference
      • speechbrain.utils.hparams module
        • Summary
        • Reference
      • speechbrain.utils.hpopt module
        • Summary
        • Reference
      • speechbrain.utils.importutils module
        • Summary
        • Reference
      • speechbrain.utils.kmeans module
        • Summary
        • Reference
      • speechbrain.utils.logger module
        • Summary
        • Reference
      • speechbrain.utils.metric_stats module
        • Summary
        • Reference
      • speechbrain.utils.optimizers module
        • Summary
        • Reference
      • speechbrain.utils.parallel module
        • Summary
        • Reference
      • speechbrain.utils.parameter_transfer module
        • Summary
        • Reference
      • speechbrain.utils.pretrained module
        • Summary
        • Reference
      • speechbrain.utils.profiling module
        • Summary
        • Reference
      • speechbrain.utils.quirks module
        • Summary
        • Reference
      • speechbrain.utils.seed module
        • Summary
        • Reference
      • speechbrain.utils.semdist module
        • Summary
        • Reference
      • speechbrain.utils.streaming module
        • Summary
        • Reference
      • speechbrain.utils.superpowers module
        • Summary
        • Reference
      • speechbrain.utils.text_to_sequence module
        • Summary
        • Reference
      • speechbrain.utils.torch_audio_backend module
        • Authors
        • Summary
        • Reference
      • speechbrain.utils.train_logger module
        • Summary
        • Reference
    • Summary
    • Reference
      • make_deprecated_redirections()
  • HyperPyYAML (hyperpyyaml)
    • hyperpyyaml.core module
      • Summary
      • Reference
        • load_hyperpyyaml()
        • RefTag
        • Placeholder
        • dump_hyperpyyaml()
        • resolve_references()
        • deref()
        • recursive_resolve()
        • parse_arithmetic()
        • recursive_update()
    • Summary
    • Reference
      • TestThing
        • TestThing.from_keys()