speechbrain.lobes.downsampling module
Combinations of processing algorithms to implement downsampling methods.
- Authors
Salah Zaiem
Summary
Classes:
- Conv1DDownsampler : 1D Convolutional downsampling with a learned convolution
- Downsampler : Wrapper for downsampling techniques
- PoolingDownsampler : 1D Pooling downsampling (non-learned)
- SignalDownsampler : Signal downsampling (Decimation)
Reference
- class speechbrain.lobes.downsampling.Downsampler(*args, **kwargs)[source]
Bases: Module
Wrapper for downsampling techniques
- class speechbrain.lobes.downsampling.SignalDownsampler(downsampling_factor, initial_sampling_rate)[source]
Bases: Downsampler
Signal downsampling (Decimation)
- Parameters:
  - downsampling_factor (int) – Factor by which the sampling rate is divided.
  - initial_sampling_rate (int) – Sampling rate of the input signal, in Hz.
Example
>>> import torch
>>> sd = SignalDownsampler(2, 16000)
>>> a = torch.rand([8, 28000])
>>> a = sd(a)
>>> print(a.shape)
torch.Size([8, 14000])
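The halved length in the example follows from decimation keeping one out of every downsampling_factor samples. A minimal sketch of that length arithmetic (plain Python; the ceiling division is an assumption matching scipy-style decimation, not taken from this page):

```python
def decimated_length(n_samples, factor):
    """Length of a signal after decimation by `factor` (ceiling division)."""
    return -(-n_samples // factor)

# Matches the doctest above: 28000 samples decimated by a factor of 2.
print(decimated_length(28000, 2))  # 14000
```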
- class speechbrain.lobes.downsampling.Conv1DDownsampler(downsampling_factor, kernel_size)[source]
Bases: Downsampler
1D Convolutional downsampling with a learned convolution
- Parameters:
  - downsampling_factor (int) – Stride of the convolution, i.e. the downsampling factor.
  - kernel_size (int) – Size of the learned convolution kernel.
Example
>>> import torch
>>> sd = Conv1DDownsampler(3, 161)
>>> a = torch.rand([8, 33000])
>>> a = sd(a)
>>> print(a.shape)
torch.Size([8, 10947])
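The output length 10947 in the example is consistent with a strided 1D convolution with no padding, using the downsampling factor as the stride. A sketch of the standard Conv1d length formula (an assumption about the layer's internals inferred from the example, not stated on this page):

```python
def conv1d_output_length(n_samples, kernel_size, stride, padding=0):
    """Standard Conv1d output length: floor((n + 2p - k) / s) + 1."""
    return (n_samples + 2 * padding - kernel_size) // stride + 1

# Matches the doctest above: 33000 samples, kernel 161, stride 3.
print(conv1d_output_length(33000, kernel_size=161, stride=3))  # 10947
```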
- class speechbrain.lobes.downsampling.PoolingDownsampler(downsampling_factor, kernel_size, padding=0, pool_type='avg')[source]
Bases: Downsampler
1D Pooling downsampling (non-learned)
- Parameters:
  - downsampling_factor (int) – Stride of the pooling window, i.e. the downsampling factor.
  - kernel_size (int) – Size of the pooling window.
  - padding (int) – Zero-padding added to both sides of the input (default: 0).
  - pool_type (str) – Type of pooling to apply (default: 'avg').
Example
>>> import torch
>>> sd = PoolingDownsampler(3, 41)
>>> a = torch.rand([8, 33000])
>>> a = sd(a)
>>> print(a.shape)
torch.Size([8, 10987])
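As with the convolutional variant, the output length 10987 matches the standard 1D pooling length formula, here with the padding parameter included. A sketch of that arithmetic (an assumption inferred from the example and the default padding=0):

```python
def pooling_output_length(n_samples, kernel_size, stride, padding=0):
    """Standard 1D pooling output length: floor((n + 2p - k) / s) + 1."""
    return (n_samples + 2 * padding - kernel_size) // stride + 1

# Matches the doctest above: 33000 samples, window 41, stride 3, no padding.
print(pooling_output_length(33000, kernel_size=41, stride=3))  # 10987
```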