speechbrain.nnet.quantisers module

Gumbel softmax implementation with support for multiple groups.

Authors
  • Rudolf A. Braun 2022

Summary

Classes:

GumbelVectorQuantizer

Vector quantization using Gumbel softmax.

Reference

class speechbrain.nnet.quantisers.GumbelVectorQuantizer(input_dim, num_vars, temp_tuple, groups, vq_dim)[source]

Bases: Module

Vector quantization using Gumbel softmax. Copied from the fairseq implementation.

Parameters:
  • input_dim (int): Input dimension (channels).
  • num_vars (int): Number of quantized vectors per group.
  • temp_tuple (tuple): Temperature schedule for training, given as a tuple of 3 elements: (start, stop, decay factor).
  • groups (int): Number of groups for vector quantization.
  • vq_dim (int): Dimensionality of the resulting quantized vector.

Example

>>> import torch
>>> from speechbrain.nnet.quantisers import GumbelVectorQuantizer
>>> quantiser = GumbelVectorQuantizer(128, 100, (2.0, 0.25, 0.999995), 2, 50)
>>> inputs = torch.rand(10, 12, 128)
>>> output = quantiser(inputs)
>>> output["x"].shape
torch.Size([10, 12, 50])
update_temp(steps)[source]

Update the Gumbel softmax temperature given the current training step.
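The schedule is controlled by temp_tuple. A minimal sketch of what such a schedule looks like, assuming the fairseq-style geometric decay with a floor (temp = max(start * decay**steps, stop)); the names below are illustrative only and not part of the speechbrain API:

    # Hypothetical sketch: geometric decay of the Gumbel temperature, floored at `stop`.
    start, stop, decay = 2.0, 0.25, 0.999995

    def annealed_temp(steps: int) -> float:
        # Decay the starting temperature each step, never dropping below `stop`.
        return max(start * decay ** steps, stop)

    annealed_temp(0)          # 2.0 at step 0
    annealed_temp(1_000_000)  # clamped at the floor, 0.25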

forward(x)[source]

Forward the latent vectors through the quantiser to obtain a quantised output.

training: bool
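For context, a hedged sketch of how the quantiser might be driven during training, combining forward and update_temp. The loop, step count, and random inputs below are placeholders; only the constructor arguments, output["x"], and the resulting shape are taken from the documented example above:

    import torch
    from speechbrain.nnet.quantisers import GumbelVectorQuantizer

    quantiser = GumbelVectorQuantizer(128, 100, (2.0, 0.25, 0.999995), 2, 50)

    for step in range(10):                    # placeholder training loop
        quantiser.update_temp(step)           # anneal the Gumbel temperature
        latents = torch.rand(10, 12, 128)     # stand-in for encoder outputs
        quantised = quantiser(latents)["x"]   # quantised vectors, shape (10, 12, 50)
        # ... feed `quantised` into the rest of the model and compute the loss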