speechbrain.lm.arpa module

Tools for working with ARPA format N-gram models

Expects the ARPA format to have:

  • a \data\ header

  • counts of ngrams in the order that they are later listed

  • line breaks between \data\ and N-grams: sections

  • \end\

E.g.:

```
\data\
ngram 1=2
ngram 2=1

\1-grams:
-1.0000 Hello -0.23
-0.6990 world -0.2553

\2-grams:
-0.2553 Hello world

\end\
```

Example

>>> # This example loads an ARPA model and queries it with BackoffNgramLM
>>> import io
>>> from speechbrain.lm.ngram import BackoffNgramLM
>>> from speechbrain.lm.arpa import read_arpa
>>> # First we'll put an ARPA format model in TextIO and load it:
>>> with io.StringIO() as f:
...     print("Anything can be here", file=f)
...     print("", file=f)
...     print("\\data\\", file=f)
...     print("ngram 1=2", file=f)
...     print("ngram 2=3", file=f)
...     print("", file=f)  # Ends data section
...     print("\\1-grams:", file=f)
...     print("-0.6931 a", file=f)
...     print("-0.6931 b 0.", file=f)
...     print("", file=f)  # Ends unigram section
...     print("\\2-grams:", file=f)
...     print("-0.6931 a a", file=f)
...     print("-0.6931 a b", file=f)
...     print("-0.6931 b a", file=f)
...     print("", file=f)  # Ends bigram section
...     print("\\end\\", file=f)  # Ends whole file
...     _ = f.seek(0)
...     num_grams, ngrams, backoffs = read_arpa(f)
>>> # The output of read_arpa is already formatted right for the query class:
>>> lm = BackoffNgramLM(ngrams, backoffs)
>>> lm.logprob("a", context = tuple())
-0.6931
>>> # Query that requires a backoff:
>>> lm.logprob("b", context = ("b",))
-0.6931
Authors
  • Aku Rouhe 2020

  • Pierre Champion 2023

Summary

Functions:

arpa_to_fst

Use kaldilm to convert an ARPA LM to FST.

read_arpa

Reads an ARPA format N-gram language model from a stream

Reference

speechbrain.lm.arpa.read_arpa(fstream)[source]

Reads an ARPA format N-gram language model from a stream

Parameters:

fstream (TextIO) – Text file stream (as commonly returned by open()) to read the model from.

Returns:

  • dict – Maps N-gram orders to the number of ngrams of that order. Essentially the \data\ section of an ARPA format file.

  • dict – The log probabilities (first column) in the ARPA file. This is a triply nested dict. The first layer is indexed by N-gram order (integer). The second layer is indexed by the context (tuple of tokens). The third layer is indexed by tokens, and maps to the log prob. This format is compatible with speechbrain.lm.ngram.BackoffNGramLM. Example: In ARPA format, log(P(fox|a quick red)) = -5.3 is expressed as:

    -5.3 a quick red fox

    And to access that probability, use:

    ngrams_by_order[4][('a', 'quick', 'red')]['fox']

  • dict – The log backoff weights (last column) in the ARPA file. This is a doubly nested dict. The first layer is indexed by N-gram order (integer). The second layer is indexed by the backoff history (tuple of tokens), i.e. the context on which the probability distribution is conditioned. This maps to the log weights. This format is compatible with speechbrain.lm.ngram.BackoffNGramLM. Example: If log(P(fox|a quick red)) is not listed, we find log(backoff(a quick red)) = -23.4, which in ARPA format is:

    <logp> a quick red -23.4

    And to access that here, use:

    backoffs_by_order[3][('a', 'quick', 'red')]

Raises:

ValueError – If no LM is found or the file is badly formatted.
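
To make the returned structures concrete, here is a small sketch (added for illustration, not part of the module itself) that re-reads the toy bigram model from the module-level example above and indexes into the three return values; the values in the comments are what the Returns description above implies for that model.

```
# Sketch: inspecting the structures returned by read_arpa for the toy
# bigram model used in the module-level example above.
import io

from speechbrain.lm.arpa import read_arpa

arpa_text = "\n".join([
    "Anything can be here",
    "",
    "\\data\\",
    "ngram 1=2",
    "ngram 2=3",
    "",
    "\\1-grams:",
    "-0.6931 a",
    "-0.6931 b 0.",
    "",
    "\\2-grams:",
    "-0.6931 a a",
    "-0.6931 a b",
    "-0.6931 b a",
    "",
    "\\end\\",
    "",
])

with io.StringIO(arpa_text) as f:
    num_grams, ngrams, backoffs = read_arpa(f)

num_grams[1]              # expected: 2, from the "ngram 1=2" line of the \data\ section
ngrams[2][("a",)]["b"]    # expected: -0.6931, i.e. log P(b | a)
backoffs[1][("b",)]       # expected: 0.0, the backoff weight on the "-0.6931 b 0." line
```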

speechbrain.lm.arpa.arpa_to_fst(words_txt: str | Path, in_arpa: str | Path, out_fst: str | Path, ngram_order: int, disambig_symbol: str = '#0', cache: bool = True)[source]

Use kaldilm to convert an ARPA LM to FST. For example, you could use speechbrain.lm.train_ngram to create an ARPA LM and then use this function to convert it to an FST.

It is worth noting that if the FST already exists at out_fst, it will not be converted again (so you may need to delete it by hand if you, at any point, change your ARPA model).

Parameters:
  • words_txt (str | Path) – Path to the words.txt file created by prepare_lang.

  • in_arpa (str | Path) – Path to an ARPA LM to convert to an FST.

  • out_fst (str | Path) – Path to where the fst will be saved.

  • ngram_order (int) – ARPA (and FST) ngram order.

  • disambig_symbol (str) – the disambiguation symbol to use.

  • cache (bool) – Whether to reuse an existing fst.txt file instead of re-creating it when it already exists.

Raises:

ImportError – If kaldilm is not installed.

Example

>>> from speechbrain.lm.arpa import arpa_to_fst
>>> # Create a small arpa model
>>> arpa_file = getfixture('tmpdir').join("bigram.arpa")
>>> arpa_file.write(
...     "Anything can be here\n"
...     + "\n"
...     + "\\data\\\n"
...     + "ngram 1=3\n"
...     + "ngram 2=4\n"
...     + "\n"
...     + "\\1-grams:\n"
...     + "0 <s>\n"
...     + "-0.6931 a\n"
...     + "-0.6931 b 0.\n"
...     + "" # Ends unigram section
...     + "\\2-grams:\n"
...     + "-0.6931 <s> a\n"
...     + "-0.6931 a a\n"
...     + "-0.6931 a b\n"
...     + "-0.6931 b a\n"
...     + "\n"  # Ends bigram section
...     + "\\end\\\n")  # Ends whole file
>>> # Create words vocab
>>> vocab = getfixture('tmpdir').join("words.txt")
>>> vocab.write(
...     "a 1\n"
...     + "b 2\n"
...     + "<s> 3\n"
...     + "#0 4")  # Ends whole file
>>> out = getfixture('tmpdir').join("bigram.txt.fst")
>>> arpa_to_fst(vocab, arpa_file, out, 2)
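
As a follow-up to the example above (the paths and file names below are hypothetical placeholders, not taken from the SpeechBrain documentation), the sketch shows the two ways of forcing a rebuild mentioned in the caching note: deleting the cached FST by hand, or passing cache=False so an existing file is not reused.

```
# Sketch: forcing arpa_to_fst to rebuild after the ARPA model has changed.
from pathlib import Path

from speechbrain.lm.arpa import arpa_to_fst

out_fst = Path("lm/bigram.fst.txt")   # hypothetical output path
if out_fst.exists():
    out_fst.unlink()                  # option 1: delete the stale FST by hand

arpa_to_fst(
    words_txt="lang/words.txt",       # hypothetical words.txt from prepare_lang
    in_arpa="lm/bigram.arpa",         # hypothetical ARPA LM
    out_fst=out_fst,
    ngram_order=2,
    cache=False,                      # option 2: do not reuse an existing FST file
)
```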