[SCISPACE] NGRAM Language Model

sifoo bad
1 min read

N-gram language models are probabilistic models of text that condition each word on a limited window of preceding words. They are widely used in speech applications and other natural language systems. One issue with n-gram language models is that they cannot observe every possible word sequence in training, so smoothing methods are needed to redistribute probability mass to unseen events. Different smoothing algorithms may perform better depending on the application of the language model: for example, unigram models with interpolation smoothing may be better for information retrieval applications, while trigram models with backoff smoothing may be better for speech. Several smoothing methods have been examined for bigram language models built from chat data, which are used for machine translation and text classification models.
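The ideas above can be sketched in a few lines of Python. This is a minimal illustration, not a production model: `laplace_bigram_prob` shows add-one smoothing (redistributing probability mass to unseen bigrams), and `interp_prob` shows simple linear interpolation between bigram and unigram estimates; the function names and the weight `lam` are illustrative choices, not from the source.

```python
from collections import Counter

def train_bigram_counts(tokens):
    """Count unigram and bigram occurrences in a token sequence."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def laplace_bigram_prob(w_prev, w, unigrams, bigrams, vocab_size):
    """Add-one (Laplace) smoothed P(w | w_prev): unseen bigrams get
    a small nonzero probability instead of zero."""
    return (bigrams[(w_prev, w)] + 1) / (unigrams[w_prev] + vocab_size)

def interp_prob(w_prev, w, unigrams, bigrams, total, lam=0.7):
    """Linear interpolation of the bigram MLE with the unigram
    distribution; lam is an illustrative mixing weight."""
    p_bi = bigrams[(w_prev, w)] / unigrams[w_prev] if unigrams[w_prev] else 0.0
    p_uni = unigrams[w] / total
    return lam * p_bi + (1 - lam) * p_uni

tokens = "the cat sat on the mat".split()
unigrams, bigrams = train_bigram_counts(tokens)
V = len(unigrams)  # number of word types seen in training

p_seen = laplace_bigram_prob("the", "cat", unigrams, bigrams, V)
p_unseen = laplace_bigram_prob("the", "sat", unigrams, bigrams, V)
# The seen bigram ("the", "cat") gets more mass than the unseen
# ("the", "sat"), but the unseen one is no longer zero.
```

Even on this toy corpus the effect of smoothing is visible: without it, any sentence containing an unseen bigram would receive probability zero, which is exactly the failure mode smoothing is meant to prevent.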

https://typeset.io/search?q=ngram%20language%20model


Written by

sifoo bad