An Empirical Study of Smoothing Techniques for Language Modeling

Harvard University, 1998. – 63 pp.
We present a tutorial introduction to n-gram models for language modeling and survey the most widely used smoothing algorithms for such models. We then present an extensive empirical comparison of several of these smoothing techniques. We investigate how factors such as training data size, training corpus (e.g., Brown versus Wall Street Journal), count cutoffs, and n-gram order (bigram versus trigram) affect the relative performance of these methods, which we measure through the cross-entropy of test data. Our results show that previous comparisons have not been complete enough to fully characterize smoothing algorithm performance. We introduce methodologies for analyzing smoothing algorithm efficacy in detail, and using these techniques we motivate a novel variation of Kneser-Ney smoothing that consistently outperforms all other algorithms evaluated. Finally, we present results showing that improved language model smoothing leads to improved speech recognition performance.
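To make the evaluation concrete, the sketch below implements standard interpolated Kneser-Ney smoothing for bigrams (not the paper's modified variant) together with the per-word cross-entropy used to compare smoothers. It is a minimal Python illustration: the function names, the fixed absolute discount D = 0.75, and the lack of an unknown-word model are simplifying assumptions of this sketch, not details from the report.

```python
from collections import Counter, defaultdict
import math

def train_kn_bigram(tokens, discount=0.75):
    """Collect the counts needed for interpolated Kneser-Ney bigrams.

    `discount` is a fixed absolute discount D; the report tunes such
    parameters on held-out data, which we skip here for brevity.
    """
    bigrams = Counter(zip(tokens, tokens[1:]))   # c(u, v) over bigram tokens
    history_totals = Counter(tokens[:-1])        # c(u) = bigrams starting with u
    followers = defaultdict(set)                 # distinct v seen after u
    predecessors = defaultdict(set)              # distinct u seen before v
    for u, v in bigrams:
        followers[u].add(v)
        predecessors[v].add(u)
    return bigrams, history_totals, followers, predecessors, len(bigrams), discount

def kn_prob(model, u, v):
    """P_KN(v | u) = max(c(u,v) - D, 0) / c(u) + lambda(u) * P_cont(v)."""
    bigrams, history_totals, followers, predecessors, n_types, D = model
    # Continuation probability: how many distinct histories precede v,
    # normalized by the total number of distinct bigram types.
    p_cont = len(predecessors[v]) / n_types
    c_u = history_totals[u]
    if c_u == 0:                                  # unseen history: pure continuation
        return p_cont
    discounted = max(bigrams[(u, v)] - D, 0.0) / c_u
    backoff_weight = D * len(followers[u]) / c_u  # lambda(u)
    return discounted + backoff_weight * p_cont

def cross_entropy(model, test_tokens):
    """Per-word cross-entropy of test data in bits (lower is better).

    Assumes every test word was seen as a bigram continuation in training;
    a real evaluation would add an unknown-word model.
    """
    logs = [math.log2(kn_prob(model, u, v))
            for u, v in zip(test_tokens, test_tokens[1:])]
    return -sum(logs) / len(logs)

if __name__ == "__main__":
    train = "the cat sat on the mat the cat ran".split()
    test = "the cat sat on the mat".split()
    model = train_kn_bigram(train)
    print(f"cross-entropy: {cross_entropy(model, test):.3f} bits/word")
```

Because the backoff weight lambda(u) returns exactly the probability mass removed by discounting, the distribution sums to one over the training vocabulary for each seen history u.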

Author(s): Chen S.F., Goodman J.

Language: English
Tags: Informatics and Computer Engineering; Artificial Intelligence; Computational Linguistics