Context-based Language Model Adaptation for Lecture Translation

Event date: Thursday, 18 April, 2013 - 11:00
Location: Sala EIT
Speaker: Nick Ruiz
Abstract:

Statistical Machine Translation (SMT) systems are generally trained on large, general-purpose corpora, such as legislative proceedings or newswire texts. To be useful in the real world, an SMT system must be robust to the form or genre of new, untranslated texts. In many cases, domain adaptation is applied by adapting the probabilistic models of an SMT system (e.g. its translation and language models) to statistically represent an entire translation task. In other cases, however, such as lecture translation, documents or discourses vary widely from one another and may even contain topical shifts that cannot be accurately captured from a bird's-eye perspective. In such scenarios it is preferable to employ topic adaptation, which adapts the system within a discourse based on small contexts of information neighboring a given sentence or utterance.

In this talk, we focus primarily on topic adaptation for language modeling, aiming to improve the fluency of translations through both word choice and short-range reordering decisions. We present cross-lingual topic adaptation methods that adapt a language model (LM) to the topic distribution of an adaptation context during translation. We train a topic model on a collection of bilingual documents to capture both topic and unigram distributions, which are later used to adapt general-purpose LMs on the fly, given only source-language texts. In particular, we explore adaptation techniques based on the theory of Minimum Discrimination Information (MDI) (Della Pietra et al., 1992).
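
As background (notation ours, not taken from the abstract; this is the standard form of MDI language model adaptation): given a background LM $P_B(w \mid h)$ and an in-domain unigram distribution $P_A(w)$ estimated from the adaptation context, the adapted model rescales each n-gram probability as

\[
P_A(w \mid h) \;=\; \frac{P_B(w \mid h)\,\alpha(w)}{Z(h)},
\qquad
\alpha(w) \;=\; \left(\frac{P_A(w)}{P_B(w)}\right)^{\gamma},
\qquad
Z(h) \;=\; \sum_{w'} P_B(w' \mid h)\,\alpha(w'),
\]

where the adaptation rate $\gamma \in (0, 1]$ dampens the unigram ratio and $Z(h)$ renormalizes the distribution for each history $h$. The per-history sum over the entire vocabulary in $Z(h)$ is what makes exact MDI adaptation expensive.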

Since exact MDI adaptation cannot be computed in real time in scenarios such as lecture translation, we additionally present a lazy log-linear approximation that can be computed efficiently during decoding.
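
Concretely (again our notation, a sketch of the idea rather than necessarily the talk's exact formulation): dropping the normalization term $Z(h)$ leaves an unnormalized log-linear score,

\[
\log P(w \mid h) \;\approx\; \log P_B(w \mid h) \;+\; \gamma\,\bigl(\log P_A(w) - \log P_B(w)\bigr),
\]

which depends only on quantities already stored in the background LM and the adaptation unigram model, and so can be computed lazily, one n-gram lookup at a time, as the decoder requests scores.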