Speaker: Tsung-Hsien Wen, doctoral candidate at the University of Cambridge, Cambridge, U.K.
Abstract: Natural language generation (NLG) is a critical component of spoken dialogue systems, and it has a significant impact on both usability and perceived quality. Most NLG systems in common use employ rules and heuristics, and tend to generate rigid and stylised responses without the natural variation of human language. They are also not easily scaled to systems covering multiple domains and languages.
In this talk I'm going to introduce our recently proposed language generator based on a Recurrent Neural Network architecture, dubbed the Semantically Conditioned Long Short-Term Memory (SC-LSTM) generator. The SC-LSTM generator can learn from unaligned data by jointly optimising sentence planning and surface realisation using a simple cross-entropy training criterion, and language variation can be easily achieved by sampling from output candidates. Drawing on a follow-up study, I will also show that the model can be rapidly extended to new dialogue domains by data counterfeiting and discriminative training methods.
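To give a flavour of the semantic conditioning idea, the sketch below shows one way a dialogue-act (DA) vector can be gated into an LSTM recurrence so that content selection and wording are learned jointly. It is a minimal, illustrative implementation assuming PyTorch; the class name, layer shapes, and exact gating details are assumptions for exposition, not the talk's actual model or code.

```python
import torch
import torch.nn as nn


class SCLSTMCell(nn.Module):
    """Illustrative semantically conditioned LSTM cell: a dialogue-act (DA)
    vector is consumed gradually through a reading gate and injected into
    the memory cell at each step (a sketch, not the authors' implementation)."""

    def __init__(self, word_dim: int, hidden_dim: int, da_dim: int):
        super().__init__()
        # Standard LSTM gates (input, forget, output, candidate), computed jointly.
        self.gates = nn.Linear(word_dim + hidden_dim, 4 * hidden_dim)
        # Reading gate that controls how much of the remaining DA vector is kept.
        self.read_gate = nn.Linear(word_dim + hidden_dim, da_dim)
        # Projection of the DA vector into the memory cell.
        self.da_to_cell = nn.Linear(da_dim, hidden_dim, bias=False)

    def forward(self, w_t, h_prev, c_prev, d_prev):
        x = torch.cat([w_t, h_prev], dim=-1)
        i, f, o, g = self.gates(x).chunk(4, dim=-1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        g = torch.tanh(g)
        # The reading gate decays the DA vector as its content gets verbalised.
        r = torch.sigmoid(self.read_gate(x))
        d_t = r * d_prev
        # DA information is added to the memory cell alongside the usual update.
        c_t = f * c_prev + i * g + torch.tanh(self.da_to_cell(d_t))
        h_t = o * torch.tanh(c_t)
        return h_t, c_t, d_t


# Usage sketch: one decoding step for a batch of 2 with hypothetical dimensions.
cell = SCLSTMCell(word_dim=32, hidden_dim=64, da_dim=10)
w = torch.randn(2, 32)                     # current word embedding
h, c = torch.zeros(2, 64), torch.zeros(2, 64)
d = torch.randint(0, 2, (2, 10)).float()   # 1-hot style dialogue-act features
h, c, d = cell(w, h, c, d)
```

Training such a cell with a word-level cross-entropy loss is what lets sentence planning (which DA slots to mention, and when) and surface realisation (the wording) be optimised jointly from unaligned utterance/DA pairs, and sampling from the softmax at decoding time yields varied realisations of the same dialogue act.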