UC Santa Cruz Electronic Theses and Dissertations

Generating Natural Language with Semantic and Syntactic Generalization

Creative Commons Attribution 4.0 (CC BY 4.0) license
Abstract

Traditional statistical natural language generation (NLG) systems require substantial hand-engineering: components such as content planners, sentence planners, and surface realizers must all be designed and built manually, then updated whenever a new type of utterance is required. Neural natural language generation (NNLG) models, on the other hand, learn to generate text by processing massive amounts of data in end-to-end encoder-decoder frameworks, in which syntactic properties are learned automatically by the model. Although NNLG models learn largely automatically, they still require that training data be collected and labeled, which is often a laborious process. The advantage of not needing handcrafted templates and syntax-to-semantics dictionaries may thereby be offset by the need to retrain neural models on new data as new domains are added, effectively replacing a "knowledge bottleneck" with a "data bottleneck".
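
To make the contrast concrete, the following is a minimal sketch, with hypothetical attribute names and templates, of the kind of handcrafted surface realizer that traditional pipelines depend on; an NNLG model learns this mapping from data instead.

```python
# Hypothetical illustration of the hand-engineering described above: in a
# traditional pipeline, every attribute of the meaning representation (MR)
# needs a handwritten realization rule, and a new utterance type means
# writing new templates by hand.

TEMPLATES = {
    "name":     lambda v: v,
    "eat_type": lambda v: f"is a {v}",
    "area":     lambda v: f"in the {v} area",
}

def realize(mr):
    """Surface-realize a flat attribute-value MR with fixed templates."""
    parts = [TEMPLATES[attr](value) for attr, value in mr if attr in TEMPLATES]
    return " ".join(parts) + "."

print(realize([("name", "The Eagle"),
               ("eat_type", "coffee shop"),
               ("area", "riverside")]))
# -> The Eagle is a coffee shop in the riverside area.
```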

To overcome the data bottleneck, we experiment with methods that leverage existing datasets to allow our NNLG models to generalize to novel meaning representations and sentence planning operations. We explore generating artificial data and mixing data from different sources as ways to augment the training data available to the NNLG, as sketched below. For each augmentation method, we evaluate whether NNLGs can learn syntactic and semantic generalizations.
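
The sketch below illustrates the dataset-mixing idea under simplifying assumptions: examples are linearized MR/text pairs, and each source dataset is marked with a hypothetical supervision token so a single model can be trained on the mixture.

```python
# A sketch of one augmentation strategy: pool MR-to-text examples from two
# datasets and prefix each linearized MR with a token naming its source, so
# one encoder-decoder model can be trained on the mixture. The token
# strings and data layout here are hypothetical.

import random

def tag_examples(examples, source_token):
    """Prefix each linearized MR with a token identifying its source."""
    return [(f"{source_token} {mr}", text) for mr, text in examples]

def mix_datasets(dataset_a, dataset_b, seed=0):
    """Combine two tagged datasets and shuffle them into one training set."""
    mixed = tag_examples(dataset_a, "<SRC_A>") + tag_examples(dataset_b, "<SRC_B>")
    random.Random(seed).shuffle(mixed)
    return mixed
```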

In this thesis, we develop methods that enable NNLG models to perform three types of generalization. First, we generalize multiple stylistic features to a single supervision token that represents the personality formed by those features; we then generate output conditioned on two different personality supervision tokens at once to create novel personalities not present in the training data. Second, we perform generalization on sentence planning operations. Sentence planning is the NLG component that determines how individual propositions are combined into sentences, which usually affects the final style of the realization. We generalize specific sentence planning operations, such as sentence scoping and contrastive discourse structuring, to values and attribute combinations beyond those seen in the original training data. Finally, we combine training data from different sources to produce outputs for meaning representations that blend the ontologies of both sources. In all three experiments, we investigate different representations and architectures that enable models to generalize.
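
As an illustration of how such supervision can be expressed, here is a minimal sketch, with assumed token names, of building an encoder input that carries a personality token and a sentence-scoping token alongside the linearized MR; the thesis explores several such representations.

```python
# A minimal sketch, under assumed token names, of exposing supervision
# signals to an encoder-decoder NNLG: a single personality token stands in
# for a vector of stylistic features, and a sentence planning operation
# (here, sentence scoping) is encoded as an extra prefix token.

def linearize_mr(mr):
    """Flatten an attribute-value MR into a token sequence."""
    return " ".join(f"{attr}[{value}]" for attr, value in mr)

def build_encoder_input(mr, personality=None, n_sentences=None):
    """Prepend supervision tokens to the linearized MR."""
    tokens = []
    if personality:                 # e.g. "<EXTRAVERT>"
        tokens.append(personality)
    if n_sentences is not None:     # sentence scoping: target sentence count
        tokens.append(f"<SCOPE_{n_sentences}>")
    tokens.append(linearize_mr(mr))
    return " ".join(tokens)

# Pairing two personality tokens at inference time is one way to probe for
# novel personalities absent from the training data:
# build_encoder_input(mr, personality="<EXTRAVERT> <CONSCIENTIOUS>")
```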

Our contributions include methods that enable NNLG models to generalize and thereby expand beyond their original training data. This is an important extension of prior results and a significant step toward making NNLG models more useful at low data cost. We also generate and release multiple datasets, which we use to train and test our models.
