eScholarship
Open Access Publications from the University of California

UC Santa Cruz Electronic Theses and Dissertations

Stylistic Control for Neural Natural Language Generation

Abstract

Neural models for generating text from structured representations of meaning have recently gained popularity in the natural language generation (NLG) community. Instead of using a traditional NLG pipeline with separate modules for sentence planning and surface realization, neural models combine these steps into a single end-to-end framework. This new paradigm allows for low-effort, data-driven generation, but it makes it difficult to control model output, that is, to produce the required semantics with the desired syntactic or stylistic constructions for a given application.
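
To make the contrast concrete, the minimal Python sketch below (our own illustration; the slot names, meaning representation, and linearization scheme are hypothetical and not drawn from the thesis) shows the kind of input-output pair an end-to-end neural generator is trained on: a structured meaning representation is linearized into a token sequence and paired directly with a natural language realization, with no intermediate sentence plan.

    # Illustrative only: a slot-value meaning representation (MR) is flattened
    # into a token string and paired with a reference sentence; an end-to-end
    # encoder-decoder would be trained to map the former directly to the latter.

    def linearize_mr(mr: dict) -> str:
        """Flatten a slot-value MR into a token sequence for a seq2seq model."""
        return " ".join(f"{slot}[{value}]" for slot, value in mr.items())

    # Hypothetical restaurant-domain example (not from PersonageNLG or YelpNLG).
    mr = {"name": "Aromi", "food": "Italian", "rating": "high"}
    reference = "Aromi is a highly rated Italian restaurant."

    print(linearize_mr(mr), "->", reference)
    # name[Aromi] food[Italian] rating[high] -> Aromi is a highly rated Italian restaurant.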

This thesis takes on the task of learning to control neural natural language generation systems, with the goal of producing natural language outputs that are both semantically correct and stylistically varied. We tackle three critical bottlenecks of neural NLG: how to introduce a mechanism for producing style with neural generators, how to systematically acquire the massive amounts of data required to train them, and how to jointly control semantic and stylistic choices so as to allow for more diverse model outputs. We address the style bottleneck by experimenting with different methods of supervision in neural models on a synthetic dataset that we build (PersonageNLG), showing that we can produce diverse sentence planning operations in our model outputs. We address the data bottleneck by using freely available review data, rather than crowdsourcing, to create a massive, highly descriptive, and stylistically diverse corpus for training neural generators (YelpNLG). We address the control bottleneck by constructing stylistically rich meaning representations from review text using parse information and freely available ontologies, and by providing different forms of supervision to our neural models, allowing us to produce outputs that exhibit a rich array of stylistic variation from semantically grounded inputs.
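
As a rough illustration of what such supervision can look like (the style features and token names below are our own hypothetical choices, not the thesis's exact scheme), one common approach is to extract stylistic labels from the reference text and attach them to the linearized meaning representation as control tokens, so that the generator learns to associate them with particular surface realizations:

    # Hedged sketch: derive simple style labels from a reference sentence and
    # prepend them to the linearized MR as control tokens. The specific
    # features (a length bucket, exclamation use) are illustrative assumptions.

    def add_style_supervision(linearized_mr: str, reference: str) -> str:
        """Prefix the MR with automatically derived style tokens."""
        tokens = []
        tokens.append("len_long" if len(reference.split()) > 15 else "len_short")
        tokens.append("exclaim" if "!" in reference else "no_exclaim")
        return " ".join(tokens) + " " + linearized_mr

    mr = "name[Aromi] food[Italian] rating[high]"
    ref = "Aromi serves wonderful Italian food, you have to try it!"
    print(add_style_supervision(mr, ref))
    # len_short exclaim name[Aromi] food[Italian] rating[high]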

We show that by controlling the nature of our input data and how it is represented in our models, we can control a model's ability to produce the required style without sacrificing its ability to produce fluent outputs that express the required content. Our data and experiments introduce novel methods for producing stylistic variation within a neural natural language generation pipeline, and they generalize to new domains and style choices.
