eScholarship
Open Access Publications from the University of California

UCLA Electronic Theses and Dissertations

Making the Most of It: Word Sense Annotation and Disambiguation in the Face of Data Sparsity and Ambiguity

Abstract

Natural language is highly ambiguous, with the same word having different meanings depending on the context. While human readers often have no trouble interpreting the correct meaning, semantic ambiguity poses a significant problem for many natural language systems, such as those that translate text or perform machine reading. The task of identifying which meaning of a word is present in a given context is known as Word Sense Disambiguation (WSD), where a word's meanings are discretized into units referred to as senses. Because languages contain hundreds of thousands of unique words and each of those words can have multiple meanings, comprehensive sense-annotated corpora are often sparse, with only tens to low hundreds of annotated examples of each word. As a result, creating high-performance WSD systems requires overcoming this data sparsity.
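To make the WSD task concrete, here is a minimal sketch in the spirit of the simplified Lesk algorithm: pick the sense whose dictionary gloss overlaps most with the surrounding context. The two-sense inventory for "bank" is a hypothetical toy example for illustration, not the unsupervised sense-induction algorithms developed in this thesis.

```python
# Toy sense inventory: sense identifiers mapped to short glosses.
# These entries are invented for illustration only.
SENSES = {
    "bank": {
        "bank.n.01": "sloping land beside a body of water such as a river",
        "bank.n.02": "a financial institution that accepts deposits and lends money",
    }
}

def disambiguate(word, context):
    """Pick the sense whose gloss shares the most words with the context
    (a simplified Lesk-style overlap heuristic)."""
    context_words = set(context.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(context_words & set(gloss.lower().split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(disambiguate("bank", "he sat on the grassy bank of the river watching the water"))
print(disambiguate("bank", "the bank approved the loan and money transfer"))
```

Overlap heuristics like this work only when glosses happen to share vocabulary with the context, which is precisely why sense-annotated training data, despite its sparsity, is so valuable for building stronger WSD systems.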

This thesis provides a three-fold approach to improving WSD performance in the face of data sparsity. First, we introduce two new algorithms that take the role of a lexicographer and automatically learn the senses of a word from example uses in a fully unsupervised way. We then demonstrate that these unsupervised systems can be combined with a limited amount of annotated data to create a semi-supervised WSD system that significantly outperforms a state-of-the-art supervised WSD system trained on the same data. Second, we propose a novel method for gathering high-quality sense annotations from large numbers of untrained, online workers, commonly referred to as crowdsourcing. Our method lowers the time and cost of building sense-annotated corpora while maintaining inter-annotator agreement comparable to that of trained experts. Third, we analyze cases of ambiguity in sense annotations, where two annotators differ about which sense best describes the meaning of a particular usage of a word. To perform this analysis, we built the largest sense-annotated corpus in which cases of semantic ambiguity are explicitly marked. Our analysis of this corpus revealed multiple causes for this ambiguity, as well as how the ambiguity may be interpreted and resolved by natural language applications using ambiguous data. To complement this work on ambiguity, we also introduce a new methodology for evaluating WSD systems that explicitly report ambiguous instances.
