UC Berkeley Electronic Theses and Dissertations
Computational Modeling of Cortical and Behavioral Responses to Emotional Stimuli

Abstract

Emotions are stereotyped responses to situations of high survival value. Recognizing such situations is a necessary precondition for emotional elicitation. As vision is the primary human sensory modality, visual perception of an emotional situation is often responsible for eliciting emotion in humans. This raises two questions: how does the human brain extract emotional content from patterns of light on the retina in order to initiate behavioral responses that are adaptive for survival? And how are emotional scenes processed differently from non-emotional scenes? To address these questions, I conducted a series of experiments using techniques and theory from a broad range of disciplines, including cognitive neuroscience, machine learning, psychophysics, and affective science. I motivate, describe, and interpret these experiments in the five chapters of this dissertation.

Chapter 1 contains a brief review of the prior scientific findings that motivated this dissertation. Functional magnetic resonance imaging (fMRI) is a non-invasive neuroimaging modality that allows researchers to record brain activity with high spatial resolution. Traditional techniques for analyzing fMRI data have recently been complemented by computational machine learning techniques, giving neuroscientists powerful new tools to infer the brain’s mechanisms and representations. Voxel-wise modeling (VWM) is one such method. Chapter 2 describes an fMRI experiment that used a large corpus of naturalistic emotional images and VWM analysis to model brain representations of a combined semantic and emotional feature space. Principal components analysis (PCA) of voxel tuning was then used to uncover the primary dimensions of representation within occipital-temporal cortex (OTC). Alongside animacy, the valence and arousal of animate stimuli are primary dimensions of OTC tuning. Furthermore, this tuning predicts the appropriate behavioral response to the viewed images better than do the semantic and emotional image features used to model the fMRI data. These findings suggest that OTC representations of naturalistic emotional images may be used by other brain regions to elicit behavioral responses appropriate to the situations of high survival value depicted in these scenes.
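To make the VWM-plus-PCA approach concrete, the sketch below fits a ridge-regression encoding model for every voxel and then runs PCA over the resulting voxel tuning weights. It is a minimal sketch under stated assumptions: the feature and response matrices are random placeholders, the single ridge penalty is arbitrary, and nothing here reproduces the dissertation’s actual pipeline.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import Ridge

    # Hypothetical design: X holds semantic + emotional stimulus features
    # (n_images x n_features); Y holds fMRI responses (n_images x n_voxels).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 40))
    Y = rng.standard_normal((500, 2000))

    # One regularized encoding model per voxel. Ridge fits all voxels at
    # once; a single shared alpha is used here for brevity.
    W = Ridge(alpha=10.0).fit(X, Y).coef_   # shape: (n_voxels, n_features)

    # PCA over voxel tuning: each voxel is a point in feature space, and
    # the leading components are candidate dimensions of representation
    # (e.g., animacy, valence, and arousal in the study above).
    pca = PCA(n_components=5).fit(W)
    print(pca.explained_variance_ratio_)

In practice the regularization strength is typically tuned per voxel by cross-validation, and model accuracy is evaluated on held-out stimuli before interpreting the tuning weights.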

To address several theoretical limitations of the study described in Chapter 2, I conducted two literature reviews in Chapter 3. The first reviews studies of congenitally blind subjects which suggest that OTC contains supramodal semantic representations that may be idiosyncratically tuned to support appropriate behavioral responses. The second reviews the literature on attention to and perception of emotional images and the neural mechanisms that subserve these processes. Additionally, Chapter 3 describes seven fMRI analyses of the Chapter 2 data that address that study’s empirical limitations. These analyses include a representational similarity analysis (RSA), a univariate SPM analysis, variance partitioning of the Chapter 2 VWM model, and several control analyses. The findings from these analyses further support the claim from Chapter 2 that, alongside animacy, the valence and arousal of animate stimuli are represented within OTC.
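The variance-partitioning logic mentioned above can be illustrated with a toy script: fit encoding models on the semantic features alone, the emotional features alone, and their union, then compare held-out R² to split a voxel’s explained variance into unique and shared parts. All data, dimensions, and names below are hypothetical placeholders, not the dissertation’s analysis.

    import numpy as np
    from sklearn.linear_model import Ridge

    # Hypothetical features and one voxel's response (all placeholders).
    rng = np.random.default_rng(1)
    n = 600
    X_sem = rng.standard_normal((n, 20))            # semantic features
    X_emo = rng.standard_normal((n, 6))             # emotional features
    y = (X_sem @ rng.standard_normal(20)
         + X_emo @ rng.standard_normal(6)
         + rng.standard_normal(n))

    train, test = slice(0, 400), slice(400, None)

    def held_out_r2(X):
        """Fit on the training split, return R^2 on the held-out split."""
        return Ridge(alpha=1.0).fit(X[train], y[train]).score(X[test], y[test])

    r2_sem = held_out_r2(X_sem)
    r2_emo = held_out_r2(X_emo)
    r2_full = held_out_r2(np.hstack([X_sem, X_emo]))

    # Partition the full model's explained variance.
    unique_sem = r2_full - r2_emo    # variance only semantics explains
    unique_emo = r2_full - r2_sem    # variance only emotion explains
    shared = r2_sem + r2_emo - r2_full
    print(unique_sem, unique_emo, shared)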

Situations of high survival value often occur quickly, or are perceived only briefly, leaving the human visual system little information on which to act. Turning from cortical to behavioral responses to naturalistic emotional images, in Chapter 4 I describe an experiment using ultra-rapid image presentation to explore the limits of human performance in semantic and emotional valence categorization during a brief glance. Using stimuli and categorization tasks drawn from a broader range of semantic and valence categories than in previous studies, along with controls for observed response bias, I found that humans can accurately categorize both the semantic category (animal, human, object, or building) and emotional valence (negative, neutral, or positive) of an image presented for as little as 17 ms and backward masked. Furthermore, when the image depicted an emotionally negative scene, semantic performance was significantly worse than for images depicting neutral or positive scenes. I also found that, across several valence-by-semantic-category conditions, valence categorization was above chance only when subjects successfully categorized the semantic category of an image; the converse was not true. This suggests that an image’s semantic information must be extracted before its emotional information can be, a finding that supports the cognitive primacy hypothesis over the affective primacy hypothesis, two competing accounts of visual emotional processing in affective science. Finally, in Chapter 5, I interpret the results of the experiments described in Chapters 2–4 taken as a whole and offer closing remarks.
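One common way to control for response bias in a forced-choice task like this is to score balanced accuracy (the mean of per-category hit rates) rather than raw accuracy, and to test performance against the chance level implied by the number of response alternatives. The sketch below illustrates that idea on simulated trials; it is an assumption-laden stand-in, not the dissertation’s actual bias-correction procedure.

    import numpy as np
    from scipy.stats import binomtest

    # Simulated trials: true vs. reported valence at 17 ms presentation.
    rng = np.random.default_rng(2)
    cats = np.array(["negative", "neutral", "positive"])
    truth = rng.choice(cats, size=300)
    guess = rng.choice(cats, size=300)
    resp = np.where(rng.random(300) < 0.5, truth, guess)

    # Balanced accuracy averages per-category hit rates, so a bias toward
    # one response option cannot push the score above chance by itself.
    balanced_acc = np.mean([np.mean(resp[truth == c] == c) for c in cats])

    # Test raw accuracy against the 1/3 chance level for three options.
    k = int(np.sum(resp == truth))
    print(balanced_acc, binomtest(k, n=300, p=1/3).pvalue)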
