eScholarship: Open Access Publications from the University of California

UC Riverside Electronic Theses and Dissertations

Biologically Inspired Facial Emotion Recognition

Abstract

When facial emotion recognition is performed in unconstrained settings, humans outperform state-of-the-art algorithms. State-of-the-art systems suffer from three major technical problems: (1) they attempt to use every frame in the training data to build a model, even frames that are redundant or unnecessary for describing a person's emotions; (2) when a Gabor filter is used as the facial feature descriptor, it erroneously captures background texture as important edge information, and the amount of computer memory required to describe faces with Gabor features is undesirably high; (3) most current algorithms do not generalize to unconstrained data because each person expresses emotions differently, and the persons in the testing data are not the same persons encountered in the training data. In these situations, current approaches perform inadequately because models built from training data cannot properly predict the emotions of unseen testing samples. We address each of these three problems by presenting systems based on the human visual system. The first system, called vision and attention theory, temporally downsamples the training and testing data to reduce memory cost. The second system, called background-suppressing Gabor filtering, represents the face in the same way that the human visual system's non-classical receptive field does, overcoming background texture. The third system, called score-based facial emotion recognition, scores a frontal face image's relationship to reference faces and to temporal information. We thoroughly test all three systems on four publicly available datasets: the Japanese Female Facial Expression Database, Cohn-Kanade+, Man-Machine Interface, and the Audio/Visual Emotion Challenge. We find that our systems, which emulate the human visual system, outperform state-of-the-art systems. This work shows promise for the detection of facial emotion in unconstrained settings.
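Of the three systems, background-suppressing Gabor filtering makes the most concrete mechanistic claim: responses driven by cluttered background texture are damped, in analogy with non-classical receptive field surround suppression. The sketch below illustrates one plausible reading of that idea in Python; the subtractive inhibition term, the Gaussian surround estimate, and every parameter value and function name are illustrative assumptions, not the dissertation's actual formulation.

```python
# Illustrative sketch: Gabor filtering with non-classical receptive field
# (surround) inhibition. All names and parameters are assumptions for
# demonstration, not the dissertation's exact method.
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5):
    """Real part of a Gabor filter oriented at angle `theta` (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr) ** 2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lambd)
    return envelope * carrier

def suppressed_response(image, theta, surround_sigma=8.0, alpha=1.0):
    """Rectified Gabor response minus a subtractive surround-inhibition term.

    Dense background texture raises the local surround energy, so its
    responses are inhibited; isolated facial edges, whose surround is
    comparatively quiet, survive suppression."""
    response = np.abs(convolve(image.astype(float), gabor_kernel(theta=theta)))
    surround = gaussian_filter(response, surround_sigma)  # crude non-CRF estimate
    return np.maximum(response - alpha * surround, 0.0)

# Example: an orientation bank over a face image, stacked as a feature map.
face = np.random.rand(64, 64)  # stand-in for a cropped grayscale face
features = np.stack([suppressed_response(face, t)
                     for t in np.linspace(0, np.pi, 4, endpoint=False)])
```

A subtractive inhibition term of this kind follows classical surround-suppression models of contour detection; a divisive normalization of the response by its surround would be an equally plausible reading of the abstract.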
