Date of Award

Computer Science - Applied Computing Track

TSYS School of Computer Science

First Advisor

Shamim Khan

Facial expressions conveying emotions are vital for human communication, and they are also important in studies of human interaction and behavior. Recognizing emotions from facial images may provide a fast, practical, and noninvasive approach. Most previous studies of emotion recognition from facial images were based on the Facial Action Coding System (FACS), developed by Ekman and Friesen in 1978 to identify distinct facial muscular actions. Previous artificial neural network-based approaches to classifying facial expressions focused on improving one particular neural network model for better accuracy. The purpose of the present study was to compare different artificial neural network models and determine which model was best at recognizing emotions from facial images. The three neural network models were:
1. the Hopfield network
2. the Learning Vector Quantization (LVQ) network
3. the multilayer (feedforward) back-propagation network
Several facial parameters were extracted from facial images and used to train the different neural network models. The best performing network was the Hopfield network, at 72.50% accuracy. Next, the facial parameters were tested for their significance in identifying facial expressions, and a subset of the original facial parameters was used to retrain the networks. The best performing network using the subset of facial parameters was the LVQ network, at 67.50% accuracy. This study has helped to determine which neural network model was best at identifying facial expressions and to show the importance of a good set of parameters representing the facial expression. It has also shown that more research is needed to find a set of parameters that will improve the accuracy of emotion identification using artificial neural networks.
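To illustrate the general approach described above, the following is a minimal sketch of a multilayer feedforward network trained with back-propagation, in the spirit of the third model compared in the study. This is not the thesis's actual implementation: the network sizes, learning rate, and data are illustrative assumptions, and synthetic two-dimensional points stand in for the extracted facial parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, hidden=4, lr=1.0, epochs=5000, seed=0):
    """Train a one-hidden-layer feedforward network by back-propagation.

    X: (n_samples, n_features) parameter vectors; y: 0/1 class labels.
    Hyperparameters here are illustrative, not from the thesis.
    """
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        # Forward pass through hidden and output layers.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: gradients of mean squared error.
        d_out = (out - y[:, None]) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # Gradient-descent weight updates.
        W2 -= lr * (h.T @ d_out) / len(X)
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * (X.T @ d_h) / len(X)
        b1 -= lr * d_h.mean(axis=0)
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return (out[:, 0] >= 0.5).astype(int)

# Synthetic stand-ins for facial parameter vectors: two well-separated
# clusters, one per "expression" class.
X = np.array([[0.0, 0.1], [0.1, 0.0], [0.2, 0.1], [0.1, 0.2],
              [0.9, 1.0], [1.0, 0.9], [0.8, 0.9], [0.9, 0.8]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

params = train_mlp(X, y)
acc = (predict(params, X) == y).mean()
```

In the study itself, the inputs would be the extracted facial parameters and the targets the emotion labels; accuracy would be measured on held-out images rather than on the training set as in this toy run.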