Performance Analysis and Evaluation of Different Deep Learning Algorithms for Facial Expression Recognition

Iffat Tamanna, Md Ahsanul Haque
Department of Computer Science & Engineering, Bangladesh University of Business & Technology, Dhaka, Bangladesh
DOI: https://doi.org/10.51584/IJRIAS.2023.8719
Received: 24 May 2023; Revised: 24 June 2023; Accepted: 29 June 2023; Published: 05 August 2023

Abstract—Emotions are dynamic biological states connected to the nervous system. The problem of facial expression recognition has been thoroughly investigated, leading to the development of several robust and accurate recognition algorithms. This paper investigates and compares the effectiveness of three such algorithms (CNN, VGG16, and ResNet50) that have been widely studied and applied in the research community. The aim is to train these models on grayscale images and compare their accuracy and loss. After training, the system is able to detect seven facial expressions: Angry, Neutral, Contempt, Disgust, Fear, Happy, and Sad. The same batch size and number of epochs were used for all models so that their precision could be compared fairly. After reviewing all evaluations based on these output metrics, it is clear that all three networks produce reliable expression recognition, with CNN being the most accurate.

Index Terms—Convolutional Neural Network (CNN), VGG16, ResNet50, Facial Expression, Gabor Filter, Deep Learning

I. Introduction

Every person communicates not only through words, but also through body movements, which highlight specific parts of speech and express emotions. Emotion can be conveyed in many ways, but facial expression is the most effective way to display human emotion [1]. Facial emotion therefore plays a vital role in communication. Machine learning has played an important role in the history of computer science [2]. Using computer-based technology, human facial emotions can be detected: faces can be identified instantly, facial expressions coded, and emotional states recognized. Nevertheless, detecting human emotion with a computer remains a significant challenge. Emotion detection has been implemented with many algorithms. Most systems accomplish this by analyzing faces in images or videos captured by cameras embedded in laptops, cell phones, and digital signage systems, as well as cameras mounted on computer screens. Emotion data is now used by market analysts to deploy product advertisements, and emotion recognition is used in video-game research: by analyzing facial expressions, game developers can draw conclusions about the feelings experienced during game play and integrate that feedback into the final product [3]. Much recent research centers on the use of artificial intelligence (AI) and deep learning algorithms to classify emotions, as researchers work continuously to close the gap between machine and human interaction.
In this paper we attempt to determine the best methodological approach among the Convolutional Neural Network (CNN), VGG16, and ResNet50 for human facial expression detection. The machine receives an image as input, and each model is then used to predict the facial expression label, which should be one of the following: Angry, Neutral, Contempt, Disgust, Fear, Happy, or Sad. Emotion recognition can be applied in a variety of ways in business and everyday life. In this analysis, we compared the three approaches currently in use to determine which one provides the best precision on a large volume of data, such as FER-2, which is taken from Kaggle.
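To make the comparison concrete, the sketch below shows one way the three architectures could be set up under identical training settings in Keras. It is a minimal illustration, not the authors' actual code: the input size (48x48 grayscale), batch size, epoch count, and the data placeholders x_train and y_train are assumptions, and VGG16/ResNet50 are built without pretrained weights, with the single grayscale channel replicated to three channels so the standard architectures accept it.

from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16, ResNet50

NUM_CLASSES = 7            # Angry, Neutral, Contempt, Disgust, Fear, Happy, Sad
INPUT_SHAPE = (48, 48, 1)  # grayscale input (assumed size)
BATCH_SIZE = 64            # same batch size for all three models (assumed value)
EPOCHS = 30                # same number of epochs for all three models (assumed value)

def build_simple_cnn():
    # A small CNN trained from scratch on grayscale faces.
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=INPUT_SHAPE),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def build_backbone(backbone_cls):
    # VGG16 / ResNet50 built without pretrained weights; the grayscale channel
    # is replicated to three channels so the stock architectures accept it.
    inputs = layers.Input(shape=INPUT_SHAPE)
    x = layers.Concatenate()([inputs, inputs, inputs])
    base = backbone_cls(weights=None, include_top=False, input_shape=(48, 48, 3))
    x = base(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(inputs, outputs)

models_to_compare = {
    "CNN": build_simple_cnn(),
    "VGG16": build_backbone(VGG16),
    "ResNet50": build_backbone(ResNet50),
}

for name, model in models_to_compare.items():
    # Identical compilation settings keep the comparison like-for-like.
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    # history = model.fit(x_train, y_train, validation_split=0.1,
    #                     batch_size=BATCH_SIZE, epochs=EPOCHS)
    # Accuracy and loss curves from `history` are then compared across the models.

Compiling all three models with the same optimizer, loss, batch size, and number of epochs makes their accuracy and loss curves directly comparable, which mirrors the evaluation setup described in this paper.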
The rest of the paper is organized as follows. A brief review of existing research work is provided in Section II. In Section III, a detailed description of emotion detection techniques is presented. The experimental procedure is shown in Section IV. Results and performance analysis are presented in Section V. Finally, conclusions and future work are provided in Section VI.

II. Related Work

Over the last few decades, a great deal of research has been conducted, and interest in methods and systems for the classification and recognition of human facial expressions has grown. We found that most studies and implementations apply facial expression detection to videos and images. Currently, mobile applications and social media websites use different techniques to detect emotions.
Work has been conducted with ResNet50, VGG16, and SE-ResNet50 to recognize facial expressions across seven classes [4]. Instead of accuracy, that research focuses on the precision and recall of the trained models. Yet accuracy remains the most critical criterion for determining how well a model works: without sufficient accuracy, an informed decision about the right model cannot be made.