International Journal of Research and Innovation in Applied Science (IJRIAS)


Sign Language Recognition System using Flex Sensor Network


Alumona T. L., Okorogu V.N, Nworabude E. F.

Electronic and Computer Engineering, Nnamdi Azikiwe University, Awka, Anambra State, Nigeria

DOI: https://doi.org/10.51584/IJRIAS.2023.8919

Received: 02 August 2023; Accepted: 29 August 2023; Published: 24 October 2023

ABSTRACT

This work presents the implementation of a sign language recognition system based on a flex sensor network, built with a smart glove, a Raspberry Pi, Python and the C language. Research into gesture-based sign language interpretation has progressed steadily over recent decades as an auxiliary tool that helps deaf and mute people integrate into society without communication barriers. In this work, a smart sign language interpretation system is realized as a wearable hand device: the glove carries five flex sensors whose readings are digitized by a PIC microcontroller and passed to a Raspberry Pi, where Python and C programs perform real-time sign language recognition. Sensor-based sign language recognition of this kind can overcome many everyday difficulties and provide convenience for its users, and the ability of machines to understand human activities and their meaning can be applied in a vast array of applications. The paper also provides a thorough review of recent techniques in hand gesture and sign language recognition. After implementation, the system was able to output up to thirty-one phrases successfully. The work focused on improving the accuracy and usability of the system by optimizing sensor placement and developing a user-friendly interface.

Keywords: Sign recognition, Flex sensor network, Python language, C language, Microcontroller, Raspberry Pi, Smart glove

BACKGROUND OF STUDY

The use of sign language is essential for people who have hearing or speech impairments to communicate with others. However, it can be challenging for those who do not understand sign language to communicate effectively with this community. An AI-based sign language system can solve this problem by converting sign language into spoken or written language [1].

According to World Federation of the Deaf and World Health Organization estimates, about 70 million people worldwide are deaf-mute and face communication difficulties because they cannot read or write the standard spoken languages [2]. Sign language is the mother tongue that the deaf and mute use to communicate with others. SL relies mainly on gestures to convey meaning, combining finger shapes, hand motions and facial expressions [3], but the problem is the inability of others to understand these languages, which acts as a communication barrier; the affected person otherwise needs a translator at all times. With the aid of a lightweight SLR-based glove system fitted with sensors and an electronic circuit, this communication barrier can be overcome, and the glove can be used by both deaf-mute and hearing people to learn SL [4]. There are many different sign languages used throughout the world, each with its own unique grammar, vocabulary and cultural significance. For example, American Sign Language (ASL) is widely used in the United States and Canada, while British Sign Language (BSL) is used in the United Kingdom. Sign languages have a long and rich history, with evidence of sign language use dating back to ancient civilizations. Communication can be defined as the act of transferring information from one place, person or group to another [5]. It consists of three components: the speaker, the message being communicated and the listener, and it can be considered successful only when the message the speaker is trying to convey is received and understood by the listener. A lot of research has been done in this field and there is still a need for further research. For gesture translation, data gloves, motion capture systems or sensors have been used, and vision-based SLR systems have also been developed. An existing Indian Sign Language Recognition system was developed using machine learning algorithms in MATLAB [6]; it was trained with two algorithms, the K-Nearest Neighbours algorithm and the Back-Propagation algorithm, and achieved 93-96% accuracy. Though highly accurate, it is not a real-time SLR system.

There are two ways to make communication feasible between a hearing person and an affected person [6]: convince the hearing person to learn all sign language gestures, or enable the deaf-mute person to translate gestures into a normal speaking format that everyone can understand. The first option looks almost impossible, since few hearing people will learn sign language solely for this purpose; this is also the main drawback of sign language. Technologists and researchers have therefore focused on the second option, making deaf-mute people capable of converting their gestures into meaningful voice or text information. In this work, a sign language recognition system is introduced that consists of a smart glove embedded with a sensor module, a processing module, and a display/mobile-application module, and that converts hand gestures into meaningful information easily understandable by ordinary people. Smart technology-based sign language interpreters remove the communication gap between hearing and affected people. These interpreters are based on image-processing or vision-sensor techniques, sensor-fusion-based smart data glove techniques, or hybrid techniques. Such technological interpreters have few inherent limitations: in an image or vision-sensor based recognition system, the required features can be extracted from an image with relative ease and there is no restriction on foreground or background in gesture recognition, while a sensor-based smart data glove imposes little to no limitation on the wearer because it is mobile, lightweight and flexible [4].

Theory of Sign Language Recognition System

Sign language recognition refers to the process of recognizing and interpreting hand gestures and facial expressions used in sign languages. Sign languages are used by deaf and hard-of-hearing individuals as a means of communication, and sign language recognition systems aim to bridge the communication gap between hearing and non-hearing individuals by translating sign language into spoken or written language. This can be accomplished using machine learning techniques, such as deep learning, which can be implemented using the Python programming language and the TensorFlow library [1].
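As a minimal illustration of that point (and not the approach implemented later in this work), a small deep-learning sign classifier can be sketched in TensorFlow/Keras. Everything in the sketch is an assumption chosen for the example: five input features standing in for flex-sensor readings, thirty-one output phrases, the layer sizes, and the random placeholder data.

```python
# Minimal sketch of a deep-learning sign classifier in TensorFlow/Keras.
# All sizes here (5 input features, 31 output phrases, layer widths) are
# illustrative assumptions, not values taken from this paper.
import numpy as np
import tensorflow as tf

NUM_FEATURES = 5    # e.g. one reading per flex sensor
NUM_CLASSES = 31    # e.g. one class per output phrase

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on labelled sensor readings (random placeholder data here).
X = np.random.rand(1000, NUM_FEATURES).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=1000)
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```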

Sign language recognition systems typically use computer vision, machine learning, and deep learning techniques to track and recognize hand gestures and facial expressions in video sequences. These systems can be used in a variety of applications, such as in sign language translation devices, virtual reality communication systems, and educational tools for teaching sign language [7].

Sign languages are fully developed natural languages with their own unique syntax, grammar, and vocabulary. They are not simply gestures or visual representations of spoken languages [8]. Studies have shown that sign languages are processed in the same areas of the brain as spoken languages, indicating that they are processed as linguistic input rather than visual input.

Sign language is also shaped by the social and cultural contexts in which it is used. Just as spoken languages have regional and cultural variations, sign languages have dialects and regional variations as well [9]. Sign language users may also code-switch between different sign languages or use different registers depending on the situation.

Sign language also provides unique insights into the cognitive processes involved in language use and acquisition. Studies have shown that deaf children who learn sign language from a young age develop language skills at a similar pace to their hearing peers, indicating that the visual-spatial modality of sign language does not impede language acquisition [10].

In [6], Starner et al. propose using Hidden Markov Models (HMMs) to classify the orientation, trajectory information and resultant shape of the signs. HMMs are adapted from speech recognition, and their intrinsic properties make them suitable for gesture recognition. A total of 262 signs were collected from two different signers, and the average accuracy of the HMM classifier reaches 94%. The accuracy drops sharply, to as low as 47.6%, when a model trained on one person's signs is tested on another person's signs; training on both signers improves accuracy to 91.3%.
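For readers unfamiliar with this classification scheme, the sketch below shows the common one-HMM-per-sign arrangement using the hmmlearn library. It is only illustrative: the number of hidden states, the feature dimensionality, the sign names and the random data are assumptions, not details from [6].

```python
# Sketch of per-sign HMM classification (assumed setup, not the authors' code).
# One GaussianHMM is trained per sign; a new sequence is assigned to the sign
# whose model gives the highest log-likelihood.
import numpy as np
from hmmlearn import hmm

N_STATES = 4      # assumed number of hidden states per sign
N_FEATURES = 8    # assumed feature vector size per video frame

def train_sign_model(sequences):
    """sequences: list of (T_i, N_FEATURES) arrays for one sign."""
    X = np.concatenate(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=N_STATES, covariance_type="diag",
                            n_iter=50)
    model.fit(X, lengths)
    return model

def classify(models, sequence):
    """Return the sign label whose HMM scores the sequence highest."""
    return max(models, key=lambda sign: models[sign].score(sequence))

# Placeholder training data: two signs, three example sequences each.
rng = np.random.default_rng(0)
data = {sign: [rng.normal(size=(20, N_FEATURES)) for _ in range(3)]
        for sign in ["hello", "thanks"]}
models = {sign: train_sign_model(seqs) for sign, seqs in data.items()}
print(classify(models, rng.normal(size=(20, N_FEATURES))))
```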

In [11], Vogler and Metaxas state that the use of HMMs alone has several limitations, especially in training context-dependent models. The authors employed Ascension Technologies Flock of Birds devices to collect three-dimensional translation and rotation data of the signs, and by using bigram and epenthesis modelling they achieved an average accuracy of 95.83%. Related research used a similar experimental setup and, with context-dependent HMMs and a method of coupling three-dimensional techniques, classified 53 ASL signs with a highest accuracy of 89.91%. Gesture recognition involves complex processes such as motion modelling, motion analysis, pattern recognition and machine learning [21]. It comprises methods with manual and non-manual parameters. The structure of the environment, such as background illumination and speed of movement, affects the predictive ability, and differences in viewpoint cause the same gesture to appear different in 2D space. Related work has also presented real-time American Sign Language recognition using artificial neural networks, achieving high accuracy rates on a dataset of sign language gestures.

In [12], S. Ben Youssef et al. propose a system for sign language recognition that combines different data sources, such as video and accelerometer sensors, with the aim of improving recognition accuracy by integrating multiple data modalities. The proposed system consists of two modules: a data acquisition module and a sign language recognition module. The data acquisition module captures video and accelerometer data simultaneously using a camera and an accelerometer sensor attached to the signer's wrist [12], and the recognition module then processes this data to recognize the signs. The authors use a combination of feature extraction and classification techniques: for video data, feature extraction based on the histogram of oriented gradients (HOG) with classification by hidden Markov models (HMMs); for accelerometer data, feature extraction based on the magnitude of the acceleration with classification by k-nearest neighbours (KNN). The system achieved an overall recognition rate of 95.5%.
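The accelerometer branch of such a pipeline is easy to illustrate. The sketch below pairs simple acceleration-magnitude statistics with a k-nearest-neighbours classifier in scikit-learn; the window length, feature choices and random data are assumptions, and the code is not the implementation from [12].

```python
# Sketch of a magnitude-based accelerometer feature + KNN classifier
# (illustrative only; not the pipeline from [12]).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

WINDOW = 50  # assumed number of accelerometer samples per gesture window

def magnitude_features(window):
    """window: (WINDOW, 3) array of x, y, z accelerations.
    Returns simple statistics of the acceleration magnitude."""
    mag = np.linalg.norm(window, axis=1)
    return np.array([mag.mean(), mag.std(), mag.min(), mag.max()])

# Placeholder training data: 100 windows with random labels from 5 signs.
rng = np.random.default_rng(1)
windows = rng.normal(size=(100, WINDOW, 3))
labels = rng.integers(0, 5, size=100)

X = np.array([magnitude_features(w) for w in windows])
knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print(knn.predict(magnitude_features(rng.normal(size=(WINDOW, 3))).reshape(1, -1)))
```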

In [13], J. Han et al. propose a system for recognizing sign language using kinematic features extracted from motion capture data and support vector machines (SVMs), with the aim of improving accuracy and robustness by using kinematic features that capture the movement of the signer's hands. The proposed system consists of three main stages: preprocessing, feature extraction, and classification. In the preprocessing stage, a filtering algorithm removes noise from the motion capture data; in the feature extraction stage, kinematic features such as hand velocity, acceleration, and jerk are extracted; and in the classification stage, SVMs classify the sign language gestures from the extracted features. The authors evaluate their system on a dataset of 10 sign language gestures performed by 15 signers, achieving an overall recognition rate of 91.9%, and show that it outperforms other existing methods in the literature in terms of accuracy and robustness.
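A generic sketch of that final stage, kinematic feature vectors fed to an SVM, might look as follows in scikit-learn. The feature definitions, sampling interval and random data are assumptions, not the setup used in [13].

```python
# Sketch of SVM classification over kinematic features (velocity,
# acceleration, jerk statistics); all values are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def kinematic_features(positions, dt=0.01):
    """positions: (T, 3) hand positions. Returns mean speed,
    mean acceleration magnitude and mean jerk magnitude."""
    vel = np.diff(positions, axis=0) / dt
    acc = np.diff(vel, axis=0) / dt
    jerk = np.diff(acc, axis=0) / dt
    return np.array([np.linalg.norm(vel, axis=1).mean(),
                     np.linalg.norm(acc, axis=1).mean(),
                     np.linalg.norm(jerk, axis=1).mean()])

rng = np.random.default_rng(2)
trajectories = rng.normal(size=(60, 40, 3))   # 60 gestures, 40 frames each
labels = rng.integers(0, 10, size=60)          # 10 gesture classes (assumed)

X = np.array([kinematic_features(t) for t in trajectories])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels)
print(clf.predict(kinematic_features(rng.normal(size=(40, 3))).reshape(1, -1)))
```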

In [14], S. Balasubramanian et al. provide a comprehensive review of the literature on sign language recognition using machine learning techniques, aiming to give an overview of the different approaches in the field and to highlight the challenges and future directions for research. The paper begins with a brief overview of sign language and the importance of sign language recognition in facilitating communication for the deaf community. The authors then present a taxonomy of approaches organized by data modality (video, accelerometer, etc.), feature extraction technique (e.g., HOG), and classification method (HMMs, SVMs, KNN, etc.). They also discuss the challenges in sign language recognition, such as the variability of sign language across signers and the need for large datasets with diverse signers and variations in lighting and camera angles, and suggest that future research could focus on addressing these challenges through the development of more robust and scalable systems.

In [15], M. Al-Rousan et al. present a method for recognizing sign language gestures using hand shape and motion features, focusing on American Sign Language (ASL) gestures. The paper begins with an overview of the existing literature on sign language recognition, highlighting the importance of recognizing both hand shape and motion for accurate recognition. The proposed method extracts hand-shape and motion features using a combination of image processing and computer vision techniques: for hand shape, features related to the contours of the hand, including the aspect ratio, centroid, and convex hull; for motion, features related to the trajectory of the hand, including the direction and speed of movement. The authors evaluate the method on a dataset of 60 ASL gestures performed by 6 different signers and compare it with other state-of-the-art methods, including HMMs and SVMs; the proposed method achieves a recognition accuracy of 91.2%, outperforming the other methods.
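The hand-shape features named above (aspect ratio, centroid, convex hull) are straightforward to compute from a segmented hand mask. The OpenCV sketch below is a generic illustration under that assumption and is not the method of [15].

```python
# Sketch of hand-shape features (aspect ratio, centroid, convexity) from a
# binary hand mask using OpenCV; illustrative only, not the method of [15].
import cv2
import numpy as np

def hand_shape_features(mask):
    """mask: 8-bit binary image with the hand as the largest white region."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    cnt = max(contours, key=cv2.contourArea)

    x, y, w, h = cv2.boundingRect(cnt)
    aspect_ratio = w / float(h)

    m = cv2.moments(cnt)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # centroid

    hull = cv2.convexHull(cnt)
    solidity = cv2.contourArea(cnt) / cv2.contourArea(hull)

    return np.array([aspect_ratio, cx, cy, solidity])

# Placeholder mask: a filled rectangle standing in for a segmented hand.
mask = np.zeros((120, 160), dtype=np.uint8)
cv2.rectangle(mask, (40, 20), (100, 100), 255, thickness=-1)
print(hand_shape_features(mask))
```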

METHODOLOGY

The methodology used in this work is prototyping. The prototype is subdivided into two major subsystems, namely the smart glove system and the Raspberry Pi system. The smart glove system translates analog signals from the flex sensors into digital signals that the Raspberry Pi system can understand, and the Raspberry Pi system translates those digital signals into words that listeners can understand. This aids the listener in understanding what the user is attempting to say.

Figure 3.1: Block Diagram of a Sign Language Interpreter system

3.1.1 Methodology Used for the Hardware Development

Here, development started with the construction and testing of the sensors before moving on to the control system development, the creation of the output interfaces, the connection of the output devices and, ultimately, the addition of the final output components. The block diagram for the top-down design approach is shown in Figure 3.2 below.

Figure 3.2: The Block Diagram for Top-Down Design Approach

In this design, flex sensors were employed as the sensing elements. The microcontroller receives the analog voltage signals from the flex sensors and converts them into a digital stream of data before sending it to the Raspberry Pi unit. By interpreting the sensor information, the microcontroller can measure how far each finger bends and can trigger the necessary control action, which in this case is calling out the intended gesture. The PIC microcontroller, together with the components required to set it up for reliable operation, serves as the primary control element of the glove, while the main controller, the Raspberry Pi, uses the PIC 16F877A essentially as an ADC.
Furthermore, a flowchart and pseudo-code were used to describe the program development in the software design; they depict the actions the control system is expected to carry out. The software implementation controls the overall operation of the system.
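As a rough illustration of the intended program flow on the Raspberry Pi side, the recognition loop could be sketched as below. The serial port name, baud rate, message format, matching tolerance, phrase table and the use of espeak for text-to-speech are all assumptions made for the sketch, not details taken from this work.

```python
# Sketch of the Raspberry Pi side of the interpreter: read digitized flex
# readings from the PIC over a serial link, match them against stored gesture
# patterns, and speak the corresponding phrase. Port, baud rate, message
# format, tolerance and phrase table are assumptions for illustration.
import subprocess
import serial  # pyserial

PORT, BAUD = "/dev/ttyS0", 9600            # assumed serial settings
TOLERANCE = 15                              # assumed per-sensor ADC tolerance

# Assumed gesture database: one 8-bit ADC value per flex sensor -> phrase.
GESTURES = {
    (200, 195, 40, 35, 30): "Hello",
    (40, 45, 200, 205, 210): "Thank you",
    (200, 40, 40, 40, 200): "I need help",
}

def match(reading):
    """Return the phrase whose stored pattern is within TOLERANCE of reading."""
    for pattern, phrase in GESTURES.items():
        if all(abs(r - p) <= TOLERANCE for r, p in zip(reading, pattern)):
            return phrase
    return None

def speak(phrase):
    # espeak is one common text-to-speech option on the Raspberry Pi (assumed).
    subprocess.run(["espeak", phrase])

with serial.Serial(PORT, BAUD, timeout=1) as link:
    while True:
        line = link.readline().decode(errors="ignore").strip()
        if not line:
            continue
        reading = [int(v) for v in line.split(",")]   # e.g. "200,195,40,35,30"
        phrase = match(reading)
        if phrase:
            speak(phrase)
```

The full circuit diagram of the system is shown below.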

Figure 3.3: Full circuit diagram of the system

The above circuit works on the principle of the voltage divider: an electronic circuit that divides a voltage into two or more smaller voltages using a series of resistors, providing a way of obtaining a specific voltage from a larger voltage source.

From the flex sensor datasheet, the resistance of the flex sensor ranges from 40 kΩ to 125 kΩ. The input voltage is fixed at 5 V, and a variable output voltage is taken from the flex sensor divider.
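As a quick worked example of the divider output over this resistance range, the output voltage follows Vout = Vin × Rflex / (Rfixed + Rflex) when the output is taken across the flex sensor. The 47 kΩ series resistor below is an assumption for illustration only; the value actually used is the one shown in the circuit diagram.

```python
# Worked voltage-divider example for the flex sensor range quoted above.
# Assumptions: the flex sensor forms the lower leg of the divider (output
# taken across it) and the fixed series resistor is 47 kΩ.
VIN = 5.0            # supply voltage (V)
R_FIXED = 47_000     # assumed series resistor (ohms)

def divider_out(r_flex):
    """Output voltage across the flex sensor for a given resistance."""
    return VIN * r_flex / (R_FIXED + r_flex)

for r_flex in (40_000, 125_000):          # flat and fully bent, per datasheet
    print(f"R_flex = {r_flex/1000:.0f} kOhm -> Vout = {divider_out(r_flex):.2f} V")
# With these assumptions, Vout swings from about 2.30 V (flat) to 3.63 V
# (fully bent), a range the PIC's ADC can resolve easily.
```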

CONCLUSION

The sign language recognition system was successfully developed using smart-glove flex sensors, a Raspberry Pi, a microcontroller, Python and the C language, and it can detect and interpret hand gestures and convert them into speech. The system achieved its set objectives of creating a flex sensor system, utilizing a microcontroller, compiling a database of signals, incorporating a text-to-voice/audio subsystem, and merging all components to ensure full functionality. Its success contributes to the development of sign language technology, particularly in the area of smart gloves and flex sensors, and it demonstrates the capability of flex sensors and microcontrollers in sign language interpretation and speech generation.

Overall, the system has the potential to enhance communication between deaf or hard of hearing individuals and the hearing community. It provides a more inclusive means of communication, breaking down language barriers and promoting equal opportunities.

REFERENCES

  1. Zhang, J., & Chang, S. (2019). Sign language recognition using deep learning: A review. IEEE Transactions on Human-Machine Systems, 49(3), 229-243.
  2. World Health Organization. Deafness http://www.who.int/mediacentre/factsheets/ fs300/en/#content (accessed on 13 November 2017).
  3. Sharma, V.; Kumar, V.; Masaguppi, S.C.; Suma, M.; Ambika, D. Virtual Talk for Deaf, Mute, Blind and Normal Humans. In Proceedings of the 2013 Texas Instruments India Educators' Conference (TIIEC), Bangalore, India, 4–6 April 2013; pp. 316–320.
  4. Tanyawiwat, N.; Thiemjarus, S. Design of an assistive communication glove using combined sensory channels. In Proceedings of the 2012 Ninth International Conference on Wearable and Implantable Body Sensor Networks (BSN), London, UK, 9–12 May 2012; pp. 34–39.
  5. Kapur, R. (2020). The Types of Communication. MIJ, 6.
  6. Starner, T., & Pentland, A. (1997). Real-time American Sign Language recognition from video using hidden Markov models. In: Motion-based recognition. Springer, pp. 227–243.
  7. Leong, S. W., & Kao, H. Y. (2019). Sign language recognition: a review of recent progress and challenges. Journal of Ambient Intelligence and Humanized Computing, 10(1), 1-20.
  8. Brentari, D. (2018). Sign Languages: A Cambridge Language Survey. Cambridge University Press.
  9. Johnston, T. (2003). Auslan: The Australian Sign Language. Cambridge University Press.
  10. Newport, E. L. (1990). Maturational Constraints on Language Learning. Cognitive Science, 14(1), 11–28.
  11. Vogler, C., & Metaxas, D. (1999). Parallel hidden Markov models for American Sign Language recognition. In: Proceedings of the Seventh IEEE International Conference on Computer Vision, IEEE, pp. 116–122.
  12. Ben Youssef, S., Hamdani, T. M., & Kachouri, R. (2014). Sign language recognition using a multi-modal approach. Pattern Recognition Letters, 43, 16-23.
  13. Han, J., Wang, Y., & Yu, D. (2019). Sign language recognition using kinematic features and support vector machines. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 27(1), 74-84.
  14. Balasubramanian, S., Chetty, G., & Palaniswami, M. (2012). A review of sign language recognition using machine learning. IEEE Transactions on Human-Machine Systems, 42(6), 929-941.
  15. Al-Rousan, M., Lee, C., & Kim, T. (2018). Sign language recognition using hand shape and motion features. IEEE Access, 6, 23167-23177.
