Real Time Sign Language Recognition and Translation to Text for Vocally and Hearing-Impaired People
Authors
UG Student, AGM Rural College of Engineering and Technology, Hubli (India)
UG Student, AGM Rural College of Engineering and Technology, Hubli (India)
UG Student, AGM Rural College of Engineering and Technology, Hubli (India)
UG Student, AGM Rural College of Engineering and Technology, Hubli (India)
Article Information
DOI: 10.51584/IJRIAS.2026.11010087
Subject Category: Artificial Intelligence
Volume/Issue: 11/1 | Page No: 1016-1033
Publication Timeline
Submitted: 2026-01-28
Accepted: 2026-02-02
Published: 2026-02-11
Abstract
The Real-Time Sign Language Recognition and Translation system presented in this study aims to improve communication between sign language users and non-signers. The system uses a webcam to capture hand movements, which are processed with OpenCV for real-time image handling and MediaPipe for hand landmark detection.
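MediaPipe reports 21 landmarks per detected hand as normalized (x, y) coordinates. Before such landmarks are fed to a classifier, they are commonly made translation- and scale-invariant; the paper does not specify its preprocessing, so the following is only a minimal sketch of one standard normalization (wrist-relative coordinates divided by the largest absolute coordinate), assuming NumPy:

```python
import numpy as np

def normalize_landmarks(landmarks):
    """Make 21 (x, y) hand landmarks translation- and scale-invariant:
    subtract the wrist (landmark 0) and divide by the largest absolute
    coordinate, yielding a 42-dimensional feature vector."""
    pts = np.asarray(landmarks, dtype=np.float32).reshape(21, 2)
    pts -= pts[0]                      # wrist-relative coordinates
    scale = np.abs(pts).max() or 1.0   # guard against a degenerate all-zero hand
    return (pts / scale).flatten()

# Illustrative input: a dummy hand with every landmark at (0.5, 0.5)
# except a raised index fingertip (landmark 8).
dummy = np.full((21, 2), 0.5)
dummy[8] = [0.7, 0.1]
features = normalize_landmarks(dummy)
```

Because the division is by the largest absolute offset, the resulting features always lie in [-1, 1] regardless of how near or far the hand is from the camera.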
Next, American Sign Language (ASL) gestures are classified by a Convolutional Neural Network (CNN). The recognized gestures are rendered as readable text and converted to speech by a Text-to-Speech (TTS) engine, enabling smoother and more natural communication.
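In a real-time loop, the per-frame classifier output is typically smoothed before a letter is emitted, so that one noisy frame does not insert a wrong character. The paper does not describe its stabilization step; the sketch below shows one common approach, a majority vote over a sliding window of recent predictions. The label set, window size, and threshold are illustrative assumptions, not values from the paper:

```python
from collections import Counter, deque
import string

# Hypothetical class labels for a static ASL-alphabet classifier.
LABELS = list(string.ascii_uppercase)  # 'A'..'Z'

class GestureSmoother:
    """Majority vote over the last `window` per-frame class predictions.
    A letter is emitted only once the window is full and one class
    dominates, which keeps real-time output stable rather than jittery."""
    def __init__(self, window=15, threshold=0.8):
        self.buffer = deque(maxlen=window)
        self.threshold = threshold

    def update(self, class_index):
        """Add one frame's prediction; return a letter or None."""
        self.buffer.append(class_index)
        label, count = Counter(self.buffer).most_common(1)[0]
        if (len(self.buffer) == self.buffer.maxlen
                and count / len(self.buffer) >= self.threshold):
            return LABELS[label]
        return None

# Five consecutive frames predicting class 0 ('A'):
smoother = GestureSmoother(window=5, threshold=0.8)
results = [smoother.update(i) for i in [0, 0, 0, 0, 0]]
```

The emitted letters can then be appended to a sentence buffer and handed to a TTS engine for speech output.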
By integrating computer vision, deep learning, and speech synthesis, the project provides an accessible, efficient, and user-friendly tool for vocally and hearing-impaired individuals in everyday settings such as social interaction, healthcare, and education. The solution is designed to be cost-effective, easy to use, and scalable, making it well suited to educational environments, workplaces, hospitals, and public interactions. The ultimate goal is to close the communication gap, encourage inclusivity, and support the independence of people with hearing and speech impairments through an intelligent, real-time translation system.
Keywords
Real-Time Gesture Recognition, Sign Language Recognition, Text-to-Speech (TTS)