A Deep CNN-Based Framework for Real-Time Gujarati Sign Language Character Recognition Using Transfer Learning

Authors

Mr. Ronak Jitendrabhai Goda

Research Scholar, Department of Computer Science, Saurashtra University, Rajkot (India)

Prof. Dr. C. K. Kumbharana

Professor, Department of Computer Science, Saurashtra University, Rajkot (India)

Article Information

DOI: 10.47772/IJRISS.2026.10190046

Subject Category: Computer Science

Volume/Issue: 10/19 | Page No: 529-535

Publication Timeline

Submitted: 2026-01-23

Accepted: 2026-01-27

Published: 2026-02-16

Abstract

Gujarati Sign Language (GSL) lacks large-scale annotated datasets and digital communication tools, creating significant barriers for the deaf and hard-of-hearing community. This research proposes a deep learning–based real-time hand gesture recognition framework for GSL using transfer learning with pre-trained convolutional neural networks (CNNs). A custom dataset comprising static hand gestures representing the 26-character GSL alphabet is developed for this study. To address data scarcity, transfer learning is employed using lightweight and deep CNN architectures, including MobileNetV2, VGG16, and ResNet50. Among these, MobileNetV2 demonstrates superior efficiency in terms of training time and inference speed, making it suitable for real-time and mobile deployment. The proposed real-time system achieves an accuracy of 96.87%, while the highest offline classification accuracy of 99.5% is obtained using ResNet50. Furthermore, MediaPipe-based hand keypoint detection is integrated to improve robustness and reduce background noise during real-time inference. Comparative experimental results confirm that transfer learning significantly outperforms a baseline CNN trained from scratch. The proposed framework offers a scalable and computationally efficient solution for real-time Gujarati Sign Language recognition and contributes toward assistive technologies for low-resource regional sign languages.
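The transfer-learning setup described above — a pre-trained MobileNetV2 backbone with a new classification head for the 26 GSL characters — can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: it assumes TensorFlow/Keras, a 224×224 RGB input, and a frozen backbone with a global-pooling head; `weights=None` is used here so the sketch builds offline, whereas the transfer-learning setting in the paper would load `weights="imagenet"`.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26          # GSL alphabet characters
IMG_SIZE = (224, 224)     # assumed MobileNetV2 input resolution


def build_gsl_model(weights=None):
    # Backbone: MobileNetV2 without its ImageNet classification head.
    # For transfer learning as in the paper, pass weights="imagenet";
    # weights=None keeps this sketch runnable without a download.
    base = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SIZE + (3,), include_top=False, weights=weights)
    base.trainable = False  # freeze convolutional features

    # New head: pooled features -> dropout -> 26-way softmax.
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


model = build_gsl_model()
```

After training the head (and optionally fine-tuning the top backbone layers at a low learning rate), per-frame inference on cropped hand regions — e.g. boxes obtained from MediaPipe hand keypoints — is a single `model.predict` call, which is what makes the lightweight backbone attractive for real-time use.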

Keywords

Gujarati Sign Language, CNN, Transfer Learning

References

1. L. Pigou, S. Dieleman, P.-J. Kindermans, and B. Schrauwen, “Sign language recognition using convolutional neural networks,” in Proc. ECCV Workshops, 2014, pp. 572–578.

2. A. Kumar and S. Rani, “Static hand gesture recognition for Indian Sign Language using deep convolutional neural networks,” Int. J. Comput. Appl., vol. 182, no. 30, pp. 1–7, 2021.

3. S. C. W. Ong and S. Ranganath, “Automatic sign language analysis: A survey and the future beyond lexical meaning,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 6, pp. 873–891, 2005.

4. M. Mondal, S. Ghosh, and A. Rakshit, “ASL alphabet recognition using transfer learning with VGG16,” in Proc. ICCCNT, 2021, pp. 1–6, doi: 10.1109/ICCCNT51525.2021.9579775.

5. G. Thakur and P. Kumar, “Deep residual learning-based Indian Sign Language gesture classification,” Multimed. Tools Appl., vol. 82, pp. 21275–21290, 2023.

6. Y. Zhang, H. Huang, and J. Li, “Continuous sign language recognition using hybrid 3D CNN-LSTM networks,” IEEE Access, vol. 10, pp. 39812–39824, 2022.

7. M. Tan and Q. V. Le, “EfficientNet: Rethinking model scaling for convolutional neural networks,” in Proc. ICML, 2019, pp. 6105–6114.

8. H. Patel and R. Shah, “Real-time Indian Sign Language digit recognition using CNN,” Int. J. Image Process., vol. 16, no. 2, pp. 45–55, 2022.

9. R. Sharma and V. Kumar, “MobileNetV2-based lightweight real-time Indian Sign Language recognition,” Comput. Electr. Eng., vol. 110, p. 108803, 2024.

10. M. Alfarraj and A. AlZahrani, “Arabic Sign Language recognition using DenseNet architectures,” Sensors, vol. 23, no. 4, p. 2018, 2023.

11. R. Priya and A. Singh, “Challenges in regional sign language dataset construction: A study on Indian regional signs,” J. Vis. Commun., vol. 92, p. 103757, 2023.

12. P. Singh, K. Chauhan, and R. Jain, “A multilingual benchmark dataset for South Asian sign languages,” IEEE Access, vol. 12, pp. 101245–101260, 2024.

13. S. Sahoo, M. Swain, and S. Mishra, “Dynamic gesture recognition using CNN-BiLSTM hybrid deep networks,” Pattern Recognit. Lett., vol. 169, pp. 85–93, 2023.

14. X. Xu, C. Li, and Z. Wu, “Vision transformer-based static sign gesture recognition,” Expert Syst. Appl., vol. 237, p. 121643, 2024.

15. M. Rahman, A. Hasan, and T. Ahmed, “Lightweight ASL recognition using MobileNet for edge devices,” IEEE Embedded Syst. Lett., vol. 16, no. 1, pp. 45–48, 2024.

16. J. Patel and V. Desai, “YOLOv8-based enhanced Indian Sign Language recognition system for real-time hand detection,” IEEE Sensors J., vol. 24, no. 7, pp. 1–10, 2024.
