Indian Sign Language Alphabet Recognition Using Transfer Learning with MobileNetV2

Authors

Shalaka Gaikwad

Research Scholar, Department of Computer Science, Taywade College, Koradi (India)

Dr. Girish Katkar

Assistant Professor, Department of Computer Science, Taywade College, Koradi (India)

Dr. Ajay Ramteke

Assistant Professor, Department of Computer Science, Taywade College, Koradi (India)

Article Information

DOI: 10.51584/IJRIAS.2026.11020009

Subject Category: Computer Science

Volume/Issue: 11/2 | Page No: 94-101

Publication Timeline

Submitted: 2026-02-09

Accepted: 2026-02-14

Published: 2026-02-25

Abstract

Indian Sign Language (ISL) recognition plays a vital role in bridging the communication gap between the hearing-impaired community and the general population. This research presents an efficient deep learning-based approach for static ISL alphabet recognition using transfer learning with MobileNetV2. A dataset of 26,000 images spanning 26 alphabet classes (A–Z) was used. The proposed model leverages a pre-trained MobileNetV2 backbone for feature extraction, followed by custom classification layers. Experimental results demonstrate a validation accuracy of 99% and a test accuracy of 99.89%, indicating the effectiveness of the approach for real-world ISL recognition tasks.
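The pipeline described above (a pre-trained MobileNetV2 backbone used as a feature extractor, followed by custom classification layers) can be sketched in Keras roughly as follows. The specific head layers, input size, and optimizer below are illustrative assumptions; the abstract does not specify the authors' exact configuration.

```python
import tensorflow as tf

NUM_CLASSES = 26  # ISL alphabet classes A-Z

# Pre-trained backbone used purely for feature extraction.
# In practice weights="imagenet" is the usual transfer-learning choice;
# weights=None is used here only so the sketch runs offline.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights=None,
)
base.trainable = False  # freeze the backbone for transfer learning

# Custom classification head (illustrative layer choices).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```

Freezing the backbone means only the small head is trained, which is what makes transfer learning practical on a 26,000-image dataset.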

Keywords

Indian Sign Language, Transfer Learning, MobileNetV2, Deep Learning, Image Classification, Gesture Recognition
