Development of Affordable Smart Glasses for Individuals with Visual Impairments
Venant Niyonkuru1, Sekou Sylla2, Jimmy Jackson Sinzinkayo3
1Department of Computing and Information System, Kenyatta University, Kenya
2Department of Mathematics, Institute for Basic Science, Technology and Innovation, Pan-African University, Kenya
3Department of Software Engineering College of Software, Nankai University, China
DOI: https://doi.org/10.51584/IJRIAS.2025.100700091
Received: 07 July 2025; Accepted: 16 July 2025; Published: 14 August 2025
ABSTRACT
Smart glasses are designed for a wide range of applications, including industrial, healthcare, consumer, logistics, security, and government use. Traditional assistive tools such as the white cane are limited in functionality and provide no information about surrounding objects. This study proposes a cost-effective design for a head-mounted optical device that mimics normal eyewear. The device captures video from the user’s perspective and processes it to assist blind users. The proposed model requires no manual intervention and offers a wide field of vision of 120 degrees. This technology can provide more precise and informative environmental perception for visually impaired individuals.
Keywords: Computer Vision, Deep Learning, Image Processing, Machine Learning, Object Recognition
INTRODUCTION
The global number of visually impaired individuals has risen rapidly over recent decades. According to World Health Organization (WHO) statistics, over 285 million people worldwide are estimated to have some degree of visual impairment, of whom approximately 39 million are completely blind [1],[2],[19],[20],[21].
Regardless of etiology (a congenital condition, a disease, or trauma), people with visual impairments often must rely substantially on other senses or on the assistance of others to move around their environments and sense obstacles. This dependence not only hinders their autonomy but also affects their mental health and their confidence in carrying out everyday activities [3].
Many visually impaired people have difficulty seeking assistance because of personal preferences, cultural background, or embarrassment in front of others. Some are reluctant to ask for help even from close friends, while others find themselves in situations where they must ask strangers. Such encounters introduce safety and trust concerns, further complicating this group’s challenges [1],[2].
To address these issues, research into assistive technologies has produced novel solutions such as wearable smart glasses designed to support independent navigation. These devices integrate embedded vision systems, audio cues, and real-time data processing to help users avoid obstacles and recognize objects in their environment [4]. In doing so, they minimize external dependency while providing a hands-free, more integrated approach [4],[15]. This article presents the design of an affordable and accessible smart glasses system for the visually impaired. The design aims to enhance outdoor mobility through real-time object detection and obstacle avoidance, at a production cost low enough for low-income users to afford. Furthermore, the system’s modularity allows future upgrades and the inclusion of other functions, such as GPS or face recognition, to extend its usefulness.
Current Technology
As demand for visually impaired assistive technology continues to rise, a number of firms and research groups have developed smart glasses to support mobility, object identification, and overall autonomy. Several commercially available devices show how advanced technologies such as artificial intelligence (AI), augmented reality (AR), and computer vision are being integrated into wearables to aid people with visual impairment [16],[17]. However, despite their vast possibilities, most of these technologies remain beyond the financial reach of low-income groups.
One prominent example is the OrCam MyEye, developed by an Israeli start-up committed to AI-powered wearable technology. It is a light, portable device mounted on the arm of a pair of glasses. The gadget reads printed and digital text aloud in real time, recognizes faces, and identifies banknotes and products through simple pointing gestures [8],[10],[12],[13]. Although efficient, the system supports only a few languages and currently costs approximately $2,500, which is unaffordable for most potential users [18].

Another solution is the NuEyes e2, a wearable electronic magnifier designed specifically for people with low vision. The device magnifies objects close to the user, such as printed text or faces, to make them readable or recognizable [9]. Although it provides a clear view, it lacks high-level object recognition and navigational assistance. At around $2,750, it also remains out of reach for many people [7].

eSight glasses are among the most advanced solutions available today. They use a high-definition, high-speed camera whose live video is enhanced and projected onto two OLED screens resting in front of the wearer’s eyes. Sophisticated algorithms optimize the visual input to simulate natural vision for individuals with certain types of vision loss [11]. Although highly sophisticated and effective, the eSight device is reported to be one of the most expensive on the market at nearly $9,995 [10],[11].

While these technologies represent promising advances in visual assistance, they also expose the largest gap: affordability. A majority of the visually impaired population, particularly in low- and middle-income countries, cannot benefit from these advances because of their cost. This motivates the creation of a less expensive, more scalable, and accessible smart glasses solution that retains the important features while remaining affordable for the majority [14].
Proposed System
This work proposes a cost-effective, simplified smart glasses system designed specifically to address the shortcomings of existing assistive technologies for the visually impaired. The primary objective is to develop a wearable device that not only identifies objects in real time but also delivers simple audio messages, allowing users to perceive and understand their immediate environment without vision.
The smart glasses platform combines key technologies including computer vision, machine learning, and image processing. The hardware consists of a lightweight glasses frame with a mounted camera module, a small single-board computer (Raspberry Pi Zero), a speaker for audio output, and a rechargeable battery pack for portable power.
At the core of the object recognition feature lies a deep learning algorithm. To perform well on resource-constrained hardware such as the Raspberry Pi Zero, the system employs the high-speed object detection model YOLOv3 (You Only Look Once, version 3). The algorithm is widely used for its real-time processing capability and accurate detection of multiple objects in a single frame.
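As an illustration of this step, the sketch below runs YOLOv3 inference with OpenCV’s DNN module, a common way to deploy Darknet models on a Raspberry Pi. The file names (yolov3.cfg, yolov3.weights, coco.names) and the 0.5 confidence threshold are illustrative assumptions, not the prototype’s exact configuration.

```python
# Minimal sketch: YOLOv3 inference via OpenCV's DNN module.
# File names and the confidence threshold are illustrative assumptions.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getUnconnectedOutLayersNames()
with open("coco.names") as f:
    classes = [line.strip() for line in f]

def detect(frame, conf_threshold=0.5):
    """Return (label, confidence) pairs for objects found in one frame."""
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(layer_names)
    results = []
    for output in outputs:      # one output array per YOLO detection layer
        for det in output:      # det = [cx, cy, w, h, objectness, class scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence > conf_threshold:
                results.append((classes[class_id], confidence))
    return results
```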
The model is trained on Microsoft COCO, a large-scale labeled dataset. Because of the computational demands of training a model of this size, training is performed on a more powerful GPU-based platform rather than on the final embedded target device. The trained model files are then uploaded to the Raspberry Pi, which runs inference directly on the captured video stream.
The front-mounted camera streams live video at 720p resolution, but processing limitations cap the effective frame rate at around 10–15 fps. To maintain real-time performance, the Raspberry Pi processes one out of every five frames with the YOLOv3 algorithm to detect objects in the wearer’s view. The results, rather than being displayed on a screen, are converted to audio descriptions by a Python-friendly text-to-speech engine (gTTS). The audio output is delivered to the user through a miniature speaker integrated into the glasses or mounted on the side.
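The audio side can be sketched as follows. Note that gTTS only synthesizes MP3 data (and requires a network connection); the mpg123 command-line player used here for playback is an assumption, not a detail given in the paper.

```python
# Minimal text-to-speech helper using gTTS; playback via mpg123 is assumed.
import os
import tempfile

from gtts import gTTS  # requires network access to Google's TTS service

def speak(text):
    """Synthesize `text` to speech and play it on the default audio output."""
    fd, path = tempfile.mkstemp(suffix=".mp3")
    os.close(fd)
    gTTS(text=text, lang="en").save(path)  # write synthesized speech to MP3
    os.system(f"mpg123 -q {path}")         # blocking playback on the speaker
    os.remove(path)

# Example: announce two detected objects
# speak("person ahead, chair on the left")
```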
The design emphasizes low-power operation. Consuming only approximately 80 mA (0.4 W), the Raspberry Pi Zero allows long battery life on a single charge, making the device highly portable and power efficient. Despite its small hardware footprint, the device’s performance and precision remain sufficient, yielding a feasible, low-cost alternative to far more expensive commercial offerings.
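As a rough worked example (assuming a hypothetical 2,000 mAh battery pack, a capacity not specified for the prototype), the Pi Zero’s ~80 mA draw alone would permit on the order of 2000 / 80 = 25 hours of operation; with the camera and speaker active, the practical runtime would be shorter, but still enough for a full day’s use.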
By employing open-source technology and low-cost components, the proposed smart glasses system can significantly improve the autonomy and mobility of visually impaired individuals, especially in outdoor settings.
Since the design is modular, additional functionality such as GPS or face recognition can be incorporated in the future without redesigning the core structure.
System Architecture
The system architecture details the principal components that form the building blocks of the smart glasses. It is built on a four-block framework of key functional units, each playing a significant role in enabling the system to assist the visually impaired user.
The user interacts directly with the wearable device, which is powered by an embedded processor. The control unit manages and coordinates the system’s input and output through its interaction with the camera, speaker, and memory module.
The major components of the architecture are as follows:
- Processor: The brain of the system; it processes video input and runs the object detection algorithm.
- Camera: Captures real-time visual data from the user’s environment.
- Speaker: Provides audio output to notify the user of detected information.
- Memory/Database: Stores the trained model, system software, and working data during execution.
Figure 1 illustrates how these components are related, highlighting the flow of data and control signals from user interaction through the various modules to deliver real-time assistance.
Fig 1. Architecture Diagram
Working Principle
The system’s operation can be understood through its basic functional steps, which are performed sequentially to offer real-time assistance to visually impaired users. These steps define how the system handles visual data and produces meaningful audio output.
The primary phases of the system’s working process are:
- Powering the Device (Switch On)
The user initiates operation by powering the device on. Once powered, the internal components (camera, processor, speaker) are activated, and the system begins acquiring input.
- Capturing Live Visual Input
The camera, positioned along the user’s line of sight, continuously captures video of the surroundings, keeping the user’s field of view monitored in real time.
- Detecting Objects and Obstacles
The video frames captured by the camera are processed by the onboard processor using a trained object detection model, which detects objects and potential obstacles in the user’s path.
- Interpreting and Analyzing Visual Data
The system analyzes the scene frame by frame to detect relevant objects or obstacles. Detected objects are labeled and forwarded to the audio module for rendering.
Fig 2. DFD – Working of System
System Operation Overview
The overall operation of the device is most easily seen as a data flow, as illustrated in the referenced diagram. Operation begins the moment the user dons the smart glasses and switches on the power supply.
The Raspberry Pi Zero, as the processing unit, is the first device to power up. At startup, it activates the connected hardware components in order (camera module, then speaker), powering the whole system circuitry.
Once activated, the camera begins scanning the environment within its field of view. Capture works best under well-lit conditions, where the camera can easily acquire clear images of the surroundings. The captured video stream is transmitted directly to the processor.
To maximize processing speed and eliminate redundancy, the processor samples roughly every third to fifth frame of the video stream. This discards duplicated data and curbs redundant audio alerts for stationary scenes.
Each selected frame is then run through a pre-trained YOLO (You Only Look Once) object detection model to identify the objects in the scene at that moment. The output, a list of detected object labels in text form, is fed to a text-to-speech (TTS) engine, which generates an audio description played through the speaker to inform the user in real time.
This entire process, from video capture to audio output, runs continuously in a loop to provide dynamic and responsive assistance to the blind user, as sketched below.
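A minimal sketch of that loop, tying together the detect() and speak() helpers from the earlier snippets: the one-in-five sampling follows the rate described above, while the camera index and the simple de-duplication of repeated labels are assumptions rather than details from the paper.

```python
# End-to-end capture -> detect -> announce loop (sketch, not the authors' code).
import cv2

FRAME_SKIP = 5             # process one frame in five, as described above
cap = cv2.VideoCapture(0)  # camera module assumed to appear as /dev/video0
frame_idx = 0
last_announced = set()

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    if frame_idx % FRAME_SKIP:
        continue                                  # skip intermediate frames
    labels = {name for name, _ in detect(frame)}  # detect() from the YOLO sketch
    if labels and labels != last_announced:       # avoid repeating identical alerts
        speak(", ".join(sorted(labels)))          # speak() from the gTTS sketch
        last_announced = labels

cap.release()
```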
Fig 3. Frame Extraction
Figure 4 illustrates the process by which selected frames are processed using the YOLO algorithm, culminating in the generation of audio signals for the connected speakers.
Fig 4. Generation of Output Signal
Experimental Setup
A functional prototype of the smart glasses was built from carefully selected hardware and software components chosen for performance and reliability. Table 1 lists the specifications and requirements the system needs to operate effectively.
Table 1: Hardware Specifications
CONCLUSION AND FUTURE WORK
The smart glasses for the visually impaired provide an innovative solution that enhances the overall quality of life of users who are completely blind or partially sighted. By identifying objects in real time and reporting them through audio output, the glasses increase the user’s awareness of the surroundings. They greatly improve spatial awareness, offering higher accuracy and stability than traditional aids such as the white cane, which can only sense obstacles directly in front of the user. The glasses’ audio feedback acts as an intuitive signal, allowing users to more easily understand their surroundings, anticipate potential obstacles, and navigate with greater ease.
The glasses can also guide users through open spaces, helping them find the best and safest path forward. This feature not only enhances mobility but also fosters a sense of independence, since it reduces reliance on other people and allows visually impaired individuals to move easily through public places.
The use of low-cost but highly efficient processors, such as the Raspberry Pi Zero, is central to keeping this technology affordable and ensuring that it reaches a wider range of users without sacrificing performance.
Regarding future development, the functionality of the smart glasses can be extensively enhanced, and additional modules can be introduced to increase their flexibility.
For instance, the glasses could offer advanced environment awareness, interpreting and analyzing complex situations such as stairs, curbs, or terrain variation. Another potential development is navigational support: the glasses could provide personal navigation, learning a user’s typical routes, adapting to them, and offering live information as the user travels familiar paths.
Furthermore, the glasses could be made to recognize the faces of people the wearer frequently encounters, such as family, friends, or colleagues. Achieved through facial recognition software, this would let users identify the people they meet, enhancing social interaction and giving users a greater sense of belonging in public spaces. Training the glasses to recognize and remember these faces would make them a powerful tool for the social communication and integration of the visually impaired.
As the technology advances, future versions of the smart glasses will be able to integrate more sophisticated sensors, machine learning, and cloud processing to deliver even greater accuracy, personalization, and engagement. With further refinement, this assistive technology has a strong chance of transforming the daily lives of blind people, helping them become more independent and enjoy a better quality of life.
REFERENCES
- Fraser, S., Beeman, I., Southall, K., & Wittich, W. (2019). Stereotyping as a barrier to the social participation of older adults with low vision: a qualitative focus group study. BMJ open, 9 (9), e029940.
- Tshuma, C., Ntombela, N., & van Wijk, H. C. (2022). Challenges and coping strategies of the visually impaired adults: A brief exploratory systematic literature review. Prizren Social Science Journal, 6(2), 71-80.
- Bourne, R. R., Flaxman, S. R., & Braithwaite, T. (2017). Global causes of blindness and distance vision impairment 1990-2020: A systematic review and meta-analysis. The Lancet Global Health, 5(12), 1129-1139.
- Bhowmick, A., & Hazarika, S. M. (2017). An insight into assistive technology for the visually impaired and blind people: state-of-the-art and future trends. Journal on Multimodal User Interfaces, 11, 149-172.
- Foster, A., & Resnikoff, S. (2005). The impact of Vision 2020 on global blindness. Eye, 19(10), 1133-1135.
- Bourne, R., Steinmetz, J. D., Flaxman, S., Briant, P. S., Taylor, H. R., Resnikoff, S., … & Tareque, M. I. (2021). Trends in prevalence of blindness and distance and near vision impairment over 30 years: an analysis for the Global Burden of Disease Study. The Lancet Global Health, 9(2), e130-e143.
- Yu, X., & Saniie, J. (2025). Visual Impairment Spatial Awareness System for Indoor Navigation and Daily Activities. Journal of Imaging, 11(1), 9.
- Galić, I., Habijan, M., Leventić, H., & Romić, K. (2023). Machine learning empowering personalized medicine: A comprehensive review of medical image analysis methods. Electronics, 12(21), 4411.
- Vasiu, R., Mihaila, S., & Popescu, M. (2018). Challenges in affordable assistive technologies for visually impaired people. International Journal of Human-Computer Interaction, 34(5), 485-495.
- OrCam Technologies. (2020). OrCam MyEye 2.0: Empowering the visually impaired. Retrieved from https://www.orcam.com
- NuEyes. (2020). NuEyes e2 wearable electronic magnifier. Retrieved from https://www.nueyes.com
- Lorenzini, M. C., Jarry, J., & Wittich, W. (2017). The impact of using eSight eyewear on functional vision and oculo-motor control in low vision patients. Investigative Ophthalmology & Visual Science, 58(8), 3267-3267.
- eSight. (2020). Life-changing device for people with low vision. Retrieved from https://www.esighteyewear.com/
- Finetti, M., & Luongo, N. (2023). Assistive Technology for Blindness and Visual Impairments: Supporting Teachers in K-12 Classrooms. In Using Assistive Technology for Inclusive Learning in K-12 Classrooms (pp. 74-103). IGI Global.
- Suresh, A., Arora, C., Laha, D., Gaba, D., & Bhambri, S. (2018, May). Intelligent smart glass for visually impaired using deep learning machine vision techniques and Robot Operating System (ROS). In Robot Intelligence Technology and Applications 5: Results from the 5th International Conference on Robot Intelligence Technology and Applications (Vol. 751, p. 99). Springer.
- Wittich, W., Lorenzini, M. C., Markowitz, S. N., Tolentino, M., Gartner, S. A., Goldstein, J. E., & Dagnelie, G. (2018). The effect of a head-mounted low vision device on visual function. Optometry and vision science, 95(9), 774-784.
- Zhang, X., Huang, X., Ding, Y., Long, L., Li, W., & Xu, X. (2024). Advancements in Smart Wearable Mobility Aids for Visual Impairments: A Bibliometric Narrative Review. Sensors, 24(24), 7986.
- Naayini, P., Myakala, P. K., Bura, C., Jonnalagadda, A. K., & Kamatala, S. (2025). AI-Powered Assistive Technologies for Visual Impairment. arXiv preprint arXiv:2503.15494.
- Satani, N., Patel, S., & Patel, S. (2020). AI powered glasses for visually impaired person. International Journal of Recent Technology and Engineering (IJRTE), 9(2), 416-421.
- Cottier, B., Rahman, R., Fattorini, L., Maslej, N., Besiroglu, T., & Owen, D. (2024). The rising costs of training frontier AI models. arXiv preprint arXiv:2405.21015.
- WHO (2023). Blindness and vision impairment. Retrieved from https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment
- Lee, S. Y., Gurnani, B., & Mesfin, F. B. (2024). Blindness. In StatPearls [Internet]. StatPearls Publishing.