INTERNATIONAL JOURNAL OF RESEARCH AND INNOVATION IN SOCIAL SCIENCE (IJRISS)  
ISSN No. 2454-6186 | DOI: 10.47772/IJRISS | Volume IX Issue XI November 2025  
Enhancing Mobility and Independence of Visually Impaired  
Individuals through Mobile-Based Real-Time Obstacle Detection  
Systems  
Khairul Adilah binti Ahmad 1, Anis Faradella Abdul Malik 2*, Norin Rahayu Shamsuddin 1,3*  
1Faculty of Computer and Mathematical Science, Universiti Teknologi MARA Cawangan Kedah,  
Malaysia  
2*Faculty of Information Science, Universiti Teknologi MARA Cawangan Kedah, Malaysia  
3*Integrated Simulation & Visualization Research Interest Group, Universiti Teknologi MARA (UiTM)  
Cawangan Kedah, Malaysia  
Received: 16 November 2025; Accepted: 24 November 2025; Published: 02 December 2025  
ABSTRACT  
Independent mobility is a critical determinant of social health and Quality of Life for individuals with visual  
impairments, yet physical barriers and limitations in traditional aids often lead to restricted travel, contributing  
significantly to loneliness, social exclusion, and heightened risks of depression and anxiety. This paper  
systematically analyzes the development and implementation challenges of Mobile-Based Real-Time Obstacle  
Detection Systems (RT-ODS) as a pivotal technological intervention designed to overcome these barriers. Successful RT-ODS relies on highly optimized technical architectures, such as the lightweight YOLOv8 deep learning model,
tailored for efficient real-time inference on resource-constrained mobile platforms. Empirical evidence  
demonstrates the feasibility of achieving robust performance, with some systems attaining an accuracy greater  
than 90% and a mAP of 0.5 under varying environmental conditions. Crucially, the adoption and long-
term efficacy of these systems are contingent upon addressing socio-economic and ethical constraints. User-  
centric design requires integrating multimodal feedback (auditory and haptic), while economic accessibility  
demands low production costs to serve a population often facing financial vulnerability. This synthesis concludes  
that Real-Time Obstacle Detection Systems, when developed with a comprehensive interdisciplinary approach  
that balances technical optimization, cost-effectiveness, and rigorous ethical compliance, offer a viable, scalable pathway to significantly enhance the confidence, independence, and social integration of visually impaired individuals.
Keywords: Visual Impairment, Social Exclusion, Object Detection, Real-Time Systems, Deep Learning,  
Assistive Technology  
INTRODUCTION  
Background and Context of Visual Impairment  
Visual impairment presents one of the most significant global health and social challenges, affecting millions  
worldwide. The high prevalence of visual impairment necessitates focused efforts toward functional  
rehabilitation and community integration. Beyond the immediate health consequences, visual impairment often  
creates systemic socio-economic hurdles. Individuals with visual impairment frequently encounter high  
unemployment rates and financial vulnerability, exacerbating the pervasive lack of resources, such as advanced  
Braille equipment and accessible buildings, required for full societal participation [1]. Addressing the  
foundational barrier of independent mobility is paramount to enabling employment opportunities and alleviating  
the cycle of poverty often experienced by this population.  
The Social and Psychological Consequences of Mobility Restriction  
Mobility is intrinsically linked to functional Quality of Life (QOL). Restrictions in independent movement
severely limit community participation, contributing to a lack of social communication and subsequent exclusion  
[1], [2]. Studies indicate that limitations in functional independence lead to loneliness and social isolation, which  
are prevalent issues among older adults with vision impairment [2].  
The difficulty in navigating challenging surroundings reliably imposes significant psychological burdens.  
Individuals who struggle with independent movement are at a greater risk for clinically significant psychological
conditions, including depression, anxiety, and agoraphobia [1]. Research demonstrates that the psychological  
burden increases as the severity of visual impairment worsens, and the uncertainties associated with visual loss  
can significantly disrupt individual lives. Interventions that effectively reduce mobility risks are therefore critical  
not only for ensuring physical safety but also for achieving positive psychological outcomes and improving  
overall social well-being [3].  
Limitations of Traditional Aids and the Emergence of Real-Time Obstacle Detection Systems  
Historically, visually impaired individuals have relied on traditional mobility tools, primarily white canes. While  
essential, these tools are increasingly inadequate for navigating the complex, dynamic, and rapidly changing  
obstacles encountered in contemporary public and urban environments [4]. Traditional aids typically fail to  
detect obstacles above ground level or provide detailed, real-time spatial awareness necessary for confident,  
independent movement.  
The intersection of computer vision innovations and the ubiquity of mobile devices presents a promising new  
approach: Real-Time Obstacle Detection Systems (RT-ODS) integrated into smartphones or lightweight
wearables [4]. These systems aim to utilize sophisticated object detection algorithms to identify obstacles in the  
user’s path, providing a scalable and intelligent navigational supplement. However, early proposals often  
suffered from limitations such as relying on histogram or edge information only, or employing computationally  
intensive image processing techniques that adversely affected mobile device performance and battery life,  
thereby hindering user adoption [5]. The successful emergence of modern RT-ODS requires overcoming these  
technical deficiencies through architectural optimization.  
Research Objectives and Paper Structure  
The primary objective of this paper is to systematically analyze the critical intersection of technical performance,  
user-centric design, and the necessary ethical and policy frameworks required for mobile-based RT-ODS to  
successfully enhance the mobility and independence of visually impaired individuals. This analysis focuses on
bridging the demonstrated technical feasibility of real-time detection with the ultimate social goals of reducing  
exclusion and improving QOL.  
The paper proceeds by first detailing the profound social imperative for intervention (Section II). It then reviews  
the necessary architectural and technical optimizations required for functional mobile deployment (Section III  
and IV), followed by an examination of user-centric design principles and socio-economic accessibility (Section  
V). Finally, Section VI concludes with a summary and directions for future research.
The Social Imperative: Consequences of Mobility Restriction and the Need for Intervention  
Impaired Independence, Functional Limitations, and QOL Scores  
Mobility constraints are directly linked to a loss of independence and the presence of functional limitations in  
essential everyday activities [6]. This loss often leads to an increased level of dependence on others and a  
significant loss of freedom, factors which complicate the adjustment process after the onset of visual impairment.  
Empirical research utilizing QOL scoring demonstrates a clear correlation between specific visual deficits and  
functional decline. Participants with severe visual field restrictions, such as tunnel vision, registered significantly  
lower scores on both the mobility and self-care categories compared to other participants [3]. Furthermore, lower  
social and leisure QOL scores were observed in participants without stereopsis [3], [7]. These findings  
underscore that vision quality directly dictates functional autonomy, highlighting the urgent need for tools that  
compensate for these functional limitations.  
Psychological Morbidity and Confidence Loss  
Mobility challenges that limit activity and participation can lead to considerable psychological harm. Individuals  
with visual impairment are subject to psychological challenges including agoraphobia, high levels of anxiety,  
and a significantly elevated risk for depression [2]. Negative feelings about life are reported, and studies show that
scores within the psychological domain consistently decrease as the severity of the visual impairment increases  
[1].  
The literature highlights a particularly hazardous interplay involving visual loss, depression, and an increased  
risk of falls [8]. Falls are often viewed as a sequence of negative events that result in serious negative impacts,  
especially among the elderly population with visual impairment [9], [10]. Intervening effectively to prevent falls through real-time navigation assistance is posited as a fundamental strategy to break this negative sequence, leading to a consequent decline in depression and related negative psychological outcomes [1]. Therefore, the
technical reliability of RT-ODS directly translates into psychological resilience and improved emotional well-  
being for the user.  
Social Dynamics: Exclusion, Isolation, and Communication Barriers  
The difficulty associated with independent movement is a primary cause of social isolation and exclusion [11].  
Without reliable means to navigate their environment, visually impaired individuals often restrict their community engagement, leading to loneliness [3]. However, social exclusion extends beyond mere physical barriers. Even when participating in social gatherings, visually impaired individuals may experience discrimination and poor services from others, reinforcing feelings of being marginalized [2]. Improving
independent travel through robust assistive technology is thus not merely a matter of safety, but a prerequisite  
for restoring social communication and enabling full, equitable participation in society [2].  
Critical Evaluation of Socio-Psychological Impact Methodologies  
To move beyond descriptive reporting of consequences and establish the scientific rigor of RT-ODS  
interventions, the socio-psychological literature must be critically evaluated based on validated assessment  
instruments. While this review cites numerous studies documenting the link between restricted mobility and
negative outcomes (e.g., depression, reduced QOL), a robust scientific analysis requires detailing how the  
positive benefits of assistive technology (AT) are quantitatively measured. Key standardized frameworks  
essential for this critical evaluation include [4]:  
1) Psychosocial Impact of Assistive Devices Scale (PIADS): Used to assess the psycho-affective status and subjective well-being derived from AT use.
2) Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST): Measures user satisfaction with the specific characteristics of the device.
3) World Health Organization Quality of Life (WHOQOL): A widely utilized instrument assessing four domains critical to QOL: physical health, mental health, social relationships, and environment.
Table I summarizes the observed socio-psychological correlates of restricted mobility, establishing the  
foundational need for effective technical intervention.  
Table I: Correlates of Mobility Restriction and Quality of Life Outcomes in Visually Impaired Individuals

Area of Impact | Observed Consequence | Supporting Literature
Functional Autonomy | Increased dependence; loss of freedom; difficulty adjusting to life changes | Increased level of dependence on others and loss of freedom [2]
Psychological Health | Depression, anxiety, increased agoraphobia risk | High risk for depression and anxiety; psychological domain scores decrease as visual impairment severity increases [2]
Social Participation | Loneliness, social isolation, exclusion, discrimination | Lack of social communication; lower social and leisure QOL scores [1]
Safety & Health | Higher risk of falls | Falls linked in a triad with depression and visual loss [2]
Architectural Evolution of Mobile Obstacle Detection Systems  
Technical History: From Dedicated Hardware to Smartphone Integration  
The development of electronic travel aids has progressed significantly. Early generations of object detection  
systems often relied on dedicated, bulky hardware or employed resource-intensive image processing methods  
that analyzed simple cues like histograms or edge information [5]. While such systems demonstrated feasibility,  
a critical drawback was their computational cost, which negatively impacted mobile device performance, led to  
excessive battery drain, and limited overall user adoption.  
The shift towards smartphone-based systems represents a crucial step in making these technologies widely  
viable. By leveraging the advanced computing capabilities and built-in cameras of existing mobile devices, the  
barrier to access is significantly lowered. Modern proposals have advanced the field by using image processing  
techniques on smartphone camera feeds to track multiple obstacles simultaneously [12].  
Deep Learning Optimization for Edge Computing  
Contemporary RT-ODS relies on deep learning, which allows the system to identify complex obstacles with  
high accuracy. However, deploying high-performing deep learning models (Neural Networks) on mobile  
devices, often referred to as edge computing, necessitates extreme computational efficiency. Complex models  
consume significant resources, which conflicts with the real-time requirements and limited battery capacity of  
handheld devices.  
To address this conflict, researchers have successfully implemented lightweight and efficient object detection  
models. The YOLOv8 model, for example, is highly suitable for real-time applications on resource-constrained  
platforms because of its smaller model size, which requires less memory and computational power [4], [13], [14], [15]. YOLO [14], [15] is among the most efficient model families for object detection and tracking and plays a significant role in real-world applications. Intentional architectural optimization to reduce model size and increase speed is more than an engineering decision; it is fundamental to achieving the reliability required for user confidence and sustained, safe operation.
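To make this optimization step concrete, the sketch below (Python, using the open-source Ultralytics package) loads the smallest pretrained YOLOv8 variant and exports it to TensorFlow Lite for on-device inference. The model file, input size, and half-precision option are illustrative assumptions rather than the exact configuration used in the cited studies.

```python
# Minimal sketch: preparing a lightweight YOLOv8 model for on-device (edge) inference.
# Assumes the open-source ultralytics package; file names and options are illustrative.
from ultralytics import YOLO

# Load the nano variant, the smallest pretrained YOLOv8 checkpoint, chosen to keep
# memory use and inference latency low on a smartphone-class device.
model = YOLO("yolov8n.pt")

# Export to TensorFlow Lite with half-precision weights and a reduced input size;
# the resulting .tflite file can be bundled into a mobile application.
model.export(format="tflite", half=True, imgsz=320)
```

Reducing the input resolution and weight precision in this way trades a small amount of accuracy for lower latency and memory use, which mirrors the optimization balance described above.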
Tailoring Architectures for Enhanced Performance  
Achieving reliable, real-time performance requires specialized refinement of the deep learning architecture. This  
involves techniques designed to optimize the neural network itself. For example, Neural Architecture Search (NAS) has been applied to automatically search for optimal detection frameworks, resulting in tangible performance
benefits, such as a 2.6% improvement in average precision (AP) over baseline models while maintaining  
acceptable computational complexity [16].  
Furthermore, the success of RT-ODS is heavily dependent on domain specificity. Existing object detectors are  
frequently trained on generalized datasets (e.g., from platforms like Kaggle), which often limit the number and  
type of obstacles relevant to visually impaired individuals [4]. Focusing detection on obstacles relevant to visually impaired users and training on environment-specific proprietary datasets greatly strengthens model robustness and accuracy. This targeted training ensures that the technology effectively solves the specific, real-world navigational problems faced by visually impaired users, moving beyond generalized efforts to provide reliable and specialized assistance.
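As a hedged illustration of such domain-specific training, the sketch below fine-tunes a lightweight YOLOv8 checkpoint on a hypothetical environment-specific obstacle dataset; the dataset file obstacles.yaml, its class list, and the training settings are placeholders rather than details drawn from the cited works.

```python
# Sketch: fine-tuning YOLOv8 on an environment-specific obstacle dataset.
# obstacles.yaml is a hypothetical dataset definition listing image paths, labels,
# and obstacle classes relevant to visually impaired pedestrians (e.g., poles, curbs).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # start from the lightweight pretrained checkpoint
model.train(
    data="obstacles.yaml",            # hypothetical domain-specific dataset definition
    epochs=50,
    imgsz=640,
    batch=16,
)
metrics = model.val()                 # evaluate on the dataset's validation split
print(f"mAP@0.5 on obstacle classes: {metrics.box.map50:.3f}")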
System Performance, Latency Trade-offs, and Reliability Metrics  
The Criticality of Real-Time Response and Latency Analysis  
For an obstacle detection system to be functionally useful and safe, it must operate in real-time. Inference latency,  
defined as the delay between receiving an input (camera feed) and producing a prediction (obstacle alert), must  
be minimized [17], [18]. Any significant delay can translate directly into a safety hazard, particularly when  
navigating dynamic or rapidly approaching obstacles [16].
In computer vision systems, latency is composed of three sequential steps: Input Processing, Model Inference,  
and Post-Processing [17]. Input Processing, which includes decoding the image, resizing, and normalization, can  
introduce noticeable delays, especially when processing continuous high-resolution video streams on mobile  
devices [17]. Model Inference, where the neural network generates predictions, typically accounts for the majority of the latency pipeline, often consuming 60–90% of the total processing time. This high dependency on model complexity and hardware capability underscores the necessity for lightweight architectures.
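A minimal sketch of how these three stages can be instrumented on-device is given below; the preprocess, infer, and postprocess functions are hypothetical stand-ins for an application's actual resize/normalize step, neural-network forward pass, and alert-generation step.

```python
# Sketch: timing the three latency stages named above for one camera frame.
# The stage functions are hypothetical placeholders for the application's own pipeline.
import time

def timed(stage_fn, *args):
    """Run one pipeline stage and return (result, elapsed milliseconds)."""
    start = time.perf_counter()
    result = stage_fn(*args)
    return result, (time.perf_counter() - start) * 1000.0

def measure_frame_latency(frame, preprocess, infer, postprocess):
    tensor, t_pre = timed(preprocess, frame)       # decode, resize, normalize
    raw_pred, t_inf = timed(infer, tensor)         # neural-network forward pass
    alerts, t_post = timed(postprocess, raw_pred)  # thresholding, box decoding, alert text
    total = t_pre + t_inf + t_post
    # Model inference is expected to dominate (roughly 60-90% of the total, per the text).
    print(f"pre={t_pre:.1f} ms  infer={t_inf:.1f} ms  post={t_post:.1f} ms  "
          f"total={total:.1f} ms  infer share={t_inf / total:.0%}")
    return alerts
```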
Benchmarking and Validation  
Reliability requires robust performance under diverse real-world conditions. Comprehensive evaluation must  
confirm that the system meets the real-time detection requirements across different scenarios, including varied  
lighting. Academic evaluations have demonstrated the feasibility of achieving exceptionally high performance  
using advanced, lightweight models. For instance, enhanced YOLOv8 algorithms have demonstrated superior  
obstacle identification, achieving a high detection accuracy of 90% and a 1.5% improvement in mean Average Precision (mAP) [14]. Such rigorous evaluation requires benchmarking the system against a baseline value, which is the
measurable performance level (e.g., accuracy, mAP, recall) of an existing or simpler state-of-the-art model.  
Outperforming this baseline, such as by increasing the mAP or accuracy score, is crucial as it underscores the  
enhanced ability of the new model to detect small, less conspicuous obstacles, a critical feature for safe  
navigation. These high performance metrics validate the feasibility of integrating advanced, optimized  
technology to solve critical real-world problems and highlight the potential for positive impact on the end users’  
quality of life.  
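To illustrate what such benchmarking rests on, the sketch below implements the core matching rule behind mAP@0.5-style scoring, accepting a detection as a true positive only when its intersection-over-union (IoU) with an unmatched ground-truth box reaches 0.5, and then returns precision and recall for one image. A full mAP evaluation would additionally sweep confidence thresholds and average over recall levels and classes; this is only the underlying step.

```python
# Sketch: the IoU >= 0.5 matching rule that underlies mAP@0.5-style benchmarking.
# Boxes are (x1, y1, x2, y2) tuples; detections are assumed sorted by confidence.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def precision_recall(detections, ground_truths, iou_thr=0.5):
    """Greedy one-to-one matching of detections to ground-truth boxes."""
    matched = set()
    tp = 0
    for det in detections:                      # highest-confidence detections first
        best_j, best_iou = None, iou_thr
        for j, gt in enumerate(ground_truths):
            if j not in matched and iou(det, gt) >= best_iou:
                best_j, best_iou = j, iou(det, gt)
        if best_j is not None:
            matched.add(best_j)
            tp += 1
    fp = len(detections) - tp
    fn = len(ground_truths) - tp
    precision = tp / (tp + fp) if detections else 0.0
    recall = tp / (tp + fn) if ground_truths else 0.0
    return precision, recall
```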
Comparative Benchmarking of Lightweight Architectures  
Object detection has become a central research domain in recent years, particularly with the rise of deep  
learning-based approaches that enable robust visual perception capabilities. This section presents a comparative benchmarking of several lightweight yet high-performing architectures, namely YOLOv8, SSD-MobileNet, NanoDet, and Faster R-CNN, which are commonly employed in embedded and real-time systems. YOLOv8, a
modern one-stage detector, consistently demonstrates an optimal balance between accuracy and inference speed,  
enabling highly reliable real-time detection while maintaining computational efficiency. In contrast, SSD-  
MobileNet, although efficient and capable of detecting objects across multiple scales, exhibits reduced accuracy  
in small-object detection and lower overall precision, which may compromise reliability in dynamic or safety-  
critical conditions [19], [20]. NanoDet, designed for extreme compactness and memory efficiency, is well suited for deeply embedded devices; however, this minimal footprint often results in lower detection performance
relative to optimized architectures [21] such as YOLOv8 [22]. As contemporary smartphones offer sufficient  
computational resources, sacrificing accuracy for marginal memory savings becomes less justifiable in critical  
applications. Meanwhile, Faster R-CNN, a two-stage detector known for strong localization accuracy through  
its region-proposal mechanism, suffers from slower inference times, rendering it less suitable for scenarios  
requiring instantaneous feedback [19], [20]. Overall, the comparative analysis indicates that YOLOv8 provides  
the highest performance across accuracy, precision, and recall metrics, reaffirming its suitability as the primary  
architecture for the proposed system [19], [20], [23], [24].  
Sensor Fusion as a Latency Mitigation Strategy  
A critical challenge in relying solely on computer vision is the inherent processing delay that can compromise  
the accuracy of continuous spatial tracking [25], [26]. While computer vision offers high precision, it demands  
increased computing power and is prone to these processing delays [26]. To compensate for the vision system's  
latency and improve navigational reliability, especially concerning trajectory corrections, sensor fusion  
techniques are highly valuable. This involves combining the output of the vision system with faster, more  
traditional sensors, such as rotary encoders or laser sensors [27]. This strategy, utilizing techniques like a Kalman  
filter, can significantly reduce position estimation uncertainty compared to using the vision system alone [27].  
Although increased latencies and reduced sampling frequencies negatively impact uncertainty, the use of sensor  
fusion aims to mitigate these effects, balancing the high precision of vision-based detection with the speed and  
consistency of traditional sensory inputs [26].  
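A minimal one-dimensional sketch of this fusion idea follows, blending frequent dead-reckoning updates (e.g., step or encoder counts) with slower, delayed vision fixes through a Kalman-style gain; all noise parameters are illustrative assumptions rather than values from the cited systems.

```python
# Sketch: 1-D Kalman-style fusion of a fast dead-reckoning sensor with slower,
# delayed vision fixes. Noise values are illustrative assumptions only.

class PositionFuser:
    def __init__(self, process_var=0.05, vision_var=0.5):
        self.x = 0.0                      # estimated position along the walking path (m)
        self.p = 1.0                      # variance of the estimate
        self.process_var = process_var    # uncertainty added per motion update
        self.vision_var = vision_var      # measurement noise of a vision fix

    def predict(self, step_length):
        """Fast update from the traditional sensor: propagate the estimate forward."""
        self.x += step_length
        self.p += self.process_var

    def correct(self, vision_position):
        """Slow update from the camera pipeline: blend in the delayed vision fix."""
        k = self.p / (self.p + self.vision_var)   # Kalman gain
        self.x += k * (vision_position - self.x)
        self.p *= (1.0 - k)

# Usage: frequent encoder-style predictions, with a vision correction whenever a
# frame finishes processing; the variance p shrinks after each correction.
fuser = PositionFuser()
for step in [0.6, 0.6, 0.6]:
    fuser.predict(step)
fuser.correct(vision_position=1.7)
print(f"fused position: {fuser.x:.2f} m (variance {fuser.p:.3f})")
```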
Table II synthesizes the key technical characteristics and performance requirements that must be met for a  
successful mobile RT-ODS.  
User-Centric Design, Usability, and Accessibility  
Design for Multimodal Interaction  
The technical achievement of accurate, low-latency obstacle detection is of little value if the information cannot be conveyed to the user immediately and intuitively. Therefore, user-centric design principles emphasize
multimodal feedback to maximize accessibility and responsiveness in diverse navigation contexts [16].  
Table II: Key Technical Characteristics and Performance Metrics of Modern Mobile RT-ODS

Component | Requirement | Example Implementation
Latency Criticality | Minimal delay is essential for safety and reliability | Model inference typically consumes 60–90% of total processing time [17]
Model Efficiency | Must be resource-optimized for edge deployment | YOLOv8 architecture utilized for low memory and computational requirements [4]
Accuracy | Must provide reliable obstacle identification under various conditions | Achieved mAP of 0.5 and accuracy of 90% [4]
Reliability Enhancement | Compensation for visual processing delays | Sensor fusion (vision + traditional sensors) reduces positional uncertainty [27]
Cost and Wearability | Accessibility for mass adoption and sustained use | Target low production cost and lightweight devices
RT-ODS must integrate both auditory and vibration interaction [26]. Advanced haptic modules are necessary to  
provide clear spatial coding of the environment. For instance, some designs propose placing multiple vibrators  
on the user's wrist to code the direction and proximity of surrounding objects via an intuitive haptic signal [27].  
This approach ensures that users receive timely alerts even in noisy or distracting environments where auditory  
feedback might be insufficient or unclear.  
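As an illustrative sketch of such spatial coding, the function below maps a detection's horizontal position and estimated distance to one of three assumed wrist-mounted vibration motors and an intensity level; the three-motor layout, distance cutoff, and intensity scale are assumptions, not parameters of the cited prototypes.

```python
# Sketch: coding obstacle direction and proximity into a simple haptic signal.
# Assumes three wrist-mounted vibration motors (left, centre, right); thresholds
# and intensity scaling are illustrative.

def haptic_command(box_center_x, frame_width, distance_m):
    """Map a detection to (motor, intensity in 0..1), or None if too far to alert."""
    rel = box_center_x / frame_width
    if rel < 0.33:
        motor = "left"
    elif rel > 0.66:
        motor = "right"
    else:
        motor = "centre"
    # Closer obstacles produce stronger vibration; beyond 4 m no alert is raised.
    if distance_m >= 4.0:
        return None
    intensity = min(1.0, max(0.1, 1.0 - distance_m / 4.0))
    return motor, round(intensity, 2)

# Example: an obstacle detected left of centre, about 1.5 m away.
print(haptic_command(box_center_x=300, frame_width=1080, distance_m=1.5))
# -> ('left', 0.62)
```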
Practical Usability and Wearability  
Usability for visually impaired individuals depends critically on comfort and ease of sustained use. Research
prototypes emphasize specific design goals to maximize adoption. These include providing a fully wireless  
connection between the sensor and haptic modules to enhance user comfort [27]. Furthermore, minimizing the  
weight of the system is essential, with design objectives targeting a module weight limit [28], [29]. Systems that  
are heavy, cumbersome, or have poor battery life compromise the user’s willingness to rely on the device for  
daily mobility, negating the system’s potential social benefit. Empirical evaluations confirm that applications  
incorporating multiple interaction categories, such as voice recognition, touchpads, and buttons, are generally
found to be useful and usable by visually impaired people in various scenarios [29].  
Empirical Validation and Evaluation Methods  
The success of any assistive technology must be validated through direct engagement with the end-users. Final  
testing should be conducted with real visually impaired users to confirm the system's usability and efficacy in  
solving actual navigational challenges [16]. The most prevalent methodologies used to evaluate the usability of
such applications are qualitative in nature, primarily consisting of surveys and interviews, which capture user  
perception of utility, confidence, and system reliability [6]. Continuous feedback loops derived from these  
evaluations are necessary to move research prototypes closer to the ideal user-friendly assistive tool that, despite  
many existing efforts, has yet to be fully realized [27].  
Socio-Economic Accessibility: The Cost Imperative  
The technical sophistication required for robust real-time performance often implies high manufacturing costs.  
This poses a major barrier to adoption, particularly since people with visual impairment often face high poverty  
rates and a lack of resources [2], [6]. Therefore, the successful widespread deployment of RT-ODS must be predicated on cost-effective engineering. Designs that utilize cost-effective, lightweight electronic components and sensors, targeting a low production cost, are vital for equitable access. The ability to achieve low production costs directly addresses the systemic lack of affordable resources among the visually impaired population, ensuring that technical innovation serves social inclusion goals rather than reinforcing
economic disparities.  
Comprehensive Ethical and Privacy Compliance Framework  
Mandatory Ethical Review and Informed Consent  
The use of a forward-facing camera acts as a data capture mechanism, necessitating specialized informed consent  
protocols to ensure ethical compliance and user comprehension [30]. For visually impaired participants, the  
consent process must be adapted for accessibility:  
1) Multimodal Presentation: Consent information should be presented orally and supplemented by audio  
recordings or, where feasible, provided in Braille [30], [31].  
2) Low Health-Literacy Strategies: Targeted efforts, such as visual aids or clear descriptions, must be employed  
to explain the complex processes of data capture, retention policies, and the continuous nature of the camera  
feed, ensuring participants fully comprehend the risks and scope of data usage [31].  
Mitigating Bystander Privacy and Algorithmic Fallibility  
A shared ethical concern among both visually impaired users and sighted bystanders relates to the privacy of  
individuals inadvertently captured and the risk of algorithmic mischaracterization [32].  
1) Bystander Privacy (Technical Mitigation): Since users themselves express concern over compromising  
bystander privacy [25], [32], the system must implement mandatory technical safeguards. Real-time, on-device  
anonymization techniques, such as blurring or cropping identifiable features before data is stored or processed, are essential for equitable privacy compliance and minimizing data collection (a minimal blurring sketch follows this list).
2) Algorithmic Safety and Misclassification: In a safety-critical domain, technical errors translate directly into  
physical and psychological harm [26].  
Safety Critical Errors (False Negatives): The model failing to detect an obstacle is the most critical error,  
leading to injury (e.g., falls) and increased anxiety [1]. Ethical design therefore requires optimizing the model for exceptionally high recall (> 95%) for critical obstacle classes to prevent these failures [29].
Usability Critical Errors (False Positives): Repeated false alarms erode user trust, potentially leading to  
automation bias, the tendency to ignore alerts [28]. Ethical design requires high Precision to maintain user  
confidence [29].  
3) Avoiding Subjective Judgments: The system's output must be strictly limited to objective, navigational facts  
(e.g., object type, distance, and direction). The design framework must prohibit the AI from making sensitive,  
subjective judgments, such as classifying human identity, gender, emotion, or intent, to preserve dignity and  
autonomy [26].  
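Relating to point 1) above, the sketch below shows one possible on-device anonymization step in Python with OpenCV, blurring detected face regions before a frame is stored or processed further. The bundled Haar-cascade detector is a deliberately simple illustrative choice; a production system would likely use a stronger on-device face detector.

```python
# Sketch: on-device anonymization of bystanders, as discussed in point 1) above.
# Uses OpenCV's bundled Haar cascade as a simple illustrative face detector.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def anonymize_frame(frame):
    """Blur every detected face region before the frame is stored or processed further."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame
```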
CONCLUSION AND FUTURE RESEARCH DIRECTIONS  
Summary of Contribution and Interdisciplinary Success  
Mobile-based RT-ODS represent a paradigm shift in assistive technology, providing a highly effective  
mechanism to circumvent the physical barriers that underpin social exclusion and psychological morbidity in  
the visually impaired population. Through the application of optimized deep learning architectures like
YOLOv8, developers have demonstrated the capability to provide robust, real-time navigational assistance with  
high accuracy. The technical success, combined with user-centric features such as multimodal haptic and  
auditory feedback, creates a direct pathway to increase user confidence, reduce dependence, and mitigate the  
risks of anxiety, depression, and falls.  
The evidence strongly suggests that the systemic success of RT-ODS as a tool for social integration is defined  
by its ability to manage intersecting technical, economic, and ethical constraints. Technological optimization,  
specifically achieving minimal latency for safe navigation and engineering low-cost components, is essential
for transforming the technology from a prototype into an equitably accessible social aid, particularly for a  
population facing high financial strain.  
Research Gaps and Future Work  
While technical feasibility has been established, several critical research gaps remain that require further  
interdisciplinary attention:  
1) Extended Social and Psychological Impact Studies: Current evaluations focus primarily on technical  
performance and immediate usability. Future research must adopt standardized QOL evaluation instruments,  
such as the Psychosocial Impact of Assistive Devices Scale and the World Health Organization Quality of Life,  
to quantify the long-term causal effects of RT-ODS adoption, moving beyond qualitative user feedback to  
concrete social data. This includes measuring sustained reductions in psychological morbidity (depression,  
agoraphobia), improvements in QOL domain scores, and verifiable increases in economic independence.  
2) Empirical Ethical Validation and Algorithmic Safety: The domain of Ethical Human-Computer Interaction  
requires specialized focus. Future work must explore the effectiveness of ethical protocols, including specialized,  
multimodal informed consent for visually impaired participants and on-device technical mitigation strategies  
(e.g., real-time anonymization) to protect bystander privacy. This includes rigorous testing to ensure high Recall  
(> 95%) for safety-critical obstacles to prevent dangerous False Negatives and maintaining high Precision to  
mitigate automation bias from False Positives.  
3) Policy Simplification and Funding Access must be addressed. The current landscape of assistive technology  
funding is complex and overwhelming for consumers. Policy research should focus on consolidating resources  
and streamlining application processes to maximize the utilization of existing governmental and non-profit  
subsidies, ensuring that cost-effective technology reaches all who need it.  
Final Call for Interdisciplinary Development  
The evolution of assistive technology from a niche technical solution to a scalable social determinant of health  
requires integrated expertise. Continued collaboration between computer scientists optimizing edge computing  
models, social scientists evaluating user experience and psychological impact, and policy experts ensuring  
regulatory compliance and equitable financial access will ensure that RT-ODS fulfills its profound potential to  
empower the visually impaired community toward greater independence and social participation.  
ACKNOWLEDGMENT  
The authors would like to express their sincere gratitude to the Kedah State Research Committee, UiTM Kedah  
Branch, for the generous funding provided under the Tabung Penyelidikan Am. This support was crucial in  
facilitating the research and ensuring the successful publication of this article.  
REFERENCES  
1. Tshuma, C., Ntombela, N., and van Wijk, H. C., (2022). Challenges and coping strategies of the visually  
impaired adults: A brief exploratory systematic literature review. Prizren Social Science Journal, 6(2),  
71–80.  
2. Remillard, E. T., Koon, L. M., Mitzner, T. L., and Rogers, W. A., (2024). Everyday challenges for  
individuals aging with vision impairment: Technology implications. Gerontologist, 64(6), gnad169.  
3. Purola, P., Koskinen, S., and Uusitalo, H., (2023). Impact of vision on generic health-related quality of  
life-A systematic review. Acta Ophthalmol, 101(7), 717–728.  
4. Bastidas-Guacho, G. K., Paguay Alvarado, M. A., Moreno-Vallejo, P. X., Moreno-Costales, P. R.,
Ocaña Yanza, N. S., and Troya Cuestas, J. C., (2025). Computer Vision-Based Obstacle Detection Mobile  
System for Visually Impaired Individuals. Multimodal Technologies and Interaction, 9(5), 48.  
5. Khan, A., and Khusro, S., (2021). An insight into smartphone-based assistive solutions for visually  
impaired and blind people: issues, challenges and opportunities. Univers Access Inf Soc, 20(2), 265–298.  
6. Al-Razgan, M., et al., (2021). A systematic literature review on the usability of mobile applications for  
visually impaired users. PeerJ Comput Sci, 7, e771.  
7. Collart, L., Ortibus, E., and Ben Itzhak, N., (2024). An evaluation of health-related quality of life and its  
relation with functional vision in children with cerebral visual impairment. Res Dev Disabil, 154, 104861.  
8. Wang, J., Li, Y., Yang, G.-Y., and Jin, K., (2024). Age-related dysfunction in balance: a comprehensive  
review of causes, consequences, and interventions. Aging Dis, 16(2), 714.  
9. Singh, R. R., and Maurya, P., (2022). Visual impairment and falls among older adults and elderly:  
evidence from longitudinal study of ageing in India. BMC Public Health, 22(1), 2324.  
10. Ouyang, S., et al., (2022). Risk factors of falls in elderly patients with visual impairment. Front Public  
Health, 10, 984199.  
11. Kim, H., and Sohn, D., (2020). The urban built environment and the mobility of people with visual  
impairments: analysing the travel behaviours based on mobile phone data. Journal of Asian Architecture  
and Building Engineering, 19(6), 731–741.  
12. Thoma, M., Partaourides, H., Sreedharan, I., Theodosiou, Z., Michael, L., and Lanitis, A., (2023).  
Performance Assessment of Fine-Tuned Barrier Recognition Models in Varying Conditions. In:  
International Conference on Computer Analysis of Images and Patterns, Springer, 172–181.  
13. Alsuwaylimi, A. A., Alanazi, R., Alanazi, S. M., Alenezi, S. M., Saidani, T., and Ghodhbani, R., (2024).  
Improved and efficient object detection algorithm based on YOLOv5. Engineering, Technology &  
Applied Science Research, 14(3), 14380–14386.  
14. Jia, X., Tong, Y., Qiao, H., Li, M., Tong, J., and Liang, B., (2023). Fast and accurate object detector for  
autonomous driving based on improved YOLOv5. Sci Rep, 13(1), 9711.  
15. Rasheed, A. F., and Zarkoosh, M., (2025). Optimized YOLOv8 for multi-scale object detection. J Real  
Time Image Process, 22(1), 6.  
16. Said, Y., Atri, M., Albahar, M. A., Ben Atitallah, A., and Alsariera, Y. A., (2023). Obstacle detection  
system for navigation assistance of visually impaired people based on deep learning techniques. Sensors,  
23(11), 5262.  
17. Hanhirova, J., Kämäräinen, T., Seppälä, S., Siekkinen, M., Hirvisalo, V., and Ylä-Jääski, A., (2018).  
Latency and throughput characterization of convolutional neural networks for mobile computer vision.  
In: Proceedings of the 9th ACM Multimedia Systems Conference, 204–215.  
18. Mahendran, J. K., Barry, D. T., Nivedha, A. K., and Bhandarkar, S. M., (2021). Computer vision-based  
assistance system for the visually impaired using mobile edge artificial intelligence. In: Proceedings of  
the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2418–2427.  
19. Kaliappan, V. K., M. S. V., Shanmugasundaram, K., Ravikumar, L., and Hiremath, G. B., (2023).  
Performance Analysis of YOLOv8, RCNN, and SSD Object Detection Models for Precision Poultry  
Farming Management. In: 2023 IEEE 3rd International Conference on Applied Electromagnetics, Signal  
Processing, & Communication (AESPC), 1–6. doi: 10.1109/AESPC59761.2023.10389906.  
20. Sonu, B., Singh, A., and Sharma, A., (2024). A Comparative Study of YOLOv8, Faster R-CNN, and
SSD in Traffic Sign Detection with Consideration of GPS and Central Feedback. In: 3rd International  
Conference on Advances in Computing, Communication and Materials, ICACCM 2024, IEEE. doi:  
10.1109/ICACCM61117.2024.11059135.  
21. Wang, Y., Wang, K., Zhang, Z., Boydens, J., Pissoort, D., and Verbeke, M., (2024). Navigating the Waters  
of Object Detection: Evaluating the Robustness of Real-time Object Detection Models for Autonomous  
Surface Vehicles. In: Proceedings - 2024 IEEE Conference on Artificial Intelligence, CAI 2024, IEEE,  
985–992. doi: 10.1109/CAI59869.2024.00180.  
22. Ling, A. L. I., Xiang, G. Y., Bingi, K., Omar, M., and Ibrahim, R., (2024). Review of Machine Learning-  
Based Techniques for Detecting Specific and General Objects. In: IET Conference Proceedings, IET,  
119–124. doi: 10.1049/icp.2025.0244.  
23. Shobaki, W. A., and Milanova, M., (2025). A Comparative Study of YOLO, SSD, Faster R-CNN, and  
More for Optimized Eye-Gaze Writing. Sci, 7(2). doi: 10.3390/sci7020047.  
24. Dharma, A. S., Pardosi, C. N. S., and Silaen, Z. P., (2025). Comparative Performance of Yolov8 and SSD-  
mobilenet Algorithms for Road Damage Detection in Mobile Applications. sinkron, 9(3), 1159–1169.  
doi: 10.33395/sinkron.v9i3.15008.  
25. Yeong, D. J., Velasco-Hernandez, G., Barry, J., and Walsh, J., (2021). Sensor and sensor fusion  
technology in autonomous vehicles: A review. Sensors, 21(6), 2140.  
26. Nagy, M., and Lăzăroiu, G., (2022). Computer vision algorithms, remote sensing data fusion techniques,  
and mapping and navigation tools in the Industry 4.0-based Slovak automotive sector. Mathematics,  
10(19), 3543.  
27. Ghaffari, G., Tagaro Andersson, A., Hallberg, P., and Saremi, A., (2025). An assistive haptic-based  
obstacle avoidance system for individuals with profound visual impairment. Cogent Eng, 12(1),  
2560974.  
28. Căilean, A.-M., Avătămăniței, S.-A., Beguni, C., Zadobrischi, E., Dimian, M., and Popa, V., (2023).  
Visible light communications-based assistance system for the blind and visually impaired: design,  
implementation, and intensive experimental evaluation in a real-life situation. Sensors, 23(23), 9406.  
29. Patil, K., Jawadwala, Q., and Shu, F. C., (2018). Design and construction of electronic aid for visually  
impaired people. IEEE Trans Hum Mach Syst, 48(2), 172–182.  
30. Senjam, S. S., Manna, S., and Bascaran, C., (2021). Smartphones-based assistive technology:  
Accessibility features and apps for people with visual impairment, and its usage, challenges, and usability  
testing. Dove Medical Press Ltd. doi: 10.2147/OPTO.S336361.  
31. Wittich, W., Boie, N. R., and Jaiswal, A., (2023). Methodological Approaches to Obtaining Informed  
Consent when Conducting Research with Individuals with Deafblindness. Int J Qual Methods, 22. doi:  
10.1177/16094069231205176.  
32. Akter, T., Ahmed, T., Kapadia, A., and Swaminathan, M., (2022). Shared privacy concerns of the visually  
impaired and sighted bystanders with camera-based assistive technologies. ACM Transactions on  
Accessible Computing (TACCESS), 15(2), 1–33.  