International Journal of Research and Scientific Innovation (IJRSI)


AI-Powered Automated and Portable Device for Retinal Health Assessment

¹Sakthi Kaviya, ¹R. Praveen Kumar, ¹B. Santhosh, ²Dr. J. Sudhakar

¹UG Student, Department of Biomedical Engineering, Karpaga Vinayaga College of Engineering and Technology, Chengalpattu

²Associate Professor, Department of Biomedical Engineering, Karpaga Vinayaga College of Engineering and Technology, Chengalpattu

DOI: https://doi.org/10.51244/IJRSI.2025.12050048

Received: 14 May 2025; Accepted: 16 May 2025; Published: 02 June 2025

ABSTRACT

In recent years, advancements in Artificial Intelligence (AI) and deep learning have opened up new possibilities for automated, accurate, and faster detection of eye diseases, particularly glaucoma. This paper presents a smart, low-cost, and portable solution using a 20D ophthalmology lens attached to a smartphone via a PVC (polyvinyl chloride) pipe adapter. The device captures clear fundus images, which are then analysed using Convolutional Neural Networks (CNNs) and other deep learning models to detect early signs of retinal disease. This article describes a method for the early diagnosis and monitoring of glaucoma through non-invasive, smartphone-assisted fundus imaging. The system integrates a 20D lens with a smartphone camera to capture high-resolution images of the central retina as well as the peripheral retina up to the pars plana. These images are processed using advanced machine learning algorithms to detect signs of glaucoma, such as optic disc cupping and nerve fiber layer thinning, providing a cost-effective alternative to the conventional fundus camera. Glaucoma is one of the major causes of irreversible blindness across the world, especially in countries like India where early detection is often missed due to limited access to specialised eye care. Traditional methods of diagnosing glaucoma rely heavily on manual evaluation by ophthalmologists, which can be both time-consuming and subjective.

Keywords: smartphone, PVC pipe adapter, 20D ophthalmology lens, deep learning, retinal imaging.

INTRODUCTION

Our study highlights various AI-based architectural models such as autoencoders, attention networks, generative adversarial networks (GANs), and geometric deep learning, which are proving effective in retinal image classification and diagnosis [1]. In addition, we explored feature extraction techniques, including structural, statistical, and hybrid methods, to enhance diagnostic accuracy. The Smart Fundus system not only simplifies retinal imaging but also promotes teleophthalmology, enabling remote diagnosis in rural and underserved areas [2]. This paper also discusses the challenges faced in real-time implementation, such as dataset limitations, the need for multimodal integration, model transparency, and clinical acceptance. With further development and validation, the Smart Fundus device has the potential to transform eye care in India and globally by offering affordable, AI-powered, and user-friendly retinal screening that can be used by non-specialists with minimal training [3].

The integration of artificial intelligence (AI) into ophthalmology has revolutionized the way glaucoma detection and management are approached. The study "Diagnostic Performance of the Offline Medios Artificial Intelligence for Glaucoma Detection in a Rural Tele-Ophthalmology Setting" highlights the potential of AI to significantly improve access to eye care, particularly in underserved areas, and demonstrates the value of AI-based detection in remote and resource-constrained settings. By integrating AI with a portable, smartphone-based fundus camera, that study offers a promising approach to addressing the global burden of undiagnosed and untreated glaucoma [4]. The AI system exhibited impressive diagnostic accuracy, with high sensitivity and specificity in detecting referable glaucoma, facilitating early identification and timely intervention. This approach has the potential to alleviate the workload of ophthalmologists, improve access to eye care services, and reduce the risk of irreversible vision loss.

The adaptation of a PVC (polyvinyl chloride) pipe with a 20D ophthalmology lens can further enhance image quality and enable earlier diagnosis. Incorporating advanced AI algorithms, developing a user-friendly interface, and ensuring data privacy and security are crucial for widespread adoption [5]. Rigorous clinical validation is necessary to establish the reliability and accuracy of the system. By addressing these aspects, this project can contribute to the advancement of AI-powered eye care and make a significant impact on global eye health. Teleophthalmology represents a paradigm shift in eye care that expands the availability of ophthalmic services by removing geographical restrictions. Innovative strategies are required to provide comprehensive eye care for everyone given the global burden of eye diseases, including glaucoma, diabetic retinopathy, and age-related macular degeneration (AMD) [6]. By enabling remote consultations, screening, and diagnostics, teleophthalmology provides an answer and transforms the field of eye care. Particularly in impoverished and rural locations, it has the potential to improve patient outcomes, lower healthcare costs, and improve access to care.

Optical Coherence Tomography (OCT) represents the cutting edge of technical advancement in ophthalmology [7]. OCT uses low-coherence interferometry to provide high-resolution cross-sectional imaging of ocular structures. Its non-invasive nature, capacity to record intricate structural information, and real-time imaging capabilities make OCT a useful tool for the diagnosis and treatment of a variety of ocular disorders. OCT has been widely utilized to diagnose and track anterior segment abnormalities, glaucoma, and retinal diseases, offering important insights into the etiology and course of these conditions. Conventional OCT equipment, however, has its limits: it is not portable, is available only in dedicated rooms within hospital eye clinics and centers, and frequently requires pupil dilation and trained ophthalmic technicians for its operation, which can be inconvenient for patients. Furthermore, interpreting OCT scans can be difficult and time-consuming, necessitating image analysis experience. In an effort to get beyond these restrictions, OCT technology has recently advanced with home-monitoring devices such as the Notal Vision Home OCT (NVHO) [8]. These devices have demonstrated potential in improving patient adherence and assisting in the early identification of disease progression. Additionally, developments in remote OCT technology have made it possible to obtain high-quality, real-time imaging from a distance [9]. Because they do not require pupil dilation to obtain high-resolution pictures, handheld, portable OCT devices like the Bioptigen Envisu C-Class are well suited to teleophthalmology applications. Furthermore, cloud-based platforms have been created to make it easier for OCT data to be securely transmitted, stored, and analyzed, allowing healthcare providers to collaborate and consult remotely [10].

PROPOSED METHODOLOGY

A smartphone camera is interfaced with a 20D ophthalmic lens using a PVC pipe adapter engineered to maintain consistent focal length, alignment, and ergonomic usability. This low-cost configuration replicates the core optics of traditional fundus cameras, enabling acquisition of posterior pole images with adequate resolution to visualize the optic disc, cup, blood vessels, and retinal nerve fiber layer (RNFL).
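The key optical parameter is the focal length of the 20D condensing lens, f = 1000/20 = 50 mm, since the lens forms an aerial image of the fundus near its focal plane. As a rough illustration only (not the authors' exact adapter dimensions), the Python sketch below computes the focal length and an assumed tube length; the phone's near-focus distance is a hypothetical placeholder.

```python
# Minimal sketch (illustrative, not the paper's design): geometry used to size
# the PVC adapter. Assumes the 20D condensing lens forms an aerial image of the
# fundus near its focal plane, so the tube length is chosen around the lens
# focal length plus the smartphone camera's minimum focusing distance.

def focal_length_mm(diopters: float) -> float:
    """Focal length of a thin lens, f = 1000 / D (mm)."""
    return 1000.0 / diopters

LENS_POWER_D = 20.0              # 20D ophthalmology lens
PHONE_MIN_FOCUS_MM = 80.0        # assumed near-focus limit of the phone camera

f = focal_length_mm(LENS_POWER_D)        # 50 mm aerial-image distance
tube_length = f + PHONE_MIN_FOCUS_MM     # illustrative adapter length

print(f"20D lens focal length: {f:.0f} mm")
print(f"Approximate adapter tube length: {tube_length:.0f} mm")
```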

Fig. 2.1 Block diagram of the proposed methodology

Systematic Approach to App Development

Image Acquisition Protocol

Images are captured under controlled illumination using built-in or ring LED lighting to ensure a proper red reflex and visualization of optic structures. The device is stabilized either through handheld alignment or a customized 3D-printed mount. The captured fundus images are stored in standardized formats (e.g., PNG or JPEG) with metadata. These steps help reduce inter-image variability and enhance relevant anatomical features such as the cup-to-disc ratio (CDR), neuroretinal rim, and peripapillary atrophy zones.
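One simple way to keep acquisition metadata with each capture, sketched below in Python, is a JSON sidecar file stored next to the image. The field names and example values are assumptions for illustration, not the authors' actual schema.

```python
# Illustrative sketch (not the authors' implementation): store a captured
# fundus image alongside a JSON metadata sidecar so acquisition conditions
# can be traced later.
import json
from datetime import datetime
from pathlib import Path

def save_capture_metadata(image_path: str, eye: str, illumination: str) -> Path:
    """Write a JSON sidecar describing one fundus capture."""
    meta = {
        "image": Path(image_path).name,
        "eye": eye,                        # "OD" (right) or "OS" (left)
        "illumination": illumination,      # e.g. "ring LED"
        "captured_at": datetime.now().isoformat(timespec="seconds"),
        "device": "smartphone + 20D lens + PVC adapter",
    }
    sidecar = Path(image_path).with_suffix(".json")
    sidecar.write_text(json.dumps(meta, indent=2))
    return sidecar

# Example: save_capture_metadata("patient01_OD.jpg", eye="OD", illumination="ring LED")
```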

Deep Learning Model Architecture

The AI diagnostic engine comprises layered Convolutional Neural Networks (CNNs), specifically architectures such as U-Net, ResNet50, MobileNetV2, and DenseNet121, pre-trained on large-scale ophthalmic image datasets such as RIM-ONE, DRISHTI-GS, and ORIGA. Training includes both classification (normal vs. glaucomatous) and segmentation (optic cup and disc boundaries) tasks.
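As an illustration of the classification branch, the sketch below (assuming TensorFlow/Keras) fine-tunes a MobileNetV2 backbone for a binary normal-versus-glaucoma decision. The input size, frozen backbone, and optimizer settings are assumptions rather than the paper's exact training configuration.

```python
# Minimal sketch, assuming TensorFlow/Keras: MobileNetV2 backbone fine-tuned
# for binary classification (normal vs. glaucomatous fundus).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_glaucoma_classifier(input_shape=(224, 224, 3)) -> tf.keras.Model:
    base = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights="imagenet")
    base.trainable = False                      # freeze backbone for transfer learning
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),  # glaucoma probability
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model

model = build_glaucoma_classifier()
model.summary()
# Training would use labeled fundus images (e.g. from RIM-ONE or DRISHTI-GS)
# loaded with tf.keras.utils.image_dataset_from_directory.
```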

Integration into Mobile/Cloud-based Platform

The trained model is embedded into a mobile-compatible interface built using frameworks such as TensorFlow Lite, Flask API, or Streamlit. Diagnostic results are displayed in real time. User interaction data and confirmed diagnostic labels (from clinicians) are logged for future model retraining. This adaptive feedback loop helps maintain long-term system accuracy across diverse populations.
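A minimal sketch of the on-device deployment route named above: the trained Keras model is converted to a TensorFlow Lite file that the mobile app can run locally. The model file names are hypothetical.

```python
# Hedged sketch: export the trained classifier to TensorFlow Lite for the
# mobile app (file names are assumptions).
import tensorflow as tf

keras_model = tf.keras.models.load_model("glaucoma_classifier.h5")

converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # weight quantization for phones
tflite_model = converter.convert()

with open("glaucoma_classifier.tflite", "wb") as f:
    f.write(tflite_model)

# The .tflite file can then be bundled with the Android app and run with the
# TensorFlow Lite Interpreter; a Flask or Streamlit service is an alternative
# for cloud-side inference.
```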

RESULTS AND DISCUSSION

The development of the “Smart Fundus” device achieved significant milestones in retinal imaging. By integrating a 20D lens with a PVC (polyvinyl chloride) pipe adapter, the device offers a cost-effective and portable solution for capturing high-quality retinal images. Advanced image processing techniques, including Gaussian and median filtering for noise removal, CLAHE for contrast enhancement, and illumination correction algorithms, significantly improved the clarity and diagnostic accuracy of the retinal images. Additionally, innovative segmentation methods for identifying key retinal features such as blood vessels, the optic disc, macula, and fovea demonstrated reliable performance in isolating and analyzing pathological markers. These advancements ensured the device’s functionality in diverse lighting conditions and user environments, making it suitable for both clinical settings and at-home monitoring.
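The enhancement steps named above can be approximated with a few OpenCV calls. The sketch below is illustrative only; the kernel sizes and CLAHE parameters are assumptions, not the values used in this work.

```python
# Illustrative preprocessing sketch using OpenCV: green-channel extraction,
# median/Gaussian denoising, and CLAHE contrast enhancement.
import cv2
import numpy as np

def preprocess_fundus(path: str) -> np.ndarray:
    bgr = cv2.imread(path)
    green = bgr[:, :, 1]                         # green channel: best vessel contrast
    den = cv2.medianBlur(green, 5)               # remove salt-and-pepper noise
    den = cv2.GaussianBlur(den, (5, 5), 0)       # smooth sensor noise
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(den)                  # local contrast enhancement
    return enhanced

# Example: cv2.imwrite("enhanced.png", preprocess_fundus("fundus.jpg"))
```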

Step 1: App development using the Dart programming language in Android Studio

 

Step 2: Android application project view structure

Fig. 3.1 Real-time evaluation prototype kit

Fig. 3.2 Output image detection by the Smart Fundus device

The integration of machine learning models, particularly convolutional neural networks (CNNs), elevated the diagnostic capabilities of the Smart Fundus device. Training the AI on publicly available datasets like DRIVE, STARE, and DIARETDB1 resulted in over 90% accuracy in detecting conditions such as glaucoma, diabetic retinopathy, and age-related macular degeneration (AMD). The AI algorithms effectively segmented retinal images and classified features to differentiate healthy eyes from those with retinal diseases. Real-time feedback provided by the AI models significantly streamlined the diagnostic process, allowing ophthalmologists and users to access immediate results. This feature enhances the device’s utility as a telemedicine tool, reducing reliance on specialized equipment and facilitating early detection in underserved areas.
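For context on how such figures are typically derived, the sketch below computes accuracy, sensitivity, and specificity from model predictions on a held-out test set. The threshold and the dummy data are illustrative; they are not taken from the paper's evaluation.

```python
# Hedged sketch: screening metrics from predicted probabilities and labels.
import numpy as np

def screening_metrics(y_true: np.ndarray, y_prob: np.ndarray, thr: float = 0.5):
    y_pred = (y_prob >= thr).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,   # true-positive rate
        "specificity": tn / (tn + fp) if tn + fp else 0.0,   # true-negative rate
    }

# Example with dummy labels/probabilities:
print(screening_metrics(np.array([1, 0, 1, 0]), np.array([0.9, 0.2, 0.7, 0.4])))
```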

The proposed Smart Fundus system represents a significant advancement in affordable and accessible eye care diagnostics by combining low-cost optical hardware with robust deep learning algorithms. Through the integration of a 20D ophthalmology lens mounted via a PVC adapter onto a smartphone, this portable device enables high-quality retinal image acquisition without the need for bulky or expensive equipment. The captured fundus images are processed through a carefully designed deep learning pipeline using convolutional neural networks (CNNs) for the segmentation of optic disc and cup, and for classification of glaucomatous features. The system demonstrates promising diagnostic accuracy, with reliable performance across publicly available datasets. Furthermore, the mobile interface and optional cloud-based reporting support remote and real-time diagnosis, making it a valuable tool for rural teleophthalmology and community health screening. By bridging the gap between advanced AI-driven diagnostics and low-resource healthcare environments, the Smart Fundus model holds strong potential for scalable deployment in public health initiatives and vision care programs.

CONCLUSION

This methodology embodies a groundbreaking approach to retinal health diagnostics, combining advanced technologies like AI-powered image processing, deep learning algorithms, and PVC (polyvinyl chloride) pipe components to deliver an accessible, portable, and cost-effective solution for glaucoma detection and general retinal health assessment. By addressing key challenges such as early-stage detection, lack of accessibility in remote areas, and the need for affordable diagnostic devices, this initiative significantly contributes to advancing ophthalmic care. The integration of a 20D lens and PVC pipe adapter ensures precise, high-quality imaging while leveraging existing smartphone technology for capturing retinal images. This design enhances convenience and adaptability for both patients and healthcare providers. The device’s automated focusing system and noise-reduction techniques, like Gaussian and median filtering, ensure that image clarity and diagnostic accuracy are prioritized, even in non-clinical settings.

The application of advanced AI models, including convolutional neural networks (CNNs), facilitates robust feature extraction and segmentation for detecting critical retinal features such as the optic disc, blood vessels, and lesions. Image pre-processing steps, including green channel extraction, histogram equalization, and illumination correction, further optimize the quality of captured images, enabling more precise diagnoses. Machine learning techniques like K-means clustering and intensity thresholding enhance the detection of specific features such as the optic disc, macula, and fovea, while lesion detection algorithms accurately identify abnormalities like microaneurysms and hemorrhages. These capabilities allow for the early detection of conditions such as diabetic retinopathy, glaucoma, and age-related macular degeneration (AMD), reducing the likelihood of severe vision loss when diagnosed and treated promptly.
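As a concrete illustration of the K-means and intensity-thresholding idea mentioned above, the sketch below estimates an approximate optic disc centre by clustering green-channel intensities and keeping the brightest cluster. The kernel size, cluster count, and brightest-cluster assumption are simplifications, not the exact segmentation pipeline used in this work.

```python
# Hedged sketch: coarse optic disc localization. The disc is usually the
# brightest region of the green channel, so K-means on intensities followed by
# selecting the brightest cluster gives an approximate disc mask.
import cv2
import numpy as np

def locate_optic_disc(path: str, k: int = 3) -> tuple[int, int]:
    green = cv2.imread(path)[:, :, 1].astype(np.float32)
    blurred = cv2.GaussianBlur(green, (25, 25), 0)

    # K-means on pixel intensities; the brightest cluster approximates the disc.
    samples = blurred.reshape(-1, 1)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(samples, k, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    disc_mask = labels.reshape(blurred.shape) == int(np.argmax(centers))

    ys, xs = np.nonzero(disc_mask)
    return int(xs.mean()), int(ys.mean())        # approximate disc centre (x, y)

# Example: cx, cy = locate_optic_disc("fundus.jpg")
```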

FUTURE ENHANCEMENT

Future enhancements for the “Smart Fundus” project could focus on making the system even more advanced, accessible, and effective for a wider range of healthcare applications. One significant improvement could involve the integration of additional diagnostic features to detect multiple eye diseases, such as diabetic retinopathy, macular degeneration, and cataracts, using the same platform. This could be achieved by incorporating more advanced machine learning models capable of analyzing subtle variations in retinal images. Another enhancement could be the use of cloud-based storage systems to handle larger volumes of data, making it easier for healthcare providers to store and access patient records securely from any location. To improve the system’s reach in remote areas, the hardware design could be made more compact and affordable, ensuring it can be used by non-specialized healthcare workers after basic training. In addition to detecting eye diseases, the system could expand its capabilities to monitor general health parameters, such as blood pressure and glucose levels, by integrating with wearable devices. This would allow for continuous health monitoring and early detection of systemic diseases. Enhancing the user interface to be more intuitive and available in multiple languages could make the technology accessible to a broader audience, including rural populations and non-English speakers. The inclusion of augmented reality (AR) or virtual reality (VR) features for doctors could provide real-time overlays of critical information during diagnostics, making analysis quicker and more accurate.

REFERENCES

  1. Abramoff MD, Lavin PT, Birch M, Shah N, Folk JC. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digital Medicine. 2018;1(1):39.
  2. Ting DSW, Pasquale LR, Peng L, Campbell JP, Lee AY, Raman R, Tan GSW, Schmetterer L, Keane PA, Wong TY. Artificial intelligence and deep learning in ophthalmology. British Journal of Ophthalmology. 2019;103(2):167–175.
  3. Asaoka R, Murata H, Iwase A, Araie M. Detecting preperimetric glaucoma with standard automated perimetry using a deep learning classifier. Ophthalmology. 2016;123(9):1974–1980.
  4. Burlina PM, Joshi N, Pekala M, Pacheco KD, Freund DE, Bressler NM. Automated grading of age-related macular degeneration from color fundus images using deep convolutional neural networks. JAMA Ophthalmology. 2017;135(11):1170–1176.
  5. De Fauw J, Ledsam JR, Romera-Paredes B, Nikolov S, Tomasev N, Blackwell S, Askham H, Glorot X, O’Donoghue B, Visentin D, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nature Medicine. 2018;24(9):1342–1350.
  6. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, Venugopalan S, Widner K, Madams T, Cuadros J, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316(22):2402–2410.
  7. Mookiah MRK, Acharya UR, Lim CM, Petznick A, Suri JS. Data mining technique for automated diagnosis of glaucoma using higher order spectra and wavelet energy features. Knowledge-Based Systems. 2012;33:73–82.
  8. Varadarajan AV, Poplin R, Blumer K, Angermueller C, Ledsam J, Chou K, Corrado GS, Peng L, Webster DR. Deep learning for predicting refractive error from retinal fundus images. Investigative Ophthalmology & Visual Science. 2018;59(7):2861–2868.
  9. Medeiros FA, Lisboa R, Weinreb RN, Liebmann JM, Girkin CA, Zangwill LM. Retinal nerve fiber layer thickness change detected with different strategies for glaucoma progression. American Journal of Ophthalmology. 2011;151(4):719–727.
  10. Weinreb RN, Aung T, Medeiros FA. The pathophysiology and treatment of glaucoma: a review. JAMA. 2014;311(18):1901–1911.
