Raspberry Pi-Based Driver Tiredness Monitoring and Alerting System for Truck
Prof. Shital Amar Patil, Miss. Prof. S. H. More
Dept. of E&TC Engg., Tatyasaheb Kore Institute of Engineering & Technology, Warananagar
DOI: https://dx.doi.org/10.47772/IJRISS.2025.908000426
Received: 07 August 2025; Accepted: 15 August 2025; Published: 15 September 2025
ABSTRACT
This paper presents a real-time driver tiredness detection and alerting system using Raspberry Pi and computer vision techniques. The system detects the driver’s face from live video feed, identifies ocular regions, and monitors eye closure patterns. If signs of fatigue are detected, an audible alert is triggered. The system consists of three main modules: face detection, eye detection, and drowsiness classification. Haar Cascade classifiers are used for initial detection, followed by continuous tracking. The proposed method improves transportation safety by reducing accidents caused by fatigued driving, offering a cost-effective, non-intrusive solution.
Keywords: Raspberry Pi, OpenCV, Haar Cascade, Drowsiness Detection, Driver Safety
INTRODUCTION
Drowsy driving is a significant contributor to fatal road accidents worldwide. Recent studies estimate that approximately 21%—or one in every five—road accidents involve driver fatigue. The Global Status Report on Road Safety (2015), based on data from 180 countries, highlights a troubling upward trend in sleep-related traffic fatalities [1]. These statistics underscore the severe global consequences of driver fatigue, which not only causes loss of life but also inflicts lasting harm on families and communities.
The primary causes of such incidents include prolonged driving hours, alcohol or substance impairment, and general driver negligence. In response, real-time driver fatigue monitoring technologies have emerged as a proactive solution to this growing problem. By continuously assessing the driver’s state and issuing timely alerts when signs of fatigue are detected, these systems have the potential to prevent accidents before they occur.
Previous research identifies three main approaches to detecting drowsiness: physiological, visual, and performance-based measurements [2]. Among these, physiological and visual indicators have proven most reliable. Physiological methods—such as monitoring heart rate, pulse, or brain wave activity—can achieve high accuracy but require physical contact with the driver, often involving electrodes or wearable devices. This intrusiveness can affect comfort and alter natural driving behavior.
In contrast, ocular-based visual techniques use camera monitoring to track eye movements, offering a non-invasive alternative. Such methods are particularly well-suited for practical automotive applications, as they can detect patterns such as prolonged eyelid closure or abnormal blinking frequency.
The system proposed in this study employs computer vision algorithms, implemented on a Raspberry Pi using OpenCV, to monitor the driver’s eye state in real time. By measuring the duration of eye closures, it can detect early signs of fatigue and activate an in-vehicle alarm. In addition to developing this detection framework, the study evaluates how facial and eye detection algorithms perform under various conditions, with the aim of advancing traffic safety through continuous assessment of driver alertness.
LITERATURE SURVEY
Improving transportation safety, particularly in long-distance and interprovincial travel, requires effective monitoring of driver alertness. Recent studies have investigated various approaches, ranging from intrusive physiological measurements to non-invasive computer vision systems.
EEG-Based Approaches – Zhou et al. (2023) proposed an interpretability-guided channel selection method using a teacher–student network for EEG-based drowsiness detection. The teacher model was trained on full-head EEG data, while the student model used a reduced set of channels identified through class-activation mapping. This reduced hardware complexity and improved cross-subject generalizability. Strengths: lower electrode count, smaller model size. Limitations: still requires EEG equipment, which can interfere with natural driving behavior.
Facial Video and Deep Learning – Delwar et al. (2025) developed a convolutional neural network (CNN)-based model for real-time eye and facial landmark detection. Their method achieved high accuracy on benchmark datasets and offered a non-intrusive solution. Strengths: real-time operation, effective use of facial expressions. Limitations: performance degraded in poor lighting or when the driver’s face was partially occluded.
Hybrid CNN + OpenCV Systems – Sengar et al. (2024) introduced VigilEye, a real-time drowsiness detection framework combining OpenCV-based facial landmark extraction with CNN classification. The system demonstrated strong accuracy, sensitivity, and specificity across multiple datasets. Strengths: portable, open-source, and deployable in real vehicles. Limitations: susceptible to accuracy loss in challenging lighting conditions.
Physiological Signals from Skin Conductance (SC) – A 2023 study in Sensors explored SC signals measured via steering-wheel wearables. When combined with machine learning algorithms (SVM, Random Forest), SC correlated strongly with drowsiness. Hybrid setups combining SC with ECG, EEG, or facial analysis achieved accuracies exceeding 90%. Strengths: less invasive than EEG, potential integration into existing vehicle controls. Limitations: motion artifacts and reduced reliability in single-sensor setups.
Comprehensive Reviews on Prediction and Detection – A 2023 review in Transport Research categorized drowsiness detection into behavioral, physiological, vehicle-based, and hybrid approaches. It found multimodal systems—combining physiological and vehicle-based data—to be more robust than single-mode systems, with the added benefit of early fatigue prediction. Limitations: many advanced systems remain tested only in simulator environments.
Low-Cost Accident Detection and Alerting – Mulla and Bidwai [18] presented an economical vehicle monitoring system capable of accident spot identification and alerting. While primarily focused on crash detection, the concept could be extended to fatigue monitoring in low-resource settings.
Research Gaps and Opportunities
- Field Validation: Most systems are tested in controlled environments; few undergo real-world trials under varied driving conditions.
- Non-Invasive Physiological Sensors: Emerging technologies, such as ear-EEG and contactless SC via seatbelt sensors, offer deployment potential without affecting driver comfort.
- Personalization and Explainability: Adapting models to individual driver behavior can improve acceptance and reduce false alarms.
- System Integration: Combining facial video, SC wearables, and hybrid sensing may yield lightweight, real-time, multimodal systems ready for field deployment.
Overall, recent research trends point toward multimodal, lightweight, and non-intrusive drowsiness detection systems that balance accuracy, comfort, and real-world applicability. While EEG-based methods deliver exceptional accuracy, facial-video and SC-based hybrid approaches appear most promising for practical automotive integration. Future research should focus on early fatigue prediction, robust multimodal fusion, and deployment-ready solutions.
METHODOLOGY
Face Recognition
The proposed system employs the Viola–Jones algorithm (Paul Viola and Michael Jones, 2001) for face detection, implemented using the OpenCV library. This method is widely recognized for its ability to perform real-time object detection with high accuracy. Although primarily designed for face detection, it can be adapted for other object recognition tasks. The algorithm consists of four key components:
- Haar Features – Rectangular features that detect visual patterns in an image [7].
- Integral Image – A data structure enabling rapid computation of Haar features.
- AdaBoost – A machine learning algorithm that selects the most relevant features to improve detection efficiency.
- Cascade Classifier – A multi-stage classifier that quickly discards non-face regions, reducing processing time while maintaining accuracy.
Eye Detection
The first step in locating the eyes within the detected face is binarization, which converts a grayscale image into a binary format. Each pixel is assigned a value of either 1 (bright pixel) or 0 (dark pixel). This simplifies subsequent image processing tasks. Eye regions are then localized using the Hough Circle Transform, which identifies circular patterns corresponding to the pupil and iris.
System Integration and Workflow
The system’s functional flow is illustrated in Figure 1. A Raspberry Pi camera module continuously captures the driver’s face, which is then processed using OpenCV. The workflow is as follows:
- Face Detection: Haar Cascade classifier (haarcascade_frontalface_alt2.xml) identifies the face region in each frame.
- Eye Detection: Hough Circle Transform locates the eyes within the face.
- Eye State Analysis: The system determines whether the eyes are open or closed in each frame.
- Fatigue Scoring: The number of open eyes in each frame is stored in Eyes_total. A fatigue counter increments for each frame where both eyes are closed. If the counter reaches a threshold (≥4 consecutive frames), drowsiness is detected.
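The fatigue-scoring step above can be sketched in plain Python. The threshold of 4 consecutive closed-eye frames and the `Eyes_total` store come from the text; the class and method names are illustrative.

```python
FATIGUE_THRESHOLD = 4  # consecutive both-eyes-closed frames (from the text)

class FatigueScorer:
    """Frame-by-frame fatigue scoring for the workflow described above."""

    def __init__(self, threshold=FATIGUE_THRESHOLD):
        self.threshold = threshold
        self.eyes_total = []  # per-frame count of open eyes (Eyes_total)
        self.counter = 0      # consecutive frames with both eyes closed

    def update(self, open_eyes):
        """Feed the open-eye count for one frame; return True if drowsy."""
        self.eyes_total.append(open_eyes)
        if open_eyes == 0:
            self.counter += 1      # both eyes closed this frame
        else:
            self.counter = 0       # a blink or open eyes reset the counter
        return self.counter >= self.threshold
```

Because a normal blink lasts only a frame or two, it resets the counter before the threshold is reached, which is how the system distinguishes blinking from drowsiness.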
Alert Mechanism
Once the drowsiness condition is met, the Raspberry Pi activates a buzzer to alert the driver. The alert is both auditory (buzzer) and visual (on-screen “Drowsiness Alert” message), ensuring immediate driver awareness [9].
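The alert step might look like the following sketch, with the hardware abstracted behind callables. On the Pi, `buzzer_on`/`buzzer_off` would typically toggle a GPIO pin (e.g. via the RPi.GPIO library) and `show_message` would draw the on-screen text (e.g. via `cv2.putText`); the two-second default duration is an assumption, not a value from the paper.

```python
import time

def trigger_alert(buzzer_on, buzzer_off, show_message, duration=2.0):
    """Fire the combined visual + audible drowsiness alert.

    buzzer_on / buzzer_off: callables that drive the buzzer hardware
    show_message: callable that renders the on-screen alert text
    duration: how long the buzzer sounds (illustrative default)
    """
    show_message("Drowsiness Alert")  # visual alert on the display
    buzzer_on()                       # audible alert via the buzzer
    time.sleep(duration)              # keep the buzzer sounding
    buzzer_off()
```

Keeping the hardware behind callables also makes the alert logic testable off-device.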
System Behavior Under Different Conditions
- Normal Driving – Eye blinks reset the fatigue counter, preventing false alarms.
- Prolonged Eye Closure – Counter exceeds threshold → alarm triggered.
- Low Lighting – The system’s accuracy can be enhanced by infrared illumination.
- Head Movements – Classifier tuning minimizes false detections due to rapid head turns.
This methodology ensures real-time performance by combining efficient detection algorithms with lightweight processing, making it suitable for deployment in trucks and other commercial vehicles.
Fig. 1. Block Diagram of Proposed System
RESULTS
Setup Raspberry Pi
- Before the Raspberry Pi can be used, the operating system must be written to an SD card.
- Refer to Installing Operating System Image on SD card for the procedure; in this project, the Raspbian operating system was installed on the card.
- With the OS image written, the SD card and Raspberry Pi board are ready for first boot.
- A computer monitor or digital display is required for initial use; the Raspberry Pi connects directly to it via an HDMI cable.
- After connecting the display and powering on the Raspberry Pi, a black command window appears, asking for the login and password.
Fig. 2. Actual code displayed on the screen of raspberry pi 4B
The actual code used in this project is shown in the figure above. The code compiles without any error messages. When it is run, a small dialog box opens that displays the current status of the camera module, in which the user can observe his or her own image, as shown in Figure 3 below.
Fig. 3. Facial detection screen of raspberry pi 4B
The actual facial detection screen is shown in the figure above. The detected face outline can be observed, along with the landmarks used for detecting the eyes and the lips. The detector works reliably and correctly tracks facial expressions.
Fig. 4. Drowsiness detection screen of raspberry pi 4B
Figure 4 shows the screen that detects the drowsiness of the driver. When the person closes her eyes, the message "Drowsiness Alert" is displayed on the screen, with the EAR value shown simultaneously.
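The EAR (eye aspect ratio) value shown on this screen is conventionally computed from six landmarks around each eye. The methodology section does not spell the formula out, so the following is a sketch of the standard definition (Soukupová and Čech, 2016): EAR = (|p2 - p6| + |p3 - p5|) / (2 |p1 - p4|), which stays roughly constant while the eye is open and collapses toward zero when it closes.

```python
import math

def eye_aspect_ratio(eye):
    """Eye aspect ratio from six (x, y) landmarks p1..p6 around one eye.

    p1 and p4 are the horizontal eye corners; p2/p6 and p3/p5 are the
    upper/lower lid points. EAR drops toward 0 as the eye closes.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))
```

Comparing the per-frame EAR against a fixed cutoff (often around 0.2 to 0.25 in the literature) gives the open/closed decision that feeds the fatigue counter.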
Fig. 5. Yawn detection screen of raspberry pi 4B
Figure 5 shows the yawn detection screen of the Raspberry Pi 4B. Once the user yawns, a message is displayed on the screen indicating that the user is feeling sleepy and a yawn has been detected.
Fig. 8. Car data analysis
Figure No. 8 illustrates the vehicle’s movement parameters—speed, direction angle, braking torque, and status—under different driving conditions recorded during system testing.
- Speed (Blue Bars): The vehicle’s speed varies across trials, ranging from 85 km/h to 101 km/h under normal conditions, with slight fluctuations when the driver is alert. In trial 5, the speed drops to 85 km/h due to the activation of the drowsiness alert and automatic braking sequence.
- Direction Angle (Orange Bars): This represents the steering deviation from a neutral position. Minor deviations are observed in most trials, except trial 5, where a sudden 20° steering change indicates driver inattention, triggering corrective measures.
- Braking Torque (Gray Bars): Braking torque values remain at zero during normal operation, indicating no emergency intervention. However, in trials 4 and 5, torque values rise to 155 Nm and 160 Nm respectively, corresponding to emergency braking events initiated when drowsiness was detected.
- Status (Yellow Bars): This is a binary indicator (0 = normal, 1 = alert condition). Status remains “0” throughout most trials, except trial 5, where it switches to “1” due to a detected fatigue event.
System Behavior Interpretation
During normal driving (trials 1–3 and 6–7), the system detects no fatigue, and all parameters remain within safe operating limits. In contrast, trial 5 shows a significant change in both steering angle and braking torque, triggered by the system’s fatigue detection module. The combination of increased braking torque and status change to “1” demonstrates successful activation of the alert mechanism and autonomous intervention to maintain vehicle safety.
This figure confirms that the system can detect driver drowsiness in real time and respond appropriately by applying braking force and maintaining lane stability, thus reducing the risk of accidents.
Every detected condition is displayed on the screen and simultaneously announced through the speaker attached to the system. In this way the entire system operates.
Future Scope
The proposed Raspberry Pi-based driver fatigue monitoring and alerting system is a promising starting point for improving road safety in the transportation industry. Nonetheless, there are numerous opportunities to enhance and expand the system and to integrate it into more comprehensive intelligent transportation frameworks. Future improvements could include the following:
Combining Advanced AI Models
By examining real-time facial features, eye movements, and head posture, future iterations of the system can integrate deep learning models (such as CNN and LSTM) for more precise drowsiness detection. Large datasets can be used to train these models, and they can be tuned to function well on edge devices.
Fusion of Multimodal Sensors
Camera-based monitoring may be the mainstay of current systems. Nevertheless, merging information from several sensors, including accelerometers, steering-behavior sensors, EEG, and heart-rate sensors, could make fatigue detection more robust and reduce false alarms.
Real-Time Processing and Energy Optimization
Due to the Raspberry Pi's limited processing capability, future work can focus on optimizing algorithm execution time and energy usage. Real-time performance might be enhanced by evaluating dedicated AI accelerators, such as the Google Coral and the NVIDIA Jetson Nano.
Mobile Apps and Voice-Based Communication
A companion smartphone application that would allow drivers to get feedback, alarm logs, and suggestions may be developed. Voice-based interaction might also be used to more organically communicate alerts.
CONCLUSION
This real-time drowsiness detection system tracks the driver's eyes and checks for exhaustion, identifying tiredness quickly. The system can distinguish drowsiness from a typical blink, which helps keep the driver from dozing off while operating a motor vehicle. The technology can be further enhanced and applied commercially in the automobile industry. The information gathered from the captured images helps the system determine the state of drowsiness. The real-time system triggers a warning as soon as a drowsy condition is detected. Implementing such a system in automobiles can lower the risk of accidents caused by tiredness.
REFERENCES
- Alvarez Oviedo A, Mamani Villanueva JF, Echaiz Espinoza GA, Villanueva JMM, Salazar AO, Villarreal ERL. Design of a System for Driver Drowsiness Detection and Seat Belt Monitoring Using Raspberry Pi 4 and Arduino Nano. Designs. 2025; 9(1):11. https://doi.org/10.3390/designs9010011.
- Priyanka, Priyanka, and Jashanpreet Kaur. “Ant Colony Optimization Based Routing in IoT for Healthcare Services.” 2018 Second International Conference on Intelligent Computing and Control Systems (ICICCS). IEEE, 2018.
- Jaiswal, Kavita, et al. “IoT-cloud based framework for patient’s data collection in smart healthcare system using raspberry-pi.” 2017 International Conference on Electrical and Computing Technologies and Applications (ICECTA). IEEE, 2017.
- Lavanya, S., G. Lavanya, and J. Divyabharathi. “Remote prescription and I-Home healthcare based on IoT.” 2017 International Conference on Innovations in Green Energy and Healthcare Technologies (IGEHT). IEEE, 2017.
- Mumtaj, S. Y., and A. Umamakeswari. “Neuro fuzzy based healthcare system using iot.” 2017 International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS). IEEE, 2017.
- Subasi, Abdulhamit, et al. “IoT based mobile healthcare system for human activity recognition.” 2018 15th Learning and Technology Conference (L&T). IEEE, 2018.
- Murali Subramaniyam, Deep Singh, Dong, “Recent developments on driver’s health monitoring and comfort enhancement through IoT”, 2nd International conference on Advances in Mechanical Engineering (ICAME 2018) IOP Conf. Series: Materials Science and Engineering 402 (2018) 012064, https://doi.org/10.1088/1757-899X/402/1/012064.
- Velino J. Gonzalez, Josiah M. Wong, Emily M. Thomas, Alec Kerrigan, Lauren Hastings, Andres Posadas, Kevin Negy, Annie S. Wu, Santiago Ontañon, Yi-Ching Lee, Flaura K. Winston, “Detection of driver health condition by monitoring driving behavior through machine learning from observation”, Expert Systems with Applications, Volume 199, 2022, 117167, ISSN 0957-4174, https://doi.org/10.1016/j.eswa.2022.117167.
- Andres E. Campos-Ferreira, Jorge de J. Lozoya-Santos, Juan C. Tudon-Martinez, Ricardo A. Ramirez Mendoza, Adriana Vargas-Martínez, Ruben Morales-Menendez and Diego Lozano “Vehicle and Driver Monitoring System Using On-Board and Remote Sensors”, Sensors 2023, 23, 814. https://doi.org/10.3390/s23020814, https://www.mdpi.com/journal/sensors.
- Tianyue Zheng, Zhe Chen, Chao Cai, Jun Luo, Xu Zhang. "V2iFi: In-Vehicle Vital Sign Monitoring via Compact RF Sensing." 2021. arXiv: Signal Processing, doi: 10.1145/3397321.
- "Driver Vital Signs Monitoring Using Millimeter Wave Radio." IEEE Internet of Things Journal, 2022. doi: 10.1109/jiot.2021.3128548.
- Hiroaki Hayashi, Mitsuhiro Kamezaki, Shigeki Sugano. "Toward Health-Related Accident Prevention: Symptom Detection and Intervention Based on Driver Monitoring and Verbal Interaction." 2021. doi: 10.1109/OJITS.2021.3102125.
- Fayssal Hamza Cherif, Lotfi Hamza Cherif, Mohammed Benabdellah, Georges Nassar. "Monitoring driver health status in real time." Review of Scientific Instruments, 2020. doi: 10.1063/1.5098308.
- Rory Coyne, Michelle Hanlon, Alan F Smeaton, Peter Corcoran, Jane C Walsh, Understanding drivers’ perspectives on the use of driver monitoring systems during automated driving: Findings from a qualitative focus group study, Transportation Research Part F: Traffic Psychology and Behaviour, Volume 105, 2024, Pages 321-335, ISSN 1369-8478, https://doi.org/10.1016/j.trf.2024.07.015.
- Natalie Watson-Brown, Verity Truelove, Teresa Senserrick, Self-Regulating compliance to enhance safe driving behaviours, Transportation Research Part F: Traffic Psychology and Behaviour, Volume 105, 2024, Pages 437-453, ISSN 1369-8478, https://doi.org/10.1016/j.trf.2024.07.021.
- N. Stephens, B. Collette, A. Hidalgo-Munoz, A. Fort, M. Evennou, C. Jallais, Help-seeking for driving anxiety: Who seeks help and how beneficial is this perceived to be?, Transportation Research Part F: Traffic Psychology and Behaviour, Volume 105, 2024, Pages 182-195, ISSN 1369-8478, https://doi.org/10.1016/j.trf.2024.07.003.
- Shailaja Sanjay Mohite, Dr. Uttam D Kolekar, Mr. Juber Shaphi Mulla, Ms. Santoshi Bhakte, Prof. Priya Shinde, Patil Jaydip, Interference management and power scheduling in femtocell networks with the optimized power scheduling BiLSTM, Computers and Electrical Engineering, Volume 119, Part A, 2024, 109487, ISSN 0045-7906, https://doi.org/10.1016/j.compeleceng.2024.109487.
- M. S. Mulla, D. Gavade, S. S. Bidwai and S. S. Bidwai, “Research paper on airbag deployment and accident detection system for economic cars,” 2017 2nd International Conference for Convergence in Technology (I2CT), Mumbai, India, 2017, pp. 846-849, doi: 10.1109/I2CT.2017.8226248.
- Kale, D. R., & Mulla, J. M. S. "AI in Healthcare: Enhancing Patient Outcomes through Predictive Analytics."