INTERNATIONAL JOURNAL OF RESEARCH AND INNOVATION IN APPLIED SCIENCE (IJRIAS)
ISSN No. 2454-6194 | DOI: 10.51584/IJRIAS |Volume X Issue X October 2025
www.rsisinternational.org
LUNTIAN: Optimizing Crop Health Utilizing YOLOv8 Object
Detection Algorithm for Plant Disease Detection
Manuel Luis C. Delos Santos, Isagani M. Tano, Redentor G. Bucaling Jr, Christian B. Escoto, Harold M.
Lucero, Adelan P. Sistoso
Quezon City University, San Bartolome, Quezon City, Philippines
DOI: https://dx.doi.org/10.51584/IJRIAS.2025.101000009
Received: 06 October 2025; Accepted: 12 October 2025; Published: 27 October 2025
ABSTRACT
This study focuses on developing a user-friendly and cost-effective diagnostic system designed to assist
agricultural practitioners in monitoring plant health. The system integrates a machine learning model based on
YOLOv8 (You Only Look Once version 8) for accurate plant disease classification using image data. Remote
sensing techniques are employed to enable early disease detection, utilizing a Raspberry Pi 5 equipped with soil moisture, humidity, and temperature sensors and an AI Chatbot, along with a webcam for image-based plant disease detection. Real-time data is transmitted to a web platform for visualization and analysis. The detection model
employs a Convolutional Neural Network (CNN) and YOLOv8 for high-accuracy classification, evaluated
using precision, recall, and mean average precision (mAP) to ensure robust performance across multiple plant
disease categories. A web-based application was also developed to allow real-time health monitoring, data
visualization, and storage of diagnostic results. Additionally, a database of disease symptoms and management
practices was established to support informed decision-making and promote sustainable crop management. The
YOLOv8 object detection algorithm effectively identified diseases like Mosaic Virus and Powdery Mildew,
with improved precision, recall, and mAP scores. The web platform enhanced user engagement, offering real-
time monitoring, data storage, and insights for informed decision-making.
Keywords: AI Chatbot, Object Detection, Plant Disease Detection, Smart Farming.
INTRODUCTION
Plants are vital to life on earth, providing food, oxygen, and habitats for many species. In the Philippines,
however, plant diseases pose a serious challenge to agriculture and natural ecosystems. These diseases, caused
by fungi, bacteria, and viruses, can damage crops, leading to economic losses and food shortages for local
farmers.
A key component of agricultural sustainability is plant disease management, which helps farmers safeguard
their crop yields, food safety, and financial stability. In the Philippines, where agriculture is a significant
economic sector and a source of income for many, bacterial, viral, and fungal illnesses are a big concern. The
crops may sustain significant damage from these diseases, resulting in lower yields, financial losses, and food
insecurity for farmers and the community.
Plant diseases are often treated by manual and widespread pesticide application, which is ineffective and
inaccurate. In addition to not fully resolving the issue, these methods can also be expensive for farmers and
detrimental to the environment. Recent developments in agricultural technology and plant science have
highlighted the importance of precise disease detection and focused plant disease management.
The precise identification and diagnosis of illnesses affecting plant crops are essential for creating appropriate and precise management plans. Most farmers are unaware of the precise diseases plaguing their fields; they rely mostly on pesticides and do not know the proper control measures to implement. A basic understanding of the diseases prevalent in a region is necessary before an effective approach to disease management can be identified.
By lowering losses and encouraging sustainable agricultural methods, accurate and effective plant disease
management has the potential to completely transform Philippine agriculture. But even with these advantages,
there are still difficulties, especially when it comes to training farmers, enhancing diagnostic tools, and dealing
with the financial and environmental effects of disease epidemics. Building a resilient and sustainable
agricultural future in the Philippines requires a deeper understanding of plant disease dynamics and efficient
management techniques as the country attempts to modernize its agricultural sector.
A. Objectives of the Study
This study aimed to develop a user-friendly and cost-effective diagnostic tool with the help of YOLOv8’s
object detection algorithm, utilizing its capacity and usability for plant disease detection.
Specifically, this study aims to:
1. Develop a web-based application for monitoring plant health diagnoses and recommending tips on how to treat detected diseases.
2. Design a machine learning model for disease classification utilizing YOLOv8.
3. Implement remote sensing techniques for early detection of disease for selected plants.
4. Evaluate the impact of environmental conditions on disease incidence through the integration of a single-board computer, commercially known as the Raspberry Pi 5, with soil moisture, humidity, and temperature sensors.
LITERATURE REVIEW
A. Plant Disease Detection Using YOLOv8
In their study, [1] expressed that controlling the spread of infections and improving the quality of food crops
depend heavily on the early diagnosis of plant leaf diseases. Deep learning-based plant disease detection
techniques have recently outperformed state-of-the-art techniques. Therefore, to increase the effectiveness of
rice leaf disease detection, this study used a Convolutional Neural Network (CNN).
In the paper by [2] they pointed out that millions of Moroccan farmers rely on agriculture as their primary
source of income, providing them with a wide variety of crop types. However, farmers find it challenging to
accurately diagnose plant diseases due to a lack of resources and experience. As a result, attempting to salvage
sick crops frequently results in the waste of important time and resources. To address this problem, a ground-
breaking technique that employs the latest developments in deep learning and computer vision has been
introduced to detect plant illnesses in real-time.
As mentioned by [3], a reliable and accurate detection technique is necessary for managing and preventing
plant leaf diseases. The time-consuming procedure of identifying plant leaf diseases hurts crop quality and
productivity. This study aims to provide a deep-learning solution for the identification and segmentation of
plant leaf disease by using the PlantVillage and PlantDoc datasets to train the Ultralytics YOLOv8 model from
start to finish. An improvement on the YOLO series, the YOLOv8 model was created to boost detection speed
without compromising accuracy.
As expressed by [4], to address the difficulties associated with the imprecise identification of vegetable diseases in greenhouse plant environments using current network models, their study presents YOLOv8n-vegetable.
increase its efficacy, the model integrates several enhancements and optimizations. First, partial C2f is
replaced by a new C2fGhost module. GhostConv, which is based on Ghost lightweight convolution, improves
detection performance by lowering the model's parameters. Second, to improve vegetable disease identification
in greenhouse environments, the Occlusion Perception Attention Module (OAM) is incorporated into the Neck
section to better preserve feature information after fusion.
[5] said that Bangladesh's economy is based primarily on its agricultural sector, which includes important crops such as rice, corn, wheat, potatoes, and tomatoes. These crops are extremely susceptible to several leaf
diseases, which, if left unchecked, might seriously jeopardize crop productivity and food security. An
automated system that can precisely identify and classify leaf diseases is therefore desperately needed to
facilitate early intervention and control. Their study demonstrates the effectiveness of the most recent cutting-edge object detection model, YOLOv8 (You Only Look Once version 8), in outperforming earlier models for the automated identification and classification of leaf diseases.
B. Plant Disease Detection Using AI and Deep Learning
[6] expounded that India's growing population and rising food needs have made agriculture a major industry
there. Therefore, it is necessary to increase crop productivity. Diseases brought on by bacteria, fungi, and viruses are among the major factors behind low crop yields. By using methods for detecting plant diseases, these losses can be avoided and managed.
[7] clarified that one area where automation and the internet of things can have a significant impact is
agriculture and modern farming. To maintain a maximum crop output, it is crucial to keep plants healthy and
keep an eye on their surroundings to spot or detect diseases. In the field of advanced picture analysis, modern
agriculture has found great value in the application of cutting-edge technologies such as Artificial Intelligence
(AI), machine learning, and deep learning. In addition to monitoring and managing the environmental
conditions on farms, artificial intelligence also increases time efficiency and has the potential to detect plant
diseases.
According to [8], plants are acknowledged as crucial, being the main source of human food energy and having nutritional, therapeutic, and other benefits. Plant diseases can damage leaves at any point during crop
production, causing significant losses in crop yield and market value. Consequently, identifying leaf disease is
essential in the farming sector.
As explained by [9], in developing nations like India, agriculture is important, but food security is still a serious concern. Plant diseases, transportation issues, and a lack of storage facilities cause the majority of harvests to be squandered. In India, diseases cause almost 15% of crops to be wasted, making this a significant issue that needs to
be addressed. An automated system that can recognize these illnesses and assist farmers in taking the necessary
actions to eliminate crop loss is required. Farmers have been using the traditional method of using their own
eyes to identify plant illnesses, and not all farmers are able to do it in the same way. As artificial intelligence
advances, computer vision capabilities must be integrated into the agricultural sector.
C. Applications of YOLO Algorithm in Agriculture
According to [10], several digital instruments and technologies used in agriculture heavily rely on vision.
Because it automates the process of locating, recognizing, and detecting different things in expansive
agricultural landscapes, object detection is essential to digital farming. Thanks to its cutting-edge accuracy,
speed, and network size, the single-stage detection algorithm You Only Look Once (YOLO) has quickly
become well-liked in the agricultural industry. YOLO is used for a variety of agricultural tasks, such as automation, robotics, sensing, monitoring, and surveillance, and it provides real-time detection performance with good accuracy.
As indicated in the study of [11], in light of agricultural breakthroughs, this survey explores the revolutionary potential of several YOLO versions, ranging from YOLOv1 to the cutting-edge YOLOv10. The main goal is to clarify how these state-of-the-art object detection models may revitalize and enhance several facets of agriculture, from livestock management to crop monitoring. It seeks to accomplish several important goals, such as identifying current agricultural issues, evaluating YOLO's incremental but steady progress, and investigating its uses in agriculture.
In the article written by [12], it is imperative to optimize agricultural techniques considering the world's
expanding population. Weed infestation is a big problem that raises production costs and drastically lowers
crop yields. In this research, a novel approach for image identification and weed-crop classification tailored for
sesame fields is presented.
[13] expressed that a computational approach to pineapple ripening detection may boost agricultural output. To
boost agricultural output, fruit maturity can be predicted prior to harvest. When a product is traded outside of
the market, its value will rise due to the ripe fruit's quality and standard physical and chemical characteristics.
The YOLOv4-tiny model for determining the pineapple ripening period is examined and enhanced in this work.
In their study, [14] presented a data-driven pest detection system motivated by the requirements of the H2020 European project Pantheon for the precision farming of hazelnut orchards. In fact, in Precision Agriculture (PA) settings, early pest detection is a crucial first step in creating successful crop defense plans. Since true bugs have the potential to seriously impair hazelnut production, the authors concentrate on them among the potential pests.
D. Plant Disease Detection Using YOLOv8 and Raspberry Pi
It has long been known that deep learning-based models may be used to automatically detect plant leaf
diseases. These techniques have been effectively applied in the field of agriculture, facilitating the quick and
precise diagnosis of several illnesses. Unresolved issues still include the unavailability of annotated data,
system unpredictability, and the absence of an effective model for real-time application [15].
In the paper of [16], they examined the use of YOLOv8 detection models for automated tomato leaf disease identification using GPU and Raspberry Pi hardware. The study uses transfer learning techniques and Convolutional Neural Networks (CNNs) to assess a dataset of photos from ten different disease groups. The YOLOv8 models show 0.78-0.79 precision and 0.75-0.81 recall scores, according to the results. The Nano model is appropriate for real-time applications since it can process a single inference on a Raspberry Pi in 0.7 seconds.
[17] reiterated that India, a country rich in biodiversity, must deal with how climate change is affecting
ecosystems and habitats. A vital fruit that is widely used in both Indian and western cuisines is the tomato, or
Solanum Lycopersicum. On the other hand, pest infestations, bacterial activity, microorganisms, and climate
changes can all cause disease in tomatoes. Automation of tomato fields would be essential due to the need for large-scale production, which makes "hand-picking" impractical. This would require creative ways to diagnose and categorize diseases in a large-scale production process. This study focuses on performance analysis while examining several tomato identification techniques.
E. Plant Disease Monitoring Using Raspberry Pi
[18] focused on image processing methods. This involved several steps, from photographing the leaves to diagnosing the problem with a Raspberry Pi. The camera and display device were connected via the Raspberry Pi, after which the data was transmitted to the cloud. The obtained images were examined using a variety of techniques, including acquisition, pre-processing, segmentation, and clustering.
[19] discussed that machine vision is a non-destructive technique that works well for tracking plant growth.
However, this becomes a difficult process in outdoor locations because of the changing sunlight and complex
backgrounds. To identify and count the leaves of ramie plants in a greenhouse, a low-cost camera system
utilizing a Raspberry Pi module and a NOIR (No Infrared Filter) camera is used in this work.
As described by [20], a system for detecting and stopping the spread of plant diseases used a Raspberry Pi, with image analysis done using the K-Means clustering approach. It can be used in large harvest ranches because it has many focal points, which allows it to naturally identify symptoms of illness whenever they appear on plant leaves. Identifying leaf diseases is a crucial topic for pharmaceutical research since it offers the benefit of field crop monitoring and automatically identifies disease symptoms using image processing methods.
METHODOLOGY
The authors employed the combination of applied and quantitative research methods and designs in developing
the prototype and collecting data for analysis to find patterns, relationships, and interpret the results. The
prototype was designed to integrate multiple sensors, including soil moisture, temperature, and humidity
sensors to collect real-time environmental data, which is transmitted to a web-based monitoring system for
analysis and storage. The system is driven by a small single-board computer, commercially known as the Raspberry Pi 5, to process and transmit the gathered data to the web-based platform. The platform utilizes data
analysis tools and visual representations to provide insights into plant health and environmental conditions.
Fig. 1 Fishbone Diagram
Fig. 1 depicts the cause-and-effect analysis of the potential causes of plant diseases, which resulted in the development of the prototype to optimize crop health utilizing the YOLOv8 algorithm, addressing the factors and causes that may affect its performance.
Fig. 2 Block Diagram
Fig. 2 illustrates the block diagram to monitor and manage agricultural conditions using a combination of
hardware and software components. It integrates sensors for temperature, humidity, and soil moisture, along
with YOLOv8 image detection for plant analysis. The data collected is processed by a single-board computer
and stored in a local database, enabling real-time monitoring and analysis through a web interface and
dashboard.
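The data path in the block diagram (sensors, single-board computer, local database, web dashboard) can be sketched in a few lines of Python. This is a minimal illustration under assumptions, not the authors' implementation: the table name, column names, and the simulated readings are invented, and on the real prototype the values would come from the attached sensors rather than constants.

```python
import sqlite3
from datetime import datetime, timezone

def init_db(conn):
    """Create the readings table the dashboard would query (schema assumed)."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS readings (
               ts TEXT, temperature_c REAL, humidity_pct REAL, soil_moisture_pct REAL
           )"""
    )

def store_reading(conn, temperature_c, humidity_pct, soil_moisture_pct):
    """Insert one timestamped sensor sample into the local database."""
    conn.execute(
        "INSERT INTO readings VALUES (?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(),
         temperature_c, humidity_pct, soil_moisture_pct),
    )
    conn.commit()

def latest_reading(conn):
    """Fetch the most recent sample, as the web interface would."""
    cur = conn.execute("SELECT * FROM readings ORDER BY ts DESC LIMIT 1")
    return cur.fetchone()

# Simulated sample: on the prototype these values would come from the
# DHT22 and the soil moisture sensor via the ADS1115.
conn = sqlite3.connect(":memory:")
init_db(conn)
store_reading(conn, 29.5, 71.0, 43.2)
print(latest_reading(conn))
```

An in-memory SQLite database is used here only to keep the sketch self-contained; the prototype's local database could be any store reachable from the web platform.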
Fig. 3 System Architecture Design
Fig. 3 displays a comprehensive architecture system design for an agricultural monitoring solution, centered on
the Raspberry Pi 5 computer. This single-board computer acts as the main processing unit, interfacing with an
array of sensors and peripheral devices to collect and process environmental data. Key input components
include a DHT22 sensor, which measures temperature and humidity levels, and a soil moisture sensor which
gauges the water content in soil. These sensors connect to the Raspberry Pi via a breadboard, enabling
prototyping and easy wiring adjustments. The system also incorporates a webcam, which is used for visual monitoring and plant health analysis through image processing. Storage is handled by a Class 10 SD card,
inserted into the Raspberry Pi to provide the necessary space for the operating system, software, and data
logging. For user interaction, a standard keyboard and mouse are connected, while visual output is rendered on
a monitor through a mini HDMI to HDMI cable. To ensure efficient thermal management, a Raspberry Pi 5
active cooler is attached, preventing overheating during continuous operation. This combination of hardware
components forms a robust system capable of real-time data collection, monitoring, and possibly automation in
agricultural settings.
Fig. 4 Pin Diagram
Fig. 4 shows the basic setup of the Raspberry Pi 5, the green rectangular board on the left, connected via a breadboard to a soil moisture sensor, a temperature/humidity sensor, and an ADS1115 analog-to-digital converter, with jumper wires linking every component. The moisture sensor, the black pentagon-shaped module, measures the soil moisture. The temperature/humidity sensor, the small white square module, senses the temperature and humidity of the surroundings. The blue rectangular ADS1115 board provides the power, ground, communication, and analog input functionality for the converter.
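Because the soil moisture probe is analog, its reading reaches the Raspberry Pi as a raw count through the ADS1115. A hedged sketch of the raw-count-to-percentage conversion is shown below; the dry/wet calibration counts are illustrative assumptions (real probes must be calibrated individually), and on the hardware the raw value would come from an ADS1115 driver library rather than a constant.

```python
def adc_to_moisture_pct(raw, dry=26000, wet=11000):
    """Map a raw ADS1115 count to 0-100 % soil moisture.

    `dry` is the assumed count in open air and `wet` the assumed count
    in saturated soil (capacitive probes read *lower* when wetter).
    """
    span = dry - wet
    pct = (dry - raw) * 100.0 / span
    return max(0.0, min(100.0, pct))  # clamp to the valid range

print(adc_to_moisture_pct(26000))  # fully dry
print(adc_to_moisture_pct(11000))  # fully wet
print(adc_to_moisture_pct(18500))  # midway between the calibration points
```

The clamp guards against readings outside the calibration range, which occur in practice when the probe is unplugged or submerged.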
Fig. 5 YOLOv8 Network Structure
Fig. 5 highlights how the YOLOv8 network structure is streamlined, resulting in faster detection speeds and higher detection accuracy. To balance model size and detection accuracy, this study optimized the YOLOv8n version, which has a smaller footprint and high accuracy.
A. Confusion Matrix
Fig. 6 Normalized Confusion Matrix
Fig. 6 showcases the confusion matrix visualizing the classification performance of the YOLOv8 model across
all classes. Each cell represents the proportion of predictions for a given true class (rows) that were assigned to
each predicted class (columns). Higher diagonal values indicate better class-wise accuracy, while off-diagonal
values reveal common misclassifications.
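The row normalization behind a matrix like Fig. 6 is a simple transformation. The sketch below applies it to a hypothetical three-class count matrix; the counts and class labels in the comments are invented for illustration, not the study's results.

```python
def normalize_rows(matrix):
    """Divide each row of a count matrix by its row sum so that
    every row (true class) sums to 1.0."""
    out = []
    for row in matrix:
        total = sum(row)
        out.append([c / total if total else 0.0 for c in row])
    return out

# Hypothetical counts: rows = true class, columns = predicted class.
counts = [
    [80, 15, 5],   # e.g. Mosaic Virus
    [10, 85, 5],   # e.g. Powdery Mildew
    [5,  5, 90],   # e.g. Early Blight
]
norm = normalize_rows(counts)
print(norm[0])  # [0.8, 0.15, 0.05]
```

After normalization the diagonal entries read directly as per-class recall, which is why high diagonal values in Fig. 6 indicate good class-wise accuracy.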
B. Evaluation Indicators


P = TP / (TP + FP) (1)

where P (Precision) measures the accuracy of the positive predictions made by the model, as indicated in equation (1).
True Positives (TP) signifies the algorithm's accurate detection of the actual number of existing disease targets.
In other words, the algorithm successfully and precisely labels real disease targets as such.
False Positives (FP) represents the algorithm's erroneous labelling of non-existent disease targets as disease
instances. In essence, the algorithm incorrectly identifies non-disease areas as disease infected.


R = TP / (TP + FN) (2)

where R (Recall) measures the ability of the model to identify all the relevant positive instances in the data, as indicated in equation (2).
False Negatives (FN) indicates the algorithm's failure to accurately detect the actual number of existing disease
targets. Specifically, the algorithm falls short in correctly identifying real disease targets as disease instances.
The mAP metric is often used for evaluating models in tasks like object detection or ranking. It calculates the average precision across different recall levels and computes the mean of these precision values, accounting for precision at various levels of recall and ensuring that both aspects are considered when evaluating performance.
For a single query, the Average Precision (AP) is computed by considering precision at different recall
thresholds. The mean of APs across multiple queries gives the mean Average Precision (mAP).
K denotes the total number of categories. The mAP (mean Average Precision) is the average value of precision
for all categories, where the precision for each category is represented by its corresponding AP (Average
Precision) as indicated in equation (3).




mAP = (1/K) × Σ AP_i, for i = 1, ..., K (3)

where mAP = mean Average Precision and AP_i is the Average Precision of category i.
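Taken together, equations (1)-(3) are straightforward to compute from detection counts. The sketch below evaluates them for invented TP/FP/FN counts and invented per-class AP values; these numbers are illustrative assumptions, not the study's measured results.

```python
def precision(tp, fp):
    """Equation (1): fraction of predicted disease targets that are real."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Equation (2): fraction of real disease targets that were detected."""
    return tp / (tp + fn)

def mean_average_precision(ap_per_class):
    """Equation (3): mean of the per-category AP values (K categories)."""
    return sum(ap_per_class) / len(ap_per_class)

# Invented counts for one disease class:
tp, fp, fn = 90, 10, 30
print(precision(tp, fp))   # 0.9
print(recall(tp, fn))      # 0.75

# Invented per-class AP values for K = 3 disease categories:
print(mean_average_precision([0.9, 0.8, 0.7]))
```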
Fig. 7 Precision-Confidence Curve
Fig. 7 illustrates the Precision-Confidence Curve obtained from YOLOv8 training. The curve demonstrates
how precision varies across different confidence thresholds. A steep drop in precision at lower confidence
levels suggests the presence of many false positives, while a flatter curve indicates consistent prediction
quality.
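A curve like Fig. 7 can be reproduced from raw predictions by sweeping a confidence threshold and recomputing precision at each step. The sketch below does this for a handful of invented (confidence, correct) pairs; the numbers are assumptions for illustration only.

```python
def precision_at_threshold(preds, threshold):
    """preds: list of (confidence, is_true_positive) pairs.
    Keep only predictions at or above the threshold, then compute
    precision = TP / (TP + FP) over what remains."""
    kept = [correct for conf, correct in preds if conf >= threshold]
    if not kept:
        return 1.0  # convention: no predictions means no false positives
    return sum(kept) / len(kept)

# Invented detections: high-confidence ones are mostly correct,
# low-confidence ones are mostly false positives.
preds = [(0.95, True), (0.90, True), (0.80, True), (0.60, False),
         (0.55, True), (0.40, False), (0.30, False)]

for t in (0.25, 0.50, 0.75):
    print(t, precision_at_threshold(preds, t))
```

As the threshold rises, low-confidence false positives are filtered out and precision climbs toward 1.0, which is exactly the shape a well-trained detector shows in its precision-confidence curve.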
A Convolutional Neural Network (CNN) is a type of deep learning model specifically designed for processing
data that has a grid-like structure, such as images. It’s especially powerful for tasks like image classification,
object detection, and video analysis, but is also used in fields like natural language processing and medical
imaging.
Fig. 8 Convolutional Neural Network
Fig. 8 shows the complete process of classifying leaf images using a Convolutional Neural Network (CNN). It
starts from capturing images of leaves in the crop field. These images are collected to form a dataset, which
goes through pre-processing steps like resizing, noise removal, and normalization to prepare the images for
training. The dataset is then split into a training set and a test set. For the training set, the images are used to
train a CNN model through multiple iterations, including 5-fold cross-validation to improve model accuracy
and avoid overfitting. The CNN architecture includes layers that extract features from the images (convolution
and pooling), flatten the features, and classify them using fully connected and softmax layers. Meanwhile, the
test set is used to evaluate the final performance of the trained model. The goal is to check how well the model
can classify new, unseen leaf images, for example, identifying whether a leaf is healthy or has a certain disease.
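The 5-fold cross-validation step in the pipeline above simply rotates which fifth of the training images is held out. A minimal index-splitting sketch is shown below; the dataset size of 10 is illustrative, and a real run would shuffle the indices and stratify by disease class.

```python
def k_fold_indices(n_samples, k):
    """Yield (train_indices, val_indices) for each of k folds.
    Each fold holds out a contiguous 1/k slice and trains on the rest."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for fold in range(k):
        start, stop = fold * fold_size, (fold + 1) * fold_size
        val = indices[start:stop]
        train = indices[:start] + indices[stop:]
        yield train, val

for train, val in k_fold_indices(10, 5):
    print(val, "held out;", len(train), "used for training")
```

Averaging the evaluation metric over the k held-out folds gives a more stable accuracy estimate than a single train/test split and helps detect overfitting early.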
RESULTS AND DISCUSSIONS
A. The Functional Prototype
Fig. 9. The Prototype
Fig. 9 exhibits the prototype, which measures 200 mm (7.9") long, 150 mm wide, and 100 mm high. The prototype features two sensors for checking the temperature, moisture, and humidity of the plants, and a mountable webcam for the detection of plant diseases. The Raspberry Pi 5 inside the case serves as the main brain of the system, making the prototype more efficient and functional.
B. The QR Code of the Web Application
Fig. 10 QR Code of AI Luntian
Fig. 10 shows the QR code to access the web-based application or via https://hypercoresolution.com/Luntian/.
Fig. 11 AI Luntian Web-Based Application
Fig. 11 explains the web-based application, called AI Luntian, designed to analyze incoming data and deliver immediate diagnoses of plant diseases. The Filipino term "Luntian" refers to the green color of plants. The system utilizes two sensors along with a camera that captures images of the affected plants, ensuring accurate and detailed data collection. This combination allows for reliable and consistent diagnostic results.
Fig. 12 AI Luntian Chatbot
Fig. 12 demonstrates the built-in intelligent Chatbot, which is capable of answering a wide range of plant-related inquiries. From identifying diseases and offering care tips to providing pest control suggestions and real-time advice based on sensor and image data, AI Luntian serves as a responsive and knowledgeable virtual assistant, enhancing user experience and supporting effective plant health management.
C. Study Area and Pilot Testing
Throughout the testing phase, held at New Greenland Farm in Bagong Silangan, Quezon City, the prototype underwent various dataset training sessions to ensure its functionality, reliability, and performance. All the integrated hardware components were used and tested throughout the process to make sure that the prototype had no flaws or issues.
Fig. 13 Pilot Testing
Fig. 13 highlights the components used during the testing. The two sensors work effectively, accurately determining the humidity, moisture, and temperature, which helps make plant disease detection more accurate, while the webcam identifies the disease of the plant. Some of the plants used in training and testing were tomato, eggplant, corn, pumpkin, sweet potato, and water spinach, which grow abundantly in the Philippines.
Fig. 14 Visualization of Detection Results
Fig. 14 displays the original plant disease images with boxes outlined around the identified regions of the
disease. Each box is labelled with the corresponding disease class for easy identification. The accurate
localization and identification of plant diseases in the images displayed demonstrate the effectiveness of the
YOLOv8 plant model in detecting various types of diseases in challenging agricultural settings. The plant
diseases that AI Luntian web-based application can detect are the following:
1. Anthracnose. A fungal disease that affects many plants, including vegetables, fruits, and trees.
2. Bacterial Soft Rot. Caused by several types of bacteria, most commonly by species of the gram-negative genera Erwinia, Pectobacterium, and Pseudomonas.
3. Bacterial Wilt. A plant disease caused by bacteria that infect the water-conducting tissue of plants, leading to wilting and often death.
4. Chlorosis. A condition that affects the green pigmentation of plants, primarily manifested through
yellowing leaves.
5. Early Blight. Caused by Alternaria solani and presents as dark brown spots on leaves and stems.
6. Mosaic Virus. Any virus that causes infected plant foliage to have a mottled appearance.
7. Powdery Mildew. A fungal disease that causes white, powdery spots on the leaves, stems, and flowers of many plants.
8. Rice Blast. Caused by the fungus Magnaporthe oryzae. It can affect all above-ground parts of a rice plant, such as the leaf, collar, node, neck, parts of the panicle, and sometimes the leaf sheath.
9. Ringspot Virus. A plant virus that infects papaya and cucurbits. It causes symptoms such as yellowing,
mosaic, and ringspot on leaves and fruit.
D. The Training and Validation Model
The graphs shown below clearly indicate that the model is learning effectively during training and is also able to generalize well on the validation data. All the loss functions are decreasing while the evaluation metrics are increasing, which confirms that the model is progressing in the right direction. The improvements in both precision and recall, along with the high values of mAP50 and mAP50-95, suggest that the model is reliable and can be considered successful for object detection tasks.
Fig. 15. The Line Graphs of Training and Validation Model
Fig. 15 demonstrates the performance evaluation metrics of the plant disease detection system throughout the
training process.
Fig. 16. Training Loss Metrics
Fig. 16 explains the training loss metrics in terms of bounding box, classification, and distribution focal loss. Below is a discussion of each loss:
1. train/box_loss: The bounding box regression loss starts at approximately 0.20 and demonstrates a
consistent downward trend, reaching around 0.02 by the end of training. This metric measures how accurately
the model predicts the location and dimensions of diseased regions on plant leaves and stems. The steady
decline indicates that the model is successfully learning to localize disease symptoms with increasing precision
throughout the training iterations.
The significance of this metric in agricultural applications cannot be overstated, as precise localization enables
targeted treatment approaches. When farmers deploy the system in real-world scenarios, accurate bounding
box predictions allow for spot treatments rather than broad-spectrum applications, reducing chemical usage
and associated costs. The final value of 0.02 represents excellent localization performance, suggesting the
model can identify disease boundaries with high accuracy.
However, achieving such low training loss values requires careful monitoring to prevent overfitting. The
smooth curve indicates stable learning without erratic fluctuations, which is crucial for model reliability. This
consistent performance suggests the learning rate and batch size configurations were appropriately tuned for
optimal convergence.
For practical deployment, this low box loss translates to precise disease mapping capabilities. Farmers can rely
on the system to accurately identify not just the presence of disease, but exactly where treatment should be
applied. This precision is particularly valuable for high-value crops where targeted interventions can
significantly impact yield and quality.
The training progression shows the model efficiently learned spatial relationships between disease symptoms
and their boundaries. This foundational capability supports the overall detection pipeline, enabling subsequent
classification and severity assessment tasks that depend on accurate region identification.
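For context, YOLOv8's default box loss is CIoU-based; the plain IoU function below is a simplified sketch of the overlap measure underlying it, showing concretely what localization quality means as a number between 0 and 1.

```python
def iou(a, b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2).
    A simplified overlap measure; the actual YOLOv8 box loss uses the
    CIoU variant, which adds center-distance and aspect-ratio terms."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A perfect prediction scores 1.0; disjoint boxes score 0.0.
assert iou((0, 0, 10, 10), (0, 0, 10, 10)) == 1.0
assert iou((0, 0, 10, 10), (20, 20, 30, 30)) == 0.0
```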
2. train/cls_loss: The classification loss exhibits a dramatic reduction from 1.4 to approximately 0.25,
indicating strong learning of disease identification patterns. This metric reflects how well the model
distinguishes between different disease types, healthy plant tissue, and various background elements. The steep
initial decline followed by gradual stabilization suggests the model quickly grasped primary distinguishing
features before fine-tuning more subtle classification boundaries.
The final classification loss of 0.25, while showing significant improvement, indicates room for enhancement.
In real-world deployment, this performance level suggests the model will correctly identify most disease types
but may occasionally confuse similar-appearing conditions. This limitation is particularly relevant when
dealing with diseases that share visual characteristics during early stages of infection.
The training curve's shape reveals important insights about the dataset composition and model architecture.
The rapid initial improvement suggests the dataset contains well-defined examples of major disease categories,
while the gradual later improvement indicates the model is learning to distinguish between more challenging
cases. This progression is typical of robust classification learning.
To improve classification performance further, we could consider implementing class-specific data
augmentation techniques or exploring ensemble methods. The current performance level supports reliable
disease identification in most scenarios, but enhanced accuracy would increase farmer confidence in automated
recommendations and reduce the need for expert verification.
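The classification loss behind these curves is a binary cross-entropy over per-class scores. The scalar toy version below (the real loss operates on batched tensors) illustrates why the curve falls steeply once the model stops making confident mistakes.

```python
import math

def bce(p, y):
    """Per-class binary cross-entropy, the form of loss behind
    train/cls_loss: a scalar toy version of the batched loss, where
    p is the predicted class score and y the 0/1 ground truth."""
    eps = 1e-7
    p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# A confident correct score costs little; a confident mistake costs a
# lot, which is why cls_loss drops steeply once gross confusions between
# disease classes are resolved.
assert bce(0.95, 1) < 0.1
assert bce(0.95, 0) > 2.5
```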
3. train/dfl_loss: The distribution focal loss demonstrates a more gradual decline from 0.98 to approximately
0.90, representing the model's learning of precise bounding box coordinate distributions. This metric is
particularly important for YOLOv8's anchor-free detection approach, as it helps optimize the predicted box
coordinates' probability distributions. The relatively modest reduction compared to other losses suggests this
aspect of learning requires more training iterations to achieve optimal performance.
The DFL component plays a crucial role in achieving sub-pixel accuracy in disease detection, which is essential
for early-stage disease identification. Small disease spots or initial infection sites require precise localization to
enable timely intervention before symptoms become visually obvious to human observers. The model's
performance in this area directly impacts its effectiveness as an early warning system.
The slower convergence rate for DFL loss indicates that fine-tuning box coordinate predictions is inherently
more challenging than general classification or rough localization tasks. This pattern is common in object
detection models and suggests the training duration was appropriate for achieving reasonable performance
without excessive overfitting to training data distribution characteristics.
For agricultural deployment, the DFL performance affects the system's ability to detect diseases at various scales
and growth stages. Better DFL performance would enable more accurate detection of small lesions, early
symptoms, and diseases affecting plant parts with complex geometries. The current performance level supports
reliable detection of established symptoms while providing moderate capability for early-stage identification.
Future improvements could focus on extending training duration specifically for DFL optimization or
implementing specialized loss weighting schemes. The current performance provides a solid foundation for
disease detection applications, though enhanced precision would improve the system's value for preventive
agricultural management strategies.
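As background, DFL represents each box edge as a discrete probability distribution over integer bins and decodes the coordinate as that distribution's expectation; the loss itself is a cross-entropy that concentrates mass on the two bins adjacent to the true offset. A minimal decoding sketch, with an illustrative bin count and logits:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def expected_coordinate(logits):
    """DFL-style decoding: each box edge is predicted as a probability
    distribution over integer bins, and the regressed offset is that
    distribution's expectation (the bin count here is illustrative)."""
    return sum(i * p for i, p in enumerate(softmax(logits)))

# Mass split between bins 2 and 3 decodes to an offset between them,
# which is how DFL reaches finer-than-bin (sub-pixel) localization.
offset = expected_coordinate([0.0, 0.0, 4.0, 4.0, 0.0])
assert 2.0 < offset < 3.0
```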
Fig. 17. Validation Loss Metrics
Fig. 17 describes the validation loss metrics in terms of bounding box, classification, and distribution focal loss. Each loss is discussed below:
1. val/box_loss: The validation box loss begins at approximately 0.35 and demonstrates significant fluctuations before stabilizing
around 0.17. This pattern indicates the model's generalization capability for spatial localization tasks when
presented with previously unseen plant disease images. The initial high values and subsequent volatility
suggest the model required several epochs to develop robust feature representations that transfer effectively to
new data.
The gap between training and validation box loss reveals important characteristics about the model's learning
behavior. While the training loss consistently decreases, the validation loss shows more erratic behavior, which
is typical in real-world machine learning scenarios. This pattern suggests the model achieves good
generalization performance despite some degree of overfitting to training data spatial characteristics.
The practical implications of this validation performance are significant for agricultural deployment. The
stabilized validation loss around 0.17 indicates the model should maintain reasonable localization accuracy
when encountering new plant varieties, lighting conditions, or image capture angles not present in training
data. This robustness is crucial for field deployment where environmental conditions vary considerably.
However, the persistent gap between training and validation box loss suggests potential areas for improvement.
Implementing additional regularization techniques, expanding the dataset diversity, or applying more
aggressive data augmentation could help reduce this gap and improve generalization performance. The current
performance level supports reliable deployment while indicating clear pathways for enhancement.
The fluctuation patterns in validation loss also provide insights into optimal model selection strategies. Rather
than selecting the model with lowest training loss, we should consider validation loss stability and overall trend
patterns. This approach helps ensure the deployed model operates consistently across diverse real-world
conditions.
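One way to operationalize this selection strategy is to smooth the validation curve before choosing a checkpoint, so that a single lucky epoch does not win. The sketch below assumes a trailing moving average; the window size and loss values are illustrative, not taken from the study.

```python
def best_epoch(val_losses, window=3):
    """Pick the epoch whose trailing moving average of validation loss
    is lowest, favoring stable regions of the curve over one-off dips."""
    def smoothed(i):
        chunk = val_losses[max(0, i - window + 1):i + 1]
        return sum(chunk) / len(chunk)
    return min(range(len(val_losses)), key=smoothed)

# A single dip at epoch 2 would win under a raw argmin, but the stable
# late-training region is preferred once the curve is smoothed.
losses = [0.35, 0.30, 0.10, 0.32, 0.20, 0.18, 0.17]
assert losses.index(min(losses)) == 2
assert best_epoch(losses) == 6
```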
2. val/cls_loss: The validation classification loss exhibits a distinctive pattern, peaking early in training before gradually declining to
approximately 0.55. This behavior indicates the model initially struggled with classification generalization
before developing more robust feature representations. The early peak suggests the model initially overfit to
training data characteristics before learning more generalizable classification patterns.
The substantial gap between training classification loss (0.25) and validation classification loss (0.55) raises
important considerations for real-world deployment. This difference indicates the model may encounter
classification challenges when faced with disease presentations not well-represented in the training dataset. In
agricultural settings, this could manifest as reduced accuracy when dealing with disease variants, unusual
environmental conditions, or different plant cultivars.
The validation classification performance directly impacts farmer confidence in automated disease
identification recommendations. With validation loss at 0.55, we can expect occasional misclassifications that
might lead to inappropriate treatment decisions. However, the downward trend suggests the model has learned
meaningful disease characteristics that transfer reasonably well to new data.
To address the training-validation gap, we could implement several strategies including cross-validation
techniques, expanded dataset collection, or ensemble methods that combine multiple model predictions. The
current performance level provides a solid foundation for agricultural applications while highlighting specific
areas requiring attention for optimal field deployment.
The classification validation results also inform the recommendation system design. Rather than providing
single disease predictions, we might implement confidence-based recommendations that flag uncertain
classifications for human expert review. This approach would leverage the model's strengths while mitigating
the impact of occasional classification errors.
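Such a confidence-based recommendation scheme could be as simple as routing each prediction by its score. The thresholds below are illustrative placeholders, not values derived from this study.

```python
def triage(detections, auto_threshold=0.75, review_threshold=0.40):
    """Route detections by confidence: high scores trigger an automatic
    recommendation, mid scores are flagged for expert review, and low
    scores are discarded. Thresholds are illustrative placeholders."""
    auto, review = [], []
    for label, conf in detections:
        if conf >= auto_threshold:
            auto.append((label, conf))
        elif conf >= review_threshold:
            review.append((label, conf))
    return auto, review

auto, review = triage([("Powdery Mildew", 0.91),
                       ("Mosaic Virus", 0.55),
                       ("Anthracnose", 0.20)])
assert [label for label, _ in auto] == ["Powdery Mildew"]
assert [label for label, _ in review] == ["Mosaic Virus"]
```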
3. val/dfl_loss: The validation DFL loss demonstrates initial volatility around 0.50 before stabilizing near 0.25, indicating the model's
learning progression for precise coordinate prediction on unseen data. The fluctuations suggest the model
required time to develop robust spatial understanding that generalizes beyond training examples. This metric's
behavior provides insights into the model's capability for precise disease boundary detection in real-world
agricultural applications.
The stabilization around 0.25 represents acceptable performance for precise localization tasks, though the gap
with training DFL loss indicates some degree of overfitting to training data spatial characteristics. In practical
deployment, this performance level suggests the model will provide reasonably accurate disease boundary
detection while potentially requiring human verification for critical treatment decisions.
The validation DFL performance has direct implications for precision agriculture applications. Accurate
boundary detection enables targeted spraying systems, selective harvesting decisions, and precise monitoring
of disease progression over time. The current performance supports these applications while indicating
potential for improvement through enhanced training strategies or architectural modifications.
The relationship between DFL training and validation performance reveals important insights about the
dataset's spatial diversity. The gap suggests the training data might not fully represent the spatial complexity
encountered in diverse agricultural environments. Expanding the dataset with images from various growth
stages, lighting conditions, and plant orientations could improve generalization performance.
For deployment considerations, the validation DFL performance informs the system's reliability specifications.
While current performance supports general disease detection applications, critical use cases requiring
maximum precision might benefit from ensemble approaches or human-in-the-loop verification systems until
we achieve enhanced validation performance.
Fig. 18. Performance Evaluation Metrics
Fig. 18 presents the performance evaluation metrics in terms of precision and recall. Each result is discussed below:
1. metrics/precision (b): The precision values fluctuate between 0.50 and 0.75 throughout training, indicating
that approximately 50-75% of the model's disease detections are correct identifications. This performance level
represents moderate precision that requires careful consideration for agricultural deployment scenarios. The
fluctuations suggest the model's precision performance varies with different disease types, image conditions,
and learning phases.
The precision metric directly impacts the practical utility of the disease detection system in farming operations.
Lower precision values mean more false positive detections, which could lead farmers to apply unnecessary
treatments, increasing operational costs and potentially harming beneficial insects or soil microorganisms.
However, the precision range we achieved provides a reasonable balance between detection sensitivity and
specificity for most agricultural applications.
The variability in precision performance throughout training reveals important insights about the model's
learning characteristics. The fluctuations indicate the model continues refining its discrimination capabilities,
learning to distinguish between actual disease symptoms and benign variations in plant appearance. This
ongoing refinement process is crucial for developing robust detection systems that perform reliably across
diverse agricultural conditions.
For real-world deployment, the precision performance suggests implementing confidence thresholds or multi-
stage verification systems. By requiring higher confidence scores for disease detections, we can improve
precision at the expense of some recall, allowing farmers to customize system behavior based on their specific
risk tolerance and management preferences.
The precision results also inform training strategy improvements. The fluctuating pattern suggests
the model could benefit from additional negative example mining, hard example training, or class-specific loss
weighting to achieve more stable and higher precision performance. These enhancements would improve
farmer confidence in automated disease detection recommendations.
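The effect of a confidence threshold on precision and recall can be demonstrated with a few synthetic scored detections; none of the numbers below come from the study's validation set.

```python
def precision_recall_at(detections, total_positives, threshold):
    """Precision and recall over the detections kept at a confidence
    threshold. Each detection is (confidence, is_true_positive); the
    sample data below is synthetic, for illustration only."""
    kept = [tp for conf, tp in detections if conf >= threshold]
    tp = sum(kept)
    fp = len(kept) - tp
    precision = tp / (tp + fp) if kept else 1.0
    recall = tp / total_positives
    return precision, recall

dets = [(0.95, True), (0.90, True), (0.80, False), (0.70, True),
        (0.60, False), (0.50, False), (0.40, True)]
# Raising the threshold removes false positives first here, so precision
# rises while recall falls -- the trade-off discussed above.
p_low, r_low = precision_recall_at(dets, total_positives=5, threshold=0.40)
p_high, r_high = precision_recall_at(dets, total_positives=5, threshold=0.85)
assert p_high > p_low and r_high < r_low
```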
2. metrics/recall (b): The recall performance demonstrates strong capabilities, reaching 0.80-0.85, meaning
the model successfully identifies 80-85% of actual disease cases present in validation images. This high recall
performance is particularly valuable for agricultural applications where missing diseased plants poses greater
risks than false alarms. The strong recall indicates the model has learned to recognize diverse disease
presentations and environmental conditions.
High recall performance is crucial for preventing disease spread in agricultural settings. When the system
identifies most diseased plants, farmers can implement timely interventions that prevent disease progression
and protect healthy plants. Missing diseased plants could result in rapid disease spread, particularly for highly
contagious conditions that affect entire crops or orchards.
The recall trends throughout training show consistent improvement with relatively stable performance in later
epochs. This pattern indicates the model developed robust feature representations that generalize well to
diverse disease presentations. The stability suggests the training approach successfully balanced learning rate,
batch size, and regularization parameters for optimal recall performance.
However, the gap between the precision and recall performance indicates an important trade-off in the model's
behavior. While we successfully detect most diseased plants, we also generate false alarms that could lead to
unnecessary treatments. This trade-off reflects common challenges in medical and agricultural diagnostic
systems where sensitivity and specificity must be carefully balanced.
For deployment optimization, strong recall performance provides a solid foundation for agricultural disease
management systems. We could implement adaptive thresholding systems that adjust sensitivity based on
disease severity, crop growth stage, or seasonal risk factors. This approach would leverage the model's strong
detection capabilities while providing farmers flexibility in managing false positive rates.
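The adaptive-thresholding idea could be sketched as a confidence threshold that moves with risk factors; the functional form and weights below are purely illustrative assumptions.

```python
def detection_threshold(base=0.50, outbreak_risk=0.0, false_alarm_cost=0.0):
    """Lower the confidence threshold (more sensitive) when missing a
    disease is costly, and raise it when false alarms are costlier.
    Inputs are in [0, 1]; the weights are illustrative assumptions,
    not values calibrated in this study."""
    t = base - 0.20 * outbreak_risk + 0.20 * false_alarm_cost
    return min(0.90, max(0.10, t))

# High seasonal outbreak risk makes the detector more sensitive.
assert detection_threshold(outbreak_risk=1.0) < detection_threshold()
```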
Fig. 19. Mean Average Precision (mAP) Metrics
Fig. 19 illustrates the Mean Average Precision (mAP) metrics. Each is discussed in depth below:
1. metrics/mAP50 (b): The mean average precision at an IoU threshold of 0.5 reaches approximately 0.725,
representing strong overall detection performance that balances both precision and recall across all disease
classes. This metric provides a comprehensive evaluation of the model's capability to detect and classify plant
diseases with reasonable spatial accuracy. The steady improvement throughout training indicates consistent
learning progression without significant overfitting issues.
The mAP50 performance level places the model within acceptable ranges for practical agricultural
deployment. This metric integrates both detection accuracy and localization precision, providing a realistic
assessment of how the system would perform in real-world scenarios where perfect bounding box alignment is
less critical than reliable disease identification and approximate localization.
The training progression of mAP50 reveals important insights about the model's learning characteristics. The
smooth upward trend indicates stable learning without erratic fluctuations that might suggest training
instability or hyperparameter issues. This consistency suggests the training configuration was well-suited for
the complexity of the plant disease detection task.
For agricultural applications, mAP50 performance directly relates to system reliability and farmer confidence.
A score of 0.725 indicates the model provides consistent performance across diverse disease types and
environmental conditions, supporting automated decision-making processes while maintaining acceptable
accuracy levels for most farming operations.
The mAP50 results also inform the system design decisions regarding user interface and recommendation
algorithms. Strong performance enables automated alerts and treatment recommendations, while the remaining
performance gap suggests implementing confidence indicators and expert review mechanisms for critical
decisions affecting high-value crops.
2. metrics/mAP50-95 (b): The mean average precision across IoU thresholds 0.5 to 0.95 achieves
approximately 0.73, demonstrating robust detection performance even with strict spatial accuracy
requirements. This comprehensive metric evaluates the model's capability to provide precise disease
localization across varying degrees of overlap precision, which is essential for applications requiring exact
disease boundary identification.
The strong mAP50-95 performance indicates the model maintains consistent accuracy even when requiring
precise spatial alignment between predicted and actual disease boundaries. This capability is particularly
valuable for precision agriculture applications where exact treatment area identification can significantly
impact intervention effectiveness and resource utilization.
The relationship between mAP50 and mAP50-95 performance reveals important characteristics about the
model's spatial precision capabilities. The minimal gap between these metrics suggests the model provides
consistent localization accuracy across different precision requirements, indicating robust spatial learning that
generalizes well to diverse boundary detection scenarios.
For real-world deployment, strong mAP50-95 performance enables advanced agricultural applications
including automated spraying systems, disease progression monitoring, and precision harvesting decisions.
The high accuracy across strict IoU thresholds supports integration with robotic systems that require precise
spatial coordinates for effective operation.
The training progression of mAP50-95 demonstrates stable learning characteristics that support reliable
deployment. The consistent improvement without significant fluctuations indicates the model architecture and
training approach were well-suited for learning complex spatial relationships required for precise disease
detection and localization.
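For reference, both metrics reduce to simple averages once per-class average precision (AP) is available: mAP50 averages AP over classes at IoU 0.5, and mAP50-95 additionally averages over ten IoU thresholds. The per-class AP values below are synthetic, chosen only so the example reproduces the 0.725 figure discussed above; they are not the study's actual per-class results.

```python
def mean_ap(ap_per_class):
    """mAP at a single IoU threshold: the mean of per-class AP."""
    return sum(ap_per_class.values()) / len(ap_per_class)

def map_50_95(ap_per_class_per_iou):
    """mAP50-95: the mean of single-threshold mAPs evaluated at
    IoU = 0.50, 0.55, ..., 0.95 (ten thresholds in the full metric;
    two are shown here to keep the example small)."""
    maps = [mean_ap(aps) for aps in ap_per_class_per_iou]
    return sum(maps) / len(maps)

# Synthetic per-class AP values, for illustration only.
aps_at_50 = {"Mosaic Virus": 0.80, "Powdery Mildew": 0.65}
aps_at_95 = {"Mosaic Virus": 0.60, "Powdery Mildew": 0.45}
assert abs(mean_ap(aps_at_50) - 0.725) < 1e-9
assert abs(map_50_95([aps_at_50, aps_at_95]) - 0.625) < 1e-9
```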
E. Overall Assessment
The YOLOv8 training results demonstrated a well-performing plant disease detection system with particular
strengths in recall performance and spatial accuracy. The high recall values (0.80-0.85) indicate the model
successfully identifies most diseased plants, which is crucial for preventing disease spread in agricultural
environments. The strong mAP scores (0.725-0.73) demonstrate reliable overall performance that balances
detection accuracy with spatial precision requirements.
The precision performance (0.50-0.75), while acceptable, represents the most significant area for improvement
in the system. The moderate precision values suggest the model generates false positive detections that could
lead to unnecessary treatments in real-world deployment. However, this trade-off is often acceptable in
agricultural applications where missing diseased plants poses greater risks than over-treatment, particularly
given the potential for rapid disease spread.
The loss progression patterns reveal important insights about the model's learning characteristics and
generalization capabilities. The consistent decline in training losses indicates effective learning, while the gaps
between training and validation losses suggest some degree of overfitting that could be addressed through
enhanced regularization or dataset expansion strategies.
For practical deployment, the results support reliable disease detection capabilities suitable for most
agricultural applications. The strong recall performance enables effective disease surveillance and early
warning systems, while the moderate precision suggests implementing confidence-based recommendations or
multi-stage verification processes for critical treatment decisions.
Future improvements could focus on enhancing precision through techniques such as hard negative mining,
class-specific loss weighting, or ensemble methods that combine multiple model predictions. The current
performance provides a solid foundation for agricultural deployment while indicating clear pathways for
system enhancement and optimization.
CONCLUSIONS AND RECOMMENDATIONS
A. Conclusion
In retrospect, the study effectively achieved its main objectives of developing an integrated web-based
application and a user-friendly, reasonably priced diagnostic tool that could monitor plant health by utilizing
real-time environmental data and advanced image processing. A strong basis for data collection was
established by integrating the camera modules of the Raspberry Pi 5 with sensor technologies, particularly soil
moisture, temperature, and humidity sensors. Together, these elements provided the reliable, fast, and
consistent information required for the early identification of plant diseases, empowering agricultural
stakeholders to take preventative and proactive measures.
The application of the YOLOv8 object detection algorithm greatly improved the classification accuracy of visual disease symptoms. The system was able to detect and distinguish among various diseases, including Mosaic Virus, Powdery Mildew, and Anthracnose. Model training showed that loss values gradually decreased while key metrics, including precision, recall, and Mean Average Precision (mAP), continuously improved. These outcomes confirm the algorithm's ability to generalize across varied datasets and demonstrate its usefulness in real agricultural settings.
Additionally, the web-based platform significantly increased user engagement and accessibility. It enabled long-term storage of diagnostic results, interactive graphical data displays, and centralized real-time monitoring, allowing users, especially farmers, agricultural workers, and researchers, to make informed judgments about crop care and disease prevention. The incorporation of AI Luntian, an intelligent conversational agent, greatly increased the system's usefulness by providing timely, context-aware help with plant health, disease symptoms, and suggested interventions, all based on the sensor and image data the system gathered.
Despite these positive outcomes, the approach has certain limitations. Its effectiveness depends on the variety and size of the training dataset, as well as on the quality and resolution of the captured images. Under the existing system, diseases that are asymptomatic in their early stages or show only subtle signs may still go undetected.
Furthermore, the accuracy of disease recognition can occasionally be hampered by environmental conditions, including lighting, occlusion, and image noise. These limitations highlight the need for continued research and development to improve the robustness and dependability of the system.
The study, however, marked a significant breakthrough in precision agriculture. The system provided a scalable
solution that facilitates early intervention, resource management, and increased agricultural output by
combining web-based technologies, inexpensive hardware, and artificial intelligence. It also laid the
groundwork for future enhancements such as mobile app integration, offline functionality for remote locations,
support for a wider array of plant species and diseases, and the use of multispectral or thermal imaging to
detect pre-symptomatic or internal signs of plant stress.
Ultimately, by providing an intelligent monitoring platform that promotes data-driven decision-making and
supports resilient agricultural practices in the face of obstacles like climate change, pest outbreaks, and food
insecurity, this research advances the larger objective of sustainable farming. With further development, this
system has the potential to become an indispensable tool in the toolbox of modern, tech-enabled farmers
around the world.
B. Recommendations
1. In the Live Plant Disease Detection feature, once the "Capture" button is clicked, the captured image
along with the identified disease information should be automatically saved to the database, further
enhancing the functionality and efficiency of the web-based system.
2. Collect and add more plant datasets to improve the accuracy of disease detection.
3. Integrate a solar charging system to improve energy efficiency and ensure sustainability, especially in
remote or off-grid areas.
4. Establish partnerships with other farmers and agricultural industries to expand the reach and practical
application of the system.
5. Expand the detection system to support multiple plant types, enabling it to identify diseases in various
crops instead of focusing on a single species.
6. Encourage collaboration with educational institutions for testing, research, and further system development.
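Recommendation 1 could be prototyped with a small persistence routine such as the sketch below; the table and column names are hypothetical, not the schema of the deployed LUNTIAN system.

```python
import os
import sqlite3
import tempfile
from datetime import datetime, timezone

def save_detection(db_path, image_bytes, disease, confidence):
    """Persist a captured frame and its diagnosis (recommendation 1).
    The table and column names here are hypothetical, not the schema
    of the deployed system."""
    with sqlite3.connect(db_path) as con:
        con.execute("""CREATE TABLE IF NOT EXISTS detections (
                           id INTEGER PRIMARY KEY AUTOINCREMENT,
                           captured_at TEXT NOT NULL,
                           disease TEXT NOT NULL,
                           confidence REAL NOT NULL,
                           image BLOB NOT NULL)""")
        con.execute("INSERT INTO detections (captured_at, disease, confidence, image) "
                    "VALUES (?, ?, ?, ?)",
                    (datetime.now(timezone.utc).isoformat(), disease,
                     confidence, image_bytes))

# Demo: write one capture into a throwaway database and read it back.
db = os.path.join(tempfile.mkdtemp(), "luntian.db")
save_detection(db, b"\x89PNG...", "Powdery Mildew", 0.91)
with sqlite3.connect(db) as con:
    stored = con.execute("SELECT disease, confidence FROM detections").fetchall()
assert stored == [("Powdery Mildew", 0.91)]
```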
ACKNOWLEDGMENT
This study was virtually presented at the 6th International Conference on Education, Environment, and Agriculture, held on September 18-19, 2025, at Warmadewa University, Bali, Indonesia.