International Journal of Research and Innovation in Applied Science (IJRIAS)


pp. 1590-1597 | Sep 23, 2025 | Education

Optical Satellite Imagery Classification Approaches for Vegetation Mapping

1*Mustapha Aliyu, 2Isa Yunusa Chedi

1National Space Research & Development Agency, Obasanjo Space Centre, Musa Yar’Adua Way, Lugbe, Abuja, Nigeria

2National Oil Spill Detection & Response Agency, Abuja, Nigeria

*Corresponding Author

DOI: https://doi.org/10.51584/IJRIAS.2025.100800138

Received: 08 July 2025; Accepted: 14 July 2025; Published: 23 September 2025

ABSTRACT

This study reviews optical satellite image classification methods for identifying and classifying vegetation types, focusing on reliable techniques and methodologies. Traditional pixel-based approaches have limitations, particularly with high spatial resolution images, where the “salt-and-pepper effect” and “H-resolution problem” can occur. Object-based Image Analysis (OBIA) has emerged as a powerful alternative, grouping pixels into spectrally homogeneous objects and leveraging spatial and contextual features to improve accuracy. Both supervised and unsupervised classification methods are examined, with supervised classification offering greater precision and control, while unsupervised classification provides flexibility and supports exploratory data analysis. Machine learning algorithms, such as Support Vector Machines (SVM), Random Forest (RF), and Artificial Neural Networks (ANN), have demonstrated superior applicability and achieved higher classification accuracy in vegetation research. Deep learning architectures, including Convolutional Neural Networks (CNN), U-Net, and ResNet, have proven highly effective in extracting complex features from high-dimensional data. The study highlights that no single method is universally superior; the most effective approach is determined by the intrinsic properties of the data and the precise objectives of the classification endeavor. Neural Networks (NN) generally demonstrate the highest median overall accuracy, and deep learning models frequently achieve superior overall accuracy, while traditional machine learning algorithms remain widely adopted and deliver satisfactory results. This study contributes to the development of more accurate and efficient methods for vegetation identification and classification using optical satellite images, and has implications for remote sensing applications in environmental monitoring and management.

Keywords: Optical satellite image classification, Vegetation mapping, Object-based Image Analysis (OBIA), Machine learning (SVM, RF, ANN), Deep learning (CNN, U-Net, ResNet)

INTRODUCTION

The process of classifying vegetation from optical satellite imagery involves converting discernible spectral characteristics into identifiable vegetation types, a procedure commonly referred to as image classification (Kavzoglu et al., 2024). The selection of an appropriate methodology is critical, as it profoundly influences the accuracy and efficiency of vegetation identification (Lu et al., 2024). In this study, optical satellite image classification methods are reviewed, focusing on reliable techniques for identifying and classifying vegetation types.

Traditional classification methods have historically relied on a pixel-based approach, where each individual pixel within an image is classified independently based solely on its spectral signature (Li & Wan, 2015). This method proves effective for monitoring broad land use changes over short periods and for applications requiring complete data coverage. However, a significant drawback of pixel-based techniques, particularly when applied to high spatial resolution images (ranging from 1 to 10 meters), is the emergence of the “salt-and-pepper effect” and the “H-resolution problem” (Anderson, 2020). These issues arise because higher resolution imagery captures increased intra-class spectral variability, leading to individual pixels being misclassified relative to their neighboring pixels, creating a noisy appearance in the final map (Derksen, 2019).
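The pixel-based approach and the salt-and-pepper effect it produces can be illustrated with a minimal sketch. The two-band image, class means, and nearest-class-mean rule below are synthetic stand-ins, not drawn from any of the reviewed studies; they only show each pixel being classified independently by its spectral signature, with intra-class noise causing isolated misclassifications:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-band image: left half "grass", right half "forest",
# with spectral noise mimicking intra-class variability.
h, w = 20, 20
img = np.zeros((h, w, 2))
img[:, :10] = [0.3, 0.6] + 0.15 * rng.standard_normal((h, 10, 2))
img[:, 10:] = [0.1, 0.4] + 0.15 * rng.standard_normal((h, 10, 2))

# Pixel-based classification: assign each pixel to the nearest class mean,
# using only its own spectral values (no spatial context).
means = np.array([[0.3, 0.6], [0.1, 0.4]])   # grass = 0, forest = 1
dists = np.linalg.norm(img[..., None, :] - means, axis=-1)
labels = dists.argmin(axis=-1)

truth = np.zeros((h, w), dtype=int)
truth[:, 10:] = 1
print("pixel-based accuracy:", (labels == truth).mean())
# The scattered misclassified pixels inside each half are
# exactly the "salt-and-pepper effect" described above.
```

Because no neighbourhood information is used, noisy pixels flip class independently of their surroundings, which is what object-based methods address.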

To overcome these limitations, Object-based Image Analysis (OBIA), also known as Geographic Object-based Image Analysis (GEOBIA), has emerged as a powerful alternative (Blaschke et al., 2014). Instead of classifying individual pixels, OBIA groups spatially contiguous pixels into spectrally homogeneous “objects” and then performs classification on these objects as the fundamental processing units. This approach effectively reduces local spectral variation caused by phenomena such as crown textures, gaps in vegetation, and shadows; by treating meaningful, homogeneous objects as the units of analysis, it enables the effective utilization of spatial and contextual features (Kucharczyk et al., 2020).

Furthermore, OBIA explicitly leverages not only spectral values but also spatial properties, including the size and shape of objects, as features for classification, leading to substantial improvements in accuracy (Jafarbiglu, 2023). The typical OBIA workflow encompasses image segmentation (often using techniques like the Fractal Net Evolution Approach – FNEA), followed by feature generation and selection, and finally, classification using methods such as nearest neighbor algorithms (Rajbhandari, 2019).
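The feature-generation and classification stages of this workflow can be sketched as follows. The segment map, band values, and class signatures are hypothetical stand-ins for the outputs of a real segmentation step such as FNEA; the sketch only illustrates computing per-object features (mean spectrum plus a shape/size property) and classifying the objects by nearest neighbour:

```python
import numpy as np

# A toy segment map (what a segmentation step such as FNEA would output):
# three objects, each a block of contiguous pixels.
segments = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 2, 2],
])
# One spectral band (NDVI-like values) with within-object noise.
band = np.array([
    [0.82, 0.78, 0.31, 0.29],
    [0.80, 0.79, 0.33, 0.30],
    [0.55, 0.52, 0.54, 0.53],
])

# Feature generation: per-object mean spectrum and object size (pixel count).
obj_ids = np.unique(segments)
features = []
for oid in obj_ids:
    mask = segments == oid
    features.append([band[mask].mean(), mask.sum()])
features = np.array(features)

# Nearest-neighbour classification against labelled reference objects
# (hypothetical class signatures: [mean band value, size]).
refs = np.array([[0.8, 4], [0.3, 4], [0.5, 4]])
classes = ["dense vegetation", "bare soil", "sparse vegetation"]
for oid, feat in zip(obj_ids, features):
    k = np.linalg.norm(refs - feat, axis=1).argmin()
    print(f"object {oid}: {classes[k]}")
```

Averaging over each object suppresses the within-object noise that would mislead a per-pixel classifier, which is the core benefit OBIA trades against the cost of the segmentation step.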

MATERIALS AND METHOD

In this study, optical satellite image classification methods are reviewed, focusing on reliable techniques for identifying and classifying vegetation types. A systematic literature review was conducted, using Google Scholar as the primary search engine. The search was initially restricted to publications from the past eight years (2018 to date), yielding forty-two studies within this time frame; five earlier studies were included where relevant. This approach enabled the identification of the most reliable techniques and methodologies for vegetation identification and classification.

Supervised and Unsupervised Classification

Within the realm of land use classification from satellite images, two primary methodological categories are recognized: supervised and unsupervised classification (Talukdar et al., 2020). In supervised classification, the user provides training sample areas with known class labels to guide the classification algorithm (Richards & Richards, 2022). This method offers greater precision and control over the output but can be time-intensive due to the need for careful training data selection and refinement; it requires the analyst to delineate expert-defined areas of known vegetation types, which are then used to train and calibrate the classification algorithms (Moraes et al., 2024). In contrast, unsupervised classification employs clustering algorithms to identify natural groupings or classes within the data without requiring prior user input. This approach offers greater flexibility and is particularly valuable for exploratory data analysis, as it can reveal unexpected patterns. While it may sometimes yield lower accuracy than supervised methods, its automated nature can be advantageous when time is a constraint (Wu, 2018). An advanced form of unsupervised clustering is the Iterative Self-Organizing Data Analysis Technique (ISODATA), which dynamically adjusts the number of clusters through split or merge operations and is widely applied in agricultural remote sensing for distinguishing crop types (Rivera Rivas et al., 2022).
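ISODATA’s split/merge behaviour can be illustrated with a deliberately simplified one-dimensional sketch. The reflectance samples and thresholds below are synthetic, and real implementations add further rules (minimum cluster size, maximum cluster count); the point is only to show k-means-style assignment combined with dynamic merging of close clusters and splitting of high-variance ones:

```python
import numpy as np

def isodata_1d(x, k_init=2, merge_dist=0.5, split_std=1.0, iters=10):
    """Simplified ISODATA on 1-D samples: k-means assignment plus
    dynamic merge (close centres) and split (high-variance clusters)."""
    centres = np.linspace(x.min(), x.max(), k_init)
    for _ in range(iters):
        # k-means step: assign samples to the nearest centre, recompute means
        labels = np.abs(x[:, None] - centres[None, :]).argmin(axis=1)
        centres = np.array([x[labels == i].mean()
                            for i in range(len(centres)) if np.any(labels == i)])
        # merge step: collapse centres closer than merge_dist
        centres = np.sort(centres)
        keep = [centres[0]]
        for c in centres[1:]:
            if c - keep[-1] < merge_dist:
                keep[-1] = (keep[-1] + c) / 2
            else:
                keep.append(c)
        centres = np.array(keep)
        # split step: divide clusters whose spread exceeds split_std
        labels = np.abs(x[:, None] - centres[None, :]).argmin(axis=1)
        new = []
        for i, c in enumerate(centres):
            pts = x[labels == i]
            if len(pts) > 1 and pts.std() > split_std:
                new += [c - pts.std() / 2, c + pts.std() / 2]
            else:
                new.append(c)
        centres = np.array(new)
    return np.sort(centres)

# Three well-separated "crop type" reflectance clusters; we start with
# k=2 and let the split/merge operations discover the third.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(m, 0.2, 100) for m in (1.0, 4.0, 7.0)])
print(isodata_1d(x))
```

Starting from two centres, the split step breaks the over-broad clusters apart and the merge step collapses redundant centres, so the algorithm settles on three clusters near 1.0, 4.0, and 7.0 without the analyst fixing K in advance.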

Machine Learning Algorithms

Machine learning models have demonstrated superior applicability and achieved higher classification accuracy in vegetation research when compared to traditional quantitative approaches (Kasahun & Legesse, 2024). These algorithms offer robust solutions for handling complex datasets and intricate patterns inherent in remote sensing imagery (Miao et al., 2024).

  • Support Vector Machines (SVM): SVMs are extensively utilized for high-dimensional datasets and image classification. They exhibit strong performance even with limited training sets by identifying a hyperplane that maximizes the margin between different classes. SVMs effectively manage noise and outliers and frequently outperform other methods such as maximum likelihood classifiers (Chang et al., 2025). However, challenges persist in optimal parameter selection and in interdisciplinary application.
  • Random Forest (RF): RF is a powerful machine learning technique that constructs an ensemble of numerous decision trees. It is widely applied for image classification and land cover extraction due to its proficiency in managing high-dimensional and large datasets, thereby effectively mitigating the risk of overfitting (Jarocińska et al., 2023). Despite its strengths, RF can suffer from a lack of interpretability and sensitivity to parameter tuning (Luo et al., 2019).
  • Artificial Neural Networks (ANN): ANNs have played a significant role in remote sensing classification for many years (Giuffrida et al., 2020). The Spectral Characteristics and Artificial Neural Network (SCANN) method, which leverages specific spectral reflectance characteristics, demonstrated an impressive accuracy exceeding 94% for vegetation species determination (Chaity & van Aardt, 2024).
  • Maximum Likelihood (ML): A long-standing and fundamental method in remote sensing, ML operates under the assumption that spectral signatures adhere to a normal distribution. It is effective for classifying land cover and vegetation, particularly in agricultural contexts (El-Omairi & El Garouani, 2023). However, its effectiveness hinges on accurate estimates of class means and covariances, which can be challenging with limited or noisy training data. Among common classification methods, ML generally exhibits the lowest median overall accuracy (approximately 86.00%) but can achieve high accuracy (95.93%) when used with RGB sensors (Cai & Koide, 2023).
  • K-Nearest Neighbors (K-NN): K-NN is a straightforward yet effective method that classifies data points by assigning them to the majority class of their nearest neighbors. Its performance is contingent on the chosen ‘k’ value and distance metric, and it can be computationally intensive for very large datasets. K-NN demonstrated a median accuracy of 90.19% in relevant studies (Cunningham & Delany, 2021; Zhang, 2021).
  • Decision Tree (DT): DT algorithms construct a hierarchical, tree-like structure to recursively partition the feature space, offering both interpretability and operational efficiency. A notable drawback is its susceptibility to overfitting, especially when dealing with noisy or high-dimensional data (Azam et al., 2023; Li et al., 2024).
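To make the mechanics behind one of these classifiers concrete, here is a minimal numpy sketch of the maximum likelihood classifier described above: each class is modelled as a multivariate normal, with mean and covariance estimated from training pixels, and new pixels are assigned to the class with the highest log-likelihood. The two-band training data are synthetic; a real workflow would estimate these statistics from expert-labelled training areas:

```python
import numpy as np

class MaxLikelihoodClassifier:
    """Maximum likelihood classifier: assumes each class's spectral
    signature follows a multivariate normal distribution, estimated
    from training pixels (per-class mean vector and covariance)."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_, self.inv_covs_, self.logdets_ = [], [], []
        for c in self.classes_:
            Xc = X[y == c]
            cov = np.cov(Xc, rowvar=False)
            self.means_.append(Xc.mean(axis=0))
            self.inv_covs_.append(np.linalg.inv(cov))
            self.logdets_.append(np.linalg.slogdet(cov)[1])
        return self

    def predict(self, X):
        # Discriminant: Gaussian log-likelihood under each class model
        scores = []
        for mu, icov, logdet in zip(self.means_, self.inv_covs_, self.logdets_):
            d = X - mu
            mahal = np.einsum("ij,jk,ik->i", d, icov, d)  # Mahalanobis distance
            scores.append(-0.5 * (logdet + mahal))
        return self.classes_[np.argmax(scores, axis=0)]

# Synthetic 2-band training pixels for two vegetation classes
rng = np.random.default_rng(2)
X = np.vstack([rng.normal([0.2, 0.6], 0.05, (200, 2)),
               rng.normal([0.5, 0.3], 0.05, (200, 2))])
y = np.repeat([0, 1], 200)
clf = MaxLikelihoodClassifier().fit(X, y)
print("training accuracy:", (clf.predict(X) == y).mean())
```

The dependence on inverting the per-class covariance matrix is exactly why ML degrades with limited or noisy training data, as noted in the bullet above.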

Deep Learning Architectures

Deep learning methods are at the forefront of remote sensing analysis, offering high predictive accuracy by autonomously learning relevant data features in an end-to-end manner. These approaches have proven highly effective in extracting complex, nonlinear feature representations from high-dimensional data (Han et al., 2023).

Convolutional Neural Networks (CNN): CNNs are exceptionally effective in capturing spatial patterns and extracting a diverse range of vegetation properties from remote sensing imagery. They consistently outperform traditional shallow machine learning methods, particularly by exploiting the intricate spatial patterns present in very high spatial resolution data. CNNs have revolutionized image processing and represent a pivotal direction in deep learning research for remote sensing applications (Khan et al., 2018), excelling at learning complex spatial patterns from high-resolution data (Alzubaidi et al., 2021).
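The convolution operation that gives CNNs this ability to exploit spatial patterns can be sketched in a few lines of numpy. Here a hand-crafted Sobel edge kernel stands in for the many filters a trained CNN would learn from data; the synthetic image is a vertical boundary of the kind that separates a vegetation patch from bare soil:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (cross-correlation, as used in CNN layers)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output value is a weighted sum over a local window,
            # so the response depends on spatial context, not single pixels.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((6, 6))
image[:, 3:] = 1.0                       # vertical boundary at column 3
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
response = conv2d(image, sobel_x)
print(response)                          # strong response along the boundary
```

A CNN stacks many such filters, learned end-to-end, with nonlinearities and pooling in between, which is what lets it capture the intricate spatial patterns of very high resolution imagery.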

U-Net and ResNet Architectures:

  • U-Net: This architecture is a fully convolutional network built upon an encoder-decoder structure, widely adopted in remote sensing image segmentation due to its clear segmentation logic and high efficiency (Khan & Jung, 2024). U-Net can effectively delineate polygonal and fragmented forest areas, achieving high overall classification accuracy (e.g., 94.7% for forest vegetation classification) (Wagner et al., 2019). However, its capacity to extract deep abstract information from hyperspectral images can be limited, leading to issues such as uneven edges and misclassification (Bidari et al., 2024).
  • Res-UNet: This architecture enhances the U-Net by incorporating residual connections, which significantly improves the network’s feature learning capability by allowing for deeper network layers without degradation (Maqsood et al., 2025). This design facilitates superior integration of global features while preserving high-resolution semantic information, thereby improving the accuracy of edge segmentation for ground objects (Tan et al., 2024).
  • ResNet-18: This specific deep learning model utilizes linear spectral mixture analysis and spectral indices to extract pixels, demonstrating effective overall classification accuracy in Landsat 8 OLI images (Singh & Tyagi, 2021).
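The residual connection that distinguishes Res-UNet and ResNet from a plain network can be sketched as follows. This is a toy fully-connected block in numpy (the real architectures use convolutional layers); it shows the defining property that a block whose weights are near zero passes its input through unchanged, which is what allows very deep stacks without degradation:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Residual unit: output = activation(F(x) + x). The identity shortcut
    means the layers only have to learn a residual correction to x."""
    return relu(relu(x @ w1) @ w2 + x)

rng = np.random.default_rng(3)
x = relu(rng.standard_normal(8))   # a non-negative feature vector

# With zero residual weights, even 50 stacked blocks return x exactly:
# the shortcut carries the signal through undegraded.
w_zero = np.zeros((8, 8))
out = x
for _ in range(50):
    out = residual_block(out, w_zero, w_zero)
print(np.allclose(out, x))   # True
```

A plain 50-layer stack with the same zero weights would collapse the signal to zero; the shortcut is what preserves high-resolution semantic information through deep networks, as described above.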

Stacked Autoencoders (SAE) and Deep Belief Networks (DBN):

  • Stacked Sparse Autoencoder (SAE): SAEs offer robust learning performance for extracting abstract and high-level feature representations from both spectral and spatial domains (Yan & Han, 2018). For instance, an SAE classifier trained for African land-cover mapping achieved an overall accuracy of 78.99%, surpassing the performance of Random Forest (RF), Support Vector Machine (SVM), and Artificial Neural Network (ANN) (Li et al., 2016).
  • Deep Belief Networks (DBN): DBNs are a widely studied deep learning architecture that emulates the hierarchical structure of the human brain, progressively extracting features from lower to higher levels of abstraction. DBN-based methods have been shown to outperform other approaches, yielding more homogeneous mapping results with well-preserved shape details (Ji et al., 2014).
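The autoencoder principle behind SAE-style feature learning can be sketched with a tiny linear autoencoder trained by gradient descent. The 4-band "pixels" below are synthetic and deliberately lie on a 2-D subspace, so a 2-feature bottleneck can reconstruct them; a real SAE stacks nonlinear, sparsity-regularised layers, which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(4)
# 500 synthetic 4-band pixels that actually live on a 2-D subspace
X = rng.standard_normal((500, 2)) @ rng.standard_normal((2, 4))

W_enc = 0.1 * rng.standard_normal((4, 2))   # encoder: 4 bands -> 2 features
W_dec = 0.1 * rng.standard_normal((2, 4))   # decoder: 2 features -> 4 bands
lr = 0.05

def recon_loss(X, W_enc, W_dec):
    return ((X @ W_enc @ W_dec - X) ** 2).mean()

loss_before = recon_loss(X, W_enc, W_dec)
for _ in range(500):
    Z = X @ W_enc                       # abstract low-dimensional features
    R = Z @ W_dec                       # reconstruction from those features
    G = 2 * (R - X) / X.size            # gradient of the loss w.r.t. R
    W_enc -= lr * X.T @ (G @ W_dec.T)   # backpropagate through the decoder
    W_dec -= lr * Z.T @ G
loss_after = recon_loss(X, W_enc, W_dec)
print(f"reconstruction loss: {loss_before:.3f} -> {loss_after:.4f}")
```

Because reconstruction forces the bottleneck features to summarise the spectral information, the encoder output can then serve as the abstract, high-level representation that an SAE feeds to a classifier.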

A comparative analysis of the machine learning and deep learning algorithms commonly employed for vegetation classification from optical satellite imagery is presented in Table 1, which highlights their mechanistic approaches, strengths, limitations, typical accuracy ranges, and key applications. In addition to the studies cited above, De Kok et al. (1999) and Wang et al. (2023) provide related analyses.

Table 1: Comparison of Key Vegetation Classification Algorithms

| Algorithm | Mechanistic Approach | Strengths | Limitations | Typical Accuracy Range (OA) | Key Applications |
|---|---|---|---|---|---|
| K-means | Unsupervised clustering; identifies patterns based on statistical similarities | Flexible; good for exploratory data analysis; identifies unexpected patterns | Sensitive to noise/outliers; determining optimal K is crucial | Lower than supervised methods | General classification, exploratory data analysis |
| ISODATA | Enhanced K-means; dynamically determines cluster numbers through split/merge operations | More flexible than K-means; widely used for distinguishing crop types | Can be combined with ML for refinement | — | Agricultural remote sensing, crop type distinction |
| SVM | Supervised; maximizes margin between classes in high-dimensional space | Strong performance even with small training sets; handles noise/outliers well | Parameter selection challenges; interdisciplinary application difficulties | High (often outperforms ML) | High-dimensional datasets, image classification, land cover extraction |
| RF | Supervised; aggregates numerous decision trees | Robust; handles high-dimensional/large datasets; mitigates overfitting | Lack of interpretability; sensitivity to parameter selection | High (76.03% to 98%) | Image classification, land cover extraction |
| ANN/NN | Supervised; learns features through interconnected layers | Revolutionized classification; effective feature extraction | Initial weight selection; slow convergence | Highest median (92.55%); up to 93.02% with multispectral | Agricultural identification, general classification |
| ML | Supervised; assumes spectral signatures follow normal distribution | Effective for land cover/vegetation classification | Requires accurate class means/covariances; challenging with limited/noisy data | Lowest median (86.00%); up to 95.93% with RGB sensors | Land cover, vegetation classification in agriculture |
| K-NN | Supervised; classifies based on majority class of nearest neighbors | Simple yet effective | Performance depends on K and distance metric; computationally costly for large datasets | Median 90.19% | General classification |
| DT | Supervised; recursively partitions feature space into tree-like structure | Interpretability; operational efficiency | Prone to overfitting with noisy/high-dimensional data | — | General classification |
| CNN | Deep learning; learns hierarchical spatial patterns through convolutional layers | Highly effective for spatial patterns; extracts wide array of properties; outperforms shallow ML | Requires substantial labeled data | High | Semantic segmentation, high-resolution data analysis |
| U-Net | Deep learning; encoder-decoder structure for semantic segmentation | Concise logic; excellent efficiency; identifies polygonal/fragmented areas | Limited deep abstract information extraction from hyperspectral; uneven edges | High (e.g., 94.7%) | Remote sensing image segmentation, forest vegetation classification |
| Res-UNet | Deep learning; U-Net with residual connections | Enhanced feature learning; deeper layers; integrates global features; improves edge segmentation | — | — | Hyperspectral image vegetation classification |
| SAE | Deep learning; learns abstract, high-level features from spectral/spatial domains | Strong learning performance | — | 78.99% for land-cover mapping | Large-scale land-cover mapping |
| DBN | Deep learning; mimics hierarchical brain structure for feature extraction | Produces homogeneous mapping results with preserved shape details | — | Outperforms SVM, NN, SEM | Urban LULC mapping, remote sensing image classification |

CONCLUSION

Overall, Neural Networks (NN) generally demonstrate the highest median overall accuracy (92.55%) (Adejumobi et al., 2024) and lower variability among common classification methods, proving particularly effective in agricultural identification. They achieve optimal performance when applied to multispectral sensors, with a median accuracy of 93.02% (Hassan et al., 2024).

The selection of classification methodology is increasingly influenced by the specific characteristics of the data, such as resolution and dimensionality, and by the unique requirements of the application, rather than a singular “best” method (Du et al., 2020). While deep learning models, including Neural Networks and Convolutional Neural Networks, frequently achieve superior overall accuracy (Li et al., 2021), traditional machine learning algorithms like Random Forest and Support Vector Machines remain widely adopted and deliver satisfactory results in many contexts. The distinction between supervised classification, favored for its precision, and unsupervised classification, valued for exploratory analysis (Fazil et al., 2023), further underscores this adaptive approach. Moreover, evidence suggests that different algorithms may perform optimally in specific scenarios, and the inherent modularity within contemporary deep learning frameworks allows for significant flexibility in adapting architectures to diverse problems (Panzer & Gronau, 2024). This indicates that a universally superior method does not exist. Instead, the most effective approach is determined by the intrinsic properties of the data (e.g., high-dimensional hyperspectral data may benefit more from specialized deep learning architectures like U-Net, whereas simpler tasks might be adequately addressed by RF or SVM) and the precise objectives of the classification endeavor (e.g., prioritizing speed over accuracy, or the need for model interpretability) (Haidarh et al., 2025). This trend highlights a growing emphasis on developing tailored solutions and integrating hybrid approaches to leverage the strengths of various methodologies.

REFERENCES

  1. Adejumobi, P. O., Ojo, J. A., Adejumobi, I. O., Adebisi, O. A., & Ayanlade, S. O. (2024). Development of a sorting system for mango fruit varieties using convolutional neural network. International Journal of Computational Science and Engineering, 28(1), 87-99. https://doi.org/10.1504/IJCSE.2025.143466
  2. Anderson, R. O. D. (2020). High resolution remote sensing of eelgrass (Zostera marina) in South Slough, Oregon [Thesis, University of Oregon].
  3. Azam, Z., Islam, M. M., & Huda, M. N. (2023). Comparative analysis of intrusion detection systems and machine learning-based model analysis through decision tree. IEEE Access, 11, 80348-80391.
  4. Bidari, I., Chickerur, S., & Kadam, S. (2024). Enhancing change detection in hyperspectral images: A semi-supervised approach with U-Net and attention mechanism. In Computer Science Engineering (pp. 71-80). CRC Press.
  5. Blaschke, T., Hay, G. J., Kelly, M., Lang, S., Hofmann, P., Addink, E., Feitosa, R. Q., Van der Meer, F., Van der Werff, H., & Van Coillie, F. (2014). Geographic object-based image analysis–towards a new paradigm. ISPRS journal of photogrammetry and remote sensing, 87, 180-191.
  6. Cai, X., & Koide, H. (2023). New Perspectives on Data Exfiltration Detection for Advanced Persistent Threats Based on Ensemble Deep Learning Tree. WEBIST.
  7. Chaity, M. D., & van Aardt, J. (2024). Exploring the limits of species identification via a convolutional neural network in a complex forest scene through simulated imaging spectroscopy. Remote Sensing, 16(3), 498.
  8. Chang, B., Li, F., Hu, Y., Yin, H., Feng, Z., & Zhao, L. (2025). Application of UAV remote sensing for vegetation identification: a review and meta-analysis. Front Plant Sci, 16, 1452053. https://doi.org/10.3389/fpls.2025.1452053
  9. Cunningham, P., & Delany, S. J. (2021). K-nearest neighbour classifiers-a tutorial. ACM computing surveys (CSUR), 54(6), 1-25.
  10. De Kok, R., Schneider, T., Baatz, M., & Ammer, U. (1999). Object based image analysis of high resolution data in the alpine forest area. Joint Workshop for ISPRS WG I/1, I/3 and IV/4, Sensors and Mapping from Space, Hanover, Germany.
  11. Derksen, D. (2019). Contextual classification of large volumes of satellite imagery for the production of land cover maps over wide areas [Thesis, Université Paul Sabatier-Toulouse III].
  12. Du, P., Bai, X., Tan, K., Xue, Z., Samat, A., Xia, J., Li, E., Su, H., & Liu, W. (2020). Advances of four machine learning methods for spatial data handling: A review. Journal of Geovisualization and Spatial Analysis, 4, 1-25.
  13. El-Omairi, M. A., & El Garouani, A. (2023). A review on advancements in lithological mapping utilizing machine learning algorithms and remote sensing data. Heliyon, 9(9).
  14. Fazil, A. W., Hakimi, M., Akbari, R., Quchi, M. M., & Khaliqyar, K. Q. (2023). Comparative analysis of machine learning models for data classification: An in-depth exploration. Journal of Computer Science and Technology Studies, 5(4), 160-168.
  15. Giuffrida, G., Diana, L., de Gioia, F., Benelli, G., Meoni, G., Donati, M., & Fanucci, L. (2020). CloudScout: A Deep Neural Network for On-Board Cloud Detection on Hyperspectral Images. Remote Sensing, 12(14), 2205. https://www.mdpi.com/2072-4292/12/14/2205
  16. Haidarh, M., Mu, C., Liu, Y., & He, X. (2025). Exploring traditional, deep learning and hybrid methods for hyperspectral image classification: A review. Journal of Information and Intelligence. https://doi.org/10.1016/j.jiixd.2025.04.002
  17. Han, W., Zhang, X., Wang, Y., Wang, L., Huang, X., Li, J., Wang, S., Chen, W., Li, X., & Feng, R. (2023). A survey of machine learning and deep learning in remote sensing of geological environment: Challenges, advances, and opportunities. ISPRS journal of photogrammetry and remote sensing, 202, 87-113.
  18. Hassan, N., Musa Miah, A. S., & Shin, J. (2024). Residual-Based Multi-Stage Deep Learning Framework for Computer-Aided Alzheimer’s Disease Detection. Journal of Imaging, 10(6), 141. https://www.mdpi.com/2313-433X/10/6/141
  19. Jafarbiglu, H. (2023). Quantitative Adjustment of Sun-View Geometry in Areal Remote Sensing. University of California, Davis.
  20. Jarocińska, A., Marcinkowska-Ochtyra, A., & Ochtyra, A. (2023). An Overview of the Special Issue “Remote Sensing Applications in Vegetation Classification”. Remote Sensing, 15(9), 2278. https://www.mdpi.com/2072-4292/15/9/2278
  21. Ji, N., Zhang, J.-S., & Zhang, C.-X. (2014). A sparse-response deep belief network based on rate distortion theory. Pattern Recognition, 47, 3179–3191. https://doi.org/10.1016/j.patcog.2014.03.025
  22. Kasahun, M., & Legesse, A. (2024). Machine learning for urban land use/cover mapping: Comparison of artificial neural network, random forest and support vector machine, a case study of Dilla town. Heliyon, 10(20), e39146. https://doi.org/10.1016/j.heliyon.2024.e39146
  23. Kavzoglu, T., Tso, B., & Mather, P. M. (2024). Classification methods for remotely sensed data. CRC press.
  24. Khan, B. A., & Jung, J.-W. (2024). Semantic segmentation of aerial imagery using u-net with self-attention and separable convolutions. Applied Sciences, 14(9), 3712.
  25. Khan, S., Rahmani, H., Shah, S. A. A., Bennamoun, M., Medioni, G., & Dickinson, S. (2018). A guide to convolutional neural networks for computer vision.
  26. Kucharczyk, M., Hay, G. J., Ghaffarian, S., & Hugenholtz, C. H. (2020). Geographic Object-Based Image Analysis: A Primer and Future Directions. Remote Sensing, 12(12), 2012. https://www.mdpi.com/2072-4292/12/12/2012
  27. Li, G., & Wan, Y. (2015). A new combination classification of pixel-and object-based methods. International Journal of Remote Sensing, 36(23), 5842-5868.
  28. Li, H., Song, J., Xue, M., Zhang, H., & Song, M. (2024). A survey of neural trees: Co-evolving neural networks and decision trees. IEEE Transactions on Neural Networks and Learning Systems.
  29. Li, W., Haohuan, F., Le, Y., Peng, G., Duole, F., Congcong, L., & Clinton, N. (2016). Stacked Autoencoder-based deep learning for remote-sensing image classification: a case study of African land-cover mapping. International Journal of Remote Sensing, 37(23), 5632-5646. https://doi.org/10.1080/01431161.2016.1246775
  30. Li, Z., Liu, F., Yang, W., Peng, S., & Zhou, J. (2021). A survey of convolutional neural networks: analysis, applications, and prospects. IEEE Transactions on Neural Networks and Learning Systems, 33(12), 6999-7019.
  31. Lu, Z., Liu, G., Song, Z., Sun, K., Li, M., Chen, Y., Zhao, X., & Zhang, W. (2024). Advancements in Technologies and Methodologies of Machine Learning in Landslide Susceptibility Research: Current Trends and Future Directions. Applied Sciences, 14(21), 9639.
  32. Luo, Y., Tseng, H.-H., Cui, S., Wei, L., Ten Haken, R. K., & El Naqa, I. (2019). Balancing accuracy and interpretability of machine learning approaches for radiation treatment outcomes modeling. BJR| Open, 1(1), 20190021.
  33. Maqsood, R., Abid, F., Rasheed, J., Osman, O., & Alsubai, S. (2025). Optimal Res-UNET architecture with deep supervision for tumor segmentation. Frontiers in Medicine, 12, 1593016.
  34. Miao, S., Wang, C., Kong, G., Yuan, X., Shen, X., & Liu, C. (2024). Utilizing active learning and attention-CNN to classify vegetation based on UAV multispectral data. Sci Rep, 14(1), 31061. https://doi.org/10.1038/s41598-024-82248-3
  35. Moraes, D., L., C. M., & Caetano, M. (2024). Training data in satellite image classification for land cover mapping: a review. European Journal of Remote Sensing, 57(1), 2341414. https://doi.org/10.1080/22797254.2024.2341414
  36. Panzer, M., & Gronau, N. (2024). Designing an adaptive and deep learning based control framework for modular production systems. Journal of Intelligent Manufacturing, 35(8), 4113-4136.
  37. Rajbhandari, S. (2019). Methodological framework for ontology-driven geographic object-based image analysis (O-GEOBIA) [Thesis, University of Tasmania].
  38. Richards, J. A., & Richards, J. A. (2022). Supervised classification techniques. Remote sensing digital image analysis, 263-367.
  39. Rivera Rivas, A., Pérez-Godoy, M., Elizondo, D., Deka, L., & Del Jesus, M. J. (2022). Analysis of clustering methods for crop type mapping using satellite imagery. Neurocomputing, 492. https://doi.org/10.1016/j.neucom.2022.04.002
  40. Singh, M., & Tyagi, K. D. (2021). Pixel based classification for Landsat 8 OLI multispectral satellite images using deep learning neural network. Remote Sensing Applications: Society and Environment, 24, 100645.
  41. Talukdar, S., Singha, P., Mahato, S., Pal, S., Liou, Y.-A., & Rahman, A. (2020). Land-use land-cover classification by machine learning classifiers for satellite observations—A review. Remote Sensing, 12(7), 1135.
  42. Tan, C., Chen, T., Liu, J., Deng, X., Wang, H., & Ma, J. (2024). Building Extraction from Unmanned Aerial Vehicle (UAV) Data in a Landslide-Affected Scattered Mountainous Area Based on Res-Unet. Sustainability, 16(22), 9791.
  43. Wagner, F. H., Sanchez, A., Tarabalka, Y., Lotte, R. G., Ferreira, M. P., Aidar, M. P., Gloor, E., Phillips, O. L., & Aragao, L. E. (2019). Using the U‐net convolutional network to map forest types and disturbance in the Atlantic rainforest with very high resolution images. Remote Sensing in Ecology and Conservation, 5(4), 360-375.
  44. Wang, X., Hu, Z., Shi, S., Hou, M., Xu, L., & Zhang, X. (2023). A deep learning method for optimizing semantic segmentation accuracy of remote sensing images based on improved UNet. Scientific Reports, 13(1), 7600. https://doi.org/10.1038/s41598-023-34379-2
  45. Wu, Q. (2018). 2.07 – GIS and Remote Sensing Applications in Wetland Mapping and Monitoring. In B. Huang (Ed.), Comprehensive Geographic Information Systems (pp. 140-157). Elsevier. https://doi.org/10.1016/B978-0-12-409548-9.10460-9
  46. Yan, B., & Han, G. (2018). Effective feature extraction via stacked sparse autoencoder to improve intrusion detection system. IEEE Access, 6, 41238-41248.
  47. Zhang, S. (2021). Challenges in KNN classification. IEEE Transactions on Knowledge and Data Engineering, 34(10), 4663-4675.
