INTERNATIONAL JOURNAL OF RESEARCH AND INNOVATION IN SOCIAL SCIENCE (IJRISS)
ISSN No. 2454-6186 | DOI: 10.47772/IJRISS |Volume IX Issue XXVI December 2025 | Special Issue on Education
Heuristic-Based Approaches in Fuzzy Clustering: A Comprehensive Review
Mohammad Babrdel Bonab¹*, Noor Azeera Binti Abdul Aziz¹, Too Chian Wen¹, Hoo Meei Hao¹, Chia Kai Lin¹, Khalaf Zager Alsaedi², Chua Kein Huat³
¹Centre for Artificial Intelligence and Computing Applications, Universiti Tunku Abdul Rahman, Malaysia
²Physics Department, College of Science, University of Misan, Ministry of Higher Education of Iraq, Iraq
³Centre for Railway Infrastructure and Engineering, Universiti Tunku Abdul Rahman, Selangor, Malaysia
* Corresponding Author
DOI: https://doi.org/10.47772/IJRISS.2025.903SEDU0767
Received: 18 December 2025; Accepted: 24 December 2025; Published: 30 December 2025
ABSTRACT
Fuzzy clustering has emerged as a powerful technique for analyzing complex, uncertain, and high-dimensional
data across diverse application domains, including pattern recognition, bioinformatics, image analysis, and
decision support systems. Unlike classical clustering, which assigns each data instance to a single cluster, fuzzy
clustering allows partial membership, thereby capturing inherent ambiguity in real-world datasets. This review
provides a comprehensive examination of heuristic-based fuzzy clustering algorithms. We begin by outlining
the fundamental concepts of clustering, fuzzy set theory, and the principles of fuzzy clustering. Subsequently,
we discuss the evolution of core algorithms, including Fuzzy C-Means (FCM) and Possibilistic C-Means (PCM),
and highlight significant modifications derived from altering distance metrics, objective functions, and
optimization strategies. Particular emphasis is placed on heuristic and metaheuristic enhancements, such as genetic algorithms, particle swarm optimization, and artificial immune systems, that address the limitations of
classical approaches, including sensitivity to initialization, susceptibility to noise and outliers, and premature
convergence. Recent contributions in hybrid fuzzy clustering are also reviewed, with attention to their strengths,
weaknesses, and potential applications. Finally, we synthesize insights from the literature to categorize the
persistent disadvantages of existing methods and identify promising directions for future research, including
adaptive fuzzifiers, noise-resilient models, and integration with evolutionary computation. This study not only
consolidates advances in heuristic-based fuzzy clustering but also provides guidance for researchers aiming to
design more robust, scalable, and application-driven clustering algorithms.
INTRODUCTION
Clustering is one of the most fundamental techniques in data mining and machine learning. It aims to partition
a dataset into groups (clusters) such that data points within the same cluster exhibit high similarity, while points
in different clusters are maximally dissimilar. This similarity or dissimilarity is generally quantified through
distance or similarity metrics. Unlike supervised classification, clustering is an unsupervised learning process in
which the classes are not predefined but discovered from the data itself. As such, clustering plays a central role
in exploratory data analysis, knowledge discovery, and decision support systems. Typical applications include
identifying meaningful patterns in large databases, anomaly detection, customer segmentation, bioinformatics,
and image recognition.
Classical clustering methods, such as k-means, enforce a hard partitioning scheme, where each data point belongs
exclusively to one cluster. While effective in simple cases, this rigid assignment does not reflect the ambiguity
and uncertainty inherent in many real-world datasets. For example, in medical diagnosis, a patient’s symptoms
may partially correspond to multiple disease categories; in document clustering, a paper may be relevant to
multiple research topics. To capture such overlapping relationships, fuzzy clustering was introduced. In fuzzy
clustering, each data instance can belong to multiple clusters with varying degrees of membership, thereby
modeling uncertainty more realistically.
The concept of fuzzy clustering originated with Dunn [1] in 1974, who proposed one of the first fuzzy clustering
algorithms based on Euclidean distance and an objective function. Bezdek [2, 3] later generalized this approach
and popularized the Fuzzy C-Means (FCM) algorithm, which has since become the most widely used method.
Subsequent contributions by Gustafson and Kessel [6] introduced adaptive distance metrics through fuzzy
covariance matrices, enabling the identification of clusters with different shapes. Krishnapuram and Keller [7,
8] further extended the field by integrating possibilistic clustering, which improved robustness to noise and
outliers. Over the past decades, these foundational contributions have inspired a rich variety of algorithmic
refinements and applications across disciplines.
Despite its versatility, fuzzy clustering suffers from several well-documented limitations. Classical algorithms
such as FCM and PCM are sensitive to initialization, prone to convergence at local optima, and highly influenced
by noise and outliers. Additionally, the need to predefine the number of clusters and the lack of flexible validity
indices present further challenges. To address these issues, researchers have explored heuristic and metaheuristic
approaches that augment or replace traditional optimization schemes. Techniques such as genetic algorithms,
particle swarm optimization, ant colony optimization, and artificial immune systems have been applied to
improve clustering robustness, accelerate convergence, and enhance solution quality. These heuristic-based
methods provide flexible global search capabilities that are particularly effective in avoiding premature
convergence and handling high-dimensional or noisy data.
In light of the growing body of research, a comprehensive review of heuristic-based fuzzy clustering algorithms
is timely and necessary. While several surveys on fuzzy clustering exist [4, 5, 9], most focus on classical
algorithms or general improvements without systematically addressing heuristic integration. The present study
aims to fill this gap by (i) reviewing the historical development of fuzzy clustering, (ii) analyzing heuristic-based
enhancements in terms of their methodology and performance, (iii) categorizing the persistent disadvantages of
fuzzy clustering algorithms, and (iv) identifying promising directions for future research. In particular, we
highlight how heuristic approaches mitigate challenges such as initialization sensitivity, noise robustness, and
scalability.
This review contributes to the literature by consolidating advancements in heuristic-based fuzzy clustering and
providing a structured taxonomy of existing methods. It offers both theoretical insights and practical guidance
for researchers and practitioners seeking to design more robust, adaptive, and application-oriented clustering
algorithms.
Basic Concepts
A clear understanding of the underlying principles of clustering and fuzzy set theory is essential before
discussing heuristic-based fuzzy clustering algorithms. This section briefly reviews the foundations of clustering,
the theory of fuzzy sets, and the principles of fuzzy clustering.
Clustering
Clustering is a fundamental task in unsupervised learning aimed at grouping data objects such that intra-cluster
similarity is maximized and inter-cluster similarity is minimized. Unlike supervised classification, clustering
does not rely on predefined class labels; instead, it discovers inherent structures in the data. Clustering methods
are widely employed in data mining, image analysis, information retrieval, and bioinformatics due to their ability
to reveal hidden patterns and relationships.
Several families of clustering methods exist, each exploiting different assumptions about data distribution and
structure:
Partitioning methods (e.g., k-means, k-medoids) assign data points into a predefined number of clusters,
typically optimized using distance measures such as Euclidean distance.
Hierarchical methods build nested partitions in a bottom-up (agglomerative) or top-down (divisive)
manner, producing dendrograms that represent multi-level data structures.
Density-based methods (e.g., DBSCAN) identify clusters as dense regions separated by sparse areas,
effectively capturing non-convex structures and handling noise.
Model-based methods assume a probabilistic model of the data, such as Gaussian mixture models, and use
likelihood maximization for clustering.
Grid- or mesh-based methods partition the data space into finite cells and perform clustering within these
cells, which is particularly effective for large datasets.
Each approach offers advantages and limitations. For instance, partitioning methods are efficient but sensitive
to initialization, while density-based methods excel at noise handling but may struggle with varying densities
[10, 11].
The Theory of Fuzzy Sets
The theory of fuzzy sets, first introduced by Zadeh in 1965 [12], provides a mathematical framework for
modeling uncertainty, vagueness, and partial truth. Unlike classical set theory, in which an element either
belongs to a set or does not (membership values of 0 or 1), fuzzy set theory allows elements to belong to a set
with varying degrees of membership in the interval [0, 1].
Formally, a fuzzy set A in a universe of discourse X is characterized by a membership function μA(x): X → [0,
1], which assigns to each element x a grade of membership [13, 14, 15]. This enables a flexible representation
of imprecise concepts such as “tall person” or “high temperature.”
Key properties of fuzzy sets include:
Gradual boundaries: Sets do not have sharp boundaries, allowing overlap between categories.
Linguistic variables: Fuzzy sets enable reasoning with natural language terms (e.g., “low,” “medium,”
“high”).
Non-probabilistic uncertainty: Unlike probability theory, fuzzy sets model vagueness rather than
randomness [16-18].
These characteristics make fuzzy set theory a natural foundation for clustering methods intended to capture
uncertainty in real-world data [19-21].
Fuzzy Clustering
Fuzzy clustering extends traditional clustering by assigning each data point a membership degree in multiple
clusters. Instead of a hard assignment, fuzzy clustering allows soft partitions that better reflect the ambiguous
nature of many datasets [22].
The fundamental principle is that a cluster is treated as a fuzzy set, and the membership degree of each data point
represents its closeness to the cluster prototype. For example, in fuzzy c-means (FCM), the membership of data
point x in cluster j depends on the relative distance of x to cluster center cⱼ compared with other centers [23].
Fuzzy clustering is widely applied in domains where data categories are inherently overlapping, such as image
segmentation, market research, and medical diagnosis. Its strengths include flexibility and interpretability, while
its challenges include sensitivity to initialization, noise, and parameter settings [24].
The Basic Algorithms of Fuzzy Clustering
Fuzzy clustering algorithms aim to discover fuzzy models from deterministic data by relaxing the rigid
boundaries imposed by classical clustering. Among the earliest contributions, Dunn [25, 26] introduced a fuzzy
generalization of k-means in 1974, which was later refined and popularized by Bezdek [2, 3]. This resulted in
the Fuzzy C-Means (FCM) algorithm, which remains the cornerstone of fuzzy clustering research. Subsequent
developments, such as the Gustafson-Kessel algorithm [6] and the possibilistic extensions by Krishnapuram and
Keller [7, 8], further broadened the applicability of fuzzy clustering to data with complex structures, noise, and
outliers.
In this section, we provide an overview of the two most fundamental fuzzy clustering approaches, Fuzzy C-Means (FCM) and Possibilistic C-Means (PCM), and discuss their underlying principles, optimization
mechanisms, and limitations.
Fuzzy C-Means (FCM) Algorithm
The FCM algorithm is the most widely used fuzzy clustering method and can be viewed as a soft extension of
the classical k-means algorithm. Instead of assigning each data point strictly to one cluster, FCM introduces a
membership matrix U = [uᵢⱼ], where uᵢⱼ ∈ [0, 1] represents the degree of membership of data point xᵢ in cluster j.
The memberships are constrained such that the sum of memberships for each data point across all clusters equals
one, i.e.,

\sum_{j=1}^{c} u_{ij} = 1, \qquad i = 1, \dots, n \qquad (1)

where c is the number of clusters and n is the number of data points.
The objective function of FCM is formulated as:

J_m(U, C) = \sum_{i=1}^{n} \sum_{j=1}^{c} u_{ij}^{m} \, \lVert x_i - c_j \rVert^{2} \qquad (2)

where m > 1 is the fuzzifier parameter controlling the degree of fuzziness, c_j is the centroid of cluster j, and \lVert \cdot \rVert denotes the Euclidean norm. Minimizing this objective function ensures that data points closer to a centroid receive higher membership values, while distant points are assigned smaller memberships.
The optimization is performed iteratively using alternating updates:
Update memberships:

u_{ij} = \left[ \sum_{k=1}^{c} \left( \frac{\lVert x_i - c_j \rVert}{\lVert x_i - c_k \rVert} \right)^{2/(m-1)} \right]^{-1} \qquad (3)

Update cluster centroids:

c_j = \frac{\sum_{i=1}^{n} u_{ij}^{m} \, x_i}{\sum_{i=1}^{n} u_{ij}^{m}} \qquad (4)
Repeat until convergence, usually defined by the change in memberships or centroids falling below a
threshold.
Advantages of FCM:
Simplicity and ease of implementation.
Ability to capture overlapping clusters through soft assignments.
Fast convergence for moderately sized datasets.
Limitations of FCM:
Assumes spherical clusters due to reliance on Euclidean distance.
Sensitive to initialization of cluster centers.
Prone to convergence at local minima.
Highly sensitive to noise and outliers, since every data point must contribute to all clusters.
Despite these limitations, FCM has served as the foundation for numerous extensions, including variants with
alternative distance measures, kernel-based generalizations, and noise-handling modifications.
Possibilistic C-Means (PCM) Algorithm
To address the sensitivity of FCM to noise and outliers, Krishnapuram and Keller [7] proposed the Possibilistic
C-Means (PCM) algorithm. PCM relaxes the normalization constraint on membership values, thereby allowing
data points to have low or even negligible memberships in all clusters. This modification enables PCM to handle
atypical or outlier points more effectively.
The PCM objective function is defined as:

J(U, C) = \sum_{i=1}^{n} \sum_{j=1}^{c} u_{ij}^{m} \, \lVert x_i - c_j \rVert^{2} + \sum_{j=1}^{c} \eta_j \sum_{i=1}^{n} (1 - u_{ij})^{m} \qquad (5)

where \eta_j is a positive constant that determines the scale of cluster j. The second term penalizes solutions where all membership values approach zero, ensuring that meaningful memberships are maintained.
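Setting the derivative of Equation (5) with respect to u_{ij} to zero yields the well-known closed-form possibilistic update u_{ij} = 1 / (1 + (d_{ij}^2 / \eta_j)^{1/(m-1)}). A small Python sketch of this update follows; the distances and scale values are illustrative.

```python
import numpy as np

def pcm_memberships(d2, eta, m=2.0):
    """Possibilistic membership update implied by Eq. (5) (a sketch).

    d2:  (n, c) squared distances to each cluster prototype.
    eta: (c,)   per-cluster scale parameters eta_j.
    Memberships are independent per cluster and need not sum to one.
    """
    return 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1.0)))

# A point far from every prototype gets near-zero memberships,
# so outliers barely influence the centroid update.
d2 = np.array([[0.5, 9.0], [100.0, 120.0]])   # second row: an outlier
print(pcm_memberships(d2, eta=np.array([1.0, 1.0])))
```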
Key differences from FCM:
Memberships are independent across clusters, eliminating the constraint that they must sum to one.
Outliers and distant data points naturally receive low memberships, reducing their influence on cluster
centroids.
Advantages of PCM:
Robustness to noise and outliers.
Flexibility in handling atypical data distributions.
Limitations of PCM:
Sensitive to the choice of scale parameters ηⱼ.
Susceptible to coincident clustering, where multiple clusters converge to the same location.
Potential instability if parameter tuning is inadequate.
Historical Variants and Extensions
Beyond FCM and PCM, several early extensions have enriched the fuzzy clustering landscape:
Gustafson-Kessel (GK) Algorithm [6]: Replaced Euclidean distance with an adaptive Mahalanobis-type
distance, enabling detection of clusters with different shapes and orientations.
Fuzzy Shell Clustering [31]: Designed for non-convex data structures, capable of identifying circular or
shell-like clusters.
Hybrid Fuzzy-Possibilistic Algorithms [63]: Combined features of FCM and PCM to balance robustness
against noise with interpretability of memberships.
These variants paved the way for the heuristic-based approaches discussed later in this review, where
evolutionary and swarm intelligence methods further improved initialization robustness, scalability, and
convergence properties.
In summary, FCM and PCM form the backbone of fuzzy clustering research. FCM introduced the notion of soft
partitions through normalized memberships, while PCM enhanced robustness by removing normalization
constraints and addressing noise sensitivity. Both algorithms, however, exhibit significant shortcomings in
scalability, initialization dependence, and sensitivity to data distribution. These limitations have motivated the
development of heuristic and metaheuristic strategies, which will be reviewed in subsequent sections.
Algorithms Resulting from Modifications to the Distance Function
In classical Fuzzy C-Means (FCM) and Possibilistic C-Means (PCM), cluster similarity is measured using the
Euclidean norm. While computationally efficient, this metric implicitly assumes spherical and isotropic clusters
of roughly equal size. However, real-world datasets often exhibit clusters with diverse shapes, orientations, and
densities. Reliance on Euclidean distance may therefore lead to poor representation of elongated, anisotropic, or
non-convex structures.
To overcome these limitations, several extensions to fuzzy clustering have been proposed by modifying the
distance function or similarity measure. This section discusses three influential families of approaches: (i)
Gustafson-Kessel algorithm, (ii) fuzzy shell-clustering algorithms, and (iii) kernel-based fuzzy clustering. A
generalization known as fuzzy relational clustering is also noted.
The Gustafson-Kessel (GK) Algorithm
The Gustafson-Kessel algorithm, proposed in 1979 [6], extended the FCM framework by introducing an
adaptive distance metric based on covariance matrices. Instead of restricting clusters to spherical shapes, GK
employs a Mahalanobis-like distance that allows identification of ellipsoidal clusters with varying orientations.
The modified distance measure for cluster j is:

d^{2}(x_i, c_j) = (x_i - c_j)^{T} A_j \, (x_i - c_j) \qquad (6)

where A_j is a positive-definite matrix associated with cluster j, typically constrained by \det(A_j) = 1 to avoid degenerate solutions. This constraint ensures that clusters vary in shape but not in overall volume.
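In practice, A_j is derived from the fuzzy covariance matrix of cluster j under the volume constraint. The Python sketch below illustrates this construction and the resulting distance; treating `cov` as a given covariance estimate and setting the volume parameter ρ = 1 are simplifying assumptions for illustration.

```python
import numpy as np

def gk_distance2(X, center, cov, rho=1.0):
    """Squared Gustafson-Kessel distance of Eq. (6) (a sketch).

    A_j is built from the cluster covariance matrix with the volume
    constraint det(A_j) fixed: A_j = (rho * det(cov))**(1/d) * inv(cov).
    """
    d = X.shape[1]
    A = (rho * np.linalg.det(cov)) ** (1.0 / d) * np.linalg.inv(cov)
    diff = X - center
    # batch quadratic form (x - c)^T A (x - c) for every row of X
    return np.einsum('ni,ij,nj->n', diff, A, diff)
```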
Advantages:
Detects clusters with different orientations and anisotropic structures.
More flexible than Euclidean-based FCM in image processing and pattern recognition tasks.
Limitations:
Sensitive to initialization, similar to FCM.
Requires estimation of covariance-like matrices, which increases computational cost.
Susceptible to poor performance in high-dimensional spaces without dimensionality reduction.
Applications: GK has been widely applied in image segmentation, geospatial analysis, and medical imaging
where elliptical clusters naturally arise.
Fuzzy Shell-Clustering Algorithms
While GK extended FCM to ellipsoidal clusters, fuzzy shell-clustering approaches were developed to identify
clusters with non-convex, shell-like, or manifold structures. In many domains, especially computer vision and
pattern recognition, meaningful patterns appear as geometric contours such as circles, ellipses, or hyperplanes
[32]. Traditional FCM fails in such cases because it minimizes distances to cluster centroids rather than to cluster
boundaries [33-35].
The general form of a shell-clustering distance function is:

d^{2}(x_i, \beta_j) = \left( \lVert x_i - c_j \rVert - r_j \right)^{2} \qquad (7)

where c_j is the center and r_j the radius of the shell (for circular or spherical shells). This measures how far a point lies from the ideal boundary rather than from the centroid; a small sketch of this distance follows the list of variants below. Variants include:
Fuzzy C-Shells (FCS): Specialized for detecting circular clusters.
Fuzzy C-Spherical Shells (FCSS): Extension to higher-dimensional spherical shells.
Fuzzy C-Varieties (FCV): Designed for detecting lines, planes, and hyperplanes.
Fuzzy C-Quadric Shells (FCQS): Generalized to capture parabolas, hyperbolas, and other quadratic
surfaces.
Adaptive Fuzzy C-Elliptotypes (AFCE): Assigns different line segments to separate clusters, useful for
elongated boundaries.
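As referenced above, the following Python sketch evaluates the basic shell distance of Equation (7) for circular shells; the example points and the unit radius are illustrative.

```python
import numpy as np

def shell_distance2(X, center, radius):
    """Squared fuzzy c-shells distance of Eq. (7) (a sketch).

    Measures deviation from the shell boundary: points lying exactly
    on the circle/sphere of the given radius have zero distance.
    """
    return (np.linalg.norm(X - center, axis=1) - radius) ** 2

# Points on a unit circle around the origin score ~0; the centre itself
# is radius away from the boundary and scores 1.
X = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
print(shell_distance2(X, center=np.zeros(2), radius=1.0))  # [0. 0. 1.]
```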
Advantages:
Suitable for image contour extraction and shape recognition.
Capable of handling non-convex structures that are beyond the reach of Euclidean clustering.
Limitations:
Computationally intensive due to nonlinear distance functions.
Requires prior knowledge of expected cluster shapes (e.g., circles vs. hyperplanes).
Less effective when clusters deviate from assumed geometric forms.
Applications: Shell clustering methods are extensively used in image segmentation, edge detection, and
structural pattern recognition tasks such as handwriting or fingerprint analysis.
Kernel-Based Fuzzy Clustering
Kernel methods provide a powerful framework for handling data that are not easily represented in vector spaces
or that contain nonlinear cluster structures. In kernel-based fuzzy clustering, the data are implicitly mapped into
a higher-dimensional feature space through a kernel function, and clustering is then performed in this
transformed space [36-38].
Common kernels include:
Polynomial kernel: K(x, y) = (x^{T} y + c)^{d}

Gaussian RBF kernel: K(x, y) = \exp\left( -\lVert x - y \rVert^{2} / 2\sigma^{2} \right)

Sigmoid kernel: K(x, y) = \tanh\left( \alpha \, x^{T} y + \beta \right)

The kernel-based distance measure replaces Euclidean distances with kernel-induced similarities:

d^{2}(x_i, c_j) = K(x_i, x_i) - 2 K(x_i, c_j) + K(c_j, c_j) \qquad (8)

where K is the kernel function.
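For the Gaussian kernel, K(x, x) = 1, so Equation (8) reduces to 2(1 − K(x_i, c_j)). The sketch below illustrates this for prototypes kept in the input space, which is one common formulation of kernelized FCM; the kernel width σ is an illustrative parameter.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    """Gaussian RBF kernel K(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - y) ** 2, axis=-1) / (2.0 * sigma ** 2))

def kernel_distance2(x, c, sigma=1.0):
    """Kernel-induced squared distance of Eq. (8) (a sketch).

    For the Gaussian kernel K(x, x) = K(c, c) = 1, so this expression
    simplifies to 2 * (1 - K(x, c)).
    """
    return (rbf_kernel(x, x, sigma)
            - 2.0 * rbf_kernel(x, c, sigma)
            + rbf_kernel(c, c, sigma))
```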
Advantages:
Capable of capturing highly nonlinear and non-convex cluster structures.
Applicable to complex data types, such as sequences, graphs, and trees.
Flexible choice of kernel enables tailoring to domain-specific requirements.
Limitations:
Performance heavily depends on the choice of kernel and its parameters.
Risk of overfitting in high-dimensional feature spaces.
Computational overhead due to kernel matrix computation.
Applications: Kernel-based fuzzy clustering has been applied in bioinformatics (gene expression analysis), web
mining, social network analysis, and multimedia retrieval [39-41].
Fuzzy Relational Clustering
A generalization of distance-based clustering is fuzzy relational clustering, in which the algorithm operates
directly on a dissimilarity matrix rather than requiring explicit feature vectors. This approach is valuable when
only pairwise similarities are available, as in graph clustering or relational databases.
Advantages:
Applicable to non-vectorial data such as networks, linguistic data, or symbolic patterns.
Avoids the need for explicit feature extraction.
Limitations:
Quality depends heavily on the definition of the dissimilarity matrix.
Computationally demanding for large datasets.
Modifications to the distance function significantly expand the applicability of fuzzy clustering. The Gustafson-Kessel algorithm extends FCM to ellipsoidal clusters; shell-clustering methods capture geometric contours;
kernel-based approaches uncover nonlinear structures; and fuzzy relational clustering generalizes the
methodology to non-vectorial data. Each approach addresses specific limitations of classical Euclidean-based
clustering, but introduces new challenges in terms of parameter tuning, computational complexity, and
robustness.
These algorithms collectively illustrate the versatility of fuzzy clustering frameworks and serve as precursors to
more advanced heuristic-based methods, which leverage global search and optimization strategies to further
enhance performance.
Algorithms Resulting from Alterations to the Objective Function
Beyond modifying distance metrics, another major avenue of research in fuzzy clustering has focused on altering
the objective function itself. The choice of objective function fundamentally determines the optimization
landscape, influencing robustness, convergence, and clustering quality. Classical Fuzzy C-Means (FCM)
optimizes an objective that balances membership degrees and squared Euclidean distances. While effective, this
formulation suffers from sensitivity to initialization, noise, and the need to specify the number of clusters in
advance.
To address these issues, researchers have proposed a variety of modifications to the objective function, leading
to improved resilience against noise, refined fuzzification, automatic determination of cluster numbers, and
enhanced versions of the Possibilistic C-Means (PCM) framework. This section reviews four major categories:
(i) noise-handling variants, (ii) fuzzifier-based variants, (iii) cluster-number determination methods, and (iv)
enhanced PCM variants.
Noise-Handling Variants
Noise and outliers are a persistent challenge in clustering, as they can distort cluster centers and degrade
performance. To mitigate these effects, several algorithms introduce noise-aware objective functions.
Noise Clustering (NC): Dave [42] proposed adding a dedicated “noise cluster” to absorb outliers [43, 44]. The
objective function becomes:

J = \sum_{i=1}^{n} \sum_{j=1}^{c} u_{ij}^{m} \, \lVert x_i - c_j \rVert^{2} + \sum_{i=1}^{n} \delta^{2} \left( 1 - \sum_{j=1}^{c} u_{ij} \right)^{m} \qquad (9)

where \delta is a noise distance parameter. This formulation explicitly models outliers as belonging to a separate cluster, preventing them from influencing cluster prototypes.
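One way to operationalize Equation (9) is to treat the noise cluster as an extra prototype lying at a constant distance δ from every point and apply the usual FCM membership update to the augmented distance matrix. The Python sketch below follows this interpretation; δ and m are illustrative parameters.

```python
import numpy as np

def nc_memberships(d2, delta, m=2.0):
    """Membership update for a noise-clustering objective like Eq. (9) (a sketch).

    The noise cluster sits at a fixed squared distance delta**2 from
    every point; memberships follow the usual FCM update over the
    augmented distance matrix, so far-away points drain into the
    noise cluster instead of distorting the real prototypes.
    """
    n = d2.shape[0]
    d2_aug = np.hstack([d2, np.full((n, 1), delta ** 2)])  # append noise column
    inv = np.fmax(d2_aug, 1e-12) ** (-1.0 / (m - 1.0))
    U = inv / inv.sum(axis=1, keepdims=True)
    return U[:, :-1], U[:, -1]   # real-cluster memberships, noise membership
```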
Robust Estimator Variants: Algorithms such as robust FCM [45] replace squared distances with robust loss
functions (e.g., Huber loss) in the objective function, reducing the impact of extreme values.
Weight-Based Methods: Some approaches assign adaptive weights to each data point in the objective, down-
weighting suspected outliers [46].
Advantages:
Substantially improves robustness to noise.
Retains interpretability of fuzzy memberships.
Limitations:
Additional parameters (e.g., δ) must be tuned.
Risk of misclassifying borderline points as noise.
Applications: Medical imaging (to handle speckle noise), anomaly detection, and intrusion detection systems.
Fuzzifier-Based Variants
The fuzzifier parameter m in FCM controls the degree of fuzziness in the partition. The classical choice is m = 2, but its optimal value depends on dataset characteristics. To improve flexibility, researchers have integrated
adaptive or revised fuzzifiers directly into the objective function [47-50].
Adaptive Fuzzifiers: Klawonn and Höppner [47,48] proposed modifications where m varies depending on
data density or cluster separation. For dense regions, a smaller m yields crisper partitions; for ambiguous
regions, a larger m allows softer assignments.
High-Contrast Variants: Rousseeuw et al. [49] suggested a fuzzifier that increases separation between
memberships for high-contrast clusters, reducing overlap.
Entropy-Regularized Objectives: Some formulations add an entropy term to the objective function,
encouraging diversity in memberships while preventing overly crisp partitions.
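For the entropy-regularized case, one widely used formulation (not necessarily the exact variants cited above) replaces the fuzzifier exponent with an entropy penalty λ Σ u log u, which yields memberships in closed form as a softmax over negative squared distances. A minimal sketch, assuming this formulation:

```python
import numpy as np

def entropy_fcm_memberships(d2, lam=1.0):
    """Membership update for an entropy-regularized objective (a sketch).

    Minimizing sum(u * d^2) + lam * sum(u * log u) under the sum-to-one
    constraint gives a softmax over negative squared distances; lam
    plays the role of the fuzzifier (larger lam -> softer partitions).
    """
    logits = -d2 / lam
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)
```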
Advantages:
Provides adaptability to different datasets.
Balances crispness and fuzziness dynamically.
Limitations:
Requires calibration of additional parameters.
May increase computational cost.
Applications: Image segmentation (where fuzziness levels vary across regions), document clustering, and
bioinformatics.
Cluster-Number Determination Variants
A significant limitation of classical FCM and PCM is the need to predefine the number of clusters c. In real-
world applications, this number is often unknown. To overcome this, researchers have integrated validity indices
and adaptive mechanisms directly into the objective function [51-55].
Cluster Validity Index (CVI)-Integrated Objectives: Algorithms such as those using the Xie-Beni index [120] or PBM index [117] include terms that evaluate separation and compactness, guiding optimization toward the "best" number of clusters (see the sketch after this list).
Entropy-Based Variants: Sahbi and Nozha [51] introduced entropy regularization, allowing the algorithm
to suppress unnecessary clusters by penalizing redundancy.
Evolutionary Hybrid Methods: Some metaheuristic-enhanced FCM variants (discussed later in Section 6)
evolve both cluster centers and the optimal number of clusters simultaneously.
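As noted in the first item above, validity indices such as Xie-Beni score a partition by compactness over separation. The sketch below computes the standard Xie-Beni index for a given partition; sweeping the number of clusters and taking the minimum is a common selection heuristic.

```python
import numpy as np
from itertools import combinations

def xie_beni(X, U, centers, m=2.0):
    """Xie-Beni validity index (a sketch): compactness over separation.

    Lower values indicate compact, well-separated clusters. Assumes at
    least two cluster centers are given.
    """
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    compactness = ((U ** m) * d2).sum()
    separation = min(((ci - cj) ** 2).sum()
                     for ci, cj in combinations(centers, 2))
    return compactness / (X.shape[0] * separation)
```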
Advantages:
Eliminates reliance on external cluster validation.
Provides more automated and data-driven clustering.
Limitations:
Increases computational burden.
May still converge to suboptimal solutions if the dataset structure is highly irregular.
Applications: Market segmentation, sensor data analysis, and exploratory research where the true number of
clusters is unknown.
Variants of Possibilistic C-Means (PCM)
Although PCM improved robustness by removing the membership normalization constraint, it introduced new
issues such as coincident clusters and sensitivity to parameter settings. To address these, several objective-
function modifications have been proposed.
Cluster Repulsion Terms: Timm and Kruse [56, 57] introduced repulsion terms into the objective, discouraging
clusters from collapsing into the same location. The modified objective is:

J = \sum_{i=1}^{n} \sum_{j=1}^{c} u_{ij}^{m} \, \lVert x_i - c_j \rVert^{2} + \sum_{j=1}^{c} \eta_j \sum_{i=1}^{n} (1 - u_{ij})^{m} + \lambda \sum_{j=1}^{c} \sum_{k \neq j} \frac{1}{\lVert c_j - c_k \rVert^{2}} \qquad (10)

where \lambda controls the strength of repulsion.
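Since the exact form of the repulsion term varies across formulations, the sketch below implements one plausible inverse-squared-distance penalty consistent with Equation (10) as reconstructed here; λ and the small ε guard are illustrative choices.

```python
import numpy as np
from itertools import combinations

def repulsion_penalty(centers, lam=1.0, eps=1e-9):
    """Cluster-repulsion term in the spirit of Eq. (10) (a sketch).

    The penalty grows as prototypes approach each other, so minimizing
    the total objective discourages coincident clusters. The
    inverse-squared-distance form is one plausible choice.
    """
    return lam * sum(1.0 / (((ci - cj) ** 2).sum() + eps)
                     for ci, cj in combinations(centers, 2))
```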
Hybrid PCM-FCM Models: Pal et al. [58, 59] proposed blending PCM’s robustness with FCM’s normalized
memberships, yielding hybrid objective functions that combine both possibilistic and fuzzy terms. These models
benefit from outlier insensitivity while maintaining meaningful partitioning.
Regularization-Based Variants: Some improvements add penalty terms to avoid trivial solutions where all
memberships approach zero, ensuring stable cluster formation [60].
Advantages:
Greater robustness to noise and outliers.
Reduces coincident cluster problems.
Balances interpretability and resilience.
Limitations:
Sensitive to penalty parameter choices.
Higher computational cost.
Applications: Image segmentation in noisy environments, speech signal analysis, and fault detection in
industrial processes.
Objective-function modifications represent one of the most active areas of fuzzy clustering research. By
introducing noise-handling mechanisms, adaptive fuzzifiers, cluster-number optimization, and enhanced PCM
formulations, these approaches address many of the weaknesses of classical methods.
Noise-handling variants improve robustness by explicitly modeling outliers or weighting data points.
Fuzzifier-based variants enhance flexibility in balancing crispness and fuzziness.
Cluster-number determination approaches automate the discovery of optimal cluster counts.
PCM extensions combine robustness with structural interpretability.
Despite these advances, challenges remain. Many algorithms introduce additional parameters that require careful
tuning, and computational costs increase with more complex objectives. Furthermore, while modifications often
improve performance in specific contexts, no single objective function universally outperforms others across
domains.
Overall, these developments have laid the foundation for heuristic and metaheuristic strategies, which further
enhance clustering by performing global optimization over objective functions. These will be discussed in the
next section.
Related Works
This paper categorizes the disadvantages of fuzzy clustering algorithms based on the reviewed literature. Basic fuzzy clustering algorithms have proven effective for data analysis; however, they still suffer from several shortcomings. Sensitivity to initialization, getting stuck in local optima, sensitivity to noise, lack of a flexible validity metric, the coincident-clusters problem, sensitivity to data behavior, premature convergence, and the need to predefine the number of clusters are the most common. As noted in the previous sections, over the last three decades many modifications and improvements have been made to existing fuzzy clustering algorithms, and new algorithms have been proposed either by building on earlier ones or by blending them with other algorithms (including other fuzzy clustering algorithms or techniques such as evolutionary algorithms). Part of the literature outlined in the previous sections serves as a general classification of issues; the remainder, drawn from the review of earlier studies, is presented below.
The papers in this study were selected at random from papers published in the last decade. In general, they can be divided into three categories: application papers, algorithmic papers, and hybrids of the two. This division reflects the fact that some papers only address the use of an algorithm in a specific context. A brief review of the application papers yields a general categorization: papers whose main purpose is to identify far-from-center (outlier) data, papers that purely use clustering algorithms to extract similar groups, and papers that use hybrid approaches with other algorithms.
This categorization can be taken as a prerequisite for classification. Three classes have been considered for the algorithmic papers:
Class 1: papers that hybridize fuzzy clustering algorithms with other algorithms (including other fuzzy clustering algorithms or techniques such as evolutionary algorithms).
Class 2: papers that focus on determining the optimum number of clusters (validation).
Class 3: papers that revolve around the modification or improvement of existing algorithms.
Class 1 papers, on hybrids of fuzzy clustering algorithms with other algorithms, make up 35.97% of the papers reviewed from 2004 to 2024. Class 2 papers, on determining the optimum number of clusters (validation), constitute 34.53%, and Class 3 papers, on the modification or improvement of existing algorithms, account for 25.18%. A further 4.32% of the papers belong to both the "hybrid (mixed)" and "modified or improved" groups. A summary of the application and algorithmic papers investigated is presented in Appendices 1, 2, and 3. Table 1 reports the percentage of reviewed articles by underlying basic fuzzy algorithm: the Possibilistic C-Means algorithm, the Fuzzy C-Means algorithm, and articles that use both. The table is divided into the three classes mentioned above.
Table 1. Percentage of the reviewed articles based on basic fuzzy algorithms

Algorithm | Class 1 | Class 2 | Class 3 | Total
FCM | 36 (64.29%) | 45 (93.75%) | 25 (71.43%) | 106 (76.26%)
PCM | 15 (26.78%) | 3 (6.25%) | 10 (28.57%) | 28 (20.14%)
FCM & PCM | 5 (8.93%) | 0 (0%) | 0 (0%) | 5 (3.6%)
Total | 56 (100%) | 48 (100%) | 35 (100%) | 139 (100%)
More generally, the papers can be divided by year of publication to demonstrate the growing trend of fuzzy clustering research.
Figure 2. Fuzzy clustering reviewed papers on a yearly basis in Class 1 (percentage of reviewed papers, with fitting function)
Figure 3. Fuzzy clustering reviewed papers on a yearly basis in Class 2 (percentage of reviewed papers, with fitting function)
Figure 4. Fuzzy clustering reviewed papers on a yearly basis in Class 3 (percentage of reviewed papers, with fitting function)
The figures indicate that the number of fuzzy clustering papers has increased in recent years, underscoring the importance of fuzzy clustering algorithms. Table 2 breaks down the reviewed papers published in the past decade by publication venue.
Table 2. Percentage of the reviewed articles by journal and conference

Venue | Class 1 | Class 2 | Class 3 | Total
Journal | 18 (32.14%) | 25 (52.08%) | 29 (82.86%) | 72 (51.8%)
Conference | 38 (67.86%) | 23 (47.92%) | 6 (17.14%) | 67 (48.2%)
Total | 56 (100%) | 48 (100%) | 35 (100%) | 139 (100%)
In Class 1, the papers mostly focus on the following weaknesses: sensitivity to initial cluster centers and initialization (19.39%), trapping in a local optimum of the cost function (17.35%), sensitivity to noise and outliers (17.35%), lack of a flexible similarity metric (11.22%), the coincident-clusters problem (11.22%), sensitivity to data behavior (10.21%), the optimum number of clusters (9.18%), and premature convergence (4.08%).
Figure 5. The most highlighted disadvantages of fuzzy algorithms in the first group (Class 1)
In Class 2, papers on determining the optimum number of clusters (validation) constitute 38.27% of all papers, and sensitivity to data behavior accounts for 22.22%. The rest of the papers address sensitivity to noisy data (19.75%), sensitivity to cluster overlapping (7.41%), sensitivity to initialization (6.17%), sensitivity to the separation measure (3.71%), and lack of stability in the cluster validity index (2.47%).
Figure 6. The most highlighted disadvantages of fuzzy algorithms in the second group (Class 2)
Class 3 papers, on the modification or improvement of existing algorithms, concentrate on sensitivity to initialization (29.69%), sensitivity to noise and outliers (18.75%), sensitivity to data behavior (15.62%), getting stuck in a local minimum (14.06%), lack of a flexible similarity measure (10.94%), stability problems (6.25%), and sensitivity of performance to the distance metric (4.69%).
Figure 7. The most highlighted disadvantages of fuzzy algorithms in the third group (Class 3)
CONCLUSION
Fuzzy clustering results from integrating the fuzzy approach into the clustering context, making clustering more applicable and better matched to the real world. In this study, we reviewed the typical fuzzy clustering algorithms and offered a categorization of them. The most popular and widely applied methods are summarized in Table 3. The research content of this paper concerns techniques for discovering fuzzy models from deterministic data. Future work may study techniques developed for fuzzy data; the use of metaheuristic methods to improve fuzzy clustering results is also a candidate for future work.
Table 3. The most highlighted algorithms in fuzzy clustering

Algorithm | First introducer(s) | Year
Kernel-Based Algorithm | Wu, Xie & Yu | 2003
Fuzzy Shell Clustering | Klawonn, Kruse & Timm | 1997
PCM | Krishnapuram & Keller | 1993
Gustafson-Kessel | Gustafson & Kessel | 1979
Fuzzy C-Means | Dunn | 1974
ISODATA | Ball & Hall | 1965
ACKNOWLEDGMENT
The authors would like to thank Universiti Tunku Abdul Rahman for research university grants (6200/M33) and (8012/000). The authors are also grateful to the Centre for Artificial Intelligence and Computing Applications (CAICA) and the Center for Power System and Electricity (CPSE).
APPENDICES
Appendix 1. The most highlighted papers in class 1
No
Authors
Class
Algorithm /
Approach
Description
Advantage
FCM or PCM
Disadvantage
1
[61 ]
Class 1
FCM-CGA
algorithm
Optimal Fuzzy C-
Means Clustering
with Optimal Fuzzy
C-Means Clus it
works as a local
search engine
tering,
i) find suitable
number of clusters
ii) find suitable
the location of its
prototypes
i) get stuck into
local optimality
ii) should be
determined the
number of
clusters
2
[62 ]
Class 1
GMRFCM genetic
based Fuzzy e-
means Clustering
Algorithm
Significantly
reduce, the
initialization
sensitivity, the
iterative times
required to
i) suitable large
data set and high-
dimensional
ii) strong global
and local
i) sensitivity to
initialization
INTERNATIONAL JOURNAL OF RESEARCH AND INNOVATION IN SOCIAL SCIENCE (IJRISS)
ISSN No. 2454-6186 | DOI: 10.47772/IJRISS |Volume IX Issue XXVI December 2025 | Special Issue on Education
Page 10014
www.rsisinternational.org
converge, and to
obtain a better
partition of a
dataset into k
classes .
searching
capability
ii) premature
convergence
3
[63 ]
Class 1
fuzzy-possibilistic
c-means (FPCM)
derive the first-
order necessary
conditions for
extrema of the
PFCM objective
function
it solves the noise
sensitivity defect
of FCM,
eliminates the row
sum constraints of
FPCM and
overcomes the
coincident
clusters problem
of PCM
i) sensitivity to
noise
ii) coincident
clusters problem
4
[64 ]
Class 1
Fuzzy ants
clustering algorithm
it is a promising
approach to find a
partition based on
the number of
clusters actually
to optimize a
fuzzy partition
validity metric
i) get stuck into
local optimality
ii) lack of reliable
validity metric
5
[65 ]
Class 1
KPFCM - kernel
possibilistic fuzzy c-
means model
it is A fuzzy
clustering method,
based on
kernel methods
i) lack of flexible
similarity metric
ii) sensitive to
data behavior
6
[66 ]
Class 1
Artificial Immune
Network Fuzzy
Clustering
Algorithm (Ainfcm)
a new evolutionary
approach to fuzzy
clustering
introduced
according to the
application of
artificial immune
principles
i) to explore
global optimum
through clone and
mutation and
renew processes
of the immune
network
i) trapped into
local optimality
ii) Premature
convergence
7
[67 ]
Class 1
Gustafson-Kessel
PFCM (G-KPFCM)
an improvement of
the Possibilistic
FCM algorithm
usage of the
Gustafson-Kessel
algorithm within
the Possibilistic
Fuzzy C-Means
algorithm
i) inability to
distinguish
various forms
ii) sensitive to
noise and outliers
8
[68 ]
Class 1
Fuzzy Models
Based On Noise
Cluster And
Possibilistic
Clustering
Based on a
switching
regression model
and a T-S fuzzy
model
i) to identify
processes of
nonlinear plants
ii) to deal with
noisy data
i) sensitive to
noise and outliers
ii) probabilistic
constraint
algorithm
INTERNATIONAL JOURNAL OF RESEARCH AND INNOVATION IN SOCIAL SCIENCE (IJRISS)
ISSN No. 2454-6186 | DOI: 10.47772/IJRISS |Volume IX Issue XXVI December 2025 | Special Issue on Education
Page 10015
www.rsisinternational.org
9
[69 ]
Class 1
PFCM with
Weighted Objects
proposed a family
of algorithms
instead of a single
method
i) clustering with
weighted objects
ii) systematic
development of
algorithms for
weighted objects
i) don’t support
multi-objective
ii) Lack of
flexible
similarity metric
10
[70 ]
Class 1
Graded Possibility
Algorithm
a method to obtain
a soft transition
from the
possibilistic to the
probabilistic
models
i) to use the
uncertainty model
for memberships
ii) to obtain a soft
transition
i) Lack of
flexible
similarity metric
11
[71 ]
Class 1
Hybrid Algorithm
Of Fuzzy Kernel
Clustering And
Artificial Immune
the algorithm learns
memory and
affinity maturation
in natural immune
system, from the
mechanism of
immunocyte clone,
operates on
antibody
i) to obtain global
optima quickly
ii) to solve the
flaws of kernel
clustering and the
fuzzy c-means
perfectly
i) trapped into
local optimality
ii) sensitive to
initialization
12
[72 ]
Class 1
ROUGHFUZZY
PCM (RFPCM)
the algorithm
comprises a
judicious
integration of the
principles of fuzzy
sets and rough
i) to efficient
selection of
cluster prototypes
i) sensitivity to
noisy data
ii) coincident
clusters problem
13
[73 ]
Class 1
QPSO and FCM
the algorithm
incorporates Fuzzy
C-Means into the
Quantum-behaved
particle swarm
optimization
i) avoids
depending on the
initialization
values
ii) higher
convergent
capability of the
global optimizing
i) local minimum
problem
ii) depending on
the initialization
values
14
[74 ]
Class 1
PSO-FCM
algorithm
hybridized
clustering approach
for segmentation
using PSO
i) To improve the
result of classic
FCM algorithm
i) sensitive to
initial cluster
centers
ii) The number
of clusters should
be determined
INTERNATIONAL JOURNAL OF RESEARCH AND INNOVATION IN SOCIAL SCIENCE (IJRISS)
ISSN No. 2454-6186 | DOI: 10.47772/IJRISS |Volume IX Issue XXVI December 2025 | Special Issue on Education
Page 10016
www.rsisinternational.org
15
[75 ]
Class 1
Evolutionary FCM
Approach
the algorithm
integrates FCM and
feature selection
i) To find clusters
in high
dimensional
dataset
i) no good for
high dimensional
datasets
16
[76 ]
Class 1
Robust FPCM
algorithm
the algorithm
present a new
metric for kernel in
the space of data to
replace the
Euclidean norm
metric in Fuzzy-
Possibilistic C-
Means
i) to increase the
performance
ii) introduce new
metric
i) sensitive to
noise
ii) no robust
metric
17
[77 ]
Class 1
Gaussian Kernel-
Based FCM (G-
KFCM)
Gaussian kernel-
based FCM with a
correction of spatial
bias
i) more efficien
and robustness
i) sensitive to
data behavior
ii) no robust to
outliers and noise
18
[78 ]
Class 1
GA-SA-FCM
Clustering
a new kind of
Hybrid Genetic
Algorithm is
proposed on the
base of the
combined with SA
and FCM
Algorithm
i) high efficiency
of the method
ii) high
recognition
accuracy of the
method
i) sensitivity to
initial clustering
centers
ii) trapping to
local optimum
19
[79 ]
Class 1
GECIM: The
Generalized
Clustering Model
proposed a linear
combination of the
Possibilistic C-
Means, Fuzzy C-
Means, and HCM
objective functions
i) to reveal the
properties
ii) accurate
partitioning
i) sensitivity to
outliers
ii) coincident
cluster prototype
iii) quicker
convergence
20
[80 ]
Class 1
RENFCM : Rough-
Enhanced FCM
Algorithm
improved hybrid
algorithm named
rough- enhanced
FCM
i) to speed up the
segmentation
process
ii) more robust to
the noises
i) sensitivity to
the noises
21
[81 ]
Class 1
Hybrid C-Means
Clustering Model
a novel hybrid c-
means algorithmic
scheme for all three
conventional
clustering models
i) to avoid the
noise sensitivity
ii) to avoid
coincident
clusters
i) sensitive to
noise
ii) coincident
cluster
INTERNATIONAL JOURNAL OF RESEARCH AND INNOVATION IN SOCIAL SCIENCE (IJRISS)
ISSN No. 2454-6186 | DOI: 10.47772/IJRISS |Volume IX Issue XXVI December 2025 | Special Issue on Education
Page 10017
www.rsisinternational.org
22
[82 ]
Class 1
FCM-FPSO
Algorithm
a hybrid method of
fuzzy clustering
based on fuzzy PSO
and Fuzzy C-
Means
i) easy to
implement
ii) to obtain global
optima
i) easily trapped
in local optima
ii) sensitive to
initialization
23
[83 ]
Class 1
IAFSA-FCM
Method
Presented two
novel methods for
data clustering
i) avoid sinking
into local solution
ii) diminish the
sensitivity to the
isolated points
and the initial
parameters
i) initialization
value problem
ii) local
minimum
problem
24
[84 ]
Class 1
generalized GPCM
an efficient global
optimization
method
i) to select feasible
region
ii) to find optimal
solution
i) coincident
clusters problem
ii) sensitive to
noise
25
[85 ]
Class 1
EPFCM Algorithm
a fuzzy clustering
with evolutionary
programming
i) To increase the
convergence
speed
ii) not so sensitive
to initial cluster
centers
i) trapping into
the local optima
ii) sensitive to
noise
26
[86 ]
Class
1, 3
MoDEFC technique
fuzzy clustering
technique with a
modified
differential
evolution algorithm
i) to increase
efficient
ii) to find the
number of clusters
automatically
i) multi-objective
optimization
problems
ii) sensitive to
noise
iii) number of
cluster should be
determined
27
[87 ]
Class 1
QPSO-FCM
algorithm
new hybrid
algorithm based on
the gradient descent
of FCM
i) higher
convergent
capability
ii) to find optimal
solution
i) local minimum
problems
ii) depending on
the initialization
values
28
[88 ]
Class 1
AGFCM Algorithm
new algorithm is
proposed by
analyzing cluster
validity function
i) to obtain best
cluster number
ii) to initialize
cluster center
i) cluster center
should be
initialized before
classification
ii) cluster number
should be
determined
INTERNATIONAL JOURNAL OF RESEARCH AND INNOVATION IN SOCIAL SCIENCE (IJRISS)
ISSN No. 2454-6186 | DOI: 10.47772/IJRISS |Volume IX Issue XXVI December 2025 | Special Issue on Education
Page 10018
www.rsisinternational.org
29
[89 ]
Class 1
kernel allied FCM
(KAFCM)
a combination of
fuzzy-clustering
with FCM and
NPCM
i) good for high-
dimensional
feature space
ii) better
performance
i) coincident
clusters problem
ii) sensitive to
noise
30
[90 ]
Class 1
improved
unsupervised
possibilistic
clustering (IUPC)
a novel clustering
technique which
called “improved
unsupervised
possibilistic
clustering”
i) higher
convergent
capability
ii) find optimal
solution
i) coincident
clusters problem
ii) local
minimum
problems
31
[91 ]
Class 1
FCM-FPSO
Algorithm
a combination of
fuzzy clustering
with fuzzy PSO
and FCM
i) easy to
implement
ii) to find optimal
solution
i) sensitive to
initialization
ii) trapping in
local optima
32
[92 ]
Class 1
Possibilistic
Exponential Fuzzy
Clustering
(PXFCM)
a new idea of
clustering based on
Exponential
objective function
i) capability to
detect the outliers
ii) no create
coincidence
clusters
i) sensitive to
noise and outliers
33
[93 ]
Class 1
sample weighted
possibilistic FCM
(SWPFCM)
a new method
based on
combination
sample weighting
and a suitable for
noise environment
i) deal with noise
data
ii) produce less
clustering time
i) sensitive to
outlier faults
ii) initialization
value problem
34
[94 ]
Class 1
akFCM and akPCM
an approximation
method for FCM
and PCM
algorithms
i) to reduce
computational
complexity
ii) to reduce
memory
requirement
i) lack of flexible
similarity metric
ii) lack of good
validity metric
35
[95 ]
Class 1
FPCM Algorithm
to cluster the drape
property data and
reject noise points
i) to increase
performance
ii) better accuracy
i) sensitive to
noise and outliers
36
[96 ]
Class 1
F-EARFC
Algorithm
a fuzzy extension of
an evolutionary
based algorithm for
relational
clustering
i) to increase
performance
ii) better accuracy
i) the number of
clusters isn’t
clear in advance
INTERNATIONAL JOURNAL OF RESEARCH AND INNOVATION IN SOCIAL SCIENCE (IJRISS)
ISSN No. 2454-6186 | DOI: 10.47772/IJRISS |Volume IX Issue XXVI December 2025 | Special Issue on Education
Page 10019
www.rsisinternational.org
37
[97 ]
Class 1
Extended Gaussian
kernel version of
fuzzy c-means
Propose a new
mathematical
initialization
centers for initial
cluster centers
using a new
prototypes learning
method
i) to find annular-
shaped
ii) to minimize the
iteration of
algorithms
i) sensitive noise
data
ii) Lack of good
validity metric
iii) no good for
non-compactly
filled
38
[98 ]
Class 1
Possibilistic and
Fuzzy Possibilistic
C-Means
a new algorithm to
solves the problems
of both FCM and
PCM algorithms
i) to improve the
performance of
clustering
i) easily struck at
local minima
ii) sensitive to
noises
iii) initialization
and the
coincident
clustering
problem
39
[99 ]
Class 1
KFCM-HACO
Algorithm
a new clustering
method based on
kernelized fuzzy c-
means algorithm
and a recently
proposed ant based
optimization
algorithm
i) to improve the
clustering
performance
ii) to find global
optimum
i) lack of prior
knowledge for
optimum
parameters of the
kernel functions
ii) trapping into
local minima
iii) sensitive to
initialization
40
[100 ]
Class 1
chaotic particle
swarm fuzzy
clustering (CPSFC)
a novel CPSFC
algorithm based on
gradient method
and chaotic particle
swarm
i) exploiting the
searching
capability
ii) to accelerate
convergence
i) getting stuck at
locally optimal
ii) initialization
value problem
41
[101 ]
Class 1
IGA-NWFCM
Algorithm
a new fuzzy
clustering
algorithm to find
the suitable
structures for
cluster from data
set applied on
intrusion detection
method
i) to identify
anomaly intrusion
ii) to solve high
dimensional
multi-class
problem
iii) to obtain
global optimal
value
i) no suitable for
any prototype
ii) trapping into
local minima
42
[102 ]
Class 1
Variable string
length Artificial Bee
Colony (VABC)
a novel version of
ABC automatic
fuzzy based on
clustering
technique
i) to improve
performance
ii) to find global
optimum
i) getting stuck at
locally optimal
ii) initialization
value problem
43 | [103] | 1 | AFSA algorithm | employs a density function and average information entropy to determine the initial cluster centers and the number of clusters | i) specifies the number of clusters; ii) determines the initial cluster centers | i) cannot specify the number of clusters; ii) sensitive to initialization values
44 | [104] | 1 | MVDFCM and PEFCM | an effective quadratic-entropy FCM combining a regularization function, quadratic terms, and kernel distance functions | i) deals with complex datasets; ii) reduces the number of iterations | i) no standard objective function; ii) measurement uncertainty
45 | [105] | 1 | RFCMK and TEFCM | a robust FCM for automatic image segmentation | i) reduces the computational complexity; ii) minimizes the objective functions | i) sensitive to initial cluster centers; ii) no standard objective function
46 | [106] | 1, 3 | PSO-FCM algorithm | designed for parallel computation and for multidimensional feature data (see the PSO-FCM sketch following this table) | i) finds the global optimum; ii) good for large numbers of clusters | i) can converge to a local minimum of the objective function; ii) may lead to undesired results
47 | [107] | 1, 3 | GARFPCM | a new genetic-algorithm-based RFPCM | i) obtains better clustering quality | i) unstable because of random initialization
48 | [108] | 1 | F-ICA algorithm | a new FICA method proposed as a fuzzy clustering algorithm rectifying fuzzy c-means | i) finds the optimal solution | i) converges to locally optimal solutions; ii) highly dependent on the initial state
49 | [109] | 1 | SASWFCM algorithm | a weighted fuzzy c-means method based on the simulated annealing (SA) algorithm | i) applies a weighting process to the objective function and the cluster-center function; ii) able to find globally optimal solutions when computing the initial cluster values | i) disequilibrium problems; ii) the initial number of clusters must be set manually
50 | [110] | 1 | GMKIT2-FCM method | combines different sources of information in the classification problem | i) determines the coefficients of the multiple kernels; ii) automatically finds the optimal number of clusters | i) the number of clusters must be specified; ii) sensitive to initial cluster centers
51 | [111] | 1 | Kernelized Fuzzy Possibilistic C-Means (KFPCM) | KFPCM based on a kernel-induced distance measure | i) achieves better clustering outcomes; ii) effective for high-dimensional data | i) not useful for handling high-dimensional datasets; ii) lacks a flexible distance measure
52 | [112] | 1 | Multi-PFKCN clustering method | Multi-PFKCN, a neural-network method using a possibilistic-fuzzy clustering algorithm | i) obtains the optimal number of clusters; ii) handles the noise problem | i) sensitive to noise; ii) the number of clusters must be determined
53 | [113] | 1 | PCM using fuzzy relations | a new approach to objective-function-based fuzzy clustering for dealing with noise and coincident clusters | i) handles noisy data; ii) deals with coincident clusters | i) coincident clusters; ii) no flexible objective function; iii) sensitive to noise
54 | [114] | 1 | PCRM clustering algorithms | applies the PCM clustering method to fuzzy c-regression models (FCRM) | i) alleviates the effect of noisy data | i) sensitive to noisy data
55 | [115] | 1 | fuzzy GES algorithm | a new method based on an adaptation of the recently proposed Grouping Evolution Strategy for unsupervised fuzzy clustering | i) finds the global optimum; ii) specifies the true number of clusters | i) the number of clusters must be determined; ii) no guarantee of finding the optimal result
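A few of the constructions recurring in the rows above can be made concrete; the following notes are supplementary illustrations, not material from the cited papers. The kernel-based rows (e.g., 37, 39, 51) rest on the kernel trick: the Euclidean distance in the FCM objective is replaced by a distance in an implicit feature space $\phi$. For the Gaussian kernel this reduces to a closed form (a standard identity, not specific to any one cited method):

\[
\|\phi(x_k)-\phi(v_i)\|^{2} = K(x_k,x_k) + K(v_i,v_i) - 2K(x_k,v_i) = 2\bigl(1 - K(x_k,v_i)\bigr),
\qquad
K(x,y)=\exp\!\left(-\frac{\|x-y\|^{2}}{\sigma^{2}}\right),
\]

since $K(x,x)=1$ for the Gaussian kernel. Only the kernel width $\sigma$ then needs tuning, which is exactly the parameter-selection weakness noted in row 39.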
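Similarly, the hybrid possibilistic rows (38, 51-53) build on the possibilistic c-means of Krishnapuram and Keller [7], which drops FCM's constraint that the memberships of each point sum to one and instead penalizes small typicalities:

\[
J_{\mathrm{PCM}}(U,V)=\sum_{i=1}^{c}\sum_{k=1}^{n} u_{ik}^{m}\,\|x_k-v_i\|^{2}
+\sum_{i=1}^{c}\eta_i\sum_{k=1}^{n}\bigl(1-u_{ik}\bigr)^{m},
\]

where $\eta_i>0$ sets the distance scale at which the typicality of a point to cluster $i$ falls to 0.5. Because each cluster is optimized almost independently, clusters can collapse onto one another; this is the coincident-cluster problem that rows 38 and 53 attempt to repair.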
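The "chaotic" ingredient in CPSFC-style methods (row 40) is typically a cheap deterministic sequence used to diversify initialization and perturb particles away from local optima. The sketch below is a minimal illustration under that assumption, using the logistic map with parameter 4.0 (its fully chaotic regime); it is not the cited authors' implementation, and all names and values are ours.

```python
import numpy as np

def logistic_map_sequence(n, x0=0.37, mu=4.0):
    """Generate n chaotic values in (0, 1) via the logistic map x <- mu*x*(1-x)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        xs[i] = x
    return xs

def chaotic_init(n_particles, dim, lo, hi):
    """Map a chaotic sequence into the search box [lo, hi]^dim for particle init."""
    seq = logistic_map_sequence(n_particles * dim).reshape(n_particles, dim)
    return lo + seq * (hi - lo)

# Example: 20 particles, each encoding 3 cluster centers in 2-D (dim = 6)
swarm = chaotic_init(20, 6, lo=0.0, hi=1.0)
```

Unlike uniform random draws, consecutive logistic-map values are deterministic but non-repeating, which is the property these hybrids exploit.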
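Rows such as 46 hybridize swarm search with the FCM objective. The usual pattern, and the one assumed in the minimal sketch below (an illustration of the idea, not the cited authors' algorithm), is that each particle encodes a full set of cluster centers and is scored by the FCM cost with memberships set to their closed-form optimum; the inertia and acceleration constants are conventional defaults.

```python
import numpy as np

def fcm_cost(centers, X, m=2.0, eps=1e-10):
    """FCM objective J_m for fixed centers, with memberships set to their optimum."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + eps   # (n, c)
    u = 1.0 / (d2 ** (1.0 / (m - 1.0)))
    u /= u.sum(1, keepdims=True)                   # optimal U for these centers
    return float((u ** m * d2).sum())

def pso_fcm(X, c, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize the FCM objective over center positions with a plain global-best PSO."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pos = X[rng.integers(0, n, size=(n_particles, c))]   # centers sampled from the data
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([fcm_cost(p, X) for p in pos])
    g = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        cost = np.array([fcm_cost(p, X) for p in pos])
        better = cost < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], cost[better]
        g = pbest[pbest_cost.argmin()].copy()
    return g
```

The point of the hybrid is visible in the structure: the FCM alternating updates supply the (differentiable, local) cost, while the swarm supplies the global, initialization-insensitive search over center positions.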
Appendix 2. The most highlighted papers in class 2

No | Authors | Class | Algorithm/Approach | Description | Advantage | Disadvantage
1 | [116] | 2 | vOS validity index | the fuzzy c-means algorithm equipped with a cluster validity index (CVI) | i) determines the optimal number of clusters; ii) obtains the optimal clusters | i) sensitive to cluster overlapping; ii) sensitive to the separation measure
2 | [117] | 2 | PBM index | a new CVI that attains its maximum value when the data are correctly clustered (formula in the notes after this table) | i) increases confidence; ii) obtains the number of clusters | i) sensitive to the behavior of the data; ii) the number of clusters must be determined
3 | [118] | 2 | new FMLE algorithm | a new fuzzy clustering validity index suitable for overlapping clusters | i) detects clusters of different shapes; ii) detects different densities and orientations | i) sensitive to cluster overlapping
4 | [119] | 2 | PCAES index | a new validity index for fuzzy clustering, the partition coefficient and exponential separation index | i) increases confidence; ii) favors well-separated clusters | i) sensitive to noisy data; ii) sensitive to the separation measure
5 | [120] | 2 | Xie-Beni and Kwon indices | a new fuzzy clustering validation index based on the Kwon and Xie-Beni indices (Xie-Beni formula in the notes after this table) | i) removes the monotonically decreasing tendency; ii) avoids the numerical instability of the validation index | i) lack of stability in the cluster validation index; ii) the number of clusters must be determined
6 | [121] | 2 | PBMF index | a new validation index for clustering a dataset into an unknown number of clusters | i) increases performance; ii) determines the number of clusters | i) the number of clusters must be determined
7 | [122] | 2 | aggregation operator of the membership degrees | a fuzzy clustering validity index based on the aggregation of the resulting membership degrees | i) able to select the correct number of clusters | i) the number of clusters must be determined
8 | [123] | 2 | self-adaptive kernel clustering (SAKC) with an efficient cluster validity index | a new validation index describing the between-cluster and within-cluster similarities | i) increases performance; ii) increases effectiveness | i) lack of a fixed cluster validation index; ii) sensitive to the behavior of the data
9 | [124] | 2 | FVQ index | a new cluster validity index with a quantization-dequantization criterion for fuzzy clustering | i) obtains the correct number of clusters; ii) determines the corresponding partitioning | i) cannot specify the number of clusters; ii) sensitive to initialization
10 | [125] | 2 | fundamental concepts of cluster validity | introduces the fundamental concepts of cluster validity and presents a review of fuzzy cluster validity (see the partition-coefficient and partition-entropy note after this table) | i) discovers distributions of patterns; ii) finds interesting correlations | i) the number of classes must be known; ii) sensitive to the behavior of the data
11 | [126] | 2 | average relative degree of sharing over all pairs of fuzzy clusters | a cluster validity index based on a similarity measure of fuzzy clustering, for validating the G-K method | i) determines the degree of correlation of clusters; ii) finds the optimal number of clusters | i) the number of clusters must be determined
12 | [127] | 2 | cviFF index | a new CVI proposed for validating a previously proposed IFC algorithm | i) determines the optimal number of clusters; ii) utilizes membership values | i) the number of classes must be known; ii) sensitive to the behavior of the data
13 | [128] | 2 | distinguishableness and non-distinguishableness | a new cluster validity index for fuzzy clustering, independent of the clustering method | i) determines the optimal number of clusters | i) the number of classes must be known
14 | [129] | 2 | AIBFC: Agglomerative Integrated Adaptive Bayesian Fuzzy Clustering | a fuzzy competitive learning structure properly incorporated with a Bayesian decision rule | i) finds the optimal number of clusters; ii) handles outliers; iii) useful for clustering data with complex structure | i) the number of clusters must be determined; ii) sensitive to outlier data
15 | [130] | 2 | variation and separation measures | a new validity index used to search for the optimal number of clusters | i) finds the optimal clusters; ii) finds the optimal number of clusters | i) sensitive to outlier data; ii) the number of classes must be known
16 | [131] | 2 | modified Fuzzy Gap statistic (MFGS) | a modified Fuzzy Gap statistic applied to fuzzy k-means clustering | i) estimates the optimal number of clusters; ii) robust against noise | i) the number of clusters must be determined; ii) sensitive to outlier data
17 | [132] | 2 | ratio-type validity | two new cluster validity criteria introduced for validating a previously proposed improved fuzzy clustering | i) finds patterns in datasets; ii) finds the optimal number of clusters | i) the number of classes must be known
18 | [133] | 2 | lFORI indices | a new separation measure and a measure of cluster overlap | i) finds the optimal number of clusters; ii) robust in noisy environments | i) problem of finding the optimal number of clusters; ii) sensitive to outlier data; iii) sensitive to the fuzzifier exponent
19 | [134] | 2 | new intra-cluster similarity index | a new index to assess the intra-cluster similarity of partitions obtained from fuzzy c-means | i) finds the optimal number of clusters | i) problem of finding the optimal number of clusters
20 | [135] | 2 | validity index IFV | an uncertainty factor in the fuzzy partition process, based on a validity index for spatial fuzzy clustering | i) identifies the correct number of clusters | i) the number of classes must be known
21 | [136] | 2 | extended partition entropy and inter-class similarity (EPESIM) | a cluster validity index combining extended partition entropy and inter-class similarity | i) free from heavy distance computation; ii) prominent results in various kinds of situations | i) sensitive to the behavior of the data; ii) requires heavy distance computation
22 | [137] | 2 | CVO validity index | a new validity index based on intra-cluster variation and inter-cluster separation | i) determines the optimal number of clusters | i) problem of finding the optimal number of clusters
23 | [138] | 2 | FI and CoC indices | fuzzy cluster validity indices applicable to objects with mixed features | i) determines the optimal number of clusters | i) problem of finding the optimal number of clusters
24 | [139] | 2 | new validity index | a fuzzy CVI based on Shannon entropy and fuzzy variation theory | i) able to determine the optimal class; ii) ideal for compact and well-isolated datasets | i) sensitive to the behavior of the data; ii) problem of measuring similarity between clusterings
25 | [140] | 2 | co-association matrices | a possible fuzzy framework for applying traditional and novel partition similarity measures to fuzzy clustering | i) able to determine the optimal class | i) problem of measuring similarity between clusterings; ii) sensitive to the behavior of the data
26 | [141] | 2 | novel validity index for fuzzy c-means | a robust validity index for the fuzzy c-means algorithm | i) obtains the optimal number of clusters; ii) better performance | i) problem of obtaining the optimal number of clusters; ii) sensitive to initial cluster centers
27 | [142] | 2 | novel validity index | a new validity index for the subtractive clustering algorithm | i) finds the optimal number of clusters | i) the number of classes must be known
28 | [143] | 2 | compactness and separation measures | a new validity index employing a separation measure and a compactness measure | i) based on the compactness and separation measures; ii) superior effectiveness and reliability | i) overlapping problem; ii) sensitive to the behavior of the data
29 | [144] | 2 | MPE-DMFP index | a new validity index combining two metrics: the sum of the distances between the means of the fuzzy clusters, and the modified partition entropy index | i) observes the behavior of the data; ii) obtains the optimal number of clusters | i) the number of clusters must be known; ii) sensitive to the behavior of the data
30 | [145] | 2 | VS validity index | validation of the fuzzy clustering partitions generated by FCM with a new validity index | i) identifies clusters with different sizes and densities; ii) more robust to noisy data | i) sensitive to noisy data; ii) sensitive to the behavior of the data
31 | [146] | 2 | Fukuyama-Sugeno validity index | a novel algorithm for fuzzy partitional clustering using the Fukuyama-Sugeno index (formula in the notes after this table) | i) obtains more accurate clustering results; ii) eliminates outliers | i) overlapping leads to poor clustering results; ii) sensitive to outlier data
32 | [147] | 2 | MPO index | a robust cluster validity index for FCM consisting of two terms, a separation measure and a compactness measure | i) obtains the optimal number of clusters; ii) robust against outlier and noise data | i) sensitive to outlier and noise data; ii) the number of clusters must be known
33 | [148] | 2 | cluster validity analysis platform (CVAP) | a new validity index based on membership degree, with applications | i) finds appropriate clustering methods for a particular dataset; ii) finds the optimal number of clusters | i) the number of clusters must be known; ii) sensitive to the behavior of the data
34 | [149] | 2 | RPCM algorithm | a robust PCM based on new similarity criteria | i) finds the optimal number of clusters; ii) robust to outliers and noise | i) sensitive to the selection of initial parameters; ii) sensitive to outliers and noise; iii) the number of clusters must be known
35 | [150] | 2 | Pattern Distances Ratio (PDR) | a new validity index for fuzzy clustering based on the Pattern Distances Ratio, with some improving modifications | i) determines the optimal number of clusters; ii) specifies appropriate partitions | i) the number of clusters must be known; ii) sensitive to the behavior of the data
36 | [151] | 2 | MDN index | a new cluster validity index based on two factors | i) finds appropriate clusterings; ii) stable and adaptive | i) sensitive to initial parameters; ii) sensitive to the behavior of the data
37 | [152] | 2 | measure of clustering quality | a generalized index applicable to both fuzzy and crisp partitions | i) effectiveness and adaptability | i) sensitive to the behavior of the data
38 | [153] | 2 | CVI index | a CVI for fuzzy clusterings obtained from interval type-2 FCM | i) obtains the optimal number of clusters; ii) finds appropriate clusterings | i) the number of partitions must be known; ii) sensitive to the behavior of the data
39 | [154] | 2 | Cluster Validity Index (CVI) comparison | compares 30 cluster validity indices experimentally in many different environments with different attributes | i) most interesting for noisy and overlapped data | i) sensitive to noise; ii) overlapping problem
40 | [155] | 2 | cluster validity index | an enhanced fuzzy clustering algorithm based on α-cut interval descriptions of fuzzy numbers, with a new cluster validity index | i) obtains the optimal number of clusters | i) problem of obtaining the optimal number of clusters
41 | [156] | 2 | cluster validation in FCM-type co-clustering | a novel index for validating fuzzy co-cluster partitions based on the geometrical features of two fuzzy memberships | i) finds appropriate clusterings | i) the number of clusters must be known; ii) overlapping problem
42 | [157] | 2 | reduced-sensitivity validity index | a new non-distance validity index relying on memberships | i) recognizes overlapping clusters; ii) insensitive to noisy items | i) within-cluster problem: measures only compactness; ii) separation problem; iii) sensitive to noisy data
43 | [158] | 2 | SM index | a new CVI for type-2 fuzzy c-means, called the SM index | i) obtains the optimal number of clusters | i) the number of clusters must be known
44 | [159] | 2 | UPCMDR algorithm | a new possibilistic algorithm, unsupervised PCM with data-reduction ability | i) more robust to noise; ii) improves clustering efficiency | i) sensitive to noisy data; ii) sensitive to the behavior of the data
45 | [160] | 2 | WGLI index | a new validity index based on the improved bipartite modularity of a bipartite network and the membership degrees | i) obtains the membership degrees of samples; ii) obtains the optimal number of clusters | i) the number of clusters must be known
46 | [161] | 2 | WLI clustering validity index | a new clustering validity index, WLI, for centroid-based partitional clustering | i) more accurate and satisfactory performance; ii) insensitive to noisy data | i) sensitive to noisy data; ii) sensitive to the behavior of the data
47 | [162] | 2 | validity index CS | a novel robust validity index that appraises the fitness of the partitions generated by SC methods | i) robust to outliers and noise; ii) evaluates actual cluster centers | i) sensitive to noisy and outlier data; ii) sensitive to the behavior of the data
48 | [163] | 2 | Pattern Distances Ratio (PDR) | a fuzzy clustering validity index with a cluster-number selection procedure, based on the Pattern Distances Ratio | i) finds the optimal number of clusters; ii) robust to outliers and noise | i) the number of clusters must be known; ii) sensitive to noise
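Several of the indices above are variations on a few classical formulas; for orientation, their standard forms (as they are usually stated, not as given in the cited variants) are reproduced below. The oldest membership-only indices underlying the review in row 10 are Bezdek's partition coefficient and partition entropy:

\[
V_{PC}(U)=\frac{1}{n}\sum_{i=1}^{c}\sum_{k=1}^{n}u_{ik}^{2},
\qquad
V_{PE}(U)=-\frac{1}{n}\sum_{i=1}^{c}\sum_{k=1}^{n}u_{ik}\log u_{ik}.
\]

$V_{PC}$ is maximized and $V_{PE}$ minimized over candidate values of $c$; both exhibit the monotonic tendency with $c$ that many rows above list as a weakness.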
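The Xie-Beni index behind row 5 is the classical compactness-to-separation ratio (smaller is better):

\[
V_{XB}(U,V;X)=\frac{\displaystyle\sum_{i=1}^{c}\sum_{k=1}^{n}u_{ik}^{2}\,\|x_k-v_i\|^{2}}{\,n\cdot\min_{i\neq j}\|v_i-v_j\|^{2}\,}.
\]

Its tendency to decrease monotonically as $c$ approaches $n$ is exactly the behavior that the Kwon-style corrections in row 5 target.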
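The PBM index of row 2 (named for Pakhira, Bandyopadhyay, and Maulik) is maximized at the best partition:

\[
\mathrm{PBM}(c)=\left(\frac{1}{c}\cdot\frac{E_1}{E_c}\cdot D_c\right)^{2},
\qquad
E_c=\sum_{i=1}^{c}\sum_{k=1}^{n}u_{ik}\,\|x_k-v_i\|,
\qquad
D_c=\max_{i,j}\|v_i-v_j\|,
\]

where $E_1$ is the value of $E_c$ computed for a single cluster placed at the grand mean of the data.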
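Finally, the Fukuyama-Sugeno index of row 31 trades compactness against the spread of the centers around the grand mean $\bar{x}$ of the data (smaller is better):

\[
V_{FS}(U,V;X)=\sum_{i=1}^{c}\sum_{k=1}^{n}u_{ik}^{m}\left(\|x_k-v_i\|^{2}-\|v_i-\bar{x}\|^{2}\right).
\]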
Appendix 3. The most highlighted papers in class 3

No | Authors | Class | Algorithm/Approach | Description | Advantage | Disadvantage
1 | [57] | 3 | extension to PCM | an approach to possibilistic fuzzy cluster analysis based on cluster centers repelling one another while attracting the data | i) finds appropriate clusterings; ii) obtains global optima | i) suffers from stability problems; ii) gets stuck in local optima
2 | [164] | 3 | ACE and FMLE | an introduction to the foundations of the broad field of fuzzy clustering | i) finds a good fuzzy partition; ii) finds the best cluster prototypes; iii) handles noise and outliers | i) sensitive to the initial parameters; ii) sensitive to outliers and noise
3 | [165] | 3 | possibilistic clustering algorithm (PCA) | a new possibilistic clustering algorithm that also solves the problem of validating the clusters obtained by PCA | i) robust to outliers and noise; ii) improves the efficiency of clustering | i) clustering performance depends heavily on the parameters
4 | [166] | 3 | FCM+ | an improved FCM algorithm proposed for clustering association rules | i) categorizes the association rules; ii) discovers meaningful itemsets | i) very large number of rules; ii) sensitive to the behavior of the data
5 | [167] | 2, 3 | DGAFCM algorithm | a novel weighted FCM based on a double-coding genetic algorithm | i) suitable for numeric data; ii) stable and adaptive | i) feature-weighting clustering problem
6 | [168] | 2, 3 | possibilistic fuzzy clustering with repulsion | combines the partitioning property of the fuzzy c-means algorithm with the robust noise insensitivity of the possibilistic fuzzy c-means algorithm | i) robust to noise and outliers; ii) intuitive interpretation of the membership values | i) sensitive to noisy environments; ii) sensitive to the algorithm's parameters
7 | [169] | 3 | FFCM algorithm | an improved FCM with faster computation and more accurate results | i) robust to outliers and noise; ii) reduces execution time | i) sensitive to noisy environments
8 | [170] | 3 | feature-weight FCM algorithm (FW-FCM) | an appropriate assignment of feature weights to improve the performance of FCM | i) finds appropriate clusterings; ii) improves performance | i) performance is sensitive to the distance metric
9 | [171] | 3 | GIFP-FCM algorithm | constructs a new objective function with a novel membership-constraint function | i) robust to noise and outliers; ii) robustness and convergence | i) sensitive to initialization; ii) sensitive to noisy data
10 | [172] | 3 | MFPCM algorithm | a modified possibilistic clustering method introduced to obtain more accurate clusterings | i) yields better-quality clusters | i) sensitive to initialization
11 | [173] | 3 | FCMAWA algorithm | a modified FCM algorithm obtained by modifying the objective function of conventional FCM | i) more robust to noise | i) sensitive to noisy data
12 | [174] | 3 | new FCM algorithm | a new distance replacing the Euclidean distance in the fuzzy c-means clustering algorithm (see the sketch following this table) | i) robust results; ii) easily computed | i) sensitive to initialization; ii) sensitive to the behavior of the data
13 | [175] | 1, 3 | kernel-based clustering algorithms, especially KFCM-K | an improved FCM algorithm targeting many problems of the fuzzy c-means algorithm | i) obtains globally optimal solutions; ii) provides a selection rule for the initial cluster centers | i) sensitive to initial conditions; ii) gets stuck in local minima
14 | [176] | 3 | improved FCM algorithm | a generic comparative analysis of fuzzy clustering and kernel-based fuzzy clustering | i) emphasizes parameter selection; ii) aids understanding of the performance | i) sensitive to the values of the kernel parameters; ii) sensitive to the behavior of the data
15 | [177] | 3 | IFCM algorithm | a traditional approach to segmentation of magnetic resonance images | i) improves performance; ii) determines the optimal value of the degree of attraction | i) sensitive to noisy environments; ii) sensitive to the behavior of the data
16 | [178] | 3 | IFPCM: improved fuzzy possibilistic clustering method | an improved fuzzy possibilistic clustering based on conventional PCM | i) yields better-quality clustering results | i) sensitive to the initial values; ii) gets stuck in local optima
17 | [179] | 3 | CIRDWFCM and CDRDWFCM | FCM variants that obtain the memberships by optimizing an objective function | i) better clustering results; ii) improves the quality of the clustering | i) sensitive to the behavior of the data; ii) sensitive to noise and anomalies
18 | [180] | 3 | IKFCM algorithm | an improved kernel-based fuzzy c-means clustering algorithm | i) improves performance; ii) obtains globally optimal solutions | i) gets stuck at saddle points or local minima; ii) sensitive to the behavior of the data
19 | [181] | 3 | PTFEC: possibilistic-type fuzzy entropy clustering | a possibilistic type of fuzzy entropy clustering, based on fuzzy entropy clustering and possibilistic c-means clustering | i) insensitive to noise; ii) better clustering accuracy | i) sensitive to noisy data; ii) sensitive to the behavior of the data
20 | [182] | 3 | Modified Fuzzy Possibilistic Clustering Method (MFPCM) | a modified possibilistic clustering algorithm for fuzzy clustering, based on conventional FCM | i) better-quality results; ii) recognizes context patterns | i) sensitive to the initial values; ii) gets stuck in local minima
21 | [183] | 3 | Modified Fuzzy C-Means (MFCM) | a modified fuzzy c-means combined with the particle swarm optimization algorithm, based on FCM | i) obtains globally optimal solutions; ii) better clustering accuracy | i) easily gets stuck in local optima; ii) sensitive to initialization
22 | [104] | 3 | MVDFCM and PEFCM | an alternative generalization of FCM clustering techniques for dealing with complicated datasets | i) initializes the cluster centers; ii) finds optimal cluster centers | i) measurement uncertainty; ii) sensitive to the initial values
23 | [184] | 3 | TLBO algorithm | the teaching-learning-based optimization algorithm, proposed to overcome cluster-center initialization | i) finds optimal cluster centers; ii) better clustering accuracy | i) sensitive to the tuning of the initial centers; ii) sensitive to the initial values
24 | [185] | 3 | possibilistic model for clustering LR fuzzy data | clustering models for LR2 fuzzy data following the possibilistic and fuzzy approaches | i) more robust to noise; ii) useful for the coincident-clusters problem | i) sensitive to noisy data; ii) lacks a flexible similarity measure
25 | [186] | 1, 3 | unsupervised kernel possibilistic clustering method (UKPC) | a new clustering algorithm inspired by UPC, using kernels to improve UPC performance | i) robust to outliers and noise; ii) able to detect clusters with different non-convex structures and shapes | i) sensitive to noisy environments; ii) sensitive to the behavior of the data
26 | [187] | 3 | rough-fuzzy c-means (RFCM) | a rough-fuzzy c-means algorithm for clustering microarray gene expression data | i) finds optimal cluster centers; ii) obtains globally optimal solutions | i) local-minima problems; ii) sensitive to the initial values
27 | [107] | 1, 3 | rough-fuzzy PCM (GARFPCM) | a genetic-algorithm-based rough-fuzzy PCM | i) minimizes the objective function; ii) obtains globally optimal solutions | i) unstable because of random initialization; ii) gets stuck in local minima
28 | [188] | 3 | modified possibilistic fuzzy c-means clustering algorithm (MPFCM) | a modified PCM algorithm introduced under the name MPFCM | i) better ability to express the data structure; ii) lower computational complexity | i) initialization-sensitivity problems; ii) the cluster number must be determined at different scales
29 | [189] | 1, 3 | GIT2FCM: genetic-based interval type-2 FCM clustering | a genetic-based interval type-2 fuzzy c-means clustering that automatically finds the optimal number of clusters | i) finds the optimal number of clusters automatically; ii) better clustering accuracy | i) the number of clusters must be determined; ii) sensitive to the initial values
30 | [190] | 3 | siibFCM algorithm | a new clustering approach, siibFCM, proposed based on cluster integrity | i) more flexible and stable against random initialization; ii) much larger tolerance for the distance between clusters | i) cluster-size sensitivity problem; ii) sensitive to the behavior of the data
31 | [191] | 3 | enhanced interval type-2 FCM algorithm | an enhanced interval type-2 FCM algorithm introduced to reduce the computation time and accelerate convergence | i) finds optimal cluster centers; ii) handles uncertainties efficiently | i) uncertainty problems; ii) sensitive to the initial cluster centers
32 | [192] | 1, 3 | variable-wise kernel fuzzy clustering methods | new kernel-based fuzzy clustering algorithms in which dissimilarity measures are obtained as sums of Euclidean distances between data | i) uses adaptive distances that change at each iteration of the algorithm; ii) able to produce various cluster interpretations and fuzzy partitions | i) sensitive to the behavior of the data; ii) unable to detect clusters with different non-convex structures and shapes
33 | [193] | 3 | REFCM method | a new fuzzy method based on the fuzzy c-means algorithm and relative entropy, maximizing the dissimilarity between clusters | i) good noise-detection ability; ii) assigns suitable membership degrees to observations | i) sensitive to noise and outliers; ii) descriptive-complexity problem
34 | [194] | 3 | MFCM-TCSC algorithm | a multi-center FCM based on spectral clustering and transitive closure | i) handles non-traditional curved clusters; ii) better ability to express the data structure | i) sensitive to the initial prototypes; ii) cannot handle non-traditional curved clusters
35 | [195] | 3 | SRFPCM method | an amended rough FCM for optical remote sensing and synthetic aperture radar images | i) robust to noise and outliers; ii) deals with incompleteness, vagueness, and uncertainty | i) sensitive to noisy environments; ii) too many parameters need to be adjusted
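Many of the Class 3 entries above (e.g., rows 11, 12, and 17) modify FCM by altering the objective function or the distance while leaving the alternating-optimization loop untouched. The following minimal sketch (an illustration of this pattern, not any single cited method; all names and defaults are ours) makes the squared-dissimilarity function a parameter:

```python
import numpy as np

def fcm(X, c, distance2, m=2.0, iters=100, tol=1e-5, seed=0, eps=1e-10):
    """Generic FCM: alternate membership and center updates; distance2(X, V) -> (n, c)
    matrix of squared dissimilarities, so swapping the metric needs no other change."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    V = X[rng.choice(n, size=c, replace=False)]      # initial centers drawn from the data
    for _ in range(iters):
        d2 = distance2(X, V) + eps
        U = 1.0 / (d2 ** (1.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)            # u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1))
        W = U ** m
        V_new = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted-mean center update
        if np.linalg.norm(V_new - V) < tol:
            V = V_new
            break
        V = V_new
    return U, V

def sq_euclidean(X, V):
    """Default metric: squared Euclidean distances, shape (n, c)."""
    return ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
```

One caveat worth making explicit: the weighted-mean center update is the exact minimizer only for the squared Euclidean case; for other dissimilarities the center update must be rederived, which is precisely where methods such as [174] differ from this generic loop.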
REFERENCES
1. J. C. Dunn, "A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters," 1973.
2. J. C. Bezdek, Pattern recognition with fuzzy objective function algorithms: Kluwer Academic Publishers,
1981.
3. J. C. Bezdek, "Fuzzy Mathematics in Pattern Classification," Applied Math. Center, Cornell University,
1973.
4. M.-S. Yang and C.-H. Ko, "On a class of fuzzy c-numbers clustering procedures for fuzzy data," Fuzzy Sets and Systems, vol. 84, pp. 49-60, 1996.
5. M. S. Yang, "A survey of fuzzy clustering," Mathematical and Computer Modelling, vol. 18, pp. 1-16, 1993.
6. D. E. Gustafson and W. C. Kessel, "Fuzzy clustering with a fuzzy covariance matrix," in Decision and
Control including the 17th Symposium on Adaptive Processes, 1978 IEEE Conference on, 1978, pp. 761-
766.
7. R. Krishnapuram and J. M. Keller, "A possibilistic approach to clustering," Fuzzy Systems, IEEE
Transactions on, vol. 1, pp. 98-110, 1993.
8. R. N. Dave and R. Krishnapuram, "Robust clustering methods: a unified view," Fuzzy Systems, IEEE
Transactions on, vol. 5, pp. 270-293, 1997.
9. D. Dubois and H. Prade, Possibility theory: Springer, 1988.
10. M. B. Bonab, S. Z. M. Hashim, S. M. Shamsuddin, and N. Gharaei, "A Robust Hyper-Heuristic
Algorithm For Clustering," in International Science Postgraduate Conference (ISPC2016), 4th
International Conference on, 2016, pp. 515-524.
11. M. B. Bonab, "Modified K-Means Algorithm for Genetic Clustering," IJCSNS, vol. 11, p. 24, 2011.
12. L. A. Zadeh, "Fuzzy sets," Information and Control, vol. 8, pp. 338-353, 1965.
13. L. A. Zadeh, "Outline of a New Approach to the Analysis of Complex Systems and Decision Processes," Systems, Man and Cybernetics, IEEE Transactions on, vol. SMC-3, pp. 28-44, 1973.
14. L. A. Zadeh, "Fuzzy algorithms," Information and Control, vol. 12, pp. 94-102, 1968.
15. E. T. Lee and L. A. Zadeh, "Note on fuzzy languages," Information Sciences, vol. 1, pp. 421-434, 1969.
16. L. A. Zadeh, "The concept of a linguistic variable and its application to approximate reasoning-II," Information Sciences, vol. 8, pp. 301-357, 1975.
17. L. A. Zadeh, "The concept of a linguistic variable and its application to approximate reasoning-III," Information Sciences, vol. 9, pp. 43-80, 1975.
18. L. A. Zadeh, "The concept of a linguistic variable and its application to approximate reasoning-I," Information Sciences, vol. 8, pp. 199-249, 1975.
19. L. A. Zadeh, "Quantitative fuzzy semantics," Information Sciences, vol. 3, pp. 159-176, 1971.
20. L. A. Zadeh, "Similarity relations and fuzzy orderings," Information Sciences, vol. 3, pp. 177-200, 1971.
21. L. A. Zadeh, "Probability measures of Fuzzy events," Journal of Mathematical Analysis and Applications, vol. 23, pp. 421-427, 1968.
22. F. Klawonn and F. Höppner, "What Is Fuzzy about Fuzzy Clustering? Understanding and Improving the
Concept of the Fuzzifier," in Advances in Intelligent Data Analysis V. vol. 2810, M. R. Berthold, H.-J.
Lenz, E. Bradley, R. Kruse, and C. Borgelt, Eds., ed: Springer Berlin Heidelberg, 2003, pp. 254-264.
23. I. Gath and A. B. Geva, "Unsupervised optimal fuzzy clustering," Pattern Analysis and Machine
Intelligence, IEEE Transactions on, vol. 11, pp. 773-780, 1989.
24. M. Babrdelbonb, S. Z. M. H. M. Hashim, and N. E. N. Bazin, "Data Analysis by Combining the Modified
K-Means and Imperialist Competitive Algorithm," Jurnal Teknologi, vol. 70, 2014.
25. J. C. Dunn, "A Fuzzy Relative of the ISODATA Process and Its Use in Detecting Compact Well-Separated Clusters," Journal of Cybernetics, vol. 3, pp. 32-57, 1973.
26. J. C. Bezdek, "Cluster validity with fuzzy sets," 1973.
27. R. O. Duda and P. E. Hart, Pattern classification and scene analysis vol. 3: Wiley New York, 1973.
28. G. H. Ball and D. J. Hall, "ISODATA, a novel method of data analysis and pattern classification," DTIC Document, 1965.
29. M. B. Bonab, Y. H. Tay, S. Z. Mohd Hashim, and K. T. Soon, "An Efficient Robust Hyper-Heuristic
Algorithm to Clustering Problem," Cham, 2019, pp. 48-60.
30. D. Dubois and H. Prade, Possibility theory: Plenum Press, New York, 1988.
31. F. Klawonn, R. Kruse, and H. Timm, "Fuzzy Shell Cluster Analysis," in Learning, Networks and
Statistics. vol. 382, G. Della Riccia, H.-J. Lenz, and R. Kruse, Eds., ed: Springer Vienna, 1997, pp. 105-
119.
32. M. Yaghini, M. Ranjpour, and F. Yousefi, "A Survey of Fuzzy Clustering Algorithms," presented at the
The 3rd Iran Data Mining Conference, 2009.
33. J. Keller, R. Krisnapuram, and N. R. Pal, Fuzzy models and algorithms for pattern recognition and image
processing vol. 4: Springer, 2005.
34. J. C. Bezdek and S. K. Pal, Fuzzy models for pattern recognition vol. 56: IEEE Press, New York, 1992.
35. Z. Chi, H. Yan, and T. Pham, Fuzzy algorithms: with applications to image processing and pattern
recognition vol. 10: World Scientific, 1996.
36. V. Vapnik, The nature of statistical learning theory: springer, 2000.
37. V. N. Vapnik, The nature of statistical learning theory: Springer-Verlag New York, Inc., 1995.
38. B. Schölkopf and A. J. Smola, Learning with kernels: support vector machines, regularization,
optimization, and beyond: MIT press, 2002.
39. Z.-d. Wu, W.-x. Xie, and J.-p. Yu, "Fuzzy c-means clustering algorithm based on kernel method," in
Computational Intelligence and Multimedia Applications, 2003. ICCIMA 2003. Proceedings. Fifth
International Conference on, 2003, pp. 49-54.
40. D.-Q. Zhang and S.-C. Chen, "Clustering Incomplete Data Using Kernel-Based Fuzzy C-means Algorithm," Neural Processing Letters, vol. 18, pp. 155-162, 2003.
41. D. Q. Zhang and S. C. Chen, "Kernel-based fuzzy and possibilistic c-means clustering," in Proceedings
of the International Conference Artificial Neural Network, 2003, pp. 122-125.
42. R. N. Dave, "Characterization and detection of noise in clustering," Pattern Recognition Letters, vol. 12, pp. 657-664, 1991.
43. R. N. Davé and S. Sen, "On generalizing the noise clustering algorithms," Proc. of The 7th IFSA world congress, vol. 3, pp. 205-210, 1997.
44. R. Dave and S. Sen, "Generalized noise clustering as a robust fuzzy c-M-estimators model," in Fuzzy Information Processing Society - NAFIPS, 1998 Conference of the North American, 1998, pp. 256-260.
45. H. Frigui and R. Krishnapuram, "A robust algorithm for automatic extraction of an unknown number of clusters from noisy data," Pattern Recognition Letters, vol. 17, pp. 1223-1232, 1996.
46. A. Keller, "Fuzzy clustering with outliers," in Fuzzy Information Processing Society, 2000. NAFIPS.
19th International Conference of the North American, 2000, pp. 143-147.
47. F. Klawonn and F. Höppner, "An alternative approach to the fuzzifier in fuzzy clustering to obtain better
clustering results," in Proc. 3rd Eusflat Conference, 2003, pp. 730-734.
48. F. Klawonn and F. Höppner, "What is fuzzy about fuzzy clustering? Understanding and improving the
concept of the fuzzifier," in Advances in Intelligent Data Analysis V, ed: Springer, 2003, pp. 254-264.
49. P. J. Rousseeuw, E. Trauwaert, and L. Kaufman, "Fuzzy clustering with high contrast," Journal of Computational and Applied Mathematics, vol. 64, pp. 81-90, 1995.
50. F. Klawonn, "Fuzzy clustering: insights and a new approach," Mathware & soft computing, vol. 11, pp. 125-142, 2008.
51. H. Sahbi and B. Nozha, "Validity of Fuzzy Clustering Using Entropy Regularization," in Fuzzy Systems, 2005. FUZZ '05. The 14th IEEE International Conference on, 2005, pp. 177-182.
52. H. Frigui and R. Krishnapuram, "Clustering by competitive agglomeration," Pattern Recognition, vol. 30, pp. 1109-1119, 1997.
53. H. Frigui and R. Krishnapuram, "A robust competitive clustering algorithm with applications in computer
vision," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 21, pp. 450-465, 1999.
54. M. Halkidi, Y. Batistakis, and M. Vazirgiannis, "Clustering validity checking methods: part II,"
SIGMOD Rec., vol. 31, pp. 19-27, 2002.
55. M. Halkidi, Y. Batistakis, and M. Vazirgiannis, "Cluster validity methods: part I," SIGMOD Rec., vol.
31, pp. 40-45, 2002.
56. H. Timm and R. Kruse, "A modification to improve possibilistic fuzzy cluster analysis," in Fuzzy
Systems, 2002. FUZZ-IEEE'02. Proceedings of the 2002 IEEE International Conference on, 2002, pp.
1460-1465.
57. H. Timm, C. Borgelt, C. Döring, and R. Kruse, "An extension to possibilistic fuzzy cluster analysis," Fuzzy Sets and Systems, vol. 147, pp. 3-16, 2004.
58. N. R. Pal, K. Pal, and J. C. Bezdek, "A mixed c-means clustering model," in Fuzzy Systems, 1997.,
Proceedings of the Sixth IEEE International Conference on, 1997, pp. 11-21 vol.1.
59. N. R. Pal, K. Pal, J. M. Keller, and J. C. Bezdek, "A new hybrid c-means clustering model," in Fuzzy
Systems, 2004. Proceedings. 2004 IEEE International Conference on, 2004, pp. 179-184 vol.1.
60. R. N. Dave and S. Sen, "Generalized noise clustering as a robust fuzzy c-M-estimators model," in Fuzzy
Information Processing Society - NAFIPS, 1998 Conference of the North American, 1998, pp. 256-260.
61. E. R. Hruschka, R. J. G. B. Campello, and L. N. de Castro, "Evolutionary search for optimal fuzzy c-
means clustering," in Fuzzy Systems, 2004. Proceedings. 2004 IEEE International Conference on, 2004,
pp. 685-690 vol.2.
62. D. Yun-Ying, Z. Yun-Jie, and C. Chun-Ling, "Multistage random sampling genetic-algorithm-based
fuzzy c-means clustering algorithm," in Machine Learning and Cybernetics, 2004. Proceedings of 2004
International Conference on, 2004, pp. 2069-2073 vol.4.
63. N. R. Pal, K. Pal, J. M. Keller, and J. C. Bezdek, "A Possibilistic Fuzzy c-Means Clustering Algorithm,"
Fuzzy Systems, IEEE Transactions on, vol. 13, pp. 517-530, 2005.
64. L. O. Hall and P. M. Kanade, "Swarm Based Fuzzy Clustering with Partition Validity," in Fuzzy Systems,
2005. FUZZ '05. The 14th IEEE International Conference on, 2005, pp. 991-995.
65. W. Xiao-Hong and Z. Jian-Jiang, "Possibilistic Fuzzy c-Means Clustering Model Using Kernel
Methods," in Computational Intelligence for Modelling, Control and Automation, 2005 and International
Conference on Intelligent Agents, Web Technologies and Internet Commerce, International Conference
on, 2005, pp. 465-470.
66. L. Li and X. Wenbo, "An Immune-Inspired Evolutionary Fuzzy Clustering Algorithm Based on
Constrained Optimization," in Intelligent Systems Design and Applications, 2006. ISDA '06. Sixth
International Conference on, 2006, pp. 966-970.
67. B. Ojeda-Magafia, R. Ruelas, M. Corona-Nakamura, and D. Andina, "An improvement to the
possibilistic fuzzy c-means clustering algorithm," in Automation Congress, 2006. WAC'06. World, 2006,
pp. 1-8.
68. I. Ohyama, Y. Suzuki, S. Saga, and J. Maeda, "Fuzzy Modeling Based on Noise Cluster and Possibilistic
Clustering," in Adaptive and Learning Systems, 2006 IEEE Mountain Workshop on, 2006, pp. 225-230.
69. S. Miyamoto, R. Inokuchi, and Y. Kuroda, "Possibilistic and Fuzzy c-Means Clustering with Weighted
Objects," in Fuzzy Systems, 2006 IEEE International Conference on, 2006, pp. 869-874.
70. F. Masulli and S. Rovetta, "Soft transition from probabilistic to possibilistic fuzzy clustering," Fuzzy
Systems, IEEE Transactions on, vol. 14, pp. 516-527, 2006.
71. Q. Jiang and M. Jia, "Novel Hybrid Clustering Algorithm Incorporating Artificial Immunity into Fuzzy
Kernel Clustering for Pattern Recognition," in Control Conference, 2007. CCC 2007. Chinese, 2007, pp.
592-596.
72. P. Maji and S. K. Pal, "Rough Set Based Generalized Fuzzy C-Means Algorithm and Quantitative Indices," Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, vol. 37, pp. 1529-1540, 2007.
73. W. Hao, Y. Shiqin, X. Wenbo, and S. Jun, "Scalability of Hybrid Fuzzy C-Means Algorithm Based on
Quantum-Behaved PSO," in Fuzzy Systems and Knowledge Discovery, 2007. FSKD 2007. Fourth
International Conference on, 2007, pp. 261-265.
74. C. Wei and F. Kangling, "A hybridized clustering approach using particle swarm optimization for image
segmentation," in Audio, Language and Image Processing, 2008. ICALIP 2008. International Conference
on, 2008, pp. 1365-1368.
75. S. Di Nuovo and V. Catania, "An evolutionary fuzzy c-means approach for clustering of bio-informatics
databases," in Fuzzy Systems, 2008. FUZZ-IEEE 2008. (IEEE World Congress on Computational
Intelligence). IEEE International Conference on, 2008, pp. 2077-2082.
76. Y. Zhou, Y. e. Li, and S. Xia, "Robust Fuzzy-Possibilistic C-Means Algorithm," in Intelligent
Information Technology Application, 2008. IITA '08. Second International Symposium on, 2008, pp.
669-673.
77. M.-S. Yang and H.-S. Tsai, "A Gaussian kernel-based fuzzy c-means algorithm with a spatial bias correction," Pattern Recognition Letters, vol. 29, pp. 1713-1725, 2008.
78. S.-h. Liu and H.-f. Hou, "A combination of mixture Genetic Algorithm and Fuzzy C-means Clustering
Algorithm," in IT in Medicine & Education, 2009. ITIME '09. IEEE International Symposium on, 2009,
pp. 254-258.
79. L. Szilágyi, D. Iclănzan, S. M. Szilágyi, D. Dumitrescu, et al., "A generalized c-means clustering model optimized via evolutionary computation," in Fuzzy Systems, 2009. FUZZ-IEEE 2009. IEEE International Conference on, 2009, pp. 451-455.
80. Z. Wei, L. Cheng, and Z. Yu-zhu, "A new hybrid algorithm for image segmentation based on rough sets
and enhanced fuzzy c-means clustering," in Automation and Logistics, 2009. ICAL '09. IEEE
International Conference on, 2009, pp. 1212-1216.
81. L. Szilágyi, S. M. Szilágyi, and Z. Benyó, "A unified approach to c-means clustering models," in Fuzzy Systems, 2009. FUZZ-IEEE 2009. IEEE International Conference on, 2009, pp. 456-461.
82. H. Izakian, A. Abraham, and V. Snasel, "Fuzzy clustering using hybrid fuzzy c-means and fuzzy particle
swarm optimization," in Nature & Biologically Inspired Computing, 2009. NaBIC 2009. World Congress
on, 2009, pp. 1690-1694.
83. C. Yongming, J. Mingyan, and Y. Dongfeng, "Novel Clustering Algorithms Based on Improved
Artificial Fish Swarm Algorithm," in Fuzzy Systems and Knowledge Discovery, 2009. FSKD '09. Sixth
International Conference on, 2009, pp. 141-145.
84. Q. Fuheng, M. SiLiang, and H. Yating, "Generalized Possibilistic C-Means Clustering Based on
Differential Evolution Algorithm," in Intelligent Systems and Applications, 2009. ISA 2009.
International Workshop on, 2009, pp. 1-4.
85. H. Dong, Y. Dong, C. Zhou, G. Yin, and W. Hou, "A fuzzy clustering algorithm based on evolutionary programming," Expert Systems with Applications, vol. 36, pp. 11792-11800, 2009.
86. U. Maulik and I. Saha, "Modified differential evolution based fuzzy clustering for pixel classification in remote sensing imagery," Pattern Recognition, vol. 42, pp. 2135-2149, 2009.
87. W. Hao, L. Danyun, and C. Yayun, "A New Scalability of Hybrid Fuzzy C-Means Algorithm," in
Artificial Intelligence and Computational Intelligence (AICI), 2010 International Conference on, 2010,
pp. 55-58.
88. L. Zhu, S. Qu, and T. Du, "Adaptive fuzzy clustering based on Genetic algorithm," in Advanced
Computer Control (ICACC), 2010 2nd International Conference on, 2010, pp. 79-82.