Minimum spanning tree (MST)-based clustering algorithms are widely used to detect clusters with diverse densities and irregular shapes. However, most such algorithms construct the MST over the entire dataset, which incurs significant computational overhead. To alleviate this issue, our proposed algorithm, R-MST, builds the MST from representative points rather than from all sample points. Additionally, we improve the representative point selection strategy using density and nearest-neighbor distance, so that representative points are distributed more uniformly in sparse regions and the algorithm performs well on datasets with varying densities. Furthermore, traditional methods for eliminating inconsistent edges generally require prior knowledge of the number of clusters, which is not always available in practical applications. We therefore propose an adaptive method that uses mutual neighbors to identify inconsistent edges and determine the optimal number of clusters automatically. Experimental results indicate that R-MST improves both the efficiency and the accuracy of clustering.
Citation: Hui Du, Depeng Lu, Zhihe Wang, Cuntao Ma, Xinxin Shi, Xiaoli Wang. Fast clustering algorithm based on MST of representative points, Mathematical Biosciences and Engineering, 2023, 20(9): 15830-15858. doi: 10.3934/mbe.2023705
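To make the pipeline described in the abstract concrete, the sketch below builds an MST over a reduced set of representative points and then cuts edges with a mutual-neighbor test. It is a minimal illustration under assumptions, not the paper's published method: the density estimate (inverse mean k-NN distance), the "densest-neighbor" selection rule, and the mutual-k-NN cut criterion are simplified stand-ins for the paper's formulas, and the names r_mst_sketch, k, and the 1e-12 smoothing term are invented for this sketch.

# Minimal sketch of an R-MST-style pipeline (assumptions noted above).
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import cdist
from sklearn.neighbors import NearestNeighbors

def r_mst_sketch(X, k=10):
    n = len(X)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, idx = nn.kneighbors(X)              # column 0 is the point itself
    density = 1.0 / (dist[:, 1:].mean(axis=1) + 1e-12)

    # Representative points (assumed rule): a point becomes a representative
    # when it is at least as dense as every one of its k nearest neighbors.
    reps = [i for i in range(n)
            if density[i] >= density[idx[i, 1:]].max()]
    R = X[reps]

    # MST over the representatives only, which is far smaller than an MST
    # over the full dataset.
    W = cdist(R, R)
    mst = minimum_spanning_tree(W).toarray()

    # Mutual-neighbor test (assumed stand-in for the paper's adaptive
    # criterion): cut an MST edge when neither endpoint is among the other's
    # k nearest representatives.
    order = np.argsort(W, axis=1)[:, 1:k + 1]
    for i, j in zip(*np.nonzero(mst)):
        if j not in order[i] and i not in order[j]:
            mst[i, j] = 0.0

    # The surviving connected components are the clusters, so the cluster
    # count emerges from the cut step instead of being supplied in advance.
    n_clusters, rep_labels = connected_components(mst, directed=False)

    # Assign every original point the label of its nearest representative.
    labels = rep_labels[np.argmin(cdist(X, R), axis=1)]
    return labels, n_clusters

Called as labels, n_clusters = r_mst_sketch(X), the sketch returns one label per sample together with an automatically determined cluster count, mirroring the two claims of the abstract: the MST is built on a reduced point set, and no prior knowledge of the number of clusters is required.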