Research article

A robust alternating least squares K-means clustering approach for times series using dynamic time warping dissimilarities

  • Received: 22 October 2023 Revised: 29 December 2023 Accepted: 22 January 2024 Published: 06 February 2024
  • Time series clustering is a common task in many fields. Algorithms such as K-means and model-based clustering procedures rely on multivariate assumptions about the data, such as the use of Euclidean distances or a probability distribution for the observed variables. However, in many cases the observed time series are of unequal length, contain missing data, or simply cover time periods that are not comparable across series, which rules out the direct application of these methods. In this framework, dynamic time warping is a well-known and advisable elastic dissimilarity procedure, particularly when the analysis focuses on the shape of the time series. Given a dissimilarity matrix, K-means clustering can be performed using a procedure based on classical multidimensional scaling in full dimension, which for large sample sizes results in a high-dimensional clustering problem. In this paper, we propose a procedure that is robust to dimensionality reduction, based on an auxiliary configuration estimated from the squared dynamic time warping dissimilarities using an alternating least squares procedure. The performance of the model is compared with that obtained using classical multidimensional scaling, as well as with model-based clustering applied to the related auxiliary linear projection. An extensive Monte Carlo study, considering both real and simulated datasets, is employed to analyze the performance of the proposed method. The results indicate that the proposed K-means procedure, in general, slightly improves on the one based on the classical configuration, with both being robust in reduced dimensionality, which makes it advisable for large datasets. In contrast, model-based clustering in the classical projection is strongly affected by high dimensionality, yielding worse results than K-means even in reduced dimension.
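    The pipeline the abstract describes — pairwise dynamic time warping dissimilarities, a low-dimensional configuration derived from the squared dissimilarities, then K-means on that configuration — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the `dtw`, `classical_mds`, and `kmeans` helpers are simplified stand-ins, and in particular the paper estimates its auxiliary configuration by alternating least squares, whereas this sketch uses ordinary classical scaling.

    ```python
    # Hedged sketch: cluster unequal-length series via (1) pairwise DTW,
    # (2) a classical-MDS configuration from the squared dissimilarities,
    # (3) K-means on that configuration. Illustrative only, not the paper's ALS method.
    import numpy as np

    def dtw(x, y):
        """DTW distance between two 1-D series, which may differ in length."""
        n, m = len(x), len(y)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(x[i - 1] - y[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def classical_mds(delta, p):
        """Classical scaling: double-center -0.5 * delta**2, keep top-p eigenpairs."""
        n = delta.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        B = -0.5 * J @ (delta ** 2) @ J
        w, V = np.linalg.eigh(B)                # ascending eigenvalues
        idx = np.argsort(w)[::-1][:p]           # largest p
        w = np.clip(w[idx], 0.0, None)          # drop negative (non-Euclidean) part
        return V[:, idx] * np.sqrt(w)

    def kmeans(X, k, n_iter=100, seed=0):
        """Plain Lloyd's algorithm with random initial centers."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(n_iter):
            labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        return labels
    ```

    A typical use would build the full dissimilarity matrix from a list of series of unequal lengths, embed it in a low dimension `p`, and run K-means on the resulting configuration; the paper's point is that the clustering remains stable as `p` is reduced.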

    Citation: J. Fernando Vera-Vera, J. Antonio Roldán-Nofuentes. A robust alternating least squares K-means clustering approach for times series using dynamic time warping dissimilarities[J]. Mathematical Biosciences and Engineering, 2024, 21(3): 3631-3651. doi: 10.3934/mbe.2024160

  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)