
The detection of change points in chaotic and non-stationary time series presents a critical challenge for numerous practical applications, particularly in fields such as finance, climatology, and engineering. Traditional statistical methods, grounded in stationary models, are often ill-suited to capture the dynamics of processes governed by stochastic chaos. This paper explores modern approaches to change point detection, focusing on multivariate regression analysis and machine learning techniques. We demonstrate the limitations of conventional models and propose hybrid methods that leverage long-term correlations and metric-based learning to improve detection accuracy. Our study presents comparative analyses of existing early detection techniques and introduces advanced algorithms tailored to non-stationary environments, including online and offline segmentation strategies. By applying these methods to financial market data, particularly in monitoring currency pairs like EUR/USD, we illustrate how dynamic filtering and multiregression analysis can significantly enhance the identification of change points. The results underscore the importance of adapting detection models to the specific characteristics of chaotic data, offering practical solutions for improving decision-making in complex systems. Key findings reveal that while no universal solution exists for detecting change points in chaotic time series, integrating machine learning and multivariate approaches allows for more robust and adaptive forecasting models. The work highlights the potential for future advancements in neural network applications and multi-expert decision systems, further enhancing predictive accuracy in volatile environments.
Citation: Alexander Musaev, Dmitry Grigoriev, Maxim Kolosov. Adaptive algorithms for change point detection in financial time series[J]. AIMS Mathematics, 2024, 9(12): 35238-35263. doi: 10.3934/math.20241674
The challenge of detecting changes in time series properties, often referred to as the problem of "change point detection", has not only remained relevant over the past few decades but has also intensified as it extends into new, strategically significant fields such as climatology, seismology, sociology, psychology, economics, and commerce. It is also critical in disaster prediction, medical and technical diagnostics, monitoring of maneuvering objects, and a wide range of issues related to artificial intelligence and cognitive science, as well as image and speech signal processing, among others.
A change point refers to a sudden shift in one or more variables within time series data. In one-dimensional cases, this involves a single variable; in multidimensional cases, several variables may be affected. These abrupt changes signify transitions between states and are distinct from gradual trends, as they occur instantaneously. The process of identifying these sudden shifts is formally known as change point detection [1].
The growing importance and diversity of this problem stem not only from its applications but also from evolving perspectives on the reality surrounding us and the emergence of new technologies for its mathematical description. A particularly crucial area of knowledge has emerged, requiring the formalized description of unstable environments and the processes occurring within them. These inherently nonlinear challenges are characterized by the lack of dynamic stability in observable processes, where even the smallest perturbations can lead to unpredictable outcomes. The mathematical framework for describing such processes is provided by the theory of dynamic chaos [2,3,4]. However, the presence of random components in the monitoring data of the observed object further complicates the mathematical modeling, placing it within the realm of stochastic chaos.
Under these conditions, the traditional problem of change point detection encounters several new and previously unaddressed challenges:
(1) In problems governed by chaotic processes, the condition of repeatability is not met. From a forecasting perspective, this means that two time intervals with similar dynamic structures (according to the chosen proximity metric) may lead to entirely different outcomes. In other words, traditional learning systems based on metric similarity algorithms may prove to be ineffective. Moreover, the lack of repeatability undermines the foundation of the entire probabilistic-statistical paradigm. This implies that conventional statistical data analysis algorithms may result in ineffective and unreliable assessments and decisions.
(2) Changes in the properties of observation series, which describe the dynamics of a controlled object's state, are traditionally attributed to external factors from the surrounding environment. However, in chaotic systems, spontaneous emergence of multiple local trends of indeterminate duration is possible. This creates a need for fundamentally new approaches to detecting change points, based on analyzing signal deformations within the fine structures of observable processes.
The complexity of applying statistical data analysis algorithms does not imply their complete unsuitability for forecasting change points in chaotic data. However, to use them effectively, it is essential to define their applicability boundaries, assess the expected degree of reduced efficiency, and explore the possibility of regularization or ordering transformations of the original data set. For instance, previous studies [5] have explored the use of correlation relationships in multivariate observation series of chaotic processes as a regularizing factor.
Let's begin by considering the traditional formulation of the change point detection problem. We are given a discrete time series of observations $y_k$, $k = 1, \dots, n$, obtained through the monitoring of a deterministic but unknown process $x(t)$, $t \in [0, T]$. The index $k$ represents the discrete time steps at which the process is observed, corresponding to the moments $t = k(T/n)$. The smoothed values of these observations constitute the systematic component of the time series.
The classical approach to change point detection is based on a model where the observations yk, k = 1, …, n, represent a random process with an unknown deterministic trend. The stochastic component of the time series νk, k = 1, …, n, is typically described, in the simplest case, by a Gaussian stationary time series model. To further simplify the mathematics, a model of independent and identically distributed (i.i.d.) Gaussian noise is often used.
One of the most well-known formal representations of this process is the additive Wold decomposition [6]:
$$y_k = x_k + \nu_k, \quad k = 1, \dots, n, \tag{1}$$
where the noise component $\nu_k$, $k = 1, \dots, n$, is a stationary random process, often modeled as a moving average process
$$\nu_k = \sum_{j=0}^{\infty} a_j \epsilon_{k-j}, \quad k = 1, \dots, n. \tag{2}$$
Here, $\epsilon_k$, $k = 1, \dots, n$, represents white noise. It is evident that the process described by Eq (1) cannot be considered stationary, as it includes a time-varying systematic component $x_k$, $k = 1, \dots, n$. However, when it is possible to identify and reconstruct the underlying deterministic process, the remaining noise component $\nu_k = y_k - x_k$, $k = 1, \dots, n$, can, under the initial assumption, be described as a parameterized stationary process with a constant probability density function $f(\nu_k, \theta)$. Further simplification of the data model involves the assumption that the distribution of the noise component is normal. Although many studies on robust statistical data analysis critically evaluate this assumption, it holds approximately under the conditions of the central limit theorems of probability theory, whereby the empirical distribution converges to the Gaussian model. In this scenario, the parameter vector of the distribution function consists of two parameters: the location parameter $\theta_L$ and the scale parameter $\theta_S$. Thus, through successive simplifications of the initial data model, the change point detection problem is reduced to analyzing the parameters of a weakly stationary time series, specifically investigating the constancy of the location and scale parameters.
By employing this simplified data model, where the observations are treated as a sequence of random observations with stationary measurement noise, the change point detection problem is formalized within the probabilistic-statistical paradigm.
Let $f(y_k, \theta)$, $k = 1, \dots, n$, represent the probability density function of the observations with a parameter vector $\theta$. At each observation step, an empirical estimate $\hat f(y_k, \theta)$ of the distribution and a corresponding discrepancy metric $\hat\mu_k = \mu(\hat f(y_k, \theta), f(y_k, \theta))$, $k = 1, \dots, n$, are computed. If the value of this metric exceeds a threshold $\mu^*(\gamma)$ corresponding to the chosen confidence level $\gamma$ for decision-making, i.e., if $|\hat\mu_k| > \mu^*(\gamma)$, a decision is made to detect the change point at the $k$-th time point.
In terms of hypothesis testing theory, this involves testing the null hypothesis $H_0: \delta\mu_k = \hat\mu_k - \mu^*(\gamma) = 0$ against the alternative $H_1: \delta\mu_k \neq 0$. The standard approach, based on the assumption that $\delta\mu_k$ is Gaussian, proposes using the Student's t-statistic $t = \delta\mu_k / s(\hat\mu_k)$ with $k - 1$ degrees of freedom, where $s(\hat\mu_k)$ is the standard deviation of the discrepancy metric $\hat\mu_k$. If the current value of the t-statistic exceeds the critical value $t^*$, determined from the Student's t-distribution for the chosen confidence level $\gamma$, a decision is made to detect the change point.
When the problem is addressed under conditions of weak stationarity (or stationarity in the broad sense), the procedure is similar, but the metric used is the distance between the parameters (most often Gaussian) of the distribution, $\hat\mu_k = \mu(\theta, \hat\theta_k)$, $k = 1, \dots, n$, where $\hat\theta_k$ is the parameter estimate at the current time $k$.
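As a minimal sketch of this parameter-comparison scheme, consider testing the windowed location estimate against a known reference mean; the window size, reference value, and confidence level below are illustrative assumptions, not values from the paper:

```python
# Sliding-window Student's t-test for a shift in the location parameter.
import numpy as np
from scipy import stats

def t_statistic_alarms(y, ref_mean=0.0, window=50, gamma=0.99):
    """Return indices where the windowed mean deviates significantly from ref_mean."""
    y = np.asarray(y, dtype=float)
    # Two-sided critical value t* for the chosen confidence level gamma
    t_crit = stats.t.ppf(1 - (1 - gamma) / 2, df=window - 1)
    alarms = []
    for k in range(window, len(y)):
        seg = y[k - window:k]
        # t-statistic for H0: E[y] = ref_mean within the current window
        t = (seg.mean() - ref_mean) / (seg.std(ddof=1) / np.sqrt(window))
        if abs(t) > t_crit:
            alarms.append(k)
    return alarms
```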
There are currently numerous technologies for detecting change points, each allowing for various faceted and hierarchical classifications based on different classification criteria. In this work, we will adhere to the classification presented in Figure 1. This faceted classification employs three primary criteria: the nature of data processing (joint or sequential processing), the type of learning involved, and the dimensionality of the problem being addressed. In this paper, we will focus primarily on proactive (i.e., predictive) online technologies for both univariate and multivariate data.
The topic of change point detection algorithms has been extensively explored in the literature [7], with dedicated reviews focusing on offline algorithms [1] and sequential (online) algorithms [8], as well as unsupervised [9] and supervised approaches [10]. However, this article gives limited attention to these studies, as the decomposition methods they describe are not suited to handling chaotic processes and are ineffective in the specific cases discussed below.
Let us begin by examining a simple, idealized case of change point detection in a Gaussian stationary process with parameters $N(\theta_L, \theta_S) = N(0, 1)$. Here, the first parameter represents the expected value (mean) of the stationary series, $\theta_L = E(\nu) = 0$, and the second parameter denotes the variance, $\theta_S = \sigma^2(\nu) = 1$. Figure 2 illustrates instances of abrupt changes in both the location and scale parameters over a 24-hour observation period, where measurements are taken at one-minute intervals, resulting in 1440 data points.
These abrupt changes occur in three distinct intervals:
• Between time points [150,400], where the location parameter increases by 2 units.
• Between time points [600,850], where the scale parameter (standard deviation) doubles.
• Between time points [1050, 1300], where the location parameter decreases by 2 units.
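As a minimal sketch, this synthetic scenario can be simulated directly from the three intervals listed above (the fixed seed is an illustrative assumption for reproducibility):

```python
# 1440 one-minute samples of N(0, 1) noise with the three parameter changes.
import numpy as np

rng = np.random.default_rng(0)
n = 1440
theta_L = np.zeros(n)        # location parameter, baseline 0
theta_S = np.ones(n)         # scale parameter (standard deviation), baseline 1
theta_L[150:400] += 2.0      # location increases by 2 units
theta_S[600:850] *= 2.0      # standard deviation doubles
theta_L[1050:1300] -= 2.0    # location decreases by 2 units
y = theta_L + theta_S * rng.standard_normal(n)
```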
Even in this highly simplified scenario, detecting change points proves to be a challenging task; visually identifying the exact moments when these parameter changes occur is not straightforward. Moreover, the model used here is of a stationary process, which is rarely encountered in real-world applications.
In practice, observed processes typically include a systematic component. To illustrate this complexity, we consider a second example involving non-stationary time series with clearly defined systematic components. Figures 3 and 4 display time series with the same change points as in the previous example, but with additional systematic components: a linear trend $x_k = -6 + 0.01k$, $k = 1, \dots, 1440$, in Figure 3, and a sinusoidal pattern $x_k = 3\sin(0.006k)$, $k = 1, \dots, 1440$, in Figure 4.
It becomes evident that in these examples, it is even more difficult to discern significant abrupt change points, despite the use of a deliberately simplified model involving stationary Gaussian noise. This model, while widely applied in numerous studies to this day, is seldom validated by the analysis of the statistical properties of real-world observations.
The examples discussed earlier pertained to a traditional and typically idealized framework for detecting change points. However, what occurs in real-world scenarios? To explore this, let's examine data obtained from monitoring the EUR/USD currency pair over a 100-day period (see Figure 5).
From the graph, it is evident that the time series represents an oscillatory, non-periodic random process characterized by numerous local trends and clear signs of large-scale self-similarity. This observation suggests that the systematic component $x_k$, $k = 1, \dots, n$, in Eq (1) might be considered a manifestation of deterministic chaos [11].
However, the presence of a random noise component $\nu_k$, $k = 1, \dots, n$, introduces further complexity, resulting in a process described by the concept of stochastic chaos [12,13,14]. Despite ongoing research, a rigorous mathematical description of stochastic chaos remains elusive. Consequently, the primary tools for investigating this phenomenon are numerical methods that analyze the local trends within the observed time series.
It should be noted that the structural model of observations (1) can still be applied to describe the presented process. However, in this context, its components will have a fundamentally different interpretation [15,16,17,18]:
(1) The systematic component $x_k$, $k = 1, \dots, n$, represents an oscillatory, non-periodic process with numerous local trends and abrupt changes in the mean value. This component can be described as a realization of a deterministic chaotic process;
(2) The random component $\nu_k$, $k = 1, \dots, n$, is a nonstationary stochastic process characterized by heteroskedasticity and a significant number of anomalous observations. As shown in [9], this component can be further decomposed into a sum of two stochastic processes: one with unstable but significant autocorrelation functions, and a white Gaussian noise.
(3) Conventional models, which typically define a change point as a shift in the mean or variance, are inadequate for this context. Here, the process shows a continuous change in the mean without maintaining a consistent variance. The challenges and intricacies of change point detection under these conditions will be addressed in detail in the sections that follow.
In revisiting the problem of detecting change points, it is important to clarify what constitutes a point of change in the properties of a time series (a structural break). To illustrate this, we perform a posterior segmentation of the time series by identifying segments of monotonic linear trends whose change in magnitude exceeds a threshold of $\Delta_y^* = 250$–$300$ pips (see Figure 6).
As demonstrated in the figure, the original time series can be divided into 13 segments, each of which is roughly approximated by a linear function with a change magnitude $\Delta_y^*$ of 250–300 pips. In this context, the structural break points of the observed process are the values within the time series where the sign of the approximating linear trend changes. These points are highlighted with red circles on the graph and can be considered change points.
(1) Segmentation refers to the process of dividing the observed time series into segments with fixed segmentation criteria. In this case, the criterion is the sign of the linear trend of the segment.
(2) The minimum change threshold of a monotonic trend is crucial when constructing the segmentation. Altering this threshold will change the entire segmentation structure.
(3) A pip is a standardized unit representing the normalized value of the price change of a financial instrument.
It is important to note the conditional nature of the segmentation presented in Figure 6. Modifying the minimum change threshold will inevitably lead to a completely different segmentation of the process and, consequently, a different set of structural break points. Changing the segmentation technique, such as using cubic splines, will also significantly affect the identified break points and so on.
It should be noted that the time series examples presented here represent boundary cases in terms of the difficulty of detecting structural breaks. Examples 1 and 2, which involve stationary noise, are the simplest for detecting structural breaks and have many well-developed and practically proven solutions [19,20,21]. Example 3, which involves chaotic data, is extremely complex and still lacks effective solutions. Most applied problems fall into an intermediate range of complexity and are addressed with varying levels of effectiveness, primarily using numerical data analysis methods.
The formulation of the change point detection problem described earlier is typically found in academic literature, where the standard example involves observing locally stationary processes. These processes maintain their statistical properties within certain time intervals, which are interrupted by abrupt changes in state or scale parameters. However, in practical scenarios, the observed process rarely maintains its parameters even locally; they are constantly shifting, with the differences lying primarily in the structure of these changes. This necessitates a revision of the concept of a change point and the corresponding formalization.
Firstly, it is important to recognize that the concept of strict stationarity is not suitable for tasks involving continuously changing statistical characteristics. Estimating the probability density, including kernel density estimates, requires a large volume of observations under identical conditions—something that is unachievable in the presence of strong non-stationarity in the data. A more constructive approach to detecting change points involves estimating numerical parameters of the distribution or the observed process itself, such as trend parameters.
This leads to the need for a formalized approach to change point detection that takes into account:
• The characteristics of the observed process;
• The way real-world observations reflect the chosen mathematical model of the data;
• The structure and specificity of the applied problem for which the change point detection is being carried out.
It is clear that considering these requirements and constraints narrows the generality of the problem formulation. However, it also makes the problem more concrete and relevant for practical applications.
An important consideration involves optional settings that significantly influence the effectiveness of the solution. Examples of such parameters for change point detection in non-stationary time series include:
• The size of the moving observation window L, within which statistical estimates of the criterion parameters are formed:
$$Y_{k-L,k} = [\,y_{k-L}, y_{k-L+1}, \dots, y_k\,], \quad k = L+1, \dots, n, \tag{3}$$
• The magnitude $\Delta_y^*$ of reverse fluctuations in the dynamics of the observed series below which a reversal should be treated as a false alarm, as determined by the specific content of the problem being solved;
• The required confidence level in the decisions made regarding the presence of a change point;
• The impact of decision-making errors on the terminal performance indicators of the task for which change point detection is being carried out, and so on.
• Selecting the threshold is a pivotal step in these algorithms, as the optimal value is highly task- and data-specific. In this study, we adopt an empirical approach to determine the threshold, relying on an analysis of the quality of change point detection across various scenarios.
The intermediate conclusion is that for non-stationary processes, the general formulation of the problem is poorly formalized in terms of constructing an algorithm for change point detection. It requires significant refinements related to:
• The dynamic and statistical structure of the original data;
• The selection and, if necessary, periodic adjustment of the observation model parameters and data processing algorithm;
• The objective function and constraints of the higher-level system for which the change point detection task is being addressed;
• The characteristics of the environment in which the observed process is embedded and the nature of their interactions, and so forth.
As an example of such a specification, consider the problem of posterior segmentation of time series data related to observing currency exchange rates on the Forex market.
Identifying trend reversal points on price charts is a critical task in financial asset management. These points indicate shifts in the direction of price movement, enabling investors to:
• Buy an asset as it begins to rise or sell when it starts to decline.
• Prevent losses by exiting a position before the trend reverses.
• Enter positions early during a growth phase and exit when the trend is fully established.
Various technical indicators and analytical methods can be employed to identify trend reversal points, facilitating more informed investment decisions. As noted earlier, a trend change point in this context refers to a change point in chaotic processes.
To illustrate the challenge of detecting change points in chaotic processes, let's consider the task of automatic posterior segmentation of observation series generated by a financial instrument monitoring system in the Forex market.
The problem is approached using one-dimensional offline change point detection (CPD) with supervised learning to create a data set for training machine learning models such as neural networks or statistical trading algorithms. The corresponding graph of the non-stationary process, obtained through minute-by-minute tracking of EUR/USD exchange rates over a 15-day observation period, is shown in Figure 7.
In the same figure, an offline segmentation of the entire observation period is performed. The segmentation divides the interval into segments where the hypothesis of a constant sign of the linear trend can be conditionally accepted. Thus, the goal of the segmentation is to identify the time intervals $\Delta T_j = [t_j, t_{j+1})$, $j = 1, \dots, m-1$, that satisfy the condition $\mathrm{sign}(\mathrm{tr}(Y_j)) = \mathrm{const}$, where $\mathrm{tr}(Y_j)$ represents the value of the linear trend over the segmentation interval $\Delta T_j$, $j = 1, \dots, m-1$, estimated as the coefficient of the linear approximation of the observation series within the sliding window of the sequential data scan, Eq (3).
The parameters of the CPD in this case are:
• The critical threshold $\Delta_y^*$, below which a detected reversal is classified a priori as a false alarm.
• The size of the sliding observation window L, used to estimate the linear trend coefficient $\hat a_1(k) = \mathrm{tr}(Y_{k-L,k})$, $k = L+1, \dots, n$, calculated using the least squares method (LSM) [22,23].
In offline CPD mode, the process is straightforward: at each step of the sequential scan, the trend estimate $\hat a_1(k) = \mathrm{tr}(Y_{k-L,k})$ and its sign $\mathrm{sign}(\hat a_1(k))$, $k = L+1, \dots, n$, are evaluated. A change in the trend sign at the $k$-th step marks this point as a potential change point. The final confirmation of the change point at the $k$-th observation step occurs when the condition $|y_k - y_{k+\mathrm{delay}}| > \Delta_y^*$ is met, meaning the size of the backward shift exceeds the critical value, which is estimated so as to minimize the probability of a false alarm.
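A minimal sketch of this two-step procedure is given below; the window length, delay horizon, and threshold are illustrative assumptions, and the "flat" class discussed next is omitted:

```python
# Offline trend-sign change point detection with backward-shift confirmation.
import numpy as np

def offline_trend_cpd(y, L=60, delay=30, delta_star=0.0025):
    """Return confirmed change point indices for a 1-D quote series y."""
    y = np.asarray(y, dtype=float)
    t = np.arange(L)
    change_points, prev_sign = [], 0
    for k in range(L, len(y) - delay):
        a1 = np.polyfit(t, y[k - L:k], 1)[0]   # LSM estimate of the trend a1(k)
        s = int(np.sign(a1))
        if prev_sign and s and s != prev_sign:
            # Candidate change point; confirm with the backward-shift condition
            if abs(y[k] - y[k + delay]) > delta_star:
                change_points.append(k)
        prev_sign = s if s else prev_sign
    return change_points
```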
Unlike the graph shown in Figure 6, the presented task considers three possible segment classifications: increasing, decreasing, and sideways (or "flat"), where the trend magnitude is insignificant.
The solution to this problem was relatively straightforward due to the ability to process data offline. However, in the case of real-time online change point detection, the task becomes significantly more complex and faces several fundamental challenges. One of these challenges, related to identifying the systematic component of the observation series, will be addressed in the next section. A similar approach to analyzing financial time series is discussed in [24], where alternative methods are employed to identify trend change points. In this paper, however, we take a different direction.
The challenge of developing and optimizing change point detection (CPD) is fundamentally tied to selecting a mathematical model that provides a formalized description of the observed series. It's important to acknowledge that choosing a model and its structure is inherently subjective. Generally, a single process can be described using multiple mathematical models, each with its own advantages and disadvantages. Interestingly, the degree to which a model aligns with the data is not the sole criterion for assessing the quality of the model. For example, increasing the degree of a polynomial typically enhances the model's fit to the observed time series but simultaneously raises the likelihood of falsely detecting a change point.
In this study, we use Wold decomposition (1) as the baseline model, with a substantially different description of its components compared to the original formulation. The altered descriptors of both the systematic and stochastic components introduce a highly complex problem of isolating the systematic component from the observation series, a critical step in solving the change point detection problem. Specifically, the presence of "long memory" in colored noise νk, k = 1, …, n and the non-periodic oscillatory structure of the systematic component xk, k = 1, …, n make it impossible to separate these components using traditional filtering algorithms, which are typically designed to reduce pure random white noise.
To illustrate the decomposition problem of model (1), we can examine a two-day observation of EUR/USD exchange rates (see Figure 8). In the simplest case, an exponential filter [25] is used to isolate the systematic component xk, k = 1, …, n, described by the formula:
$$\hat x_k = \alpha y_k + (1 - \alpha)\hat x_{k-1} = \hat x_{k-1} + \alpha (y_k - \hat x_{k-1}), \quad k = 1, \dots, n, \tag{4}$$
with two transmission coefficient variants: α1 = 0.1 and α2 = 0.005.
The graphs in Figure 8 show that with mild smoothing using $\alpha_1 = 0.1$, the systematic component contains many insignificant inflections, leading to an increase in Type I statistical errors. These "false alarms" suggest a change in trend direction (i.e., a change point) when none actually occurs.
On the other hand, when a more heavily smoothed curve is used (with α2 = 0.005), the inflections in the graph correspond to significant trend changes. However, the delay introduced by the shift in the estimated values of the financial instrument leads to lagging decisions, ultimately resulting in poor management outcomes.
The stochastic component of the observation series (see Figure 9) in the context of chaotic dynamics is generally non-stationary. This is evident from the heteroscedasticity and the large number of outlier observations. A hypothesis test on the stationarity and independence of the noise component in the observation series, based on currency instrument price data, is provided in [12].
To address these contradictions, more advanced dynamic filters are required. One such method is bidirectional exponential filtering [25]:
$$\hat z_{k-i} = \hat z_{k-i+1} + \alpha_b (y_{k-i} - \hat z_{k-i+1}), \quad i = 1, \dots, L, \tag{5}$$
$$\hat x_i = \hat x_{i-1} + \alpha_f (\hat z_i - \hat x_{i-1}), \quad i = k-L, \dots, k, \tag{6}$$
where $\alpha_b, \alpha_f$ are the backward and forward smoothing coefficients, respectively, and $\hat z_k = y_k$. This approach significantly reduces the lag in forming sequential estimates and improves forecasting accuracy by 10%–15%.
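A minimal sketch of the filter (5)-(6), assuming it is applied to a sliding window of the last $L+1$ observations:

```python
# Bidirectional exponential filtering: a backward pass from the newest to the
# oldest point of the window, followed by a forward re-smoothing pass.
import numpy as np

def bidirectional_exp_filter(y, L=60, alpha_b=0.5, alpha_f=0.1):
    """Return the doubly smoothed estimate of the last L+1 points of y."""
    window = np.asarray(y[-(L + 1):], dtype=float)
    # Backward pass (5), initialized at the current observation z_k = y_k
    z = np.empty_like(window)
    z[-1] = window[-1]
    for i in range(len(window) - 2, -1, -1):
        z[i] = z[i + 1] + alpha_b * (window[i] - z[i + 1])
    # Forward pass (6): ordinary exponential smoothing of the backward estimates
    x = np.empty_like(z)
    x[0] = z[0]
    for i in range(1, len(z)):
        x[i] = x[i - 1] + alpha_f * (z[i] - x[i - 1])
    return x
```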
As an illustration, consider an artificial composite data model:
$$y_k = A\sin(2\pi k) + I([k_1, k_2]) + a_1 y_{k-1} + \gamma v_k, \quad k = 1, \dots, n. \tag{7}$$
The systematic component of this model includes a sinusoidal term, a step function, and a linear trend. A realization of this model with parameters $A = 0.75$, $k_1 = 40$, and $k_2 = 70$ is shown in Figure 10. The same figure also presents the results of bidirectional exponential filtering using a sliding smoothing window of 60 data points, with filter transmission coefficients $\alpha_b = 0.5$ and $\alpha_f = 0.1$.
For change point detection, which we continue to define as a change in trend direction, we use an algorithm that detects inflection points based on five consecutive points of the smoothed curve, following the rules:
$$x_k > x_{k-1} \;\&\; x_{k-1} > x_{k-2} \;\&\; x_{k-2} < x_{k-3} < x_{k-4}, \tag{8}$$
or
$$x_k < x_{k-1} \;\&\; x_{k-1} < x_{k-2} \;\&\; x_{k-2} > x_{k-3} > x_{k-4}. \tag{9}$$
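A minimal sketch of rules (8) and (9) applied to a smoothed series: a local minimum is flagged when two rising steps follow two falling steps, and a local maximum in the mirrored case.

```python
# Five-point inflection detection on a smoothed series x.
def detect_inflections(x):
    """Return (index, kind) pairs for points satisfying rules (8) or (9)."""
    points = []
    for k in range(4, len(x)):
        rising_now = x[k] > x[k - 1] > x[k - 2]
        falling_before = x[k - 2] < x[k - 3] < x[k - 4]
        falling_now = x[k] < x[k - 1] < x[k - 2]
        rising_before = x[k - 2] > x[k - 3] > x[k - 4]
        if rising_now and falling_before:
            points.append((k - 2, "local minimum"))   # rule (8)
        elif falling_now and rising_before:
            points.append((k - 2, "local maximum"))   # rule (9)
    return points
```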
As shown in Figure 10, this approach achieves near-perfect detection of change points, making it highly useful for asset management strategies based on identifying inflection points in the systematic component of the observation series.
Unfortunately, even the best results obtained from local models do not guarantee effective asset management when applied to real data. We will illustrate this with an asset management example using observation series derived from the EUR/USD currency pair on the Forex market over 1,440 one-minute intervals (one day).
Figure 11 shows the result of bidirectional exponential filtering for this data using a smoothing window of 60 points, with filter transmission coefficients $\alpha_b = 0.5$ and $\alpha_f = 0.1$. In this graph, the detected changes are represented by local extrema, identified using rules (8) and (9), and marked with blue (local minima) and red (local maxima) points. Despite the quality of the bidirectional filtering, clusters of unsystematic local changes are detected, leading to Type I errors ("false alarms") that result in cumulative losses. These clusters are highlighted with gray ellipses.
This negative outcome is primarily due to the fundamental difference between dynamic chaos and stochastic processes. The presence of a purely random component vk,k=1,...,n in the data model (1) ensures the inevitability of false alarms, even after multiple rounds of smoothing.
Other negative results stem from the systematic delay in detecting changes by two time steps. In the context of inertia-free chaotic processes, characteristic of asset price dynamics in capital markets [26,27], a two-step delay in decision-making (with a one-minute time discretization) is unacceptable.
It may be possible to use alternative decision-making rules instead of (8) and (9). For example, change points could be identified at the moments when the first finite differences $\delta x_k = x_k - x_{k-1}$ of the process $x_k$, $k = 1, \dots, n$, change sign. However, even in this case, change point detection is inevitably accompanied by a time delay.
It is worth noting that for chaotic data, the discussed asset management strategy, based on detecting changes in the systematic component of the observation series, likely cannot provide consistent gains. However, its application in combination with other decision-making technologies, such as multi-expert data analysis methodologies [28], may significantly improve the effectiveness of speculative trading operations.
Let's explore the problem of detecting changes in multivariate nonstationary processes, using the example of online analysis of financial instrument quotes in the Forex market [29,30,31]. As a regularizing factor that permits weak ordering of the dynamic structure of market chaos, we assume the presence of long-term correlations between the quoted prices.
To visually confirm the presence of correlation, Figure 12 presents the EUR/USD exchange rate (blue line) alongside the quotes of six other currency pairs (AUD/USD, NZD/USD, EUR/JPY, NZD/JPY, AUD/JPY, GBP/USD) that are most correlated with EUR/USD over a 300-day observation period. The statistical similarity of these graphs indicates the presence of correlations between the processes. This issue has been discussed in detail in [5], as well as in papers on the co-integration of econometric data [32,33].
The presence of such correlations allows the use of multivariate regression analysis (MR) for early detection of change points. The idea behind MR estimation is to assess the value of a financial instrument based on monitoring data from assets correlated with the target instrument. A significant discrepancy between this estimate and the current price of the target financial instrument signals a change, which is interpreted as either an overvaluation or undervaluation. In such cases, one can expect that market mechanisms will soon compensate for this price change, enabling a predictive decision regarding the trajectory of the target financial instrument's price.
The effectiveness of the MR model relies on the following key assumptions:
• The target instrument is assumed to have a linear relationship with the regressors.
• The relationships between the target instrument and its regressors are assumed to remain stable over time.
• The noise component is assumed to lack autocorrelation, as the time series is analyzed post-smoothing.
These assumptions underpin the reliability and accuracy of the MR model for detecting and interpreting change points in financial time series.
To formalize the MR approach, let us consider a vector form of model (1) for jointly describing the quotes of the target instrument and those of correlated market assets, which act as regressors in the statistical analysis of the current situation:
$$Y_{Rk} = X_{Rk} + V_{Rk}, \tag{10}$$
where $Y_{Rk} = (y_{R1}, y_{R2}, \dots, y_{Rm})_k$ is the vector of observations, $X_{Rk} = (x_{R1}, x_{R2}, \dots, x_{Rm})_k$ is the systematic component vector, and $V_{Rk} = (\nu_{R1}, \nu_{R2}, \dots, \nu_{Rm})_k$, $k = 1, \dots, n$, is the noise component vector.
The relationship between the regressors and the target instrument can be expressed using a traditional linear MR model:
$$x_k = \hat y_k = C_k Y_{Rk} + \nu_k, \quad k = 1, \dots, n. \tag{11}$$
The actual computational scheme for MR estimation of the parameters of a nonstationary process imposes limitations on the training data volume, constrained by a sliding observation window L adjacent to the current time k. Using data from this sliding observation window:
$$Y_{(k-L+1):(k-1)} = \left[\, Y_{R,(k-L+1):(k-1)},\; y_{(k-L+1):(k-1)} \,\right], \quad k = L, \dots, n, \tag{12}$$
where $Y_{(k-L+1):(k-1)} = (Y_{k-L+1}, \dots, Y_{k-1})$ and $y_{(k-L+1):(k-1)} = (y_{k-L+1}, \dots, y_{k-1})$.
We estimate the transmission coefficient in the model (11) using the ordinary least squares (OLS) method:
$$\hat C_k = y_{(k-L+1):(k-1)}\, Y_{R,(k-L+1):(k-1)} \left( Y_{R,(k-L+1):(k-1)}^{T}\, Y_{R,(k-L+1):(k-1)} \right)^{-1}, \quad k = L, \dots, n-1. \tag{13}$$
The final OLS estimate of the target instrument's value is:
$$\hat x_k = \hat C_k Y_{Rk}, \quad k = L, \dots, n. \tag{14}$$
This expression provides a sequential MR estimate of the current value of the target instrument based on the observed parameters of the correlated market segment. Using the discrepancy between the market estimate and the current price, we can construct a change point detection algorithm and a corresponding management strategy, as studied in [31].
For this purpose, we calculate the difference:
$$d_k = \hat y_k - y_k, \quad k = 1, \dots, n, \tag{15}$$
which reflects the degree of discrepancy (change) between the current price of the target instrument and the market's perception of its real value. When the difference (15) exceeds a critical threshold:
$$|d_k| > d^*, \quad k = 1, \dots, n, \tag{16}$$
a change is detected. This discrepancy signals the presence of a change point as described above.
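A minimal sketch of the scheme (11)-(16), assuming minute-by-minute quotes for the target instrument and its regressors; the window length and threshold below are illustrative assumptions:

```python
# Sliding-window multivariate-regression valuation with threshold detection.
import numpy as np

def mr_change_points(y, YR, L=120, d_star=0.003):
    """y: (n,) target quotes; YR: (n, m) correlated regressor quotes."""
    alarms, d = [], np.zeros(len(y))
    for k in range(L, len(y)):
        Yw = YR[k - L:k]               # regressors in the sliding window (12)
        yw = y[k - L:k]
        C, *_ = np.linalg.lstsq(Yw, yw, rcond=None)   # OLS estimate (13)
        y_hat = YR[k] @ C              # market-based valuation (14)
        d[k] = y_hat - y[k]            # discrepancy (15)
        if abs(d[k]) > d_star:         # detection rule (16)
            alarms.append(k)
    return alarms, d
```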
Selecting an appropriate threshold is a critical step in change point detection algorithms, as the optimal threshold value is highly dependent on the specific task and dataset. In this study, we adopt an empirical approach to threshold selection, grounded in an analysis of the effectiveness of change point detection across a range of scenarios.
As a practical demonstration of early change point detection and the corresponding trading strategy based on multivariate regression (MR) valuation of financial instruments, Figure 13 shows a centered graph of the USD/CHF currency pair dynamics (bottom) and its smoothed graph (top) over a 3-day observation period.
The trading strategy, grounded in adaptive MR valuation, operates as follows: when the absolute difference |dk| between the estimated and current value of the financial instrument exceeds a threshold d*, a recommendation is made to open a position, depending on the current trend.
In Figure 13, the star on the lower graph marks the instrument's value at the moment when an upward position recommendation is issued. The diamond corresponds to the price at the moment of closing the position, recommended when the graph of |dk| crosses the threshold d* again. Following these recommendations yields a profit of approximately 50 pips.
The simplest structural adaptation of this strategy involves selecting the set of regressors with the highest correlation within the sliding observation window (3). Parametric adaptation includes determining the size of the sliding window L, the smoothing factor α of the exponential filter (4), the threshold value d*, and other parameters. Optimization is conducted via exhaustive search with a chosen step size or more computationally efficient methods, such as evolutionary optimization techniques, which have shown high effectiveness [34,35].
Several factors can lead to errors when using MR algorithms to detect change points:
(1) The deviation of the working instrument's price is assessed only within a limited market segment and indicates merely the possibility of a relative price movement in a direction that compensates for the detected mismatch. However, if a strong systemic trend is present across the market, this compensatory price movement might be suppressed by the broader trend.
(2) Pairwise correlation estimates are stable only over long observation intervals. When using estimates from relatively short sliding windows, correlations can become highly variable, significantly reducing the quality of MR-based instrument valuation.
(3) Given the high level of the noise $\nu_k$, $k = 1, \dots, n$, in model (1), it is advisable to use smoothed price values $x_k$, $k = 1, \dots, n$, rather than the raw quotes when forming the difference signal (15):
$$d_k = \hat y_k - x_k, \quad k = 1, \dots, n. \tag{17}$$
In the simplest case, the systematic component $x_k$, $k = 1, \dots, n$, is extracted from the mixture (1) using an exponential filter, as defined by Eq (4), with a smoothing factor $\alpha = 0.005$–$0.03$.
However, this approach introduces a time lag between the smoothed process and the original time series. In highly volatile environments, this lag can result in significant losses in management effectiveness, as discussed in Section 3.1.
The use of situational analogs or patterns is common in both machine learning tasks [36,37,38,39,40,41,42,43] and manual management techniques for nonstationary processes [44,45,46]. Many traders, for instance, attempt to identify geometric patterns on price charts that precede typical price movements. Well-known patterns include "wedge", "pennant", "flag", and "head-and-shoulders". However, traders often underestimate the unpredictability of chaos, where statistical experience has limited value. The same geometric pattern may be followed by either a price rise or fall with equal probability. By "aftereffect", we refer to directional changes occurring immediately after the appearance of a segment resembling a selected pattern.
To increase the reliability of case-based decisions, a large retrospective dataset spanning several years is necessary. Using non-parametric similarity measures, the frequency of recurring aftereffects can be estimated. Various statistical distance measures are employed to assess similarity: Hotelling's $T^2$, Kolmogorov-Smirnov, $\omega^2$ (Cramér-von Mises), Wilks' lambda, Kullback-Leibler divergence, Pillai's trace, and others.
Let us formalize the task of detecting change points by applying precedent-based forecasting of nonstationary random processes. This approach hinges on identifying precedents that precede significant change points in retrospective data. Precedents are defined as segments of historical observations that are similar to the current situation.
The current situation refers to the window (3) of time series observations immediately preceding the present moment. The window size L for the current situation is selected a priori, based on the inertia of the forecasted process. If there is no prior knowledge of this inertia, L should be treated as a parameter to be adjusted during model tuning.
Next, a sliding observation window of length L is used to scan the retrospective data of length N0, stored chronologically. At each step, a similarity measure is calculated between the current situation and historical data from the sliding window:
$$y_{i-L,i-1} = [\,y_{i-L}, \dots, y_{i-1}\,], \quad i = L+1, \dots, N_0. \tag{18}$$
The situation with the highest similarity, i.e., the one that minimizes the discrepancy between the current window and the scanning window (18), forms a precedent:
$$y_W(i^*): \quad i^* = \arg\min_{i = 1, \dots, N_0 - L} \rho\left[\, y_{k-L,k-1},\; y_{i-L,i-1} \,\right]. \tag{19}$$
The method relies on the hypothesis that process changes following identical situations are relatively similar. In other words, the aftereffect of precedent (19) can serve as a basis for forecasting the future evolution of the current situation.
This approach is intuitively clear and likely mirrors how mental forecasts function. To recognize a situation, the brain's neural network identifies stored precedents and formulates predictive control based on the analysis of retrospective aftereffects.
In the one-dimensional case, the most common similarity metric:
$$\rho\left[\, y_{k-L,k-1},\; y_{i-L,i-1} \,\right], \quad i = 1, \dots, N_0 - L, \tag{20}$$
is the root mean square deviation:
$$\rho_i = \left( \frac{1}{L} \sum_{j=1}^{L} \left( y_{k-L+j-1} - y_{i-L+j-1} \right)^2 \right)^{1/2}, \quad i = L+1, \dots, N_0. \tag{21}$$
The forecast, in this case, corresponds to the aftereffect behind the precedent window (19), i.e., the value from the database offset from the precedent by the forecast interval $\tau$:
$$\tilde y(k+\tau) = y(i^* + L + \tau). \tag{22}$$
It is clear that relying on the nearest neighbor under the chosen similarity metric may, in the context of chaotic system dynamics, yield unstable results. Therefore, in a more general case, the forecast is based on the averaged aftereffects of a group of $m_a$ analogs $(i_1, \dots, i_{m_a})$ with the smallest similarity metric values $\rho[Y_W(k), Y_W(i)]$, $i = 1, \dots, N_0 - L$:
$$\tilde y(k+\tau) = \frac{1}{m_a} \sum_{j=1}^{m_a} y(i^*_j + L + \tau). \tag{23}$$
A drawback of this approach is that it treats all analogs equally, regardless of their similarity to the current situation $Y_W(k)$. Thus, it is more appropriate to use a weighted sum of aftereffects, with weights proportional to the degree of similarity. For precedent (19), the weight is set to one, $\omega_1 = 1$, and for the other analogs the weights are determined as $\omega_j = \rho_1 / \rho_j$, $j = 2, \dots, m_a$. The weighted forecast value is then determined by the following relationship:
$$\tilde y(k+\tau) = \frac{1}{S(m_a)} \sum_{j=1}^{m_a} \omega_j\, y(i^*_j + L + \tau), \tag{24}$$
where $S(m_a) = \sum_{j=1}^{m_a} \omega_j$.
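A minimal sketch of the scheme (18)-(24): scan the retrospective data with a window of length L, keep the $m_a$ windows most similar to the current situation under the root-mean-square metric (21), and combine their aftereffects with weights $\omega_j = \rho_1/\rho_j$. The small epsilon guarding against an exact zero metric is an assumption of this sketch.

```python
# Precedent-based forecast from the weighted aftereffects of the closest analogs.
import numpy as np

def precedent_forecast(history, L=60, tau=10, m_a=3, eps=1e-12):
    """Forecast the value tau steps ahead from the m_a closest analogs."""
    y = np.asarray(history, dtype=float)
    current = y[-L:]                                # the current situation (3)
    n_scan = len(y) - L - tau                       # leave room for aftereffects
    # Root-mean-square similarity (21) of every retrospective window
    rho = np.array([np.sqrt(np.mean((y[i:i + L] - current) ** 2))
                    for i in range(n_scan)])
    best = np.argsort(rho)[:m_a]                    # analogs with smallest metric
    weights = (rho[best[0]] + eps) / (rho[best] + eps)  # w_1 = 1, w_j = rho_1/rho_j
    aftereffects = y[best + L + tau - 1]            # values offset by tau, Eq (22)
    return float(np.sum(weights * aftereffects) / np.sum(weights))  # Eq (24)
```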
From the machine learning perspective, this approach represents a forecasting solution using weak metric classifiers [47,48]. The low efficiency of these classifiers is attributed to the nature of time series data, which includes chaotic system components and nonstationary random noise.
As a simple, illustrative example of implementing precedent technology to detect change points, let us consider the task of forecasting the price movements of a currency instrument using a quadratic classifier. An example of selecting analogous situations is shown in Figure 14.
The vertical line separates the retrospective data zone, which is used to generate the precedent forecast, from the operational zone, where the algorithm is tested. The start and end points of the intervals corresponding to the current situation and its analogs are marked with stars and crosses, respectively, while the forecast values are indicated by diamonds.
As seen in the figure, the first analog shows an upward trend, but it has already peaked and is showing signs of decline. The second and third analogs exhibit an unstable lateral trend. In this case, it is advisable to refrain from making predictive decisions, but if absolutely necessary, one might conclude that a lateral trend is emerging.
The further development of this approach involves exploring the possibility of constructing an algorithm for forecasting non-stationary processes for multivariate correlated observation series. In this case, the Mahalanobis distance between the current state matrix:
$$Y_c(k) = \{y_{ij}\}, \quad i = k, k-1, \dots, k-L, \quad j = 1, \dots, m, \quad k = L+1, \dots, n, \tag{25}$$
and the scanning window matrix:
$$Y_s(k,l) = \{y_{ij}\}, \quad i = l, l-1, \dots, l-L, \quad j = 1, \dots, m, \quad l = L+1, \dots, k-L, \tag{26}$$
is given by the following relation:
$$d_{M,cs}(k,l) = \left( \sqrt{(Y_c - E_c)^T P_{cs}^{-1} (Y_c - E_c)} \right)_{l,s}, \tag{27}$$
where $E_c$ and $E_s$ are the mean vectors of matrices (25) and (26), respectively, and $P_{cs} = P_{cs}(l,k) = \mathrm{cov}(Y_c, Y_s)_{l,k}$ is the covariance matrix between the datasets.
Note that the Mahalanobis distance defined in (27) is closely related to Hotelling's T²-distribution, which is widely used in multivariate statistical analysis, such as in multivariate analysis of variance (MANOVA) and Fisher's linear discriminant analysis in supervised machine learning. When the covariance matrix is the identity matrix, the Mahalanobis measure reduces to the standard Euclidean distance. In the case of a diagonal covariance matrix, the Mahalanobis metric transforms into a normalized Euclidean distance. The Mahalanobis distance between data points is equivalent to applying the maximum likelihood principle in classification.
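As an illustration, a minimal numpy sketch of a Mahalanobis-type proximity measure (27) between the current-state window and a scanning window; for invertibility it substitutes a pooled within-window covariance for $P_{cs}$, which is an assumption of this sketch rather than the exact construction above:

```python
# Mahalanobis-type distance between two multivariate observation windows.
import numpy as np

def mahalanobis_window_distance(Yc, Ys):
    """Yc, Ys: (L+1, m) windows of m correlated series; smaller means closer."""
    diff = Yc.mean(axis=0) - Ys.mean(axis=0)      # difference of mean vectors
    # Pooled covariance as a stand-in for P_cs (assumption for invertibility)
    pooled = 0.5 * (np.cov(Yc, rowvar=False) + np.cov(Ys, rowvar=False))
    return float(np.sqrt(diff @ np.linalg.inv(pooled) @ diff))
```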
In multivariate analysis, any hypothesis may correspond to a large set of possible alternatives, which makes it challenging to find criteria with optimal properties. As a result, analogs to Fisher's F-statistic include various numerical characteristics of the matrices $U = S_1 S_2^{-1}$ or $V = S_1 (S_1 + S_2)^{-1}$, where $S_1, S_2$ are the sample covariance matrices of the examined samples. Let the eigenvalues of matrix $U$ be $\lambda_1, \dots, \lambda_m$, and those of matrix $V$ be $\mu_1, \dots, \mu_m$. Of particular interest for constructing multivariate forecasts of correlated data using information metrics are statistics such as Hotelling's trace (or Lawley-Hotelling trace) $T_0^2 = \mathrm{trace}(U) = \sum_{i=1}^m \lambda_i$, Pillai's trace $\mathrm{trace}(V) = \sum_{i=1}^m \mu_i$, Roy's largest (or smallest) characteristic root $\lambda_{\max}(U)$, $\lambda_{\min}(U)$, $\lambda_{\max}(V)$, $\lambda_{\min}(V)$, and Wilks' lambda statistic $W = \det(U) = \prod_{i=1}^m \lambda_i = \det(S_1)/\det(S_2)$.
The distributions of these statistics for various null hypotheses are extremely complex. In practice, their distributions are often approximated using the F-distribution with specially chosen degrees of freedom.
All of these statistics can be interpreted as measures of proximity and used to find analogs for current situations in the task of forecasting non-stationary processes. An analysis of the applicability of these multivariate statistical metrics for forecasting price series is presented in [39].
The problem of detecting changes in the properties of time series, also known as the "change point" problem, is of great practical importance for numerous applications. However, there is no universal solution to this problem, and it depends heavily on both the structure of the initial data and the technological constraints imposed by the specific features of the task.
Much of the existing literature on this problem, whether explicitly or implicitly, relies on the Wold decomposition model for data representation. Even studies claiming to analyze non-stationary time series continue to appeal to the concept of probability density, the estimation of which is impossible in such contexts. Even less convincing are references to abstract theoretical models, such as the Chapman-Kolmogorov equation or other a priori solutions that impose rigid constraints and are poorly compatible with real-world tasks.
The challenge of analyzing non-stationary data has become more acute with the realization that many observed processes in unstable environments are chaotic in nature. This includes fields such as turbulent hydrodynamics, gas dynamics, plasma stabilization, and the pricing of assets in electronic capital markets. These processes exhibit structural instability to small perturbations, consistent with deterministic chaos models. If, in addition to dynamic instability, the environment also contains random fluctuations, the resulting unstable process falls into the category of stochastic chaos, which has yet to be adequately mathematically described.
It should be noted that chaotic processes can also be interpreted as non-stationary random processes. However, this approach does not simplify the task of detecting change points in the general case. It seems likely that a general solution does not exist due to the high level of uncertainty inherent in the problem.
A natural way to narrow the scope of solvable problems is to classify them, as shown in Figure 1. This paper presents examples of posterior solutions to the change point problem, specifically offline segmentation tasks, which allow for clear formalization and constructive solutions under various sets of a priori constraints. On the other hand, moving to online change point detection in non-stationary and chaotic processes presents a number of challenging requirements, such as the ambiguity of identifying the systematic component of the observation series used to form timely change point decisions, and the trade-off between smoothing quality and bias in sequentially generated estimates.
The problem of early change point detection is particularly difficult in inertia-free immaterial environments, where the dynamics are driven by a large number of uncontrollable latent factors and a high level of dynamic instability.
At the same time, the practical need for efficient solutions to the change point problem in non-stationary processes calls for constructive computational algorithms. It is proposed that modern computing technologies be applied in the context of unstable and non-stationary dynamic observation series. Numerical experiments could be used to assess their suitability for solving the change point problem, which is critically examined in this work.
Specifically, approaches based on multivariate regression analysis and machine learning metric algorithms are discussed. However, these methods are not universal, and the diverse nature of chaos can generate situations where the proposed algorithms are ineffective.
Further development of change point detection algorithms can be divided into several directions. One such direction involves the use of neural network technologies, where hopes for an effective solution are tied to training artificial neural networks on large sets of retrospective data. However, building a neural network capable of real-time retraining during the infinite dynamic variations of chaos seems unpromising, as networks train slowly even with modern high-performance computers.
On the other hand, there is an unverified hypothesis that strongly pronounced change points may be preceded by systematic deformations in the fine structure of the data. If this hypothesis is correct, a neural network might be able to identify such deformations during long-term training and use them as signal indicators for predicting change points.
Another direction for improving change point detection involves the use of multi-expert decision systems (MES). The concept of such systems is discussed in [28]. The main challenge in implementing MES lies in developing algorithms for jointly processing the conclusions of software experts to improve the reliability of terminal decisions. Conflict and compromise game theory techniques [49,50,51] may be used in this case. Ensemble methods used in machine learning tasks are of particular interest [52,53,54].
Our findings emphasize the critical need to tailor detection models to the unique characteristics of chaotic data. By integrating machine learning and multivariate approaches, we can develop forecasting models that are both robust and adaptive.
This work also opens several promising directions for future research:
• Developing more sophisticated algorithms to isolate the systematic component in non-stationary time series.
• Exploring the impact of various model parameters, such as the size of the sliding window and the detection threshold, on the accuracy and reliability of change point detection algorithms.
• Applying the proposed methods to other non-stationary time series, including climate data, seismic activity, and medical time series.
• Designing comprehensive systems that combine change point detection algorithms with other decision-making tools, such as multi-agent systems or machine learning-based frameworks.
These avenues offer the potential to significantly enhance the effectiveness and applicability of change point detection methods in both financial and broader scientific contexts.
Conceptualization, methodology, programming, A.M.; validation, M.K.; investigation, writing, visualization, administration, scientific discussions, supervision, funding acquisition, D.G. All the authors have read and approved the final version of the manuscript for publication.
The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.
The research contributions to sections 2 and 3 by Alexander Musaev were supported by a grant from the Russian Science Foundation (Project No. 24-19-00823). Dmitry Grigoriev's work on sections 1 and 4 was funded by Saint Petersburg State University under research project No. 103905424. All authors have read and agreed to the published version of the article.
The authors declare no conflicts of interest.
[1] | C. Truong, L. Oudre, N. Vayatis, Selective review of offline change point detection methods, Signal Process., 167 (2020), 107299. https://doi.org/10.1016/j.sigpro.2019.107299 |
[2] | K. A. Blount, L. Rush, Chaos theory, Entangled Crush, 2021. |
[3] | B. Davies, Exploring chaos: theory and experiment, Studies in Nonlinearity, CRC Press, 2018. https://doi.org/10.1201/9780429502866 |
[4] | D. Feldman, Chaos and dynamical systems, Princeton University Press, 2019. https://doi.org/10.1515/9780691189390 |
[5] | A. Musaev, D. Grigoriev, Analyzing, modeling, and utilizing observation series correlation in capital markets, Computation, 9 (2021), 88. https://doi.org/10.3390/computation9080088 |
[6] | H. Wold, A study in the analysis of stationary time series, 2 Eds., Almqvist & Wiksell, 1954. |
[7] | S. Aminikhanghahi, D. J. Cook, A survey of methods for time series change point detection, Knowl. Inf. Syst., 51 (2017), 339–367. https://doi.org/10.1007/s10115-016-0987-z |
[8] | B. Namoano, A. Starr, C. Emmanouilidis, R. C. Cristobal, Online change detection techniques in time series: an overview, 2019 IEEE International Conference on Prognostics and Health Management (ICPHM), 2019. https://doi.org/10.1109/ICPHM.2019.8819394 |
[9] | M. Krause, Unsupervised change point detection for heterogeneous sensor signals, arXiv preprint, 2023. https://doi.org/10.48550/arXiv.2305.11976 |
[10] | F. Li, G. C. Runger, E. Tuv, Supervised learning for change-point detection, Int. J. Prod. Res., 44 (2006), 2853–2868. https://doi.org/10.1080/00207540600669846 |
[11] | A. Musaev, A. Makshanov, D. Grigoriev, The genesis of uncertainty: structural analysis of stochastic chaos in finance markets, Complexity, 2023 (2023), 1–16. https://doi.org/10.1155/2023/1302220 |
[12] | R. M. Yusupov, A. A. Musaev, D. A. Grigoriev, Evaluation of statistical forecast method efficiency in the conditions of dynamic chaos, 2021 IV International Conference on Control in Technical Systems (CTS), 2021, 178–180. https://doi.org/10.1109/CTS53513.2021.9562780 |
[13] | A. Musaev, D. Grigoriev, Numerical studies of statistical management decisions in conditions of stochastic chaos, Mathematics, 10 (2022), 226. https://doi.org/10.3390/math10020226 |
[14] | S. C. Huang, P. J. Chuang, C. F. Wu, H. J. Lai, Chaos-based support vector regressions for exchange rate forecasting, Expert Syst. Appl., 37 (2010), 8590–8598. https://doi.org/10.1016/j.eswa.2010.06.001 |
[15] | A. Özkaya, Chaotic dynamics in Turkish foreign exchange markets, Bus. Manag. Stud. Int. J., 10 (2022), 787–795. |
[16] | A. Das, P. Das, Chaotic analysis of the foreign exchange rates, Appl. Math. Comput., 185 (2007), 388–396. https://doi.org/10.1016/j.amc.2006.06.106 |
[17] | E. E. Peters, Fractal market analysis: applying chaos theory to investment and economics, Wiley, 1994. |
[18] | E. E. Peters, Chaos and order in the capital markets: a new view of cycles, prices, and market volatility, 2 Eds., John Wiley & Sons, 1996. |
[19] | M. Basseville, I. V. Nikiforov, Detection of abrupt changes: theory and applications, Prentice-Hall, Inc., 1993. |
[20] | A. Aue, L. Horváth, Structural breaks in time series, J. Time Ser. Anal., 34 (2013), 1–16. https://doi.org/10.1111/j.1467-9892.2012.00819.x |
[21] | A. N. Shiryaev, Stochastic disorder problems, Moscow Center for Continuous Mathematical Education (MCNMO), 2019. |
[22] | Y. G. Sinai, Probability theory: an introductory course, Springer Berlin, Heidelberg, 1992. https://doi.org/10.1007/978-3-662-02845-2 |
[23] | M. C. Meyer, Probability and mathematical statistics: theory, applications, and practice in R, SIAM, 2019. https://doi.org/10.1137/1.9781611975789 |
[24] | M. Aloud, E. Tsang, R. Olsen, A. Dupuis, A directional-change event approach for studying financial time series, Economics, 6 (2012), 1–17. https://doi.org/10.5018/economics-ejournal.ja.2012-36 |
[25] | A. Musaev, A. Makshanov, D. Grigoriev, Algorithms of sequential identification of system components in chaotic processes, Int. J. Dyn. Control, 11 (2023), 2566–2579. https://doi.org/10.1007/s40435-023-01121-9 |
[26] | T. G. Kang, P. D. Anderson, The effect of inertia on the flow and mixing characteristics of a chaotic serpentine mixer, Micromachines, 5 (2014), 1270–1286. https://doi.org/10.3390/mi5041270 |
[27] | A. Musaev, A. Makshanov, D. Grigoriev, Exploring the quotation inertia in international currency markets, Computation, 11 (2023), 209. https://doi.org/10.3390/computation11110209 |
[28] | A. Musaev, D. Grigoriev, Multi-expert systems: fundamental concepts and application examples, J. Theor. Appl. Inf. Technol., 100 (2022), 336–348. |
[29] | Z. Ivanovski, N. Ivanovska, Z. Narasanov, The regression analysis of stock returns at MSE, J. Mod. Account. Audit., 12 (2016), 217–224. https://doi.org/10.17265/1548-6583/2016.04.003 |
[30] | J. Fang, Why logistic regression analyses are more reliable than multiple regression analyses, J. Bus. Econ., 4 (2013), 620–633. |
[31] | A. Musaev, A. Makshanov, D. Grigoriev, Statistical analysis of current financial instrument quotes in the conditions of market chaos, Mathematics, 10 (2022), 587. https://doi.org/10.3390/math10040587 |
[32] | R. F. Engle, C. W. J. Granger, Co-integration and error correction: representation, estimation, and testing, Econometrica, 55 (1987), 251–276. https://doi.org/10.2307/1913236 |
[33] | C. W. J. Granger, Some properties of time series data and their use in econometric model specification, J. Econometrics, 16 (1981), 121–130. https://doi.org/10.1016/0304-4076(81)90079-8 |
[34] | L. J. Fogel, A. J. Owens, M. J. Walsh, Artificial intelligence through simulated evolution, John Wiley & Sons, 1966. |
[35] | A. Musaev, A. Makshanov, D. Grigoriev, Evolutionary optimization of control strategies for non-stationary immersion environments, Mathematics, 10 (2022), 1797. https://doi.org/10.3390/math10111797 |
[36] | M. A. Junior, P. Appiahene, O. Appiah, C. N. Bombie, Forex market forecasting using machine learning: systematic literature review and meta-analysis, J. Big Data, 10 (2023), 9. https://doi.org/10.1186/s40537-022-00676-2 |
[37] | A. A. Baasher, M. W. Fakhr, Forex trend classification using machine learning techniques, Proceedings of the 11th WSEAS International Conference on Applied Computer Science, WSEAS, 1 (2011), 41–47. |
[38] | A. Musaev, E. Borovinskaya, Prediction in chaotic environments based on weak quadratic classifiers, Symmetry, 12 (2020), 1630. https://doi.org/10.3390/sym12101630 |
[39] | A. Musaev, A. Makshanov, D. Grigoriev, Forecasting multivariate chaotic processes with precedent analysis, Computation, 9 (2021), 110. https://doi.org/10.3390/computation9100110 |
[40] | V. Niederhoffer, L. Kenner, Practical speculation, Wiley, 2005. |
[41] | J. Chen, Essentials of technical analysis for financial markets, John Wiley & Sons, 2010. https://doi.org/10.1002/9781119204213 |
[42] | R. Di Lorenzo, Basic technical analysis of financial markets, Springer, 2013. https://doi.org/10.1007/978-88-470-5421-9 |
[43] | I. K. Nti, A. F. Adekoya, B. A. Weyori, A systematic review of fundamental and technical analysis of stock market predictions, Artif. Intell. Rev., 53 (2020), 3007–3057. https://doi.org/10.1007/s10462-019-09754-z |
[44] | B. Donnelly, The art of currency trading: a professional's guide to the foreign exchange market, Wiley, 2019. |
[45] | F. Escher, Elements of foreign exchange: a foreign exchange primer, Wentworth Press, 1917. |
[46] | S. K. Parameswaran, Fundamentals of financial instruments: an introduction to stocks, bonds, foreign exchange, and derivatives, 2 Eds., Wiley, 2022. |
[47] | K. Fukunaga, Introduction to statistical pattern recognition, 2 Eds., Academic Press, 2013. |
[48] | T. Hastie, R. Tibshirani, J. Friedman, The elements of statistical learning, 2 Eds., Springer, 2009. https://doi.org/10.1007/978-0-387-84858-7 |
[49] | D. Thakore, Conflict and conflict management, IOSR J. Bus. Manag., 8 (2013), 7–16. https://doi.org/10.9790/487X-0860716 |
[50] | A. J. Jones, Game theory: mathematical models of conflict, Elsevier, 2000. |
[51] | A. Skowron, S. Ramanna, J. F. Peters, Conflict analysis and information systems: a rough set approach, In: G. Y. Wang, J. F. Peters, A. Skowron, Y. Yao, Rough Sets and Knowledge Technology, Springer, 2006. https://doi.org/10.1007/11795131_34 |
[52] | G. Kunapuli, Ensemble methods for machine learning, Manning Publications, 2023. |
[53] | R. Fonseca, P. Gomez, Automatic model selection in ensembles for time series forecasting, IEEE Latin Am. Trans., 14 (2016), 3811–3819. https://doi.org/10.1109/TLA.2016.7786368 |
[54] | M. P. Deisenroth, A. A. Faisal, C. S. Ong, Mathematics for machine learning, Cambridge University Press, 2020. https://doi.org/10.1017/9781108679930 |