Research article

A mathematical model for the dynamics of happiness


  • Positive psychology recognizes happiness as a construct comprising hedonic and eudaimonic well-being dimensions. Integrating these components and a set of theory-led assumptions, we propose a mathematical model, given by a system of nonlinear ordinary differential equations, to describe the dynamics of a person's happiness over time. The mathematical model offers insights into the role of emotions for happiness and why we struggle to attain sustainable happiness and tread the hedonic treadmill oscillating around a relative stable level of well-being. The model also indicates that lasting happiness may be achievable by developing constant eudaimonic emotions or human altruistic qualities that overcome the limits of the homeostatic hedonic system; in mathematical terms, this process is expressed as distinct dynamical bifurcations. This mathematical description is consistent with the idea that eudaimonic well-being is beyond the boundaries of hedonic homeostasis.

    Citation: Gustavo Carrero, Joel Makin, Peter Malinowski. A mathematical model for the dynamics of happiness[J]. Mathematical Biosciences and Engineering, 2022, 19(2): 2002-2029. doi: 10.3934/mbe.2022094




The stock price is the primary factor affecting investors, so short-term stock price forecasts are an important reference for investment decisions. The factors affecting the stock price are complex, which makes it difficult to analyze and predict. The stock market can be seen as a nonlinear dynamic system driven by multiple factors. Time series forecasting models are usually adopted for stock price forecasting, including multiple regression models, exponential smoothing models, ARIMA models, neural network models, combined forecasting models, and grey forecasting models.

Many scholars have conducted studies on stock price prediction. Shi [1] used ARMA and BP neural networks to obtain the linear and nonlinear components of the stock price sequence and then used a Markov model to correct the error of the result. Vijh [2] used the random forest algorithm to integrate the indicators of multiple stocks and then used a BP neural network to output the predicted observations. Liu [3] proposed a new effective metric, the Mean Profit Rate (MPR), whose effectiveness is measured by the correlation between the metric value and the profit of the model. Sin [4] explored the relationship between the features of Bitcoin and the next-day change in its price using an artificial neural network ensemble approach called the Genetic Algorithm based Selective Neural Network Ensemble. Zhu [5] proposed an intelligent trading system using support vector regression optimized by genetic algorithms (SVR-GA) and a multilayer perceptron optimized by GA (MLP-GA). Dong [6] proposed an improved one-step-ahead forecasting system and an improved multi-step-ahead prediction system. Zhang [7] used a BP neural network to train stock prices in the form of spline division and then output the predicted observations with data independent of the training samples. Teo [8] trained a wavelet packet multi-layer perceptron neural network (WP-MLP) by backpropagation for time series prediction. In summary, deep learning and machine learning methods can achieve good results in stock price prediction. However, most methods do not consider the memory of the sequence when processing the original price sequence. Sang [9] studied the long-term memory of fluctuations in the Chinese stock market, which means that historical stock market data have an impact on the stock price at the current time.
Since fractional calculus has power-law memory [10], the idea of fractional calculus can be used to exploit the information in historical data. Based on this property, models that combine fractional calculus with traditional time series forecasting methods, such as the ARFIMA model and the fractional grey model, have attracted the attention of many scholars. Because the grey model performs well in uncertain systems, it is well suited to stock price prediction.

The grey model is an important part of grey theory, and the most famous one is the GM (1, 1) model. However, GM (1, 1) has several shortcomings: its background value is based on an approximation, it cannot handle high-dimensional data, and its nonlinear fitting ability is insufficient. Because of these shortcomings, many scholars have improved the GM (1, 1) model [11,12,13,14,15,16,17,18,19,20,21,22,23,24,25]. Tien [13] proposed the GMC (1, m) model to overcome the inability of traditional GM (1, 1) to handle high-dimensional data. The literature [14,15,16,17,18,19] proposed a variety of grey Bernoulli prediction models and grey models combined with kernel methods, which address the insufficient nonlinear fitting capability. The literature [20,21] addressed the approximate background value. The literature [22,23,24,25] combined fractional calculus with the grey model and used historical data to solve many practical application problems.

Since most time series, including stock price series, contain both linear and nonlinear factors, a single forecasting model cannot achieve good results. According to the high-dimensional, memorable, and nonlinear characteristics of stock data, a Hausdorff fractional grey model with convolution, locally linear embedding, and a BP neural network (HFGMC (1, m)-LLE-BP) is proposed in this paper. The HFGMC (1, m) model improves on the FGMC (1, m) model of [25], which has several disadvantages. The elements of the accumulative generation matrix in FGMC (1, m) contain factorial operators, so when the data length is long, the calculation becomes complex and information is lost. Moreover, the background value in FGMC (1, m) is approximated by the trapezoidal formula between two consecutive values of the sequence, which ignores the local trend of the sequence. These shortcomings degrade the prediction performance of FGMC (1, m). The operator derived in the HFGMC (1, m) model from the Hausdorff fractional derivative does not contain a factorial operator, which resolves the complex calculation and information loss. To avoid the approximate background value, the HFGMC (1, m) model uses the Newton-Cotes discrete formula and the Newton divided difference interpolation polynomial, both of which take the local trend of the data into account. HFGMC (1, m) is used to predict the linear component of the stock price, the BP neural network is used to predict the nonlinear component, and the prediction result is obtained by adding the two components.

This paper uses the stock closing price as the forecast target to verify the performance of HFGMC (1, m)-LLE-BP. The prediction accuracy of HFGMC (1, m) is much higher than that of FGMC (1, m), because FGMC (1, m) only retains local information while HFGMC (1, m) retains global information. Besides, thanks to its excellent nonlinear fitting ability, the BP neural network can fit the nonlinear residuals in the time series, and the prediction results show that it further improves the performance of HFGMC (1, m). Finally, the HFGMC (1, m)-LLE-BP model is compared with the FGMC (1, m)-LLE-BP model, the BP neural network model, the LSTM model and the ARIMA-BP hybrid model, and its performance is far better than that of the other models.

The rest of this paper is organized as follows. In Section 2, the specific framework of the proposed model is given. In Section 3, the FGMC (1, m) is described. In Section 4, the HFGMC (1, m) is given. In Section 5, the LLE algorithm and BP neural network are introduced. In Section 6, experiments verify the effectiveness of the proposed model. In Section 7, the work of this paper is summarized.

In this paper, the HFGMC (1, m) model is proposed to predict the linear component of the stock price, and a BP neural network is used to predict the nonlinear component. Because the stock price is affected by nonlinear high-dimensional stock data, the LLE algorithm is used to reduce the dimensionality of these data. The flow chart of the proposed model is shown in Figure 1.

    Figure 1.  The flow chart of the model proposed in this paper.

    The text description of the specific steps is as follows.

    Step 1: Input parameters r, K. The parameter r is used to calculate the r-order accumulative generator matrix. The parameter K represents the number of neighboring points of each sample point, which is used for the calculation of locally linear embedding.

Step 2: Input the original data matrix. Suppose the original data matrix is $X^{(0)} \in \mathbb{R}^{n\times m}$, where n is the length of the sequence and m is the number of stock indicators, including the sequence to be predicted. Select the sequence to be predicted (usually the first column of the original data matrix); the remaining columns are used as related factors that affect the sequence to be predicted (each column represents one related factor).

    Step 3: Form the correlation factor matrix that affects the sequence to be predicted, and apply the locally linear embedding algorithm to reduce the dimension to d dimension. The sequence to be predicted and the correlation factor matrix after dimensionality reduction form the input matrix X(0). Then normalize the input matrix.

    Step 4: Perform Hausdorff fractional accumulative generation operation (HFAGO) on the input matrix to obtain the Hausdorff fractional accumulative matrix.

    Step 5: Combine the Newton-Cotes formula and use the Hausdorff fractional accumulation matrix to calculate the background value.

    Step 6: After obtaining the optimized background value, combine the least square estimation to solve the parameters.

    Step 7: Solve the whitening equation and get the time response function. Solve the in-sample time response sequence.

    Step 8: Perform the inverse Hausdorff fractional accumulation generation operation (I-HFAGO) on the time response sequence within the sample to obtain the in-sample prediction sequence.

Step 9: Subtract the in-sample prediction sequence from the in-sample true value sequence to obtain the in-sample residual sequence. The mean of the absolute values of the residuals is the mean absolute error (MAE). Update r and K, and then judge whether the parameters r and K exceed the search range. If they exceed the search range, select the r and K that minimize the MAE, solve the in-sample prediction sequence, and go to Step 10. Otherwise, repeat Step 2 to Step 9.

Step 10: Input the out-of-sample correlation factor matrix R, and then use the time response function to solve the out-of-sample time response sequence. The out-of-sample time response sequence is then transformed through I-HFAGO into the out-of-sample prediction sequence, which serves as the linear component.

Step 11: A BP neural network performs nonlinear fitting on the in-sample residual sequence, and the out-of-sample residual sequence is output as the nonlinear component.

    Step 12: Add the linear component and the non-linear component to get the combined prediction sequence.

    The above steps can be divided into 6 modules. Step 1 to Step 3 is the data preprocessing part. Step 4 to Step 7 is the modeling process of the HFGMC (1, m). Step 8 to Step 9 is the residual calculation part. Step 10 is to calculate the linear component through the HFGMC (1, m). Step 11 is to calculate the non-linear part through BP neural network. Step 12 gives the prediction result.
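As a minimal illustration of Steps 9 and 12, the error criterion and the final combination can be sketched as follows (the function names are ours, not from the original implementation):

```python
def mae(pred, true):
    # Step 9: mean of the absolute in-sample residuals
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

def combine(linear, nonlinear):
    # Step 12: combined prediction = linear component + nonlinear component
    return [l + n for l, n in zip(linear, nonlinear)]
```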

Literature [25] gave the specific derivation process of FGMC (1, m). Let the original data matrix be $X^{(0)} \in \mathbb{R}^{n\times m}$, where n is the length of the sequence and m is the number of indicators, including the sequence to be predicted. Then perform the r-order accumulative generation operation (r-AGO) on each column of the original data matrix to obtain the r-order accumulative generation sequence. The mathematical form is defined as

$X_j^{(r)} = \{x_j^{(r)}(1), x_j^{(r)}(2), \ldots, x_j^{(r)}(n)\},\quad j = 1, 2, \ldots, m$ (1)
$x_j^{(r)}(t) = \sum_{i=1}^{t} x_j^{(0)}(i)\binom{t-i+r-1}{t-i}$ (2)

where $\binom{t-i+r-1}{t-i} = \frac{(t-i+r-1)(t-i+r-2)\cdots(r+1)r}{(t-i)!}$, and r is called the fractional order. r is normally taken in the interval (0, 1), although it can be an arbitrary real number. The matrix form of the above equations can be written as

$X_j^{(r)} = A^r_{n\times n} X_j^{(0)}$ (3)

    where Arn×n is called the r-order accumulative matrix, and its form is

$A^r_{n\times n} = \begin{bmatrix} 1 & & & & \\ r & 1 & & & \\ \frac{r(r+1)}{2!} & r & 1 & & \\ \vdots & \vdots & \ddots & \ddots & \\ \frac{r(r+1)\cdots(r+n-2)}{(n-1)!} & \frac{r(r+1)\cdots(r+n-3)}{(n-2)!} & \cdots & r & 1 \end{bmatrix}_{n\times n}$ (4)
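For illustration, the r-order accumulation of Eq (2) can be sketched directly from the generalized binomial coefficients; this is our own minimal implementation, not the paper's code. With r = 1 it reduces to the ordinary cumulative sum, and with r = 0 it returns the sequence unchanged.

```python
def r_ago(x, r):
    # r-order accumulative generation (Eq 2): each term x(i) is weighted by
    # the generalized binomial coefficient C(t-i+r-1, t-i).
    n = len(x)
    out = []
    for t in range(1, n + 1):
        s = 0.0
        for i in range(1, t + 1):
            k = t - i
            coef = 1.0
            for j in range(k):          # product r(r+1)...(r+k-1) / k!
                coef *= (r + j) / (j + 1)
            s += x[i - 1] * coef
        out.append(s)
    return out
```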

    Then establish the whitening equation. The specific mathematical form is

$\frac{dx_1^{(r)}(\tau+t)}{dt} + b_1 x_1^{(r)}(\tau+t) = \sum_{j=2}^{m} b_j x_j^{(r)}(t) + u$ (5)

where τ is the delay period and u is the control coefficient. When τ and u are equal to 0, the traditional GM (1, m) model is obtained, but GM (1, m) was proved inaccurate in [13]. When τ is equal to 0 and m is equal to 1, Eq (5) degenerates into the GM (1, 1) model.

By integrating Eq (5) and using the trapezoidal formula on the interval [k-1, k], the discrete form of the existing fractional grey model is obtained as

$\left(x_1^{(r)}(\tau+t) - x_1^{(r)}(\tau+t-1)\right) + b_1 Z_1^{(r)}(\tau+t) = \sum_{j=2}^{m} b_j Z_j^{(r)}(t) + u$ (6)

where the $Z_j^{(r)}$ are called the background values, defined as

$Z_1^{(r)}(\tau+t) = \frac{1}{2}\left[x_1^{(r)}(\tau+t) + x_1^{(r)}(\tau+t-1)\right]$ (7)
$Z_j^{(r)}(t) = \frac{1}{2}\left[x_j^{(r)}(t) + x_j^{(r)}(t-1)\right],\quad j = 2, 3, \ldots, m$ (8)

The parameters $b_1, b_2, \ldots, b_m, u$ of Eq (5) can be obtained by least squares estimation. Substituting them into the whitening equation, Eq (5), and solving the differential equation yields the time response function:

$\hat{x}_1^{(r)}(\tau+t) = x_1^{(r)}(\tau+1)e^{-b_1(t-1)} + \int_1^t e^{-b_1\left[t-1-(\upsilon-1)\right]}\left[\sum_{j=2}^{m} b_j x_j^{(r)}(\upsilon) + u\right] d\upsilon$ (9)

The convolution in the second term on the right side of Eq (9) can be discretized with the trapezoidal formula to obtain the following discrete equation:

$\hat{x}_1^{(r)}(\tau+t) = x_1^{(r)}(\tau+1)e^{-b_1(t-1)} + \sum_{\upsilon=2}^{t}\left\{e^{-b_1\left(t-\upsilon+\frac{1}{2}\right)} \cdot \frac{1}{2}\left[f(\upsilon) + f(\upsilon-1)\right]\right\}$ (10)

where $f(\upsilon) = \sum_{j=2}^{m} b_j x_j^{(r)}(\upsilon) + u$. Finally, perform the inverse r-order accumulative generation operation (r-IAGO) to obtain the predicted sequence

$\hat{X}_1^{(0)} = \left(A^r\right)^{-1}\hat{X}_1^{(r)}$ (11)
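A sketch of the discrete time response of Eq (10); the function signature and the container `Xr` holding the accumulated driving sequences are our own assumptions. With no driving factors and $b_1 = 0$, the prediction grows by u per step, which is a quick sanity check of the formula.

```python
import math

def time_response(x1_first, b, u, Xr, t_max):
    # Discrete time-response function (Eq 10), a sketch:
    #   b        = [b1, b2, ..., bm]
    #   Xr[j]    = accumulated driving sequence x_j^{(r)} (keys j = 2..m)
    #   x1_first = x_1^{(r)}(tau + 1)
    b1 = b[0]
    def f(v):
        # driving term f(v) = sum_{j>=2} b_j x_j^{(r)}(v) + u
        return sum(bj * Xr[j][v - 1] for j, bj in enumerate(b[1:], start=2)) + u
    preds = []
    for t in range(1, t_max + 1):
        # trapezoid-discretized convolution of Eq (10)
        conv = sum(math.exp(-b1 * (t - v + 0.5)) * 0.5 * (f(v) + f(v - 1))
                   for v in range(2, t + 1))
        preds.append(x1_first * math.exp(-b1 * (t - 1)) + conv)
    return preds
```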

The accumulative generation operation (AGO) is the core step of the grey model; it can eliminate the instability caused by fluctuations in the original data. From the r-order accumulative matrix in Eq (4), it can be seen that the original sequence is multiplied by factorial-based multipliers when the accumulation is performed, so when the sequence is long, the calculation of the r-order accumulative matrix becomes very complicated. Besides, Eqs (7) and (8) show that the background value is obtained with a simple trapezoidal formula. However, most systems do not grow linearly, which causes calculation errors in the FGMC (1, m) model. The error is shown in Figure 2.

Figure 2.  The error of the background value calculated by the trapezoidal formula.

    The Hausdorff fractional derivative is defined as [26].

$\frac{df(x)}{dx^r} = \lim_{x' \to x} \frac{f(x') - f(x)}{x'^r - x^r}$ (12)

Since the first-order difference can usually be approximated by the first derivative as

$\Delta f(t) \approx \lim_{h \to 1}\left.\frac{f(x) - f(x-h)}{h}\right|_{x=t} = f(t) - f(t-1)$ (13)

the Hausdorff fractional difference can therefore be approximately defined from the Hausdorff fractional derivative as

$\Delta^r f(t) \approx \lim_{x' \to (x-1)}\left.\frac{f(x) - f(x')}{x^r - x'^r}\right|_{x=t} = \frac{f(t) - f(t-1)}{t^r - (t-1)^r} = \left[t^r - (t-1)^r\right]^{-1}\Delta f(t)$ (14)

Let $f(t) = x^{(0)}(1), f(t-1) = x^{(0)}(2), \ldots, f(1) = x^{(0)}(t)$ (this is the reverse mapping of the sequence used in the AGO operation); then the 1-order reverse accumulative generation operation (1-RAGO), denoted by $\nabla$, can be defined as

$\nabla f(t) = \sum_{j=1}^{t} f(j)$ (15)

1-RAGO is essentially the same as 1-AGO because it only applies a simple inverse mapping of the sequence:

$\nabla f(t) = \sum_{j=1}^{t} f(j) = x^{(1)}(t) = \sum_{i=1}^{t} x^{(0)}(i)$

Combining Eq (15) with Eq (13) gives

$\Delta \nabla f(t) = \Delta\left(\sum_{j=1}^{t} f(j)\right) = \sum_{j=1}^{t} f(j) - \sum_{j=1}^{t-1} f(j) = f(t)$ (16)

In the context of grey prediction models, the difference and accumulative operations are inverses of each other [24]. Eq (16) can therefore also be written as

$\nabla \Delta f(t) = f(t)$ (17)

Without loss of generality, HFAGO is denoted by $\nabla^r$ and satisfies the following relationship

$\Delta^r \nabla^r f(t) = f(t)$ (18)

Combining Eq (18) with Eq (14) and rearranging gives

$\Delta\left(\nabla^r f(t)\right) = f(t)\left[t^r - (t-1)^r\right]$ (19)

Applying the accumulative operator of Eq (15) to both sides of Eq (19) and using Eq (17), Eq (19) simplifies to

$\nabla^r f(t) = \sum_{j=1}^{t} f(j)\left[j^r - (j-1)^r\right]$ (20)

Equation (20) is the HFAGO. Compared with Eq (2), the HFAGO does not contain a factorial multiplier. To understand the difference between r-AGO and HFAGO more intuitively, the two operations are compared visually next. Assume

$\phi(t) = \frac{\Gamma(r+t-1)}{\Gamma(t)\Gamma(r)}$ (21)
$\omega(t) = t^r - (t-1)^r$ (22)

where $t = 1, 2, \ldots, n$ and $\Gamma(\cdot)$ in Eq (21) is the gamma function. Using Eqs (21) and (22), Eq (4) can be rewritten as

$A^r = \begin{bmatrix} \phi(1) & & & \\ \phi(2) & \phi(1) & & \\ \vdots & \ddots & \ddots & \\ \phi(n) & \cdots & \phi(2) & \phi(1) \end{bmatrix}_{n\times n}$ (23)
$A_H^r = \begin{bmatrix} \omega(1) & & & \\ \omega(2) & \omega(1) & & \\ \vdots & \ddots & \ddots & \\ \omega(n) & \cdots & \omega(2) & \omega(1) \end{bmatrix}_{n\times n}$ (24)
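The HFAGO of Eq (20) and the Hausdorff difference of Eq (14) can be sketched and cross-checked numerically as follows (a minimal sketch with our own function names). Applying the difference to the accumulated sequence recovers the original one, as Eq (18) requires, and r = 1 reduces HFAGO to the ordinary 1-AGO.

```python
def hfago(x, r):
    # Hausdorff fractional AGO (Eq 20): running sum of x(j) * (j^r - (j-1)^r)
    out, acc = [], 0.0
    for j, v in enumerate(x, start=1):
        acc += v * (j ** r - (j - 1) ** r)
        out.append(acc)
    return out

def hausdorff_diff(y, r):
    # Hausdorff fractional difference (Eq 14):
    # [t^r - (t-1)^r]^{-1} * (y(t) - y(t-1)), with y(0) taken as 0
    return [(y[t] - (y[t - 1] if t else 0.0)) / ((t + 1) ** r - t ** r)
            for t in range(len(y))]
```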

Assuming r = 0.5, Eqs (21) and (22) are visualized in Figure 3.

    Figure 3.  Visual comparison chart of Eqs (21) and (22).

In Figure 3, the solid line represents the change of the multiplier over time when r-AGO is performed, and the dotted line represents the change of the multiplier over time when HFAGO is performed. The analysis shows that the r-AGO multiplier becomes 0 when t > 172: the factorial grows so large that the denominator evaluates to Inf in MATLAB R2020a, which results in the loss of subsequent information and makes the calculation inaccurate. The HFAGO curve is very similar to that of the traditional r-AGO, but on this basis HFAGO retains global information and reduces the amount of calculation.
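The numerical behaviour of the two multipliers can be reproduced with a short sketch (helper names are ours). Computing φ(t) through the log-gamma function avoids the overflow that zeroes out the naive Γ-ratio for large t (Python's `math.gamma` likewise overflows for arguments above roughly 171), while ω(t) needs no special care.

```python
import math

def phi(t, r):
    # r-AGO multiplier (Eq 21), computed via log-gamma to avoid overflow:
    # phi(t) = Gamma(r + t - 1) / (Gamma(t) * Gamma(r))
    return math.exp(math.lgamma(r + t - 1) - math.lgamma(t) - math.lgamma(r))

def omega(t, r):
    # HFAGO multiplier (Eq 22): omega(t) = t^r - (t-1)^r
    return t ** r - (t - 1) ** r
```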

    Integrate both sides of Eq (5) on the interval [k-1, k] at the same time.

$\left(x_1^{(r)}(\tau+t) - x_1^{(r)}(\tau+t-1)\right) + b_1\int_{k-1}^{k} x_1^{(r)}(\tau+t)\,dt = \sum_{j=2}^{m} b_j\int_{k-1}^{k} x_j^{(r)}(t)\,dt + u$ (25)

To optimize the error caused by the trapezoidal formula, the Newton-Cotes formula is used to calculate an optimized background value for the integral term in Eq (25). Assume the integration interval [a, b] is divided into l equal parts and select the nodes $x_i = a + ih$ to construct a discrete integral formula, where h is the step size. The constructed discrete integral formula is called the Newton-Cotes formula, defined as

$I_l = (b-a)\sum_{i=0}^{l} C_i^{(l)} f(x_i)$ (26)

where the $C_i^{(l)}$ in Eq (26) are called the Cotes coefficients. When l = 1, Eq (26) is the trapezoidal formula. When l = 4, the corresponding quadrature formula is

$I_4 = \frac{b-a}{90}\left[7f(a) + 32f\!\left(a+\tfrac{b-a}{4}\right) + 12f\!\left(a+\tfrac{b-a}{2}\right) + 32f\!\left(a+\tfrac{3(b-a)}{4}\right) + 7f(b)\right]$ (27)

This paper divides the integration interval into 4 equal parts. The integral term in Eq (25) is then discretized using Eq (27) to obtain the optimized background value:

$Z^{(r)}(k+1) = \frac{1}{90}\left[7x^{(r)}(k) + 32x^{(r)}\!\left(k+\tfrac{1}{4}\right) + 12x^{(r)}\!\left(k+\tfrac{1}{2}\right) + 32x^{(r)}\!\left(k+\tfrac{3}{4}\right) + 7x^{(r)}(k+1)\right]$ (28)

It can be noticed that $x^{(r)}(k+\frac{1}{4})$, $x^{(r)}(k+\frac{1}{2})$ and $x^{(r)}(k+\frac{3}{4})$ do not exist for discrete data; the Newton divided difference interpolation polynomial can be used to solve this problem. Its form is

$N_n(x) = f(x_0) + f[x_0, x_1](x - x_0) + \cdots + f[x_0, x_1, \ldots, x_n](x - x_0)(x - x_1)\cdots(x - x_{n-1})$ (29)

where $f[x_0, x_1, \ldots, x_n]$ is the n-th order divided difference of the function f(x), and n + 1 is the number of data points involved in the construction of the interpolation polynomial. However, involving more points is not always better: when n > 8, the Runge phenomenon usually appears [27]. This paper uses every 4 data points to construct a Newton divided difference interpolation polynomial.
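The optimized background value of Eq (28) can be sketched by combining Newton divided-difference interpolation (to supply the quarter-point values) with the l = 4 Cotes weights; the node selection and function names below are our own assumptions. For a quadratic accumulated sequence the result matches the exact integral, since both the interpolation and the quadrature are exact for such data.

```python
def newton_interp(xs, ys, x):
    # Newton divided-difference interpolation through the points (xs, ys)
    n = len(xs)
    coef = list(ys)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    # Horner-style evaluation of the Newton form
    val = coef[-1]
    for i in range(n - 2, -1, -1):
        val = val * (x - xs[i]) + coef[i]
    return val

def cotes_background(seq, k):
    # Optimized background value on [k, k+1] (Eq 28): interpolate the
    # quarter points from 4 nodes of the accumulated sequence around k
    # (assumes 1 <= k <= len(seq) - 3, 0-indexed integer nodes).
    xs = [k - 1, k, k + 1, k + 2]
    ys = [seq[i] for i in xs]
    f = lambda x: newton_interp(xs, ys, x)
    return (7 * f(k) + 32 * f(k + 0.25) + 12 * f(k + 0.5)
            + 32 * f(k + 0.75) + 7 * f(k + 1)) / 90
```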

    In this section, the LLE algorithm [28] and BP neural network [29] are introduced.

The LLE algorithm was proposed by S.T. Roweis and L.K. Saul in 2000. It is an important dimensionality reduction algorithm in manifold learning. The principle is to assume that data in a high-dimensional space are locally linear, so that each data point can be expressed as a linear combination of the samples in its neighborhood. The data are then mapped to a low-dimensional space while keeping the weight coefficients of these linear relationships as close as possible to those before the projection.

    The algorithm steps are as follows.

    Step 1: Find the K neighboring points of each sample point Xi in the high-dimensional space, and then calculate the Euclidean distance between them.

    Step 2: Solve the covariance matrix between neighboring points.

    Step 3: Solve the weight coefficient vector of each sample point.

    Step 4: The weight coefficient vector forms the weight coefficient matrix W, and then calculates the matrix M that can minimize the reconstruction error function.

Step 5: Solve for the d+1 smallest eigenvalues of the matrix M. The eigenvectors corresponding to the 2nd through (d+1)-th smallest eigenvalues form the mapping of the original data matrix X into the low-dimensional space with the smallest reconstruction error.
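Under the assumption of Euclidean neighbourhoods and a small regularisation term (a standard fix when the local covariance is singular), the five steps can be sketched with NumPy as follows; this is our own condensed implementation, not the authors' code.

```python
import numpy as np

def lle(X, K, d, reg=1e-3):
    # Simplified LLE (Roweis & Saul): X is (N, D); returns an (N, d) embedding.
    N = X.shape[0]
    W = np.zeros((N, N))
    for i in range(N):
        # Step 1: K nearest neighbours by Euclidean distance (excluding self)
        dist = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dist)[1:K + 1]
        # Step 2: local covariance of the neighbourhood, regularised
        Z = X[nbrs] - X[i]
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(K)
        # Step 3: reconstruction weights, normalised to sum to 1
        w = np.linalg.solve(C, np.ones(K))
        W[i, nbrs] = w / w.sum()
    # Step 4: cost matrix M = (I - W)^T (I - W)
    M = (np.eye(N) - W).T @ (np.eye(N) - W)
    # Step 5: eigenvectors of the 2nd..(d+1)-th smallest eigenvalues
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1]
```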

The BP neural network was proposed by Rumelhart, Hinton and Williams in 1986. Its core idea is to combine a multi-layer feed-forward neural network with gradient descent on the error loss: the error backpropagation algorithm propagates the loss gradient backwards to update the weights of the multilayer network. After enough iterations, the error loss converges to a minimum, and the network can approximate any nonlinear function.
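A minimal sketch of such a network, fitting a toy nonlinear target with hand-written backpropagation (the architecture, data, and learning rate are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 64 samples, 3 features, nonlinear target
X = rng.uniform(-1, 1, (64, 3))
y = np.sin(X.sum(axis=1, keepdims=True))

# One hidden tanh layer of 8 units, linear output
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1

losses = []
for _ in range(500):
    h = np.tanh(X @ W1 + b1)          # forward: hidden layer
    out = h @ W2 + b2                 # forward: linear output
    err = out - y
    losses.append(float((err ** 2).mean()))
    # backpropagate the squared-error gradient layer by layer
    g_out = 2 * err / len(X)
    gW2 = h.T @ g_out; gb2 = g_out.sum(axis=0)
    g_h = g_out @ W2.T * (1 - h ** 2)  # tanh derivative
    gW1 = X.T @ g_h; gb1 = g_h.sum(axis=0)
    # gradient descent updates
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```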

    Through the above method and combined with the process in Figure 1, experiments are carried out to verify the performance of the model proposed in this paper. The first step is to select experimental examples. The second step is to select parameters. The third step is to compare the predicted value with the real value. The last step is to evaluate the performance of the model proposed in this paper through error indicators.

This paper selects the stock closing price as the forecast target and forecasts the closing price for the next 10 days to verify the performance of the proposed model. The predicted stocks are divided into two categories. The first category consists of stocks whose closing price fluctuates strongly: within a given window of trading days (excluding public holidays), the highest closing price minus the lowest closing price is greater than 5. The second category consists of stocks whose closing price fluctuates weakly: within the same window, the highest closing price minus the lowest closing price is less than 2. To verify the effectiveness of the model for both types, this paper selects 6 stocks from the Shanghai Stock Exchange (code prefix SH) and the Shenzhen Stock Exchange (code prefix SZ) as prediction objects: SZ002607, SZ002567, SZ000796, SH600586, SZ002770 and SZ000573. SZ002607, SZ002567 and SZ000796 belong to the first category; SH600586, SZ002770 and SZ000573 belong to the second category. The stock data were crawled from the NetEase Finance website using a web crawler program.

Figure 4 shows the stock closing price trend chart for a total of 256 trading days from 2019-1-16 to 2020-2-10. The closing prices of SH600586, SZ002770 and SZ000573 vary within a small range, while those of SZ002607, SZ002567 and SZ000796 vary within a large range.

    Figure 4.  Daily closing prices of 6 stocks.

Table 1 shows the data used for modeling SH600586; the modeling data of the remaining 5 stocks are given in the Supplementary material. In this paper, the highest price, lowest price, opening price, previous closing price, floor change, floor, exchange, trading volume, trading amount, total market value and circulating market value are selected as indicators that affect the closing price of stocks. Indicators such as the highest price, lowest price, opening price, previous closing price, floor change, floor and exchange reflect the recent trend of the closing price. Indicators such as trading volume, trading amount, total market value and circulating market value reflect the financial status of the company behind the stock. Following the method of [30], the above indicators are reduced by the dimensionality reduction algorithm and then input into the HFGMC (1, m) model.

    Table 1.  Modeling data of SH600586.
    No. Date Close Date Top_price Low_price Opening_price Pre_price Floor_price
    1 19/1/16 2.95 19/1/2 3.01 2.94 2.96 2.95 -0.01
    256 20/2/10 2.52 20/1/17 2.87 2.83 2.86 2.86 -0.02
    20/1/20 2.85 2.82 2.83 2.84 0.01
    20/1/21 2.86 2.82 2.84 2.85 -0.01
    20/2/10 2.53 2.43 2.46 2.47 0.05
    No. Date Close Date Floor Exchange Volume Amount Total value Circulation value
    1 19/1/16 2.95 19/1/2 -0.339 0.5625 8,122,601 24,196,597 4,287,408,174 4,245,758,958
    256 20/2/10 2.52 20/1/17 -0.6993 0.3202 4,574,400 13,015,088 4,057,706,800 4,057,706,800
    20/1/20 0.3521 0.3255 4,650,641 13,176,695 4,071,994,500 4,071,994,500
    20/1/21 -0.3509 0.3737 5,339,971 15,129,108 4,057,706,800 4,057,706,800
    20/2/10 2.0243 0.8219 11,743,314 29,233,159 3,600,500,400 3,600,500,400


First, the delay period parameter τ is discussed. Note that there are 10 trading days of indicator data after serial number 256; these are used to predict the closing price of the next 10 days. Since the HFGMC (1, m) model is a time series forecasting model that processes high-dimensional data, it needs the prior indicator data of the next 10 days to predict the stock closing price over that horizon. Considering that investors are not sensitive to changes in stock data [31] (except during market crises), their response to stock data is delayed. In other words, the current stock data affect investor sentiment with a time delay, and investor sentiment in turn affects the stock price. It can be seen from Eq (10) that HFGMC (1, m) integrates this delay into the relationship between the stock price and the various indicators. Therefore, when Eq (10) is used to solve the time response function, the delay period is set to 10. The delay period τ equals the amount of prior data required.

Then the dimension d of the low-dimensional mapping space of the LLE algorithm is discussed. The literature [30,32,33] shows that setting d to 2, 3, or 4 can express the feature space of the data well. Therefore, this paper uses d = 3 as the dimension of the low-dimensional mapping space of the LLE algorithm.

The stationarity of the stock price series is discussed in this section. This paper uses the ADF (augmented Dickey-Fuller) test for this purpose.

The ADF test checks whether a unit root exists in the sequence; if it does, the sequence is not stationary. First, the ADF test is performed on the original price series.

As shown in Table 2, the original sequences of the 6 stocks cannot reject the existence of a unit root at the 1, 5, and 10% significance levels. Therefore, the original sequences can be considered non-stationary, and it is necessary to stabilize the series before forecasting.

    Table 2.  ADF test of the original sequence.
    Stock code Test statistic Significance level
    1% 5% 10%
    SZ000573 -1.34 -3.45 -2.87 -2.57
    SZ000796 -2.22 -3.45 -2.87 -2.57
    SZ002567 -1.62 -3.45 -2.87 -2.57
    SZ002607 -2.10 -3.45 -2.87 -2.57
    SZ002770 -0.75 -3.45 -2.87 -2.57
    SH600586 -1.08 -3.45 -2.87 -2.57


    From Eqs (1) and (20), it can be seen that after HFAGO is applied to the original sequence, the random walk of the price appears only as increments between data points of the accumulative generation sequence. Therefore, the sequence can be considered stationary after HFAGO is applied.
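For illustration, here is the classical fractional-order accumulated generating operation (AGO). This is a stand-in sketch: the paper's HFAGO in Eq (1) derives its weights from the Hausdorff fractional derivative, which is not reproduced here.

```python
import math
import numpy as np

def fago(x, r):
    """Classical fractional-order AGO:
    x_r[k] = sum_{i=1}^{k} C(k-i+r-1, k-i) * x[i],
    with the generalized binomial coefficient written via the Gamma
    function. r = 1 recovers the ordinary cumulative sum (1-AGO)."""
    n = len(x)
    out = np.empty(n)
    for k in range(1, n + 1):
        out[k - 1] = sum(
            math.gamma(k - i + r) / (math.gamma(k - i + 1) * math.gamma(r)) * x[i - 1]
            for i in range(1, k + 1)
        )
    return out
```

Applying such an accumulation to a price series produces the smoother, trending "accumulative generation sequence" of Figure 5.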

    Figure 5 shows the stabilization process of SZ002770's stock price: the left panel shows the original sequence and the right panel shows the accumulative generation sequence. The ADF test is also applied to the accumulative generation sequences of the 6 stocks for further verification.

    Figure 5.  Stabilization of original series.

    It can be seen from Table 3 that SZ000573, SZ002567, SZ002607, SZ002770 and SH600586 reject the unit-root hypothesis at the 1, 5 and 10% significance levels, while SZ000796 rejects it at the 5% and 10% levels. Therefore, the accumulative generation sequences obtained from the original sequences via HFAGO can be considered stationary.

    Table 3.  ADF test of the accumulative generation sequence.

    Stock code | Test statistic | 1%    | 5%    | 10%
    SZ000573   | -5.46          | -3.45 | -2.87 | -2.57
    SZ000796   | -2.87          | -3.45 | -2.87 | -2.57
    SZ002567   | -6.52          | -3.45 | -2.87 | -2.57
    SZ002607   | -6.31          | -3.45 | -2.87 | -2.57
    SZ002770   | -3.66          | -3.45 | -2.87 | -2.57
    SH600586   | -4.93          | -3.45 | -2.87 | -2.57


    As Figure 1 indicates, building the model requires selecting appropriate parameters r and K. This paper constructs contour diagrams of the mean absolute error (MAE) over the parameters r and K and then locates the parameters that minimize the MAE.

    Figure 6 shows contour diagrams of the MAE over the parameters r and K. In these figures, the horizontal axis is the order r and the vertical axis is the number K of neighboring points of each sample point; the bluer the color, the smaller the MAE, and the yellower the color, the larger the MAE. The order r is enumerated from 0 to 1 in steps of 0.1, and the number of neighboring points K from 0 to 100 in steps of 5. The search range that minimizes the MAE is then selected, and an exhaustive search within that range yields the relatively optimal parameters.
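The coarse grid pass described above can be sketched as follows; `toy_mae` is a hypothetical stand-in for fitting HFGMC (1, m) with a given (r, K) and scoring the in-sample MAE:

```python
import numpy as np

def grid_search(mae_fn, r_grid, k_grid):
    """Exhaustively evaluate the MAE surface over an (r, K) grid and
    return the pair with the smallest error, as in Figure 6."""
    best_r, best_k, best_mae = None, None, np.inf
    for r in r_grid:
        for k in k_grid:
            e = mae_fn(r, k)
            if e < best_mae:
                best_r, best_k, best_mae = r, k, e
    return best_r, best_k, best_mae

# Hypothetical error surface whose minimum sits at the SZ000573
# optimum of Table 5 (r = 0.30, K = 60).
toy_mae = lambda r, k: (r - 0.30) ** 2 + ((k - 60) / 100.0) ** 2

# Coarse pass: r in [0, 1] step 0.1, K in [0, 100] step 5 (as in the text).
r, k, _ = grid_search(toy_mae, np.arange(0.0, 1.01, 0.1), range(0, 101, 5))
```

A second, finer pass (step 0.01 in r, per Table 4) is then run over the neighborhood found here.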

    Figure 6.  Contour diagram of parameters r, K and MAE.

    Table 4 shows the optimal parameter range of each stock. The search step for parameter r is 0.01 and that for parameter K is 5. For all stocks, the optimal search range of parameter K is 45 to 100. For SZ000573, SZ002770 and SH600586, the search range of parameter r is 0.25 to 0.6; for SZ000796 it is 0.1 to 0.45; and for SZ002567 and SZ002607 it is 0.35 to 0.7.

    Table 4.  Parameter range.

    Stock code | r               | K
    SZ000573   | r ∈ [0.25, 0.6] | K ∈ [45, 100]
    SZ000796   | r ∈ [0.1, 0.45] | K ∈ [45, 100]
    SZ002567   | r ∈ [0.35, 0.7] | K ∈ [45, 100]
    SZ002607   | r ∈ [0.35, 0.7] | K ∈ [45, 100]
    SZ002770   | r ∈ [0.25, 0.6] | K ∈ [45, 100]
    SH600586   | r ∈ [0.25, 0.6] | K ∈ [45, 100]


    Table 5 shows the resulting parameters. For SZ000573, when the order r is 0.30 and K is 60, the MAE is 0.6120. For SZ000796, when the order r is 0.14 and K is 50, the MAE is 1.2227. For SZ002567, when the order r is 0.35 and K is 95, the MAE is 2.4767. For SZ002607, when the order r is 0.35 and K is 65, the MAE is 1.1929. For SZ002770, when the order r is 0.44 and K is 65, the MAE is 0.8013. For SH600586, when the order r is 0.41 and K is 55, the MAE is 0.6336.

    Table 5.  Result of parameters.

    Stock code | r    | K  | MAE
    SZ000573   | 0.30 | 60 | 0.6120
    SZ000796   | 0.14 | 50 | 1.2227
    SZ002567   | 0.35 | 95 | 2.4767
    SZ002607   | 0.35 | 65 | 1.1929
    SZ002770   | 0.44 | 65 | 0.8013
    SH600586   | 0.41 | 55 | 0.6336


    To make the distribution of the data input to FGMC (1, m) the same as that input to HFGMC (1, m), the parameters for FGMC (1, m) are also set according to Table 5. The comparison of HFGMC (1, m) and FGMC (1, m) uses the MAE between the in-sample predicted sequence and the real sequence as the indicator. Table 6 shows the MAE comparison between HFGMC (1, m) and FGMC (1, m).

    Table 6.  The MAE comparison.

    Stock code | FGMC (1, m) | HFGMC (1, m)
    SZ000573   | 16.3341     | 0.6120
    SZ000796   | 10.2270     | 1.2227
    SZ002567   | 11.1814     | 2.4767
    SZ002607   | 8.4681      | 1.1929
    SZ002770   | 1.9685      | 0.8013
    SH600586   | 3.0270      | 0.6336


    It can be seen from Table 6 that HFGMC (1, m) improves on FGMC (1, m) very clearly, reducing the average MAE by 7.3778.

    In this section, the improvement brought by the BP neural network is discussed. From Eq (10), the out-of-sample forecast can be regarded as a linear combination of the indicators, so HFGMC (1, m) alone can predict only the linear part of the future data. Owing to its strong nonlinear fitting ability, the BP neural network can fit the residual function of the in-sample prediction sequence and therefore supplies the nonlinear part of the future data. Next, the performance of HFGMC (1, m) is compared with that of HFGMC (1, m) combined with the BP neural network.
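The hybrid decomposition can be illustrated with a toy example: a straight line plays the role of the grey model's linear component, and a small backpropagation network (hypothetical architecture; the paper does not state its layer sizes) is trained on the in-sample residuals:

```python
import numpy as np

rng = np.random.default_rng(7)

def train_bp(X, y, hidden=8, lr=0.1, epochs=5000):
    """Plain backpropagation for a one-hidden-layer tanh network,
    trained with full-batch gradient descent on the squared error."""
    n = len(y)
    W1 = rng.normal(scale=2.0, size=(X.shape[1], hidden))
    b1 = rng.normal(scale=1.0, size=hidden)
    W2 = rng.normal(scale=0.3, size=(hidden, 1))
    b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                    # hidden activations
        err = (h @ W2).ravel() + b2 - y             # prediction error
        gh = (err[:, None] @ W2.T) * (1.0 - h**2)   # backprop through tanh
        W2 -= lr * (h.T @ err[:, None]) / n
        b2 -= lr * err.mean()
        W1 -= lr * (X.T @ gh) / n
        b1 -= lr * gh.mean(axis=0)
    return lambda Z: (np.tanh(Z @ W1 + b1) @ W2).ravel() + b2

t = np.linspace(0.0, 1.0, 64)[:, None]
linear_part = 2.0 * t.ravel()                        # stand-in for the grey model's linear output
truth = linear_part + 0.3 * np.sin(6.0 * t.ravel())  # series with a nonlinear residual
bp = train_bp(t, truth - linear_part)                # fit the in-sample residuals
combined = linear_part + bp(t)                       # linear + nonlinear forecast
```

Adding the network's residual prediction to the linear part reduces the error relative to the linear part alone, which is the mechanism behind Table 7.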

    As shown in Table 7, except for SZ000796 (a slight decrease) and SH600586 (no change), the forecast accuracy of the other stocks improves significantly.

    Table 7.  Comparison of MAE of out-sample forecast sequence.

    Stock code | HFGMC (1, m) + BP | HFGMC (1, m)
    SZ000573   | 0.12              | 0.64
    SZ000796   | 0.42              | 0.37
    SZ002567   | 0.20              | 3.66
    SZ002607   | 0.78              | 1.12
    SZ002770   | 0.03              | 0.89
    SH600586   | 0.21              | 0.21


    This paper chose FGMC (1, m)-LLE-BP, a long short-term memory network (LSTM), a BP neural network (BP) and ARIMA-BP [34] as comparison models. The BP and LSTM networks take the 10 lagged closing prices as an input vector and output the current value. For example, the input and output samples of SH600586 are shown in Table 8.

    Table 8.  Input and output samples of SH600586.

    Sample | x1   | x2   | … | x9   | x10  | y
    1      | 2.94 | 2.98 | … | 3.05 | 3.00 | 2.98
    2      | 2.98 | 3.00 | … | 3.00 | 2.98 | 2.99


    The vectors x in Table 8 are the inputs and y is the output. All training samples are obtained with a sliding-window method. The forecast results of each model are shown in Figure 7.
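The sliding-window construction of Table 8 can be sketched as:

```python
import numpy as np

def sliding_windows(series, lag=10):
    """Build (input, output) samples: each input row holds `lag`
    consecutive closing prices and the output is the next one,
    matching the layout of Table 8."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = np.array(series[lag:])
    return X, y

# short toy series with lag 3 (the paper uses lag 10)
X, y = sliding_windows([1, 2, 3, 4, 5, 6], lag=3)
# X rows: [1,2,3], [2,3,4], [3,4,5]; y: [4, 5, 6]
```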

    Figure 7.  Forecast results.

    The prediction results in Figure 7 show that the forecasts of the proposed model are very close to the true values both for SZ000573, SZ002770 and SH600586, whose stock trends change little, and for SZ000796, SZ002567 and SZ002607, whose trends change greatly. In contrast, there are noticeable errors between the predictions of the comparison models and the true values. The detailed performance analysis is given in the error analysis section.

    The forecast error analysis in this paper used mean absolute error (MAE), root mean square error (RMSE) and mean absolute percentage error (MAPE) as evaluation indicators.

    \[ \mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{x}(i)-x(i)\right| \tag{30} \]
    \[ \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{x}(i)-x(i)\right)^{2}} \tag{31} \]
    \[ \mathrm{MAPE} = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{\hat{x}(i)-x(i)}{x(i)}\right|\times 100\% \tag{32} \]

    where \hat{x}(i) is the predicted value and x(i) is the true value.
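Eqs (30)–(32) translate directly into code; a minimal numpy version:

```python
import numpy as np

def mae(x_hat, x):
    return np.mean(np.abs(x_hat - x))                # Eq (30)

def rmse(x_hat, x):
    return np.sqrt(np.mean((x_hat - x) ** 2))        # Eq (31)

def mape(x_hat, x):
    return np.mean(np.abs((x_hat - x) / x)) * 100    # Eq (32), in percent

x_true = np.array([2.0, 4.0])
x_pred = np.array([1.0, 6.0])
# mae → 1.5, rmse → sqrt(2.5) ≈ 1.5811, mape → 50.0 (%)
```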

    From the analysis of Tables 9–14, when the proposed model predicted SZ000573, SZ002770 and SH600586, whose trends change little, the average error between the predicted and true values is below 0.21; when predicting SZ000796, SZ002567 and SZ002607, whose trends change greatly, the average error is below 0.78. The performance is stable.

    Table 9.  Error comparison of SZ000573.

    Indicator | Proposed model | FGMC (1, m)-LLE-BP | LSTM  | BP   | ARIMA-BP
    MAE       | 0.12           | 0.66               | 0.37  | 0.31 | 0.21
    RMSE      | 0.13           | 0.68               | 0.38  | 0.34 | 0.23
    MAPE      | 4.3%           | 23%                | 13.2% | 11%  | 7.4%

    Table 10.  Error comparison of SZ000796.

    Indicator | Proposed model | FGMC (1, m)-LLE-BP | LSTM  | BP    | ARIMA-BP
    MAE       | 0.37           | 1.01               | 1.29  | 1.53  | 1.72
    RMSE      | 0.48           | 1.30               | 1.38  | 1.73  | 1.87
    MAPE      | 4.5%           | 12.5%              | 16.3% | 19.0% | 21.6%

    Table 11.  Error comparison of SZ002567.

    Indicator | Proposed model | FGMC (1, m)-LLE-BP | LSTM | BP   | ARIMA-BP
    MAE       | 0.20           | 0.91               | 0.51 | 0.65 | 0.91
    RMSE      | 0.22           | 1.01               | 0.60 | 0.78 | 0.90
    MAPE      | 2.6%           | 11.6%              | 6.5% | 8.0% | 11.4%

    Table 12.  Error comparison of SZ002607.

    Indicator | Proposed model | FGMC (1, m)-LLE-BP | LSTM  | BP    | ARIMA-BP
    MAE       | 0.78           | 3.36               | 3.23  | 3.79  | 1.90
    RMSE      | 0.86           | 3.81               | 3.39  | 3.93  | 1.95
    MAPE      | 4.1%           | 17.6%              | 17.1% | 20.0% | 10.3%

    Table 13.  Error comparison of SZ002770.

    Indicator | Proposed model | FGMC (1, m)-LLE-BP | LSTM | BP   | ARIMA-BP
    MAE       | 0.03           | 0.05               | 0.16 | 0.19 | 0.14
    RMSE      | 0.04           | 0.06               | 0.22 | 0.22 | 0.25
    MAPE      | 1.7%           | 2.4%               | 7.6% | 9.0% | 6.9%

    Table 14.  Error comparison of SH600586.

    Indicator | Proposed model | FGMC (1, m)-LLE-BP | LSTM  | BP    | ARIMA-BP
    MAE       | 0.21           | 0.49               | 0.86  | 0.90  | 0.48
    RMSE      | 0.28           | 0.52               | 0.89  | 0.97  | 0.51
    MAPE      | 6.9%           | 15.5%              | 26.6% | 28.0% | 15.1%


    When FGMC (1, m)-LLE-BP predicted SZ000573, SZ002770 and SH600586 (small trend changes), the average error between the predicted and true values is below 0.66; for SZ000796, SZ002567 and SZ002607 (large trend changes), the average errors are 1.01, 0.91 and 3.36, respectively. The performance is not stable.

    When the LSTM neural network predicted SZ000573, SZ002770 and SH600586 (small trend changes), the average errors are 0.37, 0.16 and 0.49, respectively; for SZ000796, SZ002567 and SZ002607 (large trend changes), they are 1.29, 0.51 and 3.23, respectively. This shows that the LSTM's accuracy is volatile both for stocks with large and with small trend changes.

    When the BP neural network predicted SZ000573, SZ002770 and SH600586 (small trend changes), the average errors are 0.31, 0.19 and 0.90, respectively; for SZ000796, SZ002567 and SZ002607 (large trend changes), they are 1.53, 0.65 and 3.79, respectively.

    When ARIMA-BP predicted SZ000573, SZ002770 and SH600586 (small trend changes), the average errors are 0.21, 0.14 and 0.48, respectively; for SZ000796, SZ002567 and SZ002607 (large trend changes), they are 1.72, 0.91 and 1.90, respectively.

    Combining the three error indicators, the model proposed in this paper performs best, followed by ARIMA-BP and FGMC (1, m)-LLE-BP. Neither of the single prediction models, LSTM or BP, achieves the desired accuracy. This shows that, when forecasting time series driven by both linear and nonlinear factors, a combined model achieves better results.

    From Figure 8, the model proposed in this paper performs stably and well on all 6 stocks with different trends, while each of the 4 comparison models performs well only on some of the stocks. Combining the three evaluation indicators, the proposed model performed best on MAE, RMSE and MAPE when predicting SZ002607, SZ002567, SZ000796, SH600586, SZ002770 and SZ000573. Moreover, the proposed model outperforms FGMC (1, m)-LLE-BP, which shows that HFGMC (1, m) brings a definite improvement in prediction accuracy.

    Figure 8.  Error comparison of 6 stocks.

    In the discussion in Section 6.2, it was pointed out that a model dealing with high-dimensional data requires certain prior data. However, obtaining real-time prior data is very difficult for forecasting objects with high real-time requirements such as stock data. Generally, a high-dimensional grey model obtains the prior data by staggering the time axes of the prediction object (the closing price) and the prior data (the indicators), as shown in Figure 9. This practice of staggering the time axis means the model cannot be used for long-term forecasting: the further the time axes are staggered, the more the error may grow. Besides staggering the time axis, sliding time windows are also widely used to obtain prior data in deep-learning time-series forecasting [2,3,30] (as shown in Table 8). In addition, the prices of individual stocks (the experimental case of this paper) differ from stock indexes (Dow Jones, Nasdaq, Shanghai Composite Index, etc.): individual stock prices are usually predicted about 10 days or 10 time units ahead [4,30,35], whereas a stock index reflects the general laws of the market, so long-term forecasts are feasible [36,37,38,39]. However, for both individual stock prices and stock indexes, the prior-data problem cannot be avoided when using high-dimensional models; this is a limitation of such models.
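The staggering of Figure 9 amounts to shifting the indicator matrix against the price series; a minimal sketch (the array shapes are illustrative assumptions):

```python
import numpy as np

def stagger(indicators, price, tau=10):
    """Stagger the time axes: indicators at time t are paired with the
    price at t + tau, so the last tau indicator rows become the prior
    data that drives the tau out-of-sample forecasts (cf. Figure 9)."""
    X_train = indicators[:-tau]   # indicators aligned with known prices
    y_train = price[tau:]         # prices shifted tau steps forward
    X_prior = indicators[-tau:]   # prior data for the next tau forecasts
    return X_train, y_train, X_prior
```

Stretching tau lengthens the forecast horizon but weakens the alignment between indicators and prices, which is why the error may grow.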

    Figure 9.  The relationship between the sequence to be predicted and the prior data.

    Since directly staggering the time axis may lose indicator information along the time axis, future work could use the maximum entropy model to integrate all the information along the time axis as prior information for the prediction object, which may further improve the model's accuracy.

    Because the stock price is affected by many factors, the stock market can be regarded as a nonlinear, dynamic and grey system, and predicting stock prices is therefore very difficult. Many time-series forecasting methods exist, but most do not account for the stock market's long-term memory.

    According to the characteristics of the stock market, the HFGMC (1, m)-LLE-BP model is proposed. HFGMC (1, m) is based on the FGMC (1, m) model and combines the Hausdorff fractional derivative with the Newton-Cotes formula. The HFGMC (1, m) model solves for the linear component of the prediction sequence, and the BP neural network then extracts the nonlinear component. In addition, to compress the redundant information in the high-dimensional data, HFGMC (1, m) is combined with the locally linear embedding algorithm to reduce the dimensionality of the factors that affect the stock price. Finally, the linear and nonlinear components are added to obtain the combined prediction sequence. In the experimental part, the proposed model predicts the closing prices of 6 stocks with different trends over the next 10 days. Based on the above experimental results and discussion, the following conclusions are drawn:

    (i) The FGMC (1, m) model has some shortcomings; the Hausdorff fractional derivative and the Newton-Cotes formula improve it. The resulting HFGMC (1, m) model improves the prediction accuracy of time series.

    (ii) Because the stock price is affected by a variety of linear and nonlinear factors, a single time-series prediction model cannot predict it accurately. A combined forecasting model has advantages in forecasting time series affected by complex factors.

    (iii) The model proposed in this paper is stable and accurate in predicting stock prices with different historical trends. The results show that it can be widely used in the prediction of simple or complex time series.

    This work is supported by the National Natural Science Foundation of China under Grants 61862062 and 61104035.

    The authors declare that they have no conflict of interest.


