Research article

On fixed point results in F-metric spaces with applications

  • Received: 28 February 2023 Revised: 19 April 2023 Accepted: 27 April 2023 Published: 15 May 2023
  • MSC : 46S40, 54H25, 47H10

  • The aim of this research article is to define locally rational contractions involving control functions of one variable in the setting of F-metric spaces and to establish common fixed point results. We also introduce (α-ψ)-contractions and generalized (α, ψ, δF)-contractions in F-metric spaces and obtain fixed points of multifunctions. A nontrivial example is also furnished to demonstrate the originality of the fundamental result. As an application, we investigate the solution of a nonlinear neutral differential equation.

    Citation: Hanadi Zahed, Zhenhua Ma, Jamshaid Ahmad. On fixed point results in F-metric spaces with applications[J]. AIMS Mathematics, 2023, 8(7): 16887-16905. doi: 10.3934/math.2023863

    Related Papers:

    [1] Bijan Moradi, Mehran Khalaj, Ali Taghizadeh Herat, Asghar Darigh, Alireza Tamjid Yamcholo . A swarm intelligence-based ensemble learning model for optimizing customer churn prediction in the telecommunications sector. AIMS Mathematics, 2024, 9(2): 2781-2807. doi: 10.3934/math.2024138
    [2] Salman khan, Muhammad Naeem, Muhammad Qiyas . Deep intelligent predictive model for the identification of diabetes. AIMS Mathematics, 2023, 8(7): 16446-16462. doi: 10.3934/math.2023840
    [3] Khaled Tarmissi, Hanan Abdullah Mengash, Noha Negm, Yahia Said, Ali M. Al-Sharafi . Explainable artificial intelligence with fusion-based transfer learning on adverse weather conditions detection using complex data for autonomous vehicles. AIMS Mathematics, 2024, 9(12): 35678-35701. doi: 10.3934/math.20241693
    [4] A. Presno Vélez, M. Z. Fernández Muñiz, J. L. Fernández Martínez . Enhancing structural health monitoring with machine learning for accurate prediction of retrofitting effects. AIMS Mathematics, 2024, 9(11): 30493-30514. doi: 10.3934/math.20241472
    [5] Fazeel Abid, Muhammad Alam, Faten S. Alamri, Imran Siddique . Multi-directional gated recurrent unit and convolutional neural network for load and energy forecasting: A novel hybridization. AIMS Mathematics, 2023, 8(9): 19993-20017. doi: 10.3934/math.20231019
    [6] Olfa Hrizi, Karim Gasmi, Abdulrahman Alyami, Adel Alkhalil, Ibrahim Alrashdi, Ali Alqazzaz, Lassaad Ben Ammar, Manel Mrabet, Alameen E.M. Abdalrahman, Samia Yahyaoui . Federated and ensemble learning framework with optimized feature selection for heart disease detection. AIMS Mathematics, 2025, 10(3): 7290-7318. doi: 10.3934/math.2025334
    [7] Spyridon D. Mourtas, Emmanouil Drakonakis, Zacharias Bragoudakis . Forecasting the gross domestic product using a weight direct determination neural network. AIMS Mathematics, 2023, 8(10): 24254-24273. doi: 10.3934/math.20231237
    [8] Federico Divina, Miguel García-Torres, Francisco Gómez-Vela, Domingo S. Rodriguez-Baena . A stacking ensemble learning for Iberian pigs activity prediction: a time series forecasting approach. AIMS Mathematics, 2024, 9(5): 13358-13384. doi: 10.3934/math.2024652
    [9] Ana Lazcano de Rojas, Miguel A. Jaramillo-Morán, Julio E. Sandubete . EMDFormer model for time series forecasting. AIMS Mathematics, 2024, 9(4): 9419-9434. doi: 10.3934/math.2024459
    [10] Tamilvizhi Thanarajan, Youseef Alotaibi, Surendran Rajendran, Krishnaraj Nagappan . Improved wolf swarm optimization with deep-learning-based movement analysis and self-regulated human activity recognition. AIMS Mathematics, 2023, 8(5): 12520-12539. doi: 10.3934/math.2023629



    With the rapid industrialization of the world, energy consumption in industry has been increasing, and the inefficient use of energy and the waste of limited resources have led to excessive carbon emissions [1]. The ferrous metals manufacturing industry (i.e., the steel industry) is considered one of the highest energy-consuming and carbon-emitting sectors, and it is also one of the largest energy-consuming manufacturing industries globally [2]. This industry relies heavily on coal and other fossil fuels for production, resulting in substantial emissions of carbon dioxide, accounting for approximately 4% to 5% of total global anthropogenic carbon emissions. According to reports from the International Energy Agency, carbon dioxide (i.e., CO2) emissions from the manufacturing industry account for 40% of world carbon dioxide emissions, with the steel manufacturing sector being the largest contributor, making up around 27% of the global manufacturing sector's emissions [3].

    In terms of the metallurgical industry, the process of iron smelting in blast furnaces is one of the crucial steps in the reduction and production process. The physical temperature of molten iron constitutes approximately 50% of the total heat input during the entire process. Therefore, effectively controlling the blast furnace molten iron temperature within a reasonable range is essential for reducing the energy consumption in the reduction smelting process [4]. In other words, temperature control is one of the important indicators of product quality [5,6]. To achieve stable pig iron quality and consistent blast furnace performance, the temperature of hot metal should be maintained within a stable range. The stable temperature inside the furnace determines the composition and quality of hot metal, ensuring the smooth operation of the blast furnace [7]. Besides ensuring the highest product quality, reducing production costs is vital for the economic viability of enterprises. Hence, the establishment of a closed-loop process model that accurately predicts the status of the system could significantly reduce production costs [8]. However, directly measuring the blast furnace molten iron temperature is often a challenging task due to the complex dynamics and high-temperature characteristics of the smelting process [9,10].

    Due to the complex combustion processes and interactions among multiple key variables inside the blast furnace, the iron temperature exhibits nonlinear and non-steady-state characteristics [11]. Therefore, establishing a mechanistic model for accurately predicting the iron temperature is challenging, especially due to the lack of high-temperature thermodynamic and kinetic data, as well as the intricate dependencies among internal system variables and the three target variables under study [12]. In the past, expert systems were considered promising alternative solutions for blast furnace modeling. Some groundbreaking work has been accomplished in the field of expert systems [13,14]. Moreover, expert rules have been embedded in artificial and logical intelligent systems to supervise the process adequately. Expert rules provide temporary assumptions for each task under supervision, such as gas flow distribution or furnace temperature levels, followed by appropriate actions based on the outcomes of these temporary assumptions. However, these methods often overlook the complex interactions among multiple key variables, leading to unstable model predictions [15]. In recent years, data-driven methods have gradually gained prominence in blast furnace modeling and process state evaluation. For instance, Jimenez and colleagues have employed a back propagation neural network (BP-NN) model to predict blast furnace iron temperature [16].

    As the furnace temperature rises, the silicon content in molten iron gradually increases, and there is an approximately linear correlation between the silicon content in molten iron and the temperature. Therefore, attributes closely related to the iron temperature have been selected as influencing factors in some studies [17,18], and data-driven algorithms have been employed to investigate the silicon content in molten iron and predict the molten iron temperature [19]. However, due to the over-reliance on a single parameter and the high lag in the silicon content of molten iron, there is a need for further improvement in model stability. For instance, Su et al. [20] and Zhang et al. [21] have employed an extreme learning machine (ELM) for molten iron temperature prediction. In comparison to other neural networks, radial basis function neural networks (RBF-NN) exhibit the capability to approximate any nonlinear function with arbitrary precision, showcasing superior approximation, classification and global optimality. Furthermore, RBF-NN is a local approximation network that omits the learning of hidden layer weights, thus avoiding the time-consuming process of layer-by-layer error propagation. As a result, RBF-NN exhibits a rapid learning convergence speed. RBF-NN finds extensive applications in various simulation processes within the chemical industry [22,23]. Due to its effectiveness in handling non-linear relationships and its strong capabilities in dealing with high-dimensional data, its sample size requirement is not high [24]. Support vector regression (SVR) is suitable for predicting multi-dimensional related variables. In fact, SVR has been applied in complex, non-linear data modeling and prediction tasks including blast furnace modeling [25,26].

    To address the issue of irregularity in the iron temperature time series, decomposition theory has been employed to process the original sequences, achieving promising results in predicting smelting temperatures [27], wind speeds [28] and fluid properties [29], among other areas. For instance, Wang et al. decomposed the hybrid mode of data, then predicted the decomposed components and finally reconstructed the predicted components to predict high-noise, non-stationary and nonlinear short-term CO2 emissions with the same nature as hot metal temperature, and provided quantitative benchmarks for policy formulation [30]. By decomposing the original sequences, useful information could be successfully extracted. Notably, the improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) method decomposes sequences into intrinsic mode functions (IMFs) with different frequencies and energy features, offering a better grasp of the details and changing trends in temperature signals with different frequency components. By extracting relevant features from temperature data, the method reduces the amount of decomposition, enhancing prediction accuracy and training speed and making the decomposition more accurate and interpretable. Therefore, it has gained widespread application in decomposition algorithms.

    Furthermore, to address the issue of different modes and multi-layered trends in iron temperature, Cui et al. used the fuzzy C-means model to divide the parameter set of the integrated furnace temperature characterization, predicted the furnace parameters step by step to achieve dynamic prediction of hot metal silicon content, and adjusted the control parameters reversely to achieve accurate control of furnace temperature [31]. Fontes et al. applied fuzzy C-means to cluster similar data objects and separate different data objects, which improved the performance of the model [32]. Zhao et al. adopted the simple and easy K-means algorithm, and finally, the performance of the prediction model of molten iron temperature based on the K-means clustering label was further improved, and the accurate monitoring and prediction of molten iron temperature was realized [33]. Clustering analysis including K-means and other algorithms, known for their rapid convergence, minimal hyperparameter adjustments and strong interpretability, has been used to group data objects based on certain attributes, forming multiple classes or clusters, i.e., different feature groups. This ensures the maximization of data object similarity within the same class or cluster and the maximization of differences between different classes or clusters. Subsequently, data within each cluster would be individually trained for corresponding predictions. In summary, decomposition theory and clustering concepts provide effective methods to address the challenges of iron temperature time series and the diverse modes and trends of iron temperature. A range of data analysis tools and techniques have been reported to meet diverse and complex needs.

    While decomposition algorithms and clustering techniques contribute to a more thorough comprehension of features and trends in molten iron temperature data, potential issues arise from the IMFs generated by ICEEMDAN and the results of K-means clustering, particularly concerning the number of components and clusters. To optimize model efficiency and generalization capabilities, the kernel principal component analysis (KPCA) algorithm could be opted to efficiently reduce dimensions when handling high-dimensional data, mitigating the risk of model complexity induced by an excess of components and clusters [34]. In fact, the reduction in data dimensions facilitates the precise capture of critical features in molten iron temperature data, concurrently enhancing the model generalization prowess. Furthermore, KPCA excels in handling non-linear relationships, making it well-suited for addressing potential complex non-linear relationships within molten iron temperature data. This is pivotal for ensuring accurate and robust predictions. The decision to employ the KPCA algorithm is strategic, aiming to strike a balance in handling molten iron temperature data, optimizing model efficiency, streamlining structure and ensuring a comprehensive understanding of data features and trends.

    In this work, the molten iron temperature prediction model utilizes various sets of data collected from the steel production process. It applies decomposition theory (i.e., ICEEMDAN) to decompose the molten iron temperature data, employs the KPCA method for dimensionality reduction and utilizes clustering regression to predict IMF1, the component that contains the highest information content. For the remaining sequences, which have a stronger temporal character, the RBF-NN is employed for training and prediction. The final step involves aggregating and reconstructing these results to obtain the ultimate prediction value. The contribution of this work is that ICEEMDAN is introduced, for the first time, to decompose the molten iron temperature series in order to deal with its nonlinearity and disorder. Furthermore, the behavior of the IMF components produced by ICEEMDAN is explored, and K-means, SVR and RBF-NN are used to predict the components with different behaviors, respectively. Overall, this model provides valuable guidance for on-site operators, aiding in stabilizing furnace operations and achieving real-time furnace control. The work not only guides the field of metallurgical and chemical industries but also may be applied to other nonlinear sequence prediction fields.

    The rest of this article is organized as follows. In Section 2, the experiment and related methods are provided. In Section 3, the decomposition algorithm and the processing results of the related parameters are introduced, as well as the results and related indicators of the proposed model. In Section 4, the conclusions are provided, along with an outlook for future work.

    Traditional methods for forecasting molten iron temperature often struggle to accurately predict abrupt changes, resulting in less precise model predictions. Decomposing the molten iron temperature into multiple components with various frequencies and building separate forecasting models for each component is a highly effective and feasible solution. Among these methods, complete ensemble empirical mode decomposition with adaptive noise is an improvement over empirical mode decomposition. It effectively overcomes issues such as mode mixing that occur after empirical mode decomposition. However, even with complete ensemble empirical mode decomposition with adaptive noise, the components obtained may still contain noise and pseudo modes. To address this, an improved approach (i.e., ICEEMDAN) is introduced. In ICEEMDAN, the k-th mode component IMF is derived from the empirical mode decomposition of added white noise rather than from the direct addition of Gaussian white noise used in the traditional decomposition process. The specific steps of the ICEEMDAN are as follows.

    (1) Incorporate a set of white noise w^i(t) into the molten iron temperature sequence x(t), where x_1^i(t) is constructed as

    {x}_{1}^{i}(t) = x(t)+{\zeta }_{0}{E}_{1}\left[{w}^{i}(t)\right]

    to generate the sequence corresponding to this set of white noise. Subsequently, the first residual component r_1(t) is obtained by

    {r}_{1}(t) = N\left({x}_{1}^{i}(t)\right), (1)

    where N(x_1^i(t)) represents the local mean of the data sequence x_1^i(t).

    (2) The first IMF is computed by

    {IMF}_{1} = x(t)-{r}_{1}(t) = x(t)-N\left({x}_{1}^{i}(t)\right), (2)

    where IMF_1 refers to the first IMF.

    (3) Construct the sequence

    {x}_{2}^{i}(t) = x(t)+{\zeta }_{1}{E}_{2}\left[{w}^{i}(t)\right]

    resulting in the second residual component

    {r}_{2}(t) = N\left({x}_{2}^{i}(t)\right).

    The second intrinsic mode function IMF2 is calculated by

    {IMF}_{2} = x(t)-{r}_{2}(t) = x(t)-N\left(x(t)+{\zeta }_{1}{E}_{2}\left[{w}^{i}(t)\right]\right), (3)

    where E_2(·) represents the operator extracting the second IMF generated by the empirical mode decomposition of the white noise w^i(t), ζ_1 is the signal-to-noise ratio of the added noise divided by the standard deviation of the noise component E_2(·), and N(x_2^i(t)) denotes the local mean of the data sequence x_2^i(t).

    (4) Repeating the above steps, the k-th residual component and the k-th IMF can be obtained by

    \left\{\begin{array}{l}{r}_{k}(t) = N\left({r}_{k-1}(t)+{\zeta }_{k-1}{E}_{k}\left[{w}^{i}(t)\right]\right), \\ {IMF}_{k} = {r}_{k-1}(t)-{r}_{k}(t), \end{array}\right. (4)

    where r_k(t) refers to the k-th residual component and IMF_k refers to the k-th IMF.
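    To make the recursion in Eqs. (1)-(4) concrete, the sketch below implements it in Python on top of the PyEMD package (EMD-signal), which is assumed to be available; the local mean N(·) is approximated here as the signal minus its first EMD mode, and all function and parameter names are illustrative rather than the authors' implementation.

```python
# A hedged sketch of the ICEEMDAN recursion of Eqs. (1)-(4).
# Assumptions: PyEMD (pip install EMD-signal) supplies the EMD operator E_k,
# and N(.) is approximated as "signal minus its first EMD mode".
import numpy as np
from PyEMD import EMD

def local_mean(signal):
    """Approximate N(signal) by removing the first EMD mode."""
    imfs = EMD().emd(signal)
    return signal - imfs[0] if len(imfs) > 1 else signal

def iceemdan(x, n_realizations=50, noise_std=0.2, max_modes=5, seed=0):
    rng = np.random.default_rng(seed)
    noises = [rng.standard_normal(len(x)) for _ in range(n_realizations)]
    noise_modes = [EMD().emd(w) for w in noises]      # E_k[w^i] for each realization

    residue, imfs = np.asarray(x, float).copy(), []
    for k in range(1, max_modes + 1):
        zeta = noise_std * np.std(residue)            # noise amplitude zeta_{k-1}
        means = []
        for modes in noise_modes:
            ek = modes[k - 1] if k - 1 < len(modes) else np.zeros_like(residue)
            means.append(local_mean(residue + zeta * ek))
        new_residue = np.mean(means, axis=0)          # ensemble-averaged local mean r_k
        imfs.append(residue - new_residue)            # IMF_k = r_{k-1} - r_k, Eq. (4)
        residue = new_residue
    return np.array(imfs), residue

# Example: imfs, res = iceemdan(temperature_series)
```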

    KPCA is a technique used for dimensionality reduction of temperature data that is capable of handling nonlinear relationships within the data. It is a nonlinear extension of the principal component analysis method. Its core idea involves mapping the original temperature data through a nonlinear function into a high-dimensional space, where the primary features of the temperature data are identified. For the application of predicting molten iron temperature, KPCA can help operators better understand complex temperature data and improve the performance of predictive models. The computation procedure of KPCA is provided as follows.

    (1) The covariance matrix of temperature data mapped in the high-dimensional space is given by

    \mathrm{cov} = \frac{1}{N}\sum _{i = 1}^{N}\phi ({x}_{i})\phi {({x}_{i})}^{T}, (5)

    where cov refers to the covariance matrix.

    (2) By applying the above equation, the characteristic equation could be derived by

    \lambda w = \mathrm{cov}\ w, (6)

    where λ represents the eigenvalue, and w is the corresponding eigenvector.

    (3) By a suitable transformation, the above equation can be rewritten as

    \lambda \left[\phi ({x}_{j}), w\right] = \left[\phi ({x}_{j}), \mathrm{cov}\ w\right], (7)

    where w can be replaced with

    w = \sum _{i = 1}^{N}{v}_{i}\phi ({x}_{i}). (8)

    (4) Thus, the above equation can be rewritten by

    \lambda \sum _{i = 1}^{N}{v}_{i}\left[\phi ({x}_{j}), \phi ({x}_{i})\right] = \frac{1}{N}\sum _{i = 1}^{N}{v}_{i}\left[\phi ({x}_{j}), \sum _{k = 1}^{N}\phi ({x}_{k})\right]\left[\phi ({x}_{k}), \phi ({x}_{i})\right]. (9)

    (5) By introducing the kernel function, the projection of the mapped temperature data in the high-dimensional space can be calculated as

    {Proj}_{i} = \sum _{i = 1}^{N}{v}_{i}^{j}\mathrm{Ker}({x}_{i}, {x}_{j}) = \sum _{i = 1}^{N}{v}_{i}^{j}\left[\phi ({x}_{i}), \phi ({x}_{j})\right], (10)

    where the kernel function employed in this paper is the Gaussian kernel function.
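    A minimal sketch of this reduction step is given below using scikit-learn's KernelPCA with the Gaussian (RBF) kernel; the choice of seven retained components follows the analysis reported later in Table 2, while the gamma value and the assumption that the input matrix is already normalized are illustrative.

```python
# A hedged sketch of the KPCA reduction of the nine influencing factors,
# assuming scikit-learn is available and the input matrix is already normalized.
from sklearn.decomposition import KernelPCA

def reduce_parameters(X_scaled, n_components=7, gamma=0.1):
    """Project the (n_samples, 9) parameter matrix onto its leading
    kernel principal components, cf. Eq. (10)."""
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma)
    return kpca.fit_transform(X_scaled)

# Example: Z = reduce_parameters(X_scaled)   # Z.shape == (n_samples, 7)
```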

    (1) K-means clustering is an iterative data analysis algorithm. Its principle is to divide the data into K groups: K objects are initially selected at random as cluster centers, the distance between each object and every cluster centroid is calculated, and each object is assigned to the nearest centroid. A centroid together with the objects assigned to it constitutes a cluster. In this work, 64 temperature data points are used for clustering, and K is set to 2 according to the amount of temperature data. The Euclidean distance is used to decide which cluster a point belongs to. The steps to determine the cluster to which a new temperature data point belongs are outlined below. First, the Euclidean distance between the new temperature data point and each cluster center is computed by

    d = \sqrt{\sum _{i = 1}^{n}{({x}_{i}-{y}_{i})}^{2}}, (11)

    where d is the Euclidean distance, n represents the temperature data point dimensionality, indicating the number of features in the temperature data point, xi stands for the i-th feature value of the new temperature data point and yi denotes the i-th feature value of the cluster center. Subsequently, the new temperature data point is assigned to the cluster with the shortest Euclidean distance. Specifically, the Euclidean distance from the new temperature data point to each cluster center could be compared. If the distance to Cluster 1 is shorter than that to Cluster 2, the new temperature data point is assigned to Cluster 1. Conversely, if the distance to Cluster 2 is shorter, it is allocated to Cluster 2.
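    The clustering and assignment step described above can be sketched as follows; K = 2 follows the text, while the random seed and feature matrix are placeholders.

```python
# A small sketch of the K-means step: fit K=2 clusters and assign a new
# temperature sample to the nearest centroid via the Euclidean distance of Eq. (11).
import numpy as np
from sklearn.cluster import KMeans

def fit_clusters(features, k=2, seed=0):
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(features)

def assign_cluster(kmeans, new_point):
    """Index of the centroid closest to new_point (Eq. (11))."""
    d = np.linalg.norm(kmeans.cluster_centers_ - new_point, axis=1)
    return int(np.argmin(d))

# Example: km = fit_clusters(train_features); cluster_id = assign_cluster(km, x_new)
```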

    (2) Support vector machine is a supervised learning method, which can effectively reveal the correlation between the influencing factors and the target value. Here, K-means cluster analysis is combined with SVR to divide the temperature data into clusters with similar characteristics, which facilitates the identification of potential patterns and clusters within the data. Theoretically, SVR is a regression algorithm built upon the principles of support vector machine. Its functional construction is equivalent to solving a quadratic programming problem. The function involved in solving this problem is given by

    \min _{\alpha , {\alpha }^{*}}\ \frac{1}{2}\sum _{i, j = 1}^{l}({\alpha }_{i}-{\alpha }_{i}^{*})({\alpha }_{j}-{\alpha }_{j}^{*})\left(\varphi ({x}_{i})\cdot \varphi ({x}_{j})\right)+\varepsilon \sum _{i = 1}^{l}({\alpha }_{i}+{\alpha }_{i}^{*})-\sum _{i = 1}^{l}{y}_{i}({\alpha }_{i}-{\alpha }_{i}^{*}), (12)

    where

    0\le {\alpha }_{i}, {\alpha }_{i}^{*}\le C

    and

    \sum _{i = 1}^{l}({\alpha }_{i}-{\alpha }_{i}^{*}) = 0.

    The obtained regression function is given by

    f(x) = \omega \cdot \varphi (x)+b = \sum _{i = 1}^{l}({\alpha }_{i}-{\alpha }_{i}^{*})K({x}_{i}, x)+b, (13)

    where K(x_i, x) is the kernel function. Here, the Gaussian kernel function is given by

    K({x}_{i}, {x}_{j}) = \varphi ({x}_{i})\cdot \varphi ({x}_{j}) = \mathrm{exp}\left(-\frac{1}{2{\sigma }^{2}}{\left|{x}_{i}-{x}_{j}\right|}^{2}\right). (14)
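    A per-cluster SVR with the Gaussian kernel of Eq. (14) can be sketched as below; the values C = 4.0 and g = 0.8 mirror the training parameters reported later in Table 4, while the epsilon value and the mapping of g onto scikit-learn's gamma are assumptions.

```python
# A hedged sketch of training one SVR per cluster to predict IMF1,
# using scikit-learn's RBF-kernel SVR.
from sklearn.svm import SVR

def train_cluster_svr(X_cluster, y_cluster, C=4.0, g=0.8):
    model = SVR(kernel="rbf", C=C, gamma=g, epsilon=0.01)  # epsilon is illustrative
    return model.fit(X_cluster, y_cluster)

# Example: svr0 = train_cluster_svr(X[labels == 0], imf1[labels == 0])
#          y_hat = svr0.predict(X_test_cluster0)
```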

    (3) The RBF-NN method, which is primarily used for approximating nonlinear functions, is a feedforward network consisting of three layers. The fundamental structure of the RBF-NN comprises three components, namely the input layer, the radial basis layer (hidden layer) and the output layer, with fully connected nodes between these layers, as depicted in Figure 1.

    Figure 1.  Diagram of basic architecture of RBF-NN used in this work.

    Since the hidden layer consists of radial basis functions, it can directly map the input vector into the hidden space, eliminating the need for weighted connections. Consequently, the connection weights between the input layer and the hidden layer are all set to 1. The hidden layer accomplishes the nonlinear projection of the input vector, while the output layer is responsible for the final linear weighted summation. The activation function of the RBF-NN can be defined by

    R\left({x}_{p}-{c}_{i}\right) = \mathrm{exp}\left(-\frac{1}{2{\sigma }^{2}}{\left\|{x}_{p}-{c}_{i}\right\|}^{2}\right), (15)

    where ||x_p - c_i|| denotes the Euclidean norm, c_i refers to the center of the Gaussian function and σ refers to the variance of the Gaussian function. For j = 1, 2, ..., n, the output of the RBF-NN can be calculated by

    {y}_{j} = \sum _{i = 1}^{h}{\omega }_{ij}\mathrm{exp}\left(-\frac{1}{2{\sigma }^{2}}{\left\|{x}_{p}-{c}_{i}\right\|}^{2}\right), (16)

    where x_p = (x_{p1}, x_{p2}, ..., x_{pm})^T refers to the p-th input sample, c_i refers to the center of the i-th node in the hidden layer of the network, ω_{ij} refers to the weight from the hidden layer to the output layer, i = 1, 2, ..., h with h the number of nodes in the hidden layer, and y_j refers to the j-th actual output. Let d be the expected output value of the sample; the variance σ of the function can then be calculated by

    \sigma = \frac{1}{P}\sum _{j}^{m}\left({\left\|{d}_{j}-{y}_{j}{c}_{i}\right\|}^{2}\right). (17)
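    In the spirit of Eqs. (15)-(17), a compact RBF network can be sketched as follows: Gaussian hidden units with fixed centers and a linear output layer solved by least squares. The center-selection strategy, spread value and class name are simplifying assumptions rather than the exact training scheme used in this work.

```python
# A hedged sketch of an RBF network: Gaussian hidden layer (Eq. (15)) and a
# linearly weighted output layer (Eq. (16)) fitted by least squares.
import numpy as np

class SimpleRBFNN:
    def __init__(self, n_hidden=10, spread=1.0):
        self.n_hidden, self.spread = n_hidden, spread

    def _phi(self, X):
        # Gaussian activations exp(-||x - c||^2 / (2 sigma^2))
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.spread ** 2))

    def fit(self, X, y, seed=0):
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(X), size=min(self.n_hidden, len(X)), replace=False)
        self.centers = X[idx]                                  # hidden-layer centers c_i
        H = self._phi(X)
        self.w, *_ = np.linalg.lstsq(H, y, rcond=None)         # output weights w_ij
        return self

    def predict(self, X):
        return self._phi(X) @ self.w

# Example: rbf = SimpleRBFNN(n_hidden=8, spread=2.0).fit(X_train, y_train)
#          y_pred = rbf.predict(X_test)
```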

    Figure 2 shows the process of establishing the hybrid model of hot blast furnace molten iron temperature. Initially, the molten iron temperature data is decomposed into components with different frequencies and energies, including intrinsic mode functions and a residual component, using the ICEEMDAN method. Subsequently, the relevant parameters derived from KPCA and IMF1 are employed for clustering, and SVR is trained for the prediction of IMF1, while RBF-NN models are applied to predict the remaining IMFs and the residual component. The final step involves aggregating and reconstructing the predicted values.

    Figure 2.  Flowchart of the hybrid model proposed in this work.

    This new model encompasses the following key steps.

    (1) The molten iron temperature and its related data are preprocessed. The parameter data of the independent variables are standardized so that all parameters share a common scale, eliminating the dimensional differences between different parameters. KPCA is then used to reduce the dimension of the relevant parameters.

    (2) The molten iron temperature sequence is decomposed by ICEEMDAN. The temperature data is decomposed into IMFs, and the residual R is obtained through iteration. This process effectively breaks down the non-stationary and nonlinear complex sequence into a series of more regular sub-sequences.

    (3) The molten iron temperature and related parameters are clustered. IMFs and KPCA-reduced relevant parameters are extracted, and a suitable value for k is selected based on the data characteristics for clustering. Data points are divided into k clusters.

    (4) The processed time series and related data are modeled. For each of the K clusters, SVR models are trained and applied for predicting IMF1. For the remaining IMFs and the R sequence, RBF-NN is employed for temperature prediction.

    (5) The performance of the proposed model was evaluated by different evaluation indicators. The results of predictions for individual clusters and sequences are superimposed to reconstruct the overall molten iron temperature prediction. Evaluation metrics including mean absolute percentage error (MAPE), root mean squared error (RMSE), hit ratio (HR) and Pearson correlation coefficient (PCC) are used for assessment and provided as follows.

    To evaluate the model prediction accuracy, the following four error metrics: MAPE, RMSE, HR and PCC are used here and given by

    \left\{\begin{array}{l}MAPE = \frac{1}{N}\sum _{i = 1}^{N}\left|\frac{{y}_{i}-{\hat{y}}_{i}}{{y}_{i}}\right|, \\ RMSE = \sqrt{\frac{1}{N}\sum _{i = 1}^{N}{({y}_{i}-{\hat{y}}_{i})}^{2}}, \\ HR = \frac{1}{N}\sum _{i = 1}^{N}hit(i), \\ PCC = \frac{\mathrm{cov}({y}_{i}, {\hat{y}}_{i})}{{\sigma }_{{y}_{i}}{\sigma }_{{\hat{y}}_{i}}}, \end{array}\right. (18)

    where y_i is the actual molten iron temperature, \hat{y}_i is the predicted molten iron temperature, N is the length of the predicted series, hit(i) represents the condition in which the predicted value falls within a ±5 ℃ range of the actual value, cov is the covariance operator, σ_{y_i} is the standard deviation of the actual molten iron temperature time series and σ_{\hat{y}_i} is the standard deviation of the predicted molten iron temperature time series. MAPE and RMSE provide quantitative assessments of the absolute extent of errors. Smaller MAPE and RMSE values indicate smaller prediction errors, signifying higher accuracy. MAPE calculates the average percentage error, which represents the average percentage difference between the predicted value and the actual value, and can be compared across different datasets and problems.

    Moreover, RMSE considers the square of errors, providing a measurement of the overall error magnitude. Furthermore, HR finds extensive utility in industrial applications, particularly in the realm of molten iron temperature prediction. HR measures a model's ability to predict values within a ±5 ℃ range of the actual values. In numerous industrial scenarios, ensuring accuracy within a specific temperature range is of paramount importance, directly impacting the quality and safety of production processes. Last, PCC is employed to gauge the linear relationship between predicted values and actual values. It furnishes information about the strength of the association between two variables, with values closer to 1 or –1 indicating a stronger linear relationship. The comprehensive application of these metrics aids operators or engineers in evaluating various models for molten iron temperature prediction, guiding model selection and refinement, ultimately optimizing industrial processes.
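    The four metrics of Eq. (18) can be computed with a few lines of Python, as sketched below; the ±5 ℃ tolerance follows the text, and the inputs are assumed to be one-dimensional arrays of actual and predicted temperatures.

```python
# A short sketch of the evaluation metrics in Eq. (18).
import numpy as np

def evaluate(y_true, y_pred, tol=5.0):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mape = np.mean(np.abs((y_true - y_pred) / y_true))   # MAPE (x100 for a percentage)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))      # RMSE
    hr = np.mean(np.abs(y_true - y_pred) <= tol)         # hit ratio within +/-5 deg C
    pcc = np.corrcoef(y_true, y_pred)[0, 1]              # Pearson correlation
    return {"MAPE": mape, "RMSE": rmse, "HR": hr, "PCC": pcc}

# Example: evaluate(actual_temps, predicted_temps)
```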

    Permutation entropy is a nonlinear method for detecting abrupt changes in temperature data and offers a concise way to describe complex systems that were previously challenging to quantify. The magnitude of the permutation entropy value H_p(m) represents the complexity of a time series: more orderly time series correspond to smaller values of permutation entropy, while more complex time series correspond to larger values. The calculation process is simple, and it exhibits high resistance to interference and robustness. Assume that {x_i} is a one-dimensional time series with embedding dimension m and time delay τ. A sliding window (m, τ) passes sequentially through the sequence, and the one-dimensional time series is reconstructed by

    \mathit{\boldsymbol{Y}} = \left[\begin{array}{cccc}{x}_{1}& {x}_{1+\tau }& \cdots & {x}_{1+(m-1)\tau }\\ \ \vdots & \ \vdots & \ \vdots & \cdots \\ {x}_{j}& {x}_{j+\tau }& \cdots & {x}_{j+(m-1)\tau }\\ \ \vdots & \ \vdots & \ \vdots & \cdots \\ {x}_{k}& {x}_{k+\tau }& \cdots & {x}_{k+(m-1)\tau }\end{array}\right] , (19)

    where i = \mathrm{1, 2}, \cdots, N , m > 1 , \tau > 0 and each row of the matrix serves as a reconstruction component, given by

    {\mathit{\boldsymbol{Y}}}_{j} = \left[{x}_{j}, {x}_{j+\tau }, \cdots , {x}_{j+(m-1)\tau }\right] , (20)

    where j = \mathrm{1, 2}, \cdots, k . The reconstruction components are arranged in ascending order based on their numerical values, resulting in a total of m! possible permutations. Using {P}_{1}, {P}_{2}, \cdots, {P}_{k} to represent the probabilities of occurrence of each permutation, the permutation entropy is defined by

    {H}_{p}\left(m\right) = -\sum _{i = 1}^{k}{P}_{i}\mathrm{ln}\left({P}_{i}\right) , (21)

    where the value of m is selected according to the open literature [35].
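    A direct implementation of Eqs. (19)-(21) is sketched below; it counts the ordinal patterns of each embedded vector Y_j and applies Eq. (21), with m = 5 and τ = 1 as used later in this work, and without any normalization of the entropy.

```python
# A hedged sketch of permutation entropy, Eqs. (19)-(21).
import numpy as np
from collections import Counter

def permutation_entropy(x, m=5, tau=1):
    x = np.asarray(x, float)
    k = len(x) - (m - 1) * tau                              # number of embedded vectors Y_j
    patterns = Counter(tuple(np.argsort(x[j:j + m * tau:tau])) for j in range(k))
    p = np.array(list(patterns.values()), float) / k        # pattern probabilities P_i
    return float(-np.sum(p * np.log(p)))                    # Eq. (21)

# Example: permutation_entropy(imf1_series)   # compare with the values in Figure 5
```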

    In this work, an on-site dataset comprising 80 data points from a metallurgical factory was utilized to validate the effectiveness and accuracy of the proposed hybrid model. The smelting process involves ore that has a significant impact on the blast furnace operation. As a result, the furnace conditions often exhibit substantial fluctuations, increasing the complexity of the modeling task. Modeling with these data enables effective testing of the model generalization performance. Parameters with a substantial impact on blast furnace temperature were chosen for the investigation. Considering the complexity of the blast furnace smelting environment and the potential for equipment malfunctions affecting data integrity, missing values in the sample data were addressed using the mean imputation method. Because the sample size is relatively limited and the data were collected during stable blast furnace operation, the dataset underwent preprocessing. The numerical ranges of the molten iron temperature and related parameters are presented in Table 1.

    Table 1.  Character parameters for molten iron temperature prediction.
    Variable name Variable description Value range Unit
    Y molten iron temperature 1492.5~1519 ℃
    X1 silicon content 0.43~0.73 %
    X2 blowing rate 4775~5050 m3·min−1
    X3 coal injection 13~17 kg·t−1
    X4 oxygen enrichment rate 0.79~1.19 %
    X5 permeability 26294~26585 ml
    X6 wind temperature 1040~1060 ℃
    X7 blast furnace top pressure 0.184~0.186 kPa
    X8 comprehensive load 3.10~3.22 kPa
    X9 gas utilization rate 40.97~46.04 %


    After preprocessing, the numerical ranges of the molten iron temperature are 1492.5℃ to 1519℃, silicon content ranges from 0.43% to 0.73%, the blowing rate varies between 4775 and 5050 m3·min-1, coal injection ranges from 13 to 17 kg·t-1, oxygen enrichment rate varies between 0.79% and 1.19%, permeability ranges from 26294 to 26585 ml, wind temperature varies between 1040℃ and 1060℃, blast furnace top pressure is in the range of 0.184 kPa to 0.186 kPa, comprehensive load is between 3.10 kPa and 3.22 kPa and gas utilization rate varies from 40.97% to 46.04%. These parameters provide critical data concerning molten iron production and blast furnace operations, enabling the monitoring and control of the smelting process to ensure production stability and quality.

    Considering the diverse units and dimensions of the smelting data relevant parameters, the execution of data standardization is a pivotal procedure to ensure the effectiveness of modeling and investigation. The fundamental objective of data standardization is to mitigate the potential impact of varying magnitude among different parameters on the modeling outcomes. The minimum-maximum normalization method is employed to map all parameters onto a uniform scale, typically within the range of 0 to 1. This practice contributes to maintaining consistency in both modeling and investigation. The scatter plot of standardized relevant parameters is depicted in Figure 3. As illustrated in this figure, the scatter plot of standardized sample data for the relevant parameters exhibits a consistent data range. This further enhances our comprehension of the relationships among these parameters, ultimately improving the accuracy and interpretability of our modeling efforts. This standardization process plays a pivotal role in ensuring the scientific rigor and reliability.

    Figure 3.  Normalized line chart of related factors in terms of original data.
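    For reference, the min-max normalization described above can be expressed in a few lines; the column-wise treatment of the parameters is an assumption.

```python
# A one-line illustration of min-max normalization onto [0, 1], applied column-wise.
import numpy as np

def min_max_normalize(X):
    X = np.asarray(X, float)
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
```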

    In parallel, to reduce the data dimensionality and enhance computational efficiency, the KPCA method is applied to investigate nine variables (i.e., factors affecting molten iron temperature) within the input blast furnace molten iron model. The results of this investigation are presented in Table 2. Based on two scientific evaluation criteria, variance contribution rate and cumulative variance contribution rate, parameter selection was performed to eliminate less influential feature parameters in predicting. The first seven principal components were identified, with their cumulative variance contribution rate exceeding 90%. This observation signifies that these seven principal components effectively capture most of the dataset information, simultaneously reducing data dimensionality while preserving data richness. As a result, these seven principal components were retained for subsequent investigation and modeling. This selection process contributes to data simplification, reduction of redundancy, improved modeling efficiency and ultimately, ensures that the predefined objectives would be achieved with greater accuracy and efficiency.

    Table 2.  The calculation results of relevant parameters using KPCA method.
    No. Eigenvalues Eigenvalue variance contribution (%) Cumulative variance contribution (%)
    1 1.26 16.64 16.64
    2 1.09 14.33 30.97
    3 1.06 13.88 44.85
    4 1.00 13.11 57.96
    5 0.98 12.89 70.86
    6 0.87 11.41 82.27
    7 0.70 9.24 91.51
    8 0.65 8.49 100.00


    To fully extract the regular features in complex signals to enhance prediction performance, the ICEEMDAN algorithm is utilized to decompose 80 sets of data. During this process, the noise standard deviation is set to 1, the number of realizations is a specific value and the maximum allowed sifting iteration is 5000. Figure 4 presents the decomposition results of the original molten iron temperature sequence, comprising five modal components and one residual component, each characterized by different frequency scales.

    Figure 4.  Results of iron temperature decomposition using ICEEMDAN.

    Additionally, the Pearson correlation coefficients between each modal component (and the residual component) and the original time series are computed. As shown in this figure, the ICEEMDAN method decomposes the temperature dataset into five IMFs and one residue. This decomposition process exhibits a distinct trend: the frequency of oscillations gradually decreases as more IMFs are extracted, leading to a gradual smoothing of the data. These findings highlight the excellent performance of the ICEEMDAN decomposition method in capturing local features and periodic variations within the sequence. This implies a better understanding of crucial characteristics within the data, including local peaks and troughs, as well as other nonlinear changes. This decomposition process contributes to providing more accurate and reliable input data for subsequent predictive models. Additionally, it offers a richer data representation that can be utilized for more in-depth investigation and modeling.

    IMF1 demonstrates a notably strong correlation with the original sequence compared to other modal components, with a correlation coefficient as high as 0.7680. This observation reveals that IMF1 encapsulates highly relevant feature information from the original dataset, encompassing the primary trends, periodic components and key information related to the original molten iron temperature sequence. This emphasizes the pivotal role of IMF1 in the decomposition process. Due to its close association with the original sequence, IMF1 proves highly effective in describing the overall dynamic characteristics of the molten iron temperature, thereby enhancing the reliability of modeling for capturing future trends and changes. Furthermore, IMF1 provides insights into significant signal characteristics within the original temperature data, including primary trends and prominent periodic components. Particularly, for forecasting tasks necessitating an in-depth comprehension of the dynamic properties of molten iron temperature, the relevance of IMF1 positions it as a critical modeling element, facilitating a better understanding and prediction of the complex behavior of molten iron temperature.

    To further investigate the relationship between the decomposed sequences and the original time series, permutation entropy is employed to quantify the complexity of the decomposed sequences and the molten iron temperature sequence, as illustrated in Figure 5. In the calculation of permutation entropy, m = 5 and τ = 1. This investigation serves to explore the latent associations between these sequences, providing additional insights for a more comprehensive understanding of the dynamic characteristics of the molten iron temperature. As observed in this figure, the permutation entropy values continuously decrease as the number of modal components increases, reaching 0.27574 for the final modal component IMF5 and 0.03356 for the residual. In contrast, the permutation entropy of IMF1 is 0.65027 and that of the original molten iron temperature sequence is 0.65528, with little significant difference between them.

    Figure 5.  Permutation entropy values for the original molten iron temperature sequence and the decomposed sequences.

    This indicates that, during the decomposition process, with the increasing IMFs, the complexity of the sequence gradually decreases, as reflected in the decreasing trend of permutation entropy values. This trend suggests that in the initial IMFs, especially IMF1, most of the information and complexity from the original data are retained, making it a modal component with high information content. Therefore, IMF1 maintains a relatively high level of complexity in describing molten iron temperature, and its permutation entropy is very close to that of the original data sequence.

    This observation provides insights into the loss of information during the decomposition process and emphasizes the importance of IMF1 in retaining data complexity and essential features. IMF1 retains the same level of complexity as the original data while preserving critical information from the original dataset. Therefore, through visual observation of the decomposed sequences, correlation analysis and permutation entropy analysis, this study chooses to use IMF1 for regression modeling rather than employing other parameters for time series modeling. The primary consideration is that sequence one contains unique characteristics and causal relationships related to molten iron temperature compared to other sequences. Additionally, this decision simplifies the modeling process and reduces complexity, ensuring that the model is both effective and efficient in capturing the dynamics of molten iron temperature.

    The blast furnace hot metal temperature data obtained after ICEEMDAN decomposition are used for modeling. IMF1 and the dimensionally reduced related parameters are subjected to K-means clustering, dividing the data into two clusters. These two clusters are then separately input into SVR for training and testing. The training and testing data sets are split in an 8:2 ratio, with 63 data points used for training and 17 data points for testing. The training and testing of the RBF-NN are also performed; the remaining IMFs and the residual are used for this part. In this setup, each sample is input into an RBF-NN with a sample length of 4, meaning that the preceding four temperature data points are used to predict the following temperature data point. The parameters of the eight models are set as shown in Table 4.

    Table 4.  Training parameters in terms of the various existing and proposed models.
    Model Related training parameters
    BP-NN Iteration count is 1000
    Error threshold is 1×10−6
    Learning rate is 0.01
    SVR Penalty factor is c=4.0
    Radial basis function parameter is g=0.8
    KPCA-SVR Penalty factor is c=4.0
    Radial basis function parameter is g=0.8
    ELM Number of hidden layer nodes is 20
    Activation function is sigmoid
    ICEEMDAN-ELM Number of hidden layer nodes is 20
    Activation function is sigmoid
    RBF-NN Radial basis function spread is 8000
    ICEEMDAN-RBF-NN Radial basis function spread is 8000
    Hybrid model K-means K=2
    SVR Penalty factor is c=4.0
    Radial basis function parameter g=0.8
    RBF-NN Radial basis function spread is 8000

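    Putting the pieces together, a hedged sketch of the training and prediction loop described above (8:2 split, clustered SVR for IMF1 and a sliding window of length 4 feeding the RBF-NN for the remaining components) might look as follows; it reuses the illustrative functions sketched earlier, and every name and hyperparameter is an assumption rather than the exact on-site configuration.

```python
# A hedged end-to-end sketch of the hybrid workflow: ICEEMDAN decomposition,
# K-means + SVR for IMF1 (KPCA-reduced factors as inputs) and a sliding-window
# RBF-NN for the remaining IMFs and the residual. Reuses the earlier sketches.
import numpy as np

def make_windows(series, width=4):
    """Use the previous `width` points to predict the next one."""
    X = np.array([series[i:i + width] for i in range(len(series) - width)])
    return X, series[width:]

def hybrid_predict(temperature, factors, split=0.8):
    imfs, residue = iceemdan(np.asarray(temperature, float))      # decomposition
    Z = reduce_parameters(min_max_normalize(factors))              # KPCA inputs
    n_train = int(split * len(temperature))

    # One SVR per cluster for IMF1, driven by the reduced factors.
    km = fit_clusters(Z[:n_train], k=2)
    svrs = {c: train_cluster_svr(Z[:n_train][km.labels_ == c],
                                 imfs[0][:n_train][km.labels_ == c])
            for c in range(2)}
    imf1_pred = np.array([svrs[assign_cluster(km, z)].predict(z[None, :])[0]
                          for z in Z[n_train:]])

    # Sliding-window RBF-NN for the remaining IMFs and the residual.
    rest_pred = np.zeros(len(temperature) - n_train)
    for comp in list(imfs[1:]) + [residue]:
        Xw, yw = make_windows(comp)
        rbf = SimpleRBFNN(n_hidden=8, spread=2.0).fit(Xw[:n_train - 4], yw[:n_train - 4])
        rest_pred += rbf.predict(Xw[n_train - 4:])
    return imf1_pred + rest_pred                                   # reconstructed forecast
```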

    The prediction results are depicted in Figure 6. As shown in this figure, the proposed hybrid model can accurately forecast temperature fluctuations. The alignment between the model output curve and the actual data curve is quite high. This approach is beneficial for effective advanced temperature control in the furnace.

    Figure 6.  Results of the hybrid model and temperature predictions of the other seven models for molten iron temperature.

    To thoroughly validate the performance of the proposed model, a comparative study was conducted with four classical prediction methods. The compared models include the BP-NN, SVR, RBF-NN and ELM models. Additionally, comparative experiments were conducted using the ICEEMDAN-ELM and ICEEMDAN-RBF-NN models to assess the improvements brought by the ICEEMDAN algorithm in model performance. To quantitatively evaluate the model performance, several metrics were used. In fact, MAPE measures the average deviation of the forecasted results from the actual results, RMSE represents the degree of deviation between model predictions and measurement reference values, PCC characterizes the correlation between the prediction curve and the actual curve and HR tests the accuracy of the model predictions. The results of these evaluations are presented in Table 5.

    As indicated in Table 5: (1) From the perspective of prediction accuracy and reliability, the composite model proposed in this paper achieves a MAPE of 0.15 and an RMSE of 2.94, both significantly lower than those of traditional models. Specifically, the MAPE and RMSE are reduced by 54.55% and 49.40% compared to BP-NN, 50.00% and 53.18% compared to SVR, 54.55% and 56.83% compared to ELM and 46.43% and 53.33% compared to RBF-NN. This demonstrates a substantial improvement in prediction accuracy and reliability with the new composite model. (2) Considering the overall performance of the model, the proposed model achieves a PCC of 0.87, indicating a strong correlation between the predicted results and the actual values. In contrast, traditional models exhibit only a weak correlation (i.e., the proposed model's PCC is 0.44, 0.73, 0.77 and 0.72 higher than those of the traditional models). The proposed model shows a stronger ability to capture underlying patterns. (3) From the perspective of hit rates, the proposed model achieves a hit rate as high as 94.12% when allowing an error margin of ±5 ℃. This represents a significant improvement over traditional models (i.e., an increase of 41.18%, 29.41%, 23.53% and 17.65% compared to the traditional models). These results demonstrate that within an acceptable error range, the proposed model consistently performs well and is more reliable. (4) Compared to single models, the use of the ICEEMDAN algorithm significantly enhances the prediction accuracy and hit rate of the composite model. This enhancement is particularly noticeable in the PCC metric, where models employing the ICEEMDAN algorithm outperform single models. ICEEMDAN-ELM exhibits a 0.69 increase in PCC compared to the ELM model, and ICEEMDAN-RBF-NN shows a 0.63 increase in PCC compared to the RBF-NN model. This suggests that the use of decomposition algorithms can improve the model prediction stability. (5) Furthermore, compared to the ICEEMDAN-ELM and ICEEMDAN-RBF-NN models that use decomposition algorithms, the combination of K-means and SVR to enhance the first decomposed sequence (i.e., IMF1) has a significant impact. Hit rates are increased by 23.53% and 11.77%, respectively. This indicates that the composite model proposed here is better suited for molten iron temperature prediction, as evidenced by the lower MAPE and RMSE values and the PCC value close to 1, demonstrating its excellence in accuracy, stability and interpretability. (6) In comparison to the ICEEMDAN-RBF-NN model, the proposed novel model exhibits improvements in all four indicators, namely MAPE, RMSE, PCC and HR (i.e., with improvements of 16.67%, 21.81%, 11.54% and 14.29%, respectively). This indicates that after ICEEMDAN decomposition, the specific treatment of the IMF1 sequence, which contains most of the original information and is the most complex, is crucial for enhancing accuracy.

    Table 5.  Evaluation indices of predictions obtained by the proposed model and other comparative models.
    No. Model MAPE RMSE HR (±5) PCC
    1 BP-NN 0.33 5.81 52.94% 0.43
    2 RBF-NN 0.28 6.30 76.47% 0.15
    3 ELM 0.33 6.81 70.59% 0.10
    4 SVR 0.30 6.28 64.71% 0.14
    5 KPCA-SVR 0.32 6.28 58.82% 0.03
    6 ICEEMDAN-ELM 0.20 4.14 70.59% 0.79
    7 ICEEMDAN-RBF-NN 0.18 3.76 82.35% 0.78
    8 Hybrid model 0.15 2.94 94.12% 0.87

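    As a quick arithmetic check, the relative MAPE and RMSE reductions quoted in point (1) above can be recomputed directly from the values in Table 5:

```python
# Recomputing the relative improvements of the hybrid model from Table 5.
baselines = {"BP-NN": (0.33, 5.81), "SVR": (0.30, 6.28),
             "ELM": (0.33, 6.81), "RBF-NN": (0.28, 6.30)}
hybrid_mape, hybrid_rmse = 0.15, 2.94

for name, (mape, rmse) in baselines.items():
    print(f"{name}: MAPE -{100 * (mape - hybrid_mape) / mape:.2f}%, "
          f"RMSE -{100 * (rmse - hybrid_rmse) / rmse:.2f}%")
# BP-NN: -54.55% / -49.40%, SVR: -50.00% / -53.18%,
# ELM: -54.55% / -56.83%, RBF-NN: -46.43% / -53.33%
```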

    To enhance the predictive accuracy of the blast furnace temperature forecasting model, a composite model incorporating ICEEMDAN, K-means, SVR and RBF-NN techniques for molten iron temperature prediction is proposed and validated. This new model amalgamates the strengths of ICEEMDAN, K-means, SVR and RBF-NN, enabling effective modeling and prediction of the intricate characteristics of the molten iron temperature in the blast furnace smelting process. The findings reveal the following.

    (1) The application of decomposition techniques facilitates the superior extraction of pertinent features from the temperature data. This, in turn, enhances prediction accuracy and provides a more precise and interpretable decomposition of the molten iron temperature sequence.

    (2) In comparison to traditional models including BP-NN, SVR and ELM, the new model exhibits a higher degree of precision in forecasting the molten iron temperature, thereby increasing the reliability and stability of the forecasted temperature.

    (3) Simultaneously exploring the relationships between relevant factors and time series enables a deeper understanding of the complexities involved in the blast furnace smelting process. This in-depth investigation contributes valuable insights to the prediction model.

    (4) Within the ICEEMDAN decomposition of the furnace temperature time series, the first decomposed sequence, namely IMF1, demonstrates the closest similarity to the original sequence in terms of correlation and permutation entropy. This suggests that IMF1 contains a substantial portion of the original sequence's fluctuation information, underscoring its significance.

    In summary, our findings in this paper demonstrate the effectiveness and improvements of the proposed hybrid model in predicting molten iron temperatures. Despite the currently limited sample size of the molten iron temperature datasets, this work provides valuable insights for optimizing and controlling the blast furnace smelting process and holds significant potential for various applications. Future work could involve expanding the model application scope and considering more real-world scenarios and factors to further enhance prediction accuracy and reliability. However, the need for interpretability is crucial for deploying intelligent algorithms as ironmaking processes undergo digital transformation. The aim is to enhance overall model understanding, improving its robustness and adaptability to diverse production environments. Therefore, future work would also focus on utilizing interpretable models or methods to offer a more intuitive understanding, aiming to guide the effective implementation of digitalization in iron and steel production. In that sense, upcoming efforts would specifically explore the impact of larger sample sizes on establishing the molten iron temperature model.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors acknowledge the financial support from the Yunnan Fundamental Research Project, China (No. 202201BE070001-026), Natural Science Foundation of Yunnan Province, China (No. 202101AU070031), Scientific and Technological Talent and Platform Project of Yunnan Province, China (No. 202305AO350011), Yibin Federation of Social Sciences Circles (No. 2023YBSKL101), Interdisciplinary Research Project of Kunming University of Science and Technology (No. KUST-xk2022001), Open Foundation of State Environmental Protection Key Laboratory of Mineral Metallurgical Resources Utilization and Pollution Control (No. HB202204) and Young Elite Scientist Sponsorship Program by China Association for Science and Technology, China (No. YESS20210106).

    The authors declare that they have no conflicts of interest.



    [1] M. Frechet, Sur quelques points du calcul fonctionnel, Rend. Circ. Mat. Palerm., 22 (1906), 1–72.
    [2] I. A. Bakhtin, The contraction mapping principle in almost metric spaces, Funct. Anal., 30 (1989), 26–37.
    [3] S. Czerwik, Contraction mappings in b-metric spaces, Acta Math. Inform. Univ. Ostra, 1 (1993), 5–11.
    [4] V. Berinde, M. Păcurar, The early development in fixed point theory on b-metric spaces: A brief survey and some important related aspects, Carpathian J. Math., 38 (2022), 523–538. https://doi.org/10.37193/CJM.2022.03.01 doi: 10.37193/CJM.2022.03.01
    [5] J. Brzdek, Comments on the fixed point results in classes of function with values in a b-metric space, RACSAM Rev. R. Acad. A, 116 (2022), 1–17. https://doi.org/10.1007/s13398-021-01173-6 doi: 10.1007/s13398-021-01173-6
    [6] M. Paluszyński, K. Stempak, On quasi-metric and metric spaces, Proc. Am. Math. Soc., 137 (2009), 4307–4312. https://doi.org/10.1090/S0002-9939-09-10058-8 doi: 10.1090/S0002-9939-09-10058-8
    [7] M. A. Khamsi, N. Hussain, KKM mappings in metric type spaces, Nonlinear Anal., 7 (2010), 3123–3129. https://doi.org/10.1016/j.na.2010.06.084
    [8] A. Branciari, A fixed point theorem of Banach-Caccioppoli type on a class of generalized metric spaces, Publ. Math. Debr., 57 (2000), 31–37. https://doi.org/10.1023/A:1009869405384
    [9] M. Jleli, B. Samet, On a new generalization of metric spaces, J. Fixed Point Theory Appl., 2018 (2018), 128.
    [10] A. E. Al-Mazrooei, J. Ahmad, Fixed point theorems for rational contractions in \mathcal{F}-metric spaces, J. Mat. Anal., 10 (2019), 79–86.
    [11] B. Samet, C. Vetro, P. Vetro, Fixed point theorem for \alpha -\psi contractive type mappings, Nonlinear Anal., 75 (2012), 2154–2165. https://doi.org/10.1016/j.na.2011.10.014
    [12] J. H. Asl, S. Rezapour, N. Shahzad, On fixed points of \alpha-\psi contractive multifunctions, Fixed Point Theory Appl., 2012 (2012), 212. https://doi.org/10.1186/1687-1812-2012-212 doi: 10.1186/1687-1812-2012-212
    [13] A. Hussain, T. Kanwal, Existence and uniqueness for a neutral differential problem with unbounded delay via fixed point results, Trans. A. Razmadze Math., 172 (2018), 481–490. https://doi.org/10.1016/j.trmi.2018.08.006
    [14] L. A. Alnaser, D. Lateef, H. A. Fouad, J. Ahmad, Relation theoretic contraction results in \mathcal{F}-metric spaces, J. Nonlinear Sci. Appl., 12 (2019), 337–344. https://doi.org/10.22436/jnsa.012.05.06
    [15] M. Alansari, S. S. Mohammed, A. Azam, Fuzzy fixed point results in \mathcal{F}-metric spaces with applications, J. Funct. Space., 2020 (2020), 5142815. https://doi.org/10.1155/2020/5142815 doi: 10.1155/2020/5142815
    [16] L. A. Alnaser, J. Ahmad, D. Lateef, H. A. Fouad, New fixed point theorems with applications to non-linear neutral differential equations, Symmetry, 11 (2019), 602. https://doi.org/10.3390/sym11050602 doi: 10.3390/sym11050602
    [17] S. A. Al-Mezel, J. Ahmad, G. Marino, Fixed point theorems for generalized (\alpha \beta -\psi )-contractions in \mathcal{F}-metric spaces with applications, Mathematics, 8 (2020), 584. https://doi.org/10.3390/math8040584 doi: 10.3390/math8040584
    [18] O. Alqahtani, E. Karapınar, P. Shahi, Common fixed point results in function weighted metric spaces, J. Inequalities Appl., 164 (2019), 1–9. https://doi.org/10.1186/s13660-019-2123-6
    [19] D. Lateef, J. Ahmad, Dass and Gupta's fixed point theorem in \mathcal{F}-metric spaces, J. Nonlinear Sci. Appl., 12 (2019), 405–411. https://doi.org/10.22436/jnsa.012.06.06
    [20] A. Hussain, F. Jarad, E. Karapinar, A study of symmetric contractions with an application to generalized fractional differential equations, Adv. Differ. Equ., 2021 (2021), 300. https://doi.org/10.1186/s13662-021-03456-z
    [21] A. Hussain, Fractional convex type contraction with solution of fractional differential equation, AIMS Math., 5 (2020), 5364–5380. https://doi.org/10.3934/math.2020344 doi: 10.3934/math.2020344
    [22] A. Hussain, Solution of fractional differential equations utilizing symmetric contraction, J. Math., 2021 (2021), 1–17. https://doi.org/10.1155/2021/5510971
    [23] Z. Mitrovic, H. Aydi, N. Hussain, A. A. Mukheimer, Reich, Jungck, and Berinde common fixed point results on \mathcal{F}-metric spaces and an application, Mathematics, 7 (2019), 2–11. https://doi.org/10.3390/math7050387 doi: 10.3390/math7050387
    [24] M. Mudhesh, N. Mlaiki, M. Arshad, A. Hussain, E. Ameer, R. George, et al., Novel results of \alpha _{\ast }-\psi -\Lambda -contraction multivalued mappings in \mathcal{F}-metric spaces with an application, J. Inequalities Appl., 113 (2022), 1–19. https://doi.org/10.1186/s13660-022-02842-9 doi: 10.1186/s13660-022-02842-9
    [25] A. Shoaib, Q. Mahmood, A. Shahzad, M. S. M. Noorani, S. Radenović, Fixed point results for rational contraction in function weighted dislocated quasi-metric spaces with an application, Adv. Differ. Equ., 310 (2021), 1–15. https://doi.org/10.1186/s13662-021-03458-x
    [26] A. S. Anjum, C. Aage, Common fixed point theorem in \mathcal{F}-metric spaces, J. Adv. Math. Stud., 15 (2022), 357–365.
    [27] A. Latif, R. F. Al Subaie, M. O. Alansari, Fixed points of generalized multi-valued contractive mappings in metric type spaces, J. Nonlinear Var. Anal., 6 (2022), 123–138.
    [28] J. Brzdek, E. Karapınar, A. Petruşel, A fixed point theorem and the Ulam stability in generalized dq-metric spaces, J. Math. Anal. Appl., 467 (2018), 501–520. https://doi.org/10.1016/j.jmaa.2018.07.022
    [29] W. Hu, Q. Zhu, Existence, uniqueness and stability of mild solutions to a stochastic nonlocal delayed reaction-diffusion equation, Neural Process. Lett., 53 (2021), 3375–3394. https://doi.org/10.1007/s11063-021-10559-x
    [30] X. Yang, Q. Zhu, Existence, uniqueness, and stability of stochastic neutral functional differential equations of Sobolev-type, J. Math. Phys., 56 (2015), 122701. https://doi.org/10.1063/1.4936647
    [31] A. Djoudi, R. Khemis, Fixed point techniques and stability for natural nonlinear differential equations with unbounded delays, Georgian Math. J., 13 (2006), 25–34. https://doi.org/10.1515/GMJ.2006.25
  • This article has been cited by:

    1. Yonghua Liu, Hao Deng, Hanqi Gao, Wei Ni, Research on investment evaluation of highway projects based on system dynamics model, 2024, 9, 2473-6988, 20326, 10.3934/math.2024989
    2. Hualun Zhou, Yibo He, Binzhao Li, Dazhou Song, Qiang Zhu, Yihong Li, Ironmaking process under artificial intelligence technology: A review, 2024, 0301-9233, 10.1177/03019233241277361
    3. Yinzhen Tan, Wei Xu, Kai Yang, Shahab Pasha, Hua Wang, Min Wang, Qingtai Xiao, Predicting cobalt ion concentration in hydrometallurgy zinc process using data decomposition and machine learning, 2025, 962, 00489697, 178420, 10.1016/j.scitotenv.2025.178420
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
