Water quality is a continuing concern worldwide: poor water quality can cause disease and poisoning and may even endanger human life. Accurate water quality prediction is therefore of great significance for the efficient management of water resources. However, existing prediction algorithms require considerable computation time and offer limited accuracy. In recent years, neural networks have been widely used to predict water quality, and the computational power of individual neurons has attracted increasing attention. This research uses a novel dendritic neuron model (DNM) to predict water quality. In a DNM, dendrites combine synapses in different states rather than applying a simple linear weighting, which gives the model better fitting ability than traditional neural networks. In addition, a recent optimization algorithm, AMSGrad, is introduced to improve the performance of the Adam-trained dendritic neuron model (ADNM). The performance of ADNM is compared with that of traditional neural networks, and simulation results show that ADNM outperforms them on mean square error, root mean square error and other indicators. Furthermore, the stability and accuracy of ADNM are better than those of other conventional models. Using the trained model, policymakers and managers can predict water quality, and the real-time water quality level at a monitoring site can be presented so that measures can be taken to avoid diseases caused by water quality problems.
Citation: Jing Cao, Dong Zhao, Chenlei Tian, Ting Jin, Fei Song. Adopting improved Adam optimizer to train dendritic neuron model for water quality prediction[J]. Mathematical Biosciences and Engineering, 2023, 20(5): 9489-9510. doi: 10.3934/mbe.2023417
1. Introduction
The vehicle routing problem (VRP) is a class of problems that entails finding an optimal set of routes for a fleet of vehicles to serve a set of customers. The objective of the VRP is to minimize the cost of the vehicle routes, each of which originates and terminates at a depot. The VRP was first proposed by Dantzig and Ramser [1] and has since been extended into different variants: the VRP with time windows (VRPTW) [2,3,4,5,6,7], the capacitated VRP (CVRP) [8,9,10,11,12] and other VRPs [13,14,15,16,17]. In recent years, VRPs have attracted much interest [18]. Exact solutions of VRPs generally rely on branch-and-price [13,19], branch-and-cut [20,21,22] and branch-cut-and-price algorithms [22,23]. Ben Ticha et al. [24,25,26] proposed well-performing branch-and-price algorithms for VRPs on multigraphs with multiple time-related attributes and a homogeneous vehicle fleet. In such approaches, the linear relaxation at each branch-and-bound node is solved by column generation, which has proved to be a powerful approach [9,27,28]. Column generation has been widely used to solve a variety of large mathematical programs, such as vehicle routing and crew scheduling problems [3,29,30].
Many authors have suggested solving a shortest path problem with resource constraints (SPPRC), introduced by Desrochers [31] as a multi-dimensional generalization of the shortest path problem with time windows. Resolution methods and applications of the SPPRC have been extensively discussed in the literature [3,29,31,32,33]; the proposed strategies range from exact methods to heuristics and meta-heuristics.
Exact SPPRC resolution techniques usually rely on dynamic programming, which has pseudopolynomial complexity. Desrochers and Soumis [32] proposed a label-correcting reaching algorithm that extends the Ford-Bellman algorithm to take resource constraints into account; the algorithm has been shown to be successful under tight resource constraints. Feillet et al. [34] adapted the Desrochers algorithm to solve the ESPPRC (SPPRC with elementary paths) pricing problems exactly in the context of the VRPTW. Since then, this algorithm has been the backbone of a number of column-generation-based algorithms applied to several important problems, such as vehicle routing and crew management [16,26]. Table 1 summarizes some of the reviewed literature that applied column generation to VRPs, classified by the solution approach used for the sub-problem (SPPRC or ESPPRC).
Table 1. Summary of some papers, classified by the sub-problem solution method.
The main drawback of exact resolution techniques is their computational time, which increases with the number of resources, making them suitable for small-scale problems only. To speed up the computations and obtain solutions in a reasonable time, some authors have proposed approximate, heuristic and meta-heuristic techniques [14,18,50,51,52,53,54,55]. These algorithms yield near-optimal solutions and are more suitable for real-life, larger-scale applications (e.g., thousands of customers, dozens of depots and numerous vehicles subject to a variety of constraints). Nagih and Soumis [14] proposed a basic heuristic to quickly produce feasible solutions, but this approach is limited to acyclic graphs and its speed has not been proved. Other authors have used simulated annealing [56], genetic algorithms [52,53], non-dominated sorting genetic algorithms [54] and simplified swarm optimization [55]. Meta-heuristic methods avoid getting trapped in local optima, but this comes at the price of slower convergence.
In this paper, we propose a Lagrangian relaxation based algorithm to approximately solve the SPPRC on arbitrary graphs, both acyclic and cyclic. In our approach, called "dominance on Lagrangian cost for the SPPRC" (DLC-SPPRC), dominance is applied to a subset of the resources, while the remaining resources are dualized in the objective. Optimized parameter update schemes (descent steps, Lagrange multipliers, subgradients) are used to ensure better performance.
The rest of the paper is organized as follows. Section 2 formally describes the SPPRC. Section 3 presents our approximate solution method, which speeds up the computations while maintaining a good approximate solution. In Section 4, we present an application to column generation and show the results obtained on various VRP datasets. The paper ends with the conclusions.
2. SPPRC
2.1. Notations and preliminaries
Consider a graph $G=(V,A)$, where $V=N\cup\{o,d\}$ comprises a set of nodes $N=\{v_1,\dots,v_n\}$ to be visited, an origin $o=v_0$ and a destination $d=v_{n+1}$, and where $A\subset V\times V$ is a set of arcs. We denote by $R$ a set of resources of cardinality $|R|$. An arc $(i,j)\in A$ costs $c_{ij}$ and consumes a quantity $t_{ij}^r\ge 0$ of each resource $r\in R$. We suppose that the resources satisfy the triangle inequality. If $(i,j)\in A$, then node $i$ is called a predecessor of $j$ and $j$ a successor of $i$. A path is a finite sequence of nodes $P=(v_{k_0}=o,\,v_{k_1},\dots,v_{k_m}=d)$ such that $(v_{k_i},v_{k_{i+1}})\in A$ for all $i\in\{0,\dots,m-1\}$.
To each node $v_j$ of $P$, we associate $C_j^P$, the sum of the costs of the arcs of $P$ up to $v_j$, and $T_j^{r,P}$, the amount of resource $r\in R$ used to reach $v_j$. For simplicity, we drop the index $P$ and write $C_j$ and $T_j^r$. The path $P$ is said to be feasible if it satisfies the resource constraints
$$T_{k_i}^r\in[a_{k_i}^r,\,b_{k_i}^r],\quad\forall r\in R,\ \forall i\in\{0,\dots,m\}.$$
A path $P$ can also be represented by $X=(x_{ij})_{(i,j)\in A}\in\{0,1\}^{|A|}$, where $x_{ij}=1$ if the arc $(v_i,v_j)$ belongs to $P$, and $0$ otherwise.
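To make the notation concrete, the following sketch (not taken from the paper; the toy instance, names and numbers are illustrative assumptions) accumulates the cost $C_j$ and the resource use $T_j^r$ along a path and checks the windows $[a_j^r, b_j^r]$ at each visited node:

```python
# Arc data: (i, j) -> (c_ij, [t_ij^r for r in R]); one resource here (|R| = 1).
arcs = {
    (0, 1): (3.0, [2.0]),
    (1, 2): (4.0, [3.0]),
}
# Resource windows per node: j -> [(a_j^r, b_j^r) for r in R].
windows = {0: [(0.0, 0.0)], 1: [(0.0, 10.0)], 2: [(0.0, 6.0)]}

def evaluate_path(path, arcs, windows):
    """Return (cost C, resource vector T, feasible?) for a node sequence."""
    n_res = len(next(iter(windows.values())))
    cost, T = 0.0, [0.0] * n_res
    for i, j in zip(path, path[1:]):
        c, t = arcs[(i, j)]
        cost += c
        for r in range(n_res):
            # waiting is allowed: arrival is pushed up to the window start a_j^r
            T[r] = max(windows[j][r][0], T[r] + t[r])
        if any(T[r] > windows[j][r][1] for r in range(n_res)):
            return cost, T, False   # window upper bound b_j^r violated
    return cost, T, True

cost, T, ok = evaluate_path([0, 1, 2], arcs, windows)  # (7.0, [5.0], True)
```

The `max` with the window start mirrors the extension rule used later in Eq (3.2).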
2.2. Problem formulation
The SPPRC seeks a feasible path from the origin to the destination with a minimal cost. Similar to [3], one can formulate the SPPRC as follows:
$$\min_{X}\ \sum_{(i,j)\in A} c_{ij}x_{ij} \tag{2.1}$$
subject to
$$\sum_{(i,j)\in A} x_{ij}-\sum_{(j,i)\in A} x_{ji}=0,\quad\forall i\in V\setminus\{o,d\}, \tag{2.2}$$
$$\sum_{(o,j)\in A} x_{oj}=1;\qquad \sum_{(i,d)\in A} x_{id}=1, \tag{2.3}$$
$$x_{ij}\,(T_i^r+t_{ij}^r-T_j^r)\le 0,\quad\forall (i,j)\in A,\ \forall r\in R, \tag{2.4}$$
$$a_j^r\le T_j^r\le b_j^r,\quad\forall j\in V,\ \forall r\in R, \tag{2.5}$$
$$x_{ij}\in\{0,1\},\quad\forall (i,j)\in A, \tag{2.6}$$
where $X=(x_{ij})_{(i,j)\in A}$ represents a path. Equations (2.2) and (2.3) define the flow constraints on the graph, Eq (2.4) encodes the compatibility between the flow and resource variables, and Eq (2.5) enforces the resource windows. If the problem is feasible, then there exists an optimal solution [14].
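As a sanity check of this formulation, one can brute-force a toy SPPRC instance by enumerating all simple $o$-$d$ paths and keeping the cheapest resource-feasible one. The sketch below is illustrative (the instance data and function names are assumptions, not from the paper); in this instance the cheapest path by cost alone violates a window and is rejected:

```python
# Toy instance with one resource (|R| = 1): (i, j) -> (c_ij, t_ij).
arcs = {
    (0, 1): (1.0, 4.0), (0, 2): (3.0, 1.0),
    (1, 3): (1.0, 4.0), (2, 3): (3.0, 1.0),
}
windows = {0: (0.0, 0.0), 1: (0.0, 10.0), 2: (0.0, 10.0), 3: (0.0, 5.0)}
o, d = 0, 3

def feasible_cost(path):
    """Cost of a path if it satisfies all resource windows, else None."""
    cost, T = 0.0, 0.0
    for i, j in zip(path, path[1:]):
        c, t = arcs[(i, j)]
        cost += c
        T = max(windows[j][0], T + t)   # extension rule, cf. Eq (3.2)
        if T > windows[j][1]:
            return None                  # window upper bound b_j violated
    return cost

def brute_force_spprc():
    """Enumerate all simple o-d paths; return (cost, path) of the best one."""
    best, stack = None, [[o]]
    while stack:
        path = stack.pop()
        if path[-1] == d:
            c = feasible_cost(path)
            if c is not None and (best is None or c < best[0]):
                best = (c, path)
            continue
        for (i, j) in arcs:
            if i == path[-1] and j not in path:   # simple paths only
                stack.append(path + [j])
    return best

best = brute_force_spprc()   # path 0-1-3 is cheaper but infeasible
```

This enumeration is exponential; the labeling algorithms of Section 2.3 avoid it.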
2.3. Algorithms
To solve the SPPRC, labeling algorithms have been applied efficiently by many authors, e.g., Desrochers and Soumis [32], Dabia et al. [13] and Sun et al. [57,58]. Labels are defined at each node to encode partial paths together with their cost and resource consumption. Each node receives labels and then extends them toward every possible successor node, and the algorithm iteratively treats the nodes until no new labels are created.
To limit the proliferation of labels and reduce the computational time, dominance rules are introduced. The dominance relation is a partial order used to eliminate non-optimal solutions and retain only non-dominated labels. The number of retained labels, and thus the computation time, increases with the number of resources.
In the next section, we propose an algorithm to determine approximate solutions. The size of the search space is reduced by eliminating some labels with a heuristic dominance procedure.
3. Lagrange SPPRC label correcting algorithm
With each path $P_i$ arriving at a node $v_i$, we associate a label (or state) vector
$$L_i=(C_i,\,T_i^1,\dots,T_i^{|R|}). \tag{3.1}$$
The path associated with a label $L$ is denoted $X_L$. The path $P$ is extended to each successor $v_j$ with
$$C_j=C_i+c_{ij}\quad\text{and}\quad T_j^r=\max\{a_j^r,\ T_i^r+t_{ij}^r\},\quad\forall r\in R. \tag{3.2}$$
Let $P_i$ and $P_i'$ be two feasible paths arriving at a node $v_i$. We write $P_i\preceq P_i'$, and say that $P_i$ dominates $P_i'$, if the label vector of $P_i$ is component-wise less than or equal to that of $P_i'$. We write $P_i\le P_i'$ if the first component of the label of $P_i$ is smaller than that of $P_i'$, or, when the first components are equal, if the subsequent components compare in the same way recursively (lexicographic order). In both cases, at least one of the inequalities must be strict. This defines a partial order relation.
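The two orders can be sketched as follows (illustrative code, not from the paper; labels are plain tuples (C, T^1, ..., T^|R|)):

```python
def dominates(L1, L2):
    """Component-wise relation: L1 dominates L2 if L1 <= L2 in every
    component, with strict inequality in at least one."""
    return (all(a <= b for a, b in zip(L1, L2))
            and any(a < b for a, b in zip(L1, L2)))

def lex_less(L1, L2):
    """Lexicographic order: compare the cost first, then each resource."""
    return L1 < L2  # Python tuples already compare lexicographically

assert dominates((2.0, 3.0), (2.0, 4.0))        # better on one component
assert not dominates((2.0, 5.0), (3.0, 4.0))    # incomparable labels
assert lex_less((2.0, 5.0), (3.0, 4.0))         # but lexicographically ordered
```

The incomparable pair illustrates why dominance alone cannot rank all labels and several non-dominated labels per node must be kept.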
3.1. Lagrangian relaxation
To speed up the resolution of the SPPRC (2.1)–(2.6), we propose to relax a subset of resources $R_1\subset R$ and to apply dominance on the remaining resources $R_2=R\setminus R_1$ only. This amounts to keeping Eqs (2.4) and (2.5) on $R_2$ and replacing Eq (2.1) by the Lagrangian
$$L(\lambda,X)=\sum_{(i,j)\in A}c_{ij}x_{ij}+\sum_{r\in R_1}\sum_{j\in V}\lambda_j^r\,(T_j^r-b_j^r). \tag{3.3}$$
Since the flow into each node $j$ is supported by at most one incoming arc from its predecessors $i$, the multipliers $\lambda_{i,j}^r=\lambda_j^r$ are independent of $i$. The vector of all multipliers $\lambda_j=(\lambda_j^r)_{r\in R_1}$ can be obtained by solving the maximization problem
$$\max_{\lambda}\ \min_{X}\ L(\lambda,X)\quad\text{s.t.}\quad\lambda_j^r\ge 0,\ \forall r\in R_1. \tag{3.4}$$
To solve this optimization sub-problem, one can use a projected sub-gradient algorithm, which has proved efficient in practice. The updating scheme is
$$\lambda_j^{k+1}=\max\{\lambda_j^k+\tau_j^k g_j^k,\ 0\}, \tag{3.5}$$
where $g_j^k\in\partial L(\lambda,X)$ is a subgradient vector, $\tau_j^k>0$ is a step size, and the max operator is applied element-wise. We choose
$$g_j^k=\max(a_j^r,\ T_i^r+t_{ij}^r)-b_j^r, \tag{3.6}$$
$$\tau_j^k=\frac{\theta_k\,\big(Z_{UB}-L(\lambda^k,X)\big)}{\|g_j^k\|^2}, \tag{3.7}$$
where $Z_{UB}$ is the best known upper bound on the optimal value of problem (2.1)–(2.6) and $\theta_k\in(0,2]$ is a scalar. A single iteration of the estimation of $\lambda$ proved sufficient (in both precision and speed) when integrated into column generation.
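A minimal sketch of one projected sub-gradient step following Eqs (3.5)–(3.7), for the multipliers of a single node (the function name and the numeric values are illustrative assumptions):

```python
def subgradient_step(lmbda, g, theta, Z_ub, L_val):
    """One update lambda^{k+1} = max(lambda^k + tau * g, 0) per Eq (3.5),
    with the step size tau of Eq (3.7); projection keeps lambda >= 0."""
    norm2 = sum(gr * gr for gr in g)
    if norm2 == 0.0:
        return list(lmbda)            # zero subgradient: nothing to update
    tau = theta * (Z_ub - L_val) / norm2   # Eq (3.7)
    return [max(lr + tau * gr, 0.0) for lr, gr in zip(lmbda, g)]

# One relaxed resource, with g_j = T_j - b_j as in Eq (3.6):
# tau = 1.0 * (10 - 8) / 2^2 = 0.5, so lambda = 0.5 + 0.5 * 2 = 1.5.
lam = subgradient_step(lmbda=[0.5], g=[2.0], theta=1.0, Z_ub=10.0, L_val=8.0)
```

A positive subgradient (a violated window, $T_j > b_j$) raises the multiplier, increasing the penalty in the Lagrangian cost of Eq (3.8).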
3.2. Dominance
The dominance at each node $j$ corresponds to determining the Pareto optima of a multicriteria problem applied to a set of labels. With the adopted Lagrangian approach, the label vector becomes
$$L_j=\Big(C_j+\sum_{r_1\in R_1}\lambda_j^{r_1}\,(T_j^{r_1}-b_j^{r_1});\ T_j^{r_2},\ r_2\in R_2\Big). \tag{3.8}$$
When the final solution obtained is not feasible (because of the relaxation), we solve the relaxed SPPRC again, dominating on the Lagrangian cost determined in the first phase and on the resources $R_2$, while checking the resource windows in the original resource space $R$.
Assuming that all predecessors of node $j\in N$ have been considered, the dominance at node $j$ can be interpreted as the determination of the Pareto optima of a multicriteria problem with $|R_2|+1$ criteria. The following notation is used in the algorithms below:
● $E_i$: the set of non-dominated labels at node $v_i$.
● $F_i$: the list of paths associated with the labels of $E_i$.
● $S_i$: the set of indices of the successors of node $v_i$.
● $Q$: the list of indices of unprocessed nodes.
● Pareto($L$, $E_j$): adds a new label $L$ to a set of non-dominated labels $E_j$ while preserving the non-dominance property.
● $Z_L$: the cost associated with a label $L$ (the first component of Eq (3.9)).
Our improved label-correcting procedure is detailed in Algorithms 1 and 2.
Algorithm 1: Modified label-correcting algorithm
Input: an arbitrary directed graph $G=(V,A)$, not necessarily acyclic; a set of resource constraints $R$; a subset $R_1\subset R$ of constraints to be dualized in the objective function; Lagrange multipliers $\lambda_i\in\mathbb{R}_+^{|R_1|}$ for each $v_i\in V\setminus\{o\}$; a set of resources $R_2$.
Output: all non-dominated paths from the source to the destination, $E_{n+1}$, and the associated list of paths $F_{n+1}$.
// Initialization
$R_2=R\setminus R_1$
for $v_i\in V$ do
  $E_i\leftarrow\emptyset$
end
$E_0\leftarrow\{(0,0,\dots,0)\}$  // the size of a label is $|R_2|+1$
$Q\leftarrow\{0\}$
// Main loop
while $Q\neq\emptyset$ do
Algorithm 2: DLC-SPPRC: dominance on Lagrangian cost for the SPPRC
Input: an arbitrary directed graph $G=(V,A)$, not necessarily acyclic; a set of resource constraints $R=R_1\cup R_2$ composed of two types.
Output: $\bar{E}$, all non-dominated feasible paths from the source to the destination node $d$.
// Parameter choice
Choose $\theta\in(0,2]$; $k_{max}=10$
// Initialization
for $v_i\in V\setminus\{o\}$ do
  $\lambda_i^{(0)}=(0,\dots,0)$, a vector of size $|R_1|$
end
// Initial value of the Lagrangian corresponding to the solution of (3.4)
$\bar{Z}_{max}\leftarrow-\infty$
// Choose an initial solution of problem (2.1)–(2.6)
$X^0=(x_{ij}^0)_{ij}$
// Compute the objective function value (2.1) (upper bound)
$Z_u\leftarrow\sum_{(i,j)\in A}c_{ij}x_{ij}^0$; $\bar{E}\leftarrow\emptyset$; $k=0$
// Step 1
while $k<k_{max}$ do
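The core of the label-correcting scheme can be sketched in a few lines of Python (an illustrative reimplementation, not the authors' Java code: a single resource, dominance applied to the plain cost, and a FIFO node queue are simplifying assumptions):

```python
from collections import deque

def label_correcting(arcs, windows, o, d):
    """Keep the non-dominated labels (cost, T) at each node; extend labels
    with the rule of Eq (3.2); prune dominated labels at insertion."""
    E = {v: [] for v in windows}            # non-dominated labels per node
    E[o] = [(0.0, windows[o][0])]
    Q = deque([o])                           # unprocessed nodes
    while Q:
        i = Q.popleft()
        for (u, j), (c, t) in arcs.items():
            if u != i:
                continue
            for (C, T) in list(E[i]):
                Tj = max(windows[j][0], T + t)        # extension, Eq (3.2)
                if Tj > windows[j][1]:
                    continue                           # infeasible extension
                L = (C + c, Tj)
                if any(C2 <= L[0] and T2 <= L[1] for (C2, T2) in E[j]):
                    continue                           # L is dominated
                # drop labels that L dominates, then insert L
                E[j] = [x for x in E[j]
                        if not (L[0] <= x[0] and L[1] <= x[1])] + [L]
                if j != d and j not in Q:
                    Q.append(j)                        # reprocess updated node
    return sorted(E[d])

# Toy instance: the two o-d paths yield incomparable labels at d,
# so both survive dominance.
arcs = {(0, 1): (1.0, 4.0), (0, 2): (3.0, 1.0),
        (1, 3): (1.0, 4.0), (2, 3): (3.0, 1.0)}
windows = {0: (0.0, 0.0), 1: (0.0, 10.0), 2: (0.0, 10.0), 3: (0.0, 9.0)}
labels = label_correcting(arcs, windows, o=0, d=3)
```

In the DLC-SPPRC, the first label component would be the Lagrangian cost of Eq (3.8) rather than the plain cost, and dominance would be applied on $R_2$ only.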
4. Experiments
To solve the VRPTW, we use the Dantzig-Wolfe decomposition, which defines $K$ independent sub-problems and a global master problem. Applying column generation, we alternately solve the master problem and the $K$ sub-problems; the VRPTW instances are thus addressed with a column generation approach that uses the SPPRC as the sub-problem.
We used the Solomon data sets [59] and the Homberger 200-customer instances. Each instance contains customer locations, resources (two per customer, i.e., $|R|=2$) and constraints. These data sets are divided into three categories: the r-instances (customers located randomly), the c-instances (customers located in clusters) and the rc-instances (mixed random and clustered structures). Furthermore, each family of instances is divided into two types. The first type (r1xx, c1xx, rc1xx) has a short scheduling horizon, i.e., small time windows, allowing only a few customers per route (approximately 5 to 10). The second type (r2xx, c2xx, rc2xx) has a long scheduling horizon, permitting many customers (more than 30) to be serviced by the same vehicle. We applied our approximate DLC-SPPRC algorithm and the classical SPPRC [32].
We implemented our algorithm in the Java programming language. For the simulation, we used an Intel Core i9-9900KF CPU (8 cores, 3.60 GHz) with 32 GB of RAM, running Windows 10 (64-bit). For the SPPRC, we implemented the Desrochers and Soumis algorithm [32]. The linear programs of the restricted master problems are solved with ILOG CPLEX 20.1.
Tables 2–5 report the iteration number ($N_i$), the lower bound ($Lb_i$), the computational time in seconds ($T_i$) and the number of generated columns ($C_i$), where $i=1$ for the SPPRC and $i=2$ for the DLC-SPPRC. The gap is computed as
$$\mathrm{Gap}=100\times\frac{Lb_1-Lb_2}{Lb_1}. \tag{4.1}$$
Table 2. Comparison of two approaches for solving the VRPTW on Solomon's instances with 25 customers: the approximate DLC-SPPRC algorithm (2) and the classical SPPRC [32].
Table 3. Comparison of two approaches for solving the VRPTW on Solomon's instances with 50 customers: the approximate DLC-SPPRC algorithm (2) and the classical SPPRC [32].
Table 4. Comparison of two approaches for solving the VRPTW on Solomon's instances with 100 customers: the approximate DLC-SPPRC algorithm (2) and the classical SPPRC [32].
Table 5. Comparison of two approaches for solving the VRPTW on Homberger's instances with 200 customers: the approximate DLC-SPPRC algorithm (2) and the classical SPPRC [32].
For most of the cases, we obtained the optimal solutions in reasonable time. As in [34], the column generation process is initialized with an adaptation of the Clarke and Wright algorithm [60]. At each iteration, the sub-problem is stopped once 500 labels with negative cost have been extended to the depot.
From Tables 2–5, we can draw the following conclusions. Our algorithm was faster in all 228 cases. In 15 instances (7%), the SPPRC algorithm could not obtain a solution in reasonable time (more than 10 hours for one iteration). Our algorithm was 10 times faster in 37 cases (16%) and 1.5 times faster in 166 cases (73%). On average, we achieved a gain of 564% in computational time. Regarding precision, in 188 cases (82%) we obtained similar precision (a gap of less than 1%), whereas the difference exceeded 1% in only 25 cases (11%) and exceeded 5% in only 2 cases (0.88%).
5. Conclusions
This paper proposes an approximate algorithm for solving the SPPRC that relies on a Lagrangian relaxation. Our method applies to both acyclic and cyclic graphs. Dominance is applied on a subset of the resources only, and optimized parameter update schemes are used to ensure fast convergence. When applied to the VRP, our approach offers a good compromise between precision and computational time and can therefore be applied to large-scale practical problems. In the future, we plan to embed our algorithm in branch-and-price, branch-and-cut and branch-and-price-and-cut frameworks. In addition, we will explore more advanced optimization algorithms (heuristics, meta-heuristics, etc.) that have been successfully applied in other domains, such as online learning, scheduling, multi-objective optimization, transportation, medicine and data classification.
Conflict of interest
The authors declare that there is no conflict of interest.
References
[1]
T. Ma, N. Zhao, Y. Ni, J. Yi, J. P. Wilson, L. He, et al., China's improving inland surface water quality since 2003, Sci. Adv., 6 (2020), eaau3798. https://doi.org/10.1126/sciadv.aau3798 doi: 10.1126/sciadv.aau3798
[2]
N. Nemerow, Scientific Stream Pollution Analysis, Scripta Book Co., 1974.
[3]
O. Kisi, K. S. Parmar, Application of least square support vector machine and multivariate adaptive regression spline models in long term prediction of river water pollution, J. Hydrol., 534 (2016), 104–112. https://doi.org/10.1016/j.jhydrol.2015.12.014 doi: 10.1016/j.jhydrol.2015.12.014
[4]
Y. Matsuda, A water pollution prediction system by the finite element method, Adv. Water Resour., 2 (1979), 27–34. https://doi.org/10.1016/0309-1708(79)90004-6 doi: 10.1016/0309-1708(79)90004-6
[5]
G. Tan, J. Yan, C. Gao, S. Yang, Prediction of water quality time series data based on least squares support vector machine, Procedia Eng., 31 (2012), 1194–1199. https://doi.org/10.1016/j.proeng.2012.01.1162 doi: 10.1016/j.proeng.2012.01.1162
[6]
H. Chen, L. Xu, W. Ai, B. Lin, Q. Feng, K. Cai, Kernel functions embedded in support vector machine learning models for rapid water pollution assessment via near-infrared spectroscopy, Sci. Total Environ., 714 (2020), 136765. https://doi.org/10.1016/j.scitotenv.2020.136765 doi: 10.1016/j.scitotenv.2020.136765
[7]
S. Moni, E. Aziz, A. P. A. Majeed, M. Malek, The prediction of blue water footprint at Semambu water treatment plant by means of Artificial Neural Networks (ANN) and Support Vector Machine (SVM) models, Phys. Chem. Earth, 123 (2021), 103052. https://doi.org/10.1016/j.pce.2021.103052 doi: 10.1016/j.pce.2021.103052
[8]
Y. Khan, C. S. See, Predicting and analyzing water quality using machine learning: a comprehensive model, in 2016 IEEE Long Island Systems, Applications and Technology Conference (LISAT), (2016), 1–6. https://doi.org/10.1109/LISAT.2016.7494106
[9]
M. Azrour, J. Mabrouki, G. Fattah, A. Guezzaz, F. Aziz, Machine learning algorithms for efficient water quality prediction, Model. Earth Syst. Environ., 8 (2022), 2793–2801. https://doi.org/10.2166/wqrj.2022.004 doi: 10.2166/wqrj.2022.004
[10]
N. Noori, L. Kalin, S. Isik, Water quality prediction using SWAT-ANN coupled approach, J. Hydrol., 590 (2020), 125220. https://doi.org/10.1016/j.jhydrol.2020.125220 doi: 10.1016/j.jhydrol.2020.125220
[11]
L. Kumar, M. S. Afzal, A. Ahmad, Prediction of water turbidity in a marine environment using machine learning: A case study of Hong Kong, Reg. Stud. Mar. Sci., 52 (2022), 102260. https://doi.org/10.1016/j.rsma.2022.102260 doi: 10.1016/j.rsma.2022.102260
[12]
L. Li, J. Qiao, G. Yu, L. Wang, H. Y. Li, C. Liao, et al., Interpretable tree-based ensemble model for predicting beach water quality, Water Res., 211 (2022), 118078. https://doi.org/10.1016/j.watres.2022.118078 doi: 10.1016/j.watres.2022.118078
[13]
M. G. Uddin, S. Nash, M. T. M. Diganta, A. Rahman, A. I. Olbert, Robust machine learning algorithms for predicting coastal water quality index, J. Environ. Manage., 321 (2022), 115923. https://doi.org/10.1016/j.jenvman.2022.115923 doi: 10.1016/j.jenvman.2022.115923
[14]
W. S. McCulloch, W. Pitts, A logical calculus of the ideas immanent in nervous activity, Bull. Math. Biol., 52 (1990), 99–115. https://doi.org/10.1007/BF02459570 doi: 10.1007/BF02459570
[15]
F. Rosenblatt, The perceptron: a probabilistic model for information storage and organization in the brain, Psychol. Rev., 65 (1958), 386. https://doi.org/10.1037/h0042519 doi: 10.1037/h0042519
[16]
D. E. Rumelhart, G. E. Hinton, R. J. Williams, Learning representations by back-propagating errors, Nature, 323 (1986), 533–536. https://doi.org/10.1038/323533a0 doi: 10.1038/323533a0
[17]
Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition, Proc. IEEE, 86 (1998), 2278–2324. https://doi.org/10.1109/5.726791 doi: 10.1109/5.726791
[18]
Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, et al., Backpropagation applied to handwritten zip code recognition, Neural Comput., 1 (1989), 541–551. https://doi.org/10.1162/neco.1989.1.4.541 doi: 10.1162/neco.1989.1.4.541
[19]
T. Mikolov, M. Karafiát, L. Burget, J. Cernocky, S. Khudanpur, Recurrent neural network based language model, Interspeech, 2 (2010), 1045–1048.
[20]
K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2016), 770–778.
[21]
T. Dawood, E. Elwakil, H. M. Novoa, J. F. G. Delgado, Toward urban sustainability and clean potable water: Prediction of water quality via artificial neural networks, J. Cleaner Prod., 291 (2021), 125266. https://doi.org/10.1016/j.jclepro.2020.125266 doi: 10.1016/j.jclepro.2020.125266
[22]
T. A. Sinshaw, C. Q. Surbeck, H. Yasarer, Y. Najjar, Artificial neural network for prediction of total nitrogen and phosphorus in US lakes, J. Environ. Eng., 145 (2019), 04019032. https://doi.org/10.1061/(ASCE)EE.1943-7870.0001528 doi: 10.1061/(ASCE)EE.1943-7870.0001528
[23]
M. Hameed, S. S. Sharqi, Z. M. Yaseen, H. A. Afan, A. Hussain, A. Elshafie, Application of artificial intelligence (AI) techniques in water quality index prediction: a case study in tropical region, Malaysia, Neural Comput. Appl., 28 (2017), 893–905. https://doi.org/10.1007/s00521-016-2404-7 doi: 10.1007/s00521-016-2404-7
[24]
A. Kadam, V. Wagh, A. Muley, B. Umrikar, R. Sankhua, Prediction of water quality index using artificial neural network and multiple linear regression modelling approach in Shivganga River basin, India, Model. Earth Syst. Environ., 5 (2019), 951–962. https://doi.org/10.1007/s40808-019-00581-3 doi: 10.1007/s40808-019-00581-3
[25]
Y. Zhang, X. Gao, K. Smith, G. Inial, S. Liu, L. B. Conil, et al., Integrating water quality and operation into prediction of water production in drinking water treatment plants by genetic algorithm enhanced artificial neural network, Water Res., 164 (2019), 114888. https://doi.org/10.1016/j.watres.2019.114888 doi: 10.1016/j.watres.2019.114888
[26]
J. Wu, Z. Wang, A hybrid model for water quality prediction based on an artificial neural network, wavelet transform, and long short-term memory, Water, 14 (2022), 610. https://doi.org/10.3390/w14040610 doi: 10.3390/w14040610
[27]
Y. Wang, J. Zhou, K. Chen, Y. Wang, L. Liu, Water quality prediction method based on LSTM neural network, in 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), (2017), 1–5. https://doi.org/10.1109/ISKE.2017.8258814
[28]
Q. Ye, X. Yang, C. Chen, J. Wang, River water quality parameters prediction method based on LSTM-RNN model, in 2019 Chinese Control And Decision Conference (CCDC), (2019), 3024–3028. https://doi.org/10.1109/CCDC.2019.8832885
[29]
J. Bi, Y. Lin, Q. Dong, H. Yuan, M. Zhou, Large-scale water quality prediction with integrated deep neural network, Inf. Sci., 571 (2021), 191–205. https://doi.org/10.1016/j.ins.2021.04.057 doi: 10.1016/j.ins.2021.04.057
[30]
C. Hu, F. Zhao, Improved methods of BP neural network algorithm and its limitation, in 2010 International Forum on Information Technology and Applications, (2010), 11–14. https://doi.org/10.1109/IFITA.2010.324
[31]
T. Venkateswarlu, J. Anmala, Application of random forest model in the prediction of river water quality, in Proceedings of Seventh International Congress on Information and Communication Technology, (2023), 525–535. https://doi.org/10.1016/j.asej.2021.11.004
[32]
M. Jeung, S. Baek, J. Beom, K. H. Cho, Y. Her, K. Yoon, Evaluation of random forest and regression tree methods for estimation of mass first flush ratio in urban catchments, J. Hydrol., 575 (2019), 1099–1110. https://doi.org/10.1016/j.jhydrol.2019.05.079 doi: 10.1016/j.jhydrol.2019.05.079
[33]
H. Lu, X. Ma, Hybrid decision tree-based machine learning models for short-term water quality prediction, Chemosphere, 249 (2020), 126169. https://doi.org/10.1016/j.chemosphere.2020.126169 doi: 10.1016/j.chemosphere.2020.126169
[34]
S. M. Saghebian, M. T. Sattari, R. Mirabbasi, M. Pal, Ground water quality classification by decision tree method in Ardebil region, Iran, Arabian J. Geosci., 7 (2014), 4767–4777. https://doi.org/10.1007/s12517-013-1042-y doi: 10.1007/s12517-013-1042-y
[35]
Z. Hippe, J. Zamorska, A new approach to application of pattern recognition methods in analytical chemistry. Ⅱ. Prediction of missing values in water pollution grid using modified KNN-method, Chem. Anal., 44 (1999), 597–602.
[36]
J. Park, W. H. Lee, K. T. Kim, C. Y. Park, S. Lee, T. Y. Heo, Interpretation of ensemble learning to predict water quality using explainable artificial intelligence, Sci. Total Environ., 832 (2022), 155070. https://doi.org/10.1016/j.scitotenv.2022.155070 doi: 10.1016/j.scitotenv.2022.155070
[37]
A. Gidon, T. A. Zolnik, P. Fidzinski, F. Bolduan, A. Papoutsi, P. Poirazi, et al., Dendritic action potentials and computation in human layer 2/3 cortical neurons, Science, 367 (2020), 83–87. https://doi.org/10.1126/science.aax6239 doi: 10.1126/science.aax6239
[38]
I. S. Jones, K. P. Kording, Might a single neuron solve interesting machine learning problems through successive computations on its dendritic tree?, Neural Comput., 33 (2021), 1554–1571. https://doi.org/10.1162/neco_a_01390 doi: 10.1162/neco_a_01390
[39]
A. Destexhe, E. Marder, Plasticity in single neuron and circuit computations, Nature, 431 (2004), 789–795. https://doi.org/10.1038/nature03011 doi: 10.1038/nature03011
[40]
C. Koch, Computation and the single neuron, Nature, 385 (1997), 207–210. https://doi.org/10.1038/385207a0 doi: 10.1038/385207a0
[41]
B. E. Stein, T. R. Stanford, B. A. Rowland, Development of multisensory integration from the perspective of the individual neuron, Nat. Rev. Neurosci., 15 (2014), 520–535. https://doi.org/10.1038/nrn3742 doi: 10.1038/nrn3742
[42]
Y. Todo, H. Tamura, K. Yamashita, Z. Tang, Unsupervised learnable neuron model with nonlinear interaction on dendrites, Neural Netw., 60 (2014), 96–103. https://doi.org/10.1016/j.neunet.2014.07.011 doi: 10.1016/j.neunet.2014.07.011
[43]
F. Teng, Y. Todo, Dendritic neuron model and its capability of approximation, in 2019 6th International Conference on Systems and Informatics (ICSAI), (2019), 542–546. https://doi.org/10.1109/ICSAI48974.2019.9010147
[44]
J. He, J. Wu, G. Yuan, Y. Todo, Dendritic branches of DNM help to improve approximation accuracy, in 2019 6th International Conference on Systems and Informatics (ICSAI), (2019), 533–541. https://doi.org/10.1109/ICSAI48974.2019.9010196
[45]
Z. Sha, L. Hu, Y. Todo, J. Ji, S. Gao, Z. Tang, A breast cancer classifier using a neuron model with dendritic nonlinearity, IEICE Trans. Inf. Syst., 98 (2015), 1365–1376. https://doi.org/10.1587/transinf.2014EDP7418
[46]
T. Jiang, S. Gao, D. Wang, J. Ji, Y. Todo, Z. Tang, A neuron model with synaptic nonlinearities in a dendritic tree for liver disorder, IEEJ Trans. Electr. Electron. Eng., 12 (2017), 105–115. https://doi.org/10.1002/tee.22350
[47]
Y. Tang, J. Ji, S. Gao, H. Dai, Y. Yu, Y. Todo, A pruning neural network model in credit classification analysis, Comput. Intell. Neurosci., 2018 (2018), 9390410. https://doi.org/10.1155/2018/9390410
[48]
Z. Song, C. Tang, J. Ji, Y. Todo, Z. Tang, A simple dendritic neural network model-based approach for daily PM2.5 concentration prediction, Electronics, 10 (2021), 373. https://doi.org/10.3390/electronics10040373
[49]
Z. Song, Y. Tang, J. Ji, Y. Todo, Evaluating a dendritic neuron model for wind speed forecasting, Knowl. Based Syst., 201 (2020), 106052. https://doi.org/10.1016/j.knosys.2020.106052
[50]
T. Zhou, S. Gao, J. Wang, C. Chu, Y. Todo, Z. Tang, Financial time series prediction using a dendritic neuron model, Knowl. Based Syst., 105 (2016), 214–224. https://doi.org/10.1016/j.knosys.2016.05.031
[51]
W. Chen, J. Sun, S. Gao, J. J. Cheng, J. Wang, Y. Todo, Using a single dendritic neuron to forecast tourist arrivals to Japan, IEICE Trans. Inf. Syst., 100 (2017), 190–202. https://doi.org/10.1587/transinf.2016EDP7152
J. Ji, M. Dong, Q. Lin, K. C. Tan, Noninvasive cuffless blood pressure estimation with dendritic neural regression, IEEE Trans. Cybern., 2022 (2022). https://doi.org/10.1109/TCYB.2022.3141380
[55]
J. F. Khaw, B. Lim, L. E. Lim, Optimal design of neural networks using the Taguchi method, Neurocomputing, 7 (1995), 225–245. https://doi.org/10.1016/0925-2312(94)00013-I
[56]
D. E. Rumelhart, G. E. Hinton, R. J. Williams, Learning internal representations by error propagation, Tech. Rep., Institute for Cognitive Science, University of California, San Diego, 1985.
[57]
J. H. Friedman, Greedy function approximation: a gradient boosting machine, Ann. Stat., 29 (2001), 1189–1232. https://doi.org/10.1214/aos/1013203451
[58]
T. Chen, C. Guestrin, XGBoost: A scalable tree boosting system, in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, (2016), 785–794. https://doi.org/10.1145/2939672.2939785
[59]
G. B. Huang, Q. Y. Zhu, C. K. Siew, Extreme learning machine: theory and applications, Neurocomputing, 70 (2006), 489–501. https://doi.org/10.1016/j.neucom.2005.12.126
[60]
D. W. Zimmerman, B. D. Zumbo, Relative power of the Wilcoxon test, the Friedman test, and repeated-measures ANOVA on ranks, J. Exp. Educ., 62 (1993), 75–86. https://doi.org/10.1080/00220973.1993.9943832
This article has been cited by:
1. Yuandong Chen, Jinhao Pang, Yuchen Gou, Zhiming Lin, Shaofeng Zheng, Dewang Chen, Research on the A* Algorithm for Automatic Guided Vehicles in Large-Scale Maps, Appl. Sci., 14 (2024), 10097. https://doi.org/10.3390/app142210097