
Pad printing is used in automotive, medical, electrical and other industries, employing diverse materials to transfer a 2D image onto a 3D object with different sizes and geometries. This work presents a universal fixation system for pad printing of plastic parts (UFSP4) in response to the needs of small companies that cannot afford to invest in the latest technological advances. The UFSP4 comprises two main subsystems: a mechanical support system (i.e., support structure, jig matrix and braking system) and a control system (i.e., an electronic system and an electric-hydraulic system). A relevant feature is the combination of a jig matrix and jig pins to fixate complex workpieces with different sizes. Using finite element analysis (FEA), in the mesh convergence, the total displacement converges to 0.00028781 m after 12,000 elements. The maximum equivalent stress value is 1.22 MPa for the polycarbonate plate in compliance with the safety factor. In a functionality test of the prototype performed in a production environment for one hour, the jigs fixed by the plate did not loosen, maintaining the satisfactory operation of the device. This is consistent with the displacement distribution of the creep analysis and shows the absence of the creep phenomenon. Based on FEA that underpinned the structural health computation of the braking system, the prototype was designed and built, seeking to ensure a reliable and safe device to fixate plastic parts, showing portability, low-cost maintenance and adaptability to the requirements of pad printing of automotive plastic parts.
Citation: José Alejandro Fernández Ramírez, Óscar Hernández-Uribe, Leonor Adriana Cárdenas-Robledo, Alfredo Chávez Luna. Universal fixation system for pad printing of plastic parts[J]. Mathematical Biosciences and Engineering, 2023, 20(12): 21032-21048. doi: 10.3934/mbe.2023930
With the remarkable success of neural networks in machine learning tasks, including image recognition, computer vision, natural language processing and cognitive science, as well as the prospect of harnessing the computing power of specialized hardware, there has been much interest in investigating their suitability for high-performance computing tasks. The result is an exciting new research field known as scientific machine learning, where techniques such as deep neural networks and statistical learning are applied to classical problems of applied mathematics. Thanks to the universal approximation property of neural networks, it is natural to consider using them to obtain approximate solutions of governing PDEs. As the predominant method for data-driven problems, deep neural networks (DNNs) are used as surrogate models of PDE solvers to accelerate optimization. A DNN is generally trained as a supervised machine learning task that establishes a nonlinear mapping between input and output data pairs [1,2,3,4,5,6]; that is, a specific model is learned from the training data under a network structure defined in advance, and its quality is closely tied to the training data and their distribution. Such models have yielded remarkable success in data-rich domains, yet in many fields of physics and engineering the training data carry prior knowledge: for example, flow field data in fluid mechanics must satisfy the conservation of mass and momentum. This prior knowledge is not exploited by classical machine learning algorithms.
Physics-informed neural networks (PINNs) [6,7], which combine data-driven machine learning with the advantages of physical models, can train models that automatically satisfy physical constraints using only a small amount of training data. The PINN algorithm has better generalization performance and can predict important physical parameters of the model. PINNs can accurately solve forward problems by minimizing a mean squared error loss function, so that the numerical solution of the PDE is obtained by solving an optimization problem that drives the loss close to zero. The loss function of a PINN contains initial and boundary loss terms as well as the residual of the governing equation given by the physics-informed part. It should be noted that the way boundary conditions are imposed during PINN training has a significant impact on the results. Usually, the boundary condition is imposed softly through a boundary loss term, whose weight is controlled by a penalty coefficient to accelerate the convergence of the optimization problem. The penalty coefficient is often adjusted by experience, and an improper choice easily leads to an abnormal solution. Wang et al. [8] proposed an adaptive learning rate annealing algorithm that uses the statistics of the back-propagated gradients during training to assign an appropriate weight to each term in the composite loss function, aiming to balance the interplay between data fit and regularization. This empirical parameter-adjustment technique does not explain why PINNs sometimes fail to train. To investigate this question, Wang et al. employed the neural tangent kernel (NTK) in subsequent work [9] and proved that, in the infinite-width limit of fully connected networks, the NTK of a PINN converges to a deterministic kernel that stays constant during training. They developed a novel adaptive training strategy that exploits the eigenvalues of the NTK to adaptively calibrate the convergence rate of the total training error. When using PINNs to solve stiff ODE systems, Ji et al. [10] noticed that stiffness could cause the failure of the regular PINN; they therefore developed the stiff-PINN approach, which applies PINN to non- or mildly stiff systems obtained by employing the quasi-steady-state assumption (QSSA) to reduce the stiffness.
A neural network can be regarded as a composition of linear and nonlinear transformations, in which the activation function determines the nonlinear approximation capability. The activation function plays an important role in PINN training because the derivative of the loss function with respect to the optimization parameters depends, in turn, on the derivative of the activation function. In the PINN algorithm, various activation functions such as tanh and sin are used for different problems; there is no unified selection criterion, since the choice is often problem-dependent. For ordinary neural networks, a body of literature [11,12] has confirmed that well-designed adaptive activation functions can accelerate convergence. Jagtap et al. [13] introduced a scalable hyper-parameter in the activation function that can be optimized and updated together with the network parameters and dynamically adjusts the derivative of the activation function. Compared with a fixed activation function, this method can significantly accelerate the convergence rate and improve the accuracy of PINN training. To further accelerate convergence, different scalable parameters were introduced into the activation function of each layer or each neuron separately in PINN [14]. Lu et al. [7] developed a Python library for PINN, DeepXDE, which can solve forward problems with initial and boundary conditions as well as inverse problems. DeepXDE has contributed to the rapid popularization and wide application of PINN in fields such as fluid mechanics [15], biomedicine [16] and cardiovascular flow [17]. For more applications, the interested reader is referred to the review [18]. In [16], Sahli et al. improved predictability in diagnosing atrial fibrillation by using a PINN to solve the nonlinear wave dynamics equation satisfied by cardiac activation mapping. Kissas et al. [17] trained a PINN model on noisy and scattered clinical data of flow and wall displacement to predict blood flow in the cardiovascular system. Beyond these applications, PINNs are actively being explored and improved for localized wave solutions [19,20], high-dimensional integrable systems [21], porous flow [22], the seepage equation [23] and so on.
Partial differential equations govern many important phenomena in physics, engineering and biology, with applications ranging from epidemiological transmission [24], tumor growth and wound healing [25,26] and bacterial aggregation [27,28] to models of cardiomyocyte potential propagation [29,30]. As fundamental equations in such areas, Hamilton-Jacobi equations have been solved numerically by many high-order accurate methods, including, but not limited to, central schemes [31,32], Godunov-type central schemes [33,34,35] and WENO schemes [36,37,38]. Using neural networks to represent the viscosity solution of certain Hamilton-Jacobi (HJ) PDEs is not by itself a new idea, and some neural network architectures have led to promising results [39,40]. Graber et al. [39] studied optimal control problems on generalized networks in which the controllability assumptions are not satisfied around the junctions; the value function is characterized as the unique solution of a system of HJ equations in a bilateral viscosity sense. In [40], high-dimensional Hamilton-Jacobi-Bellman (HJB) PDEs are solved by the Deep Galerkin Method, which approximates the solution by a deep neural network trained to satisfy the differential operator, boundary conditions and initial conditions. Adaptive deep learning networks were proposed to model semi-global solutions of high-dimensional HJB PDEs in [41]. In [42,43], the authors proposed shallow neural network architectures that express the viscosity solution of certain HJ PDEs with a particular form of convex initial data and specific Hamiltonians, where physical constraints satisfying certain conditions are naturally encoded into the networks. Moreover, [43] showed that two network architectures exactly represent the Lax-Oleinik solution of certain HJ PDEs whose initial data and convex Hamiltonian satisfy suitable assumptions. Note that the Lax-Oleinik formula gives the viscosity solution when the Hamiltonian does not depend on the state variable x or the time variable t. However, the performance of PINN has not been fully investigated for HJ PDEs with non-convex Hamiltonians. This motivates us to apply the PINN algorithm to more general HJ PDEs.
The paper is structured as follows. In Section 2, we discuss the problem setup for HJ PDEs. Section 3 explores the generality of the PINN training algorithm for solving HJ equations, exactly embedding Dirichlet or periodic boundary conditions and physical constraints in the neural network architecture. To further improve predictive accuracy, a physics-informed neural network based on adaptive weighted loss functions (AW-PINN) is trained to solve unsupervised learning tasks for HJ PDEs with fewer training data while physical constraints are imposed during training. In Section 4, we demonstrate the effectiveness and convergence of the AW-PINN training algorithm for convex and non-convex Hamiltonians. A series of numerical experiments illustrates that the proposed algorithm achieves noticeable improvements in predictive accuracy and in the convergence rate of the total training error. A comparison with the original PINN algorithm for HJ equations indicates that the proposed AW-PINN algorithm can train the solutions more accurately with fewer iterations. Finally, we summarize and discuss our results.
We consider the time-dependent Hamilton-Jacobi (HJ) equations
$$
\begin{cases}
\varphi_t + H(\nabla_x \varphi(x,t)) = 0, & x \in \Omega \subset \mathbb{R}^n,\ t \in (0,+\infty),\\
\varphi(x,0) = h(x), & x \in \Omega,\\
\varphi(x,t) = g(x,t), & x \in \partial\Omega,\ t \in (0,+\infty),
\end{cases}
\tag{2.1}
$$

with Dirichlet or periodic boundary conditions on $\partial\Omega$. The partial derivative with respect to $t$ and the gradient with respect to $x$ of the solution $\varphi(x,t)$ are denoted by $\varphi_t$ and $\nabla_x \varphi(x,t) = \left(\frac{\partial \varphi(x,t)}{\partial x_1}, \ldots, \frac{\partial \varphi(x,t)}{\partial x_n}\right)$, respectively. The Hamiltonian $H$ depends on $\nabla_x \varphi(x,t)$ and possibly on $x$ and $t$. The solution of an HJ equation may develop discontinuities in its derivatives even when the initial data are smooth. As in conservation laws, the unique physically relevant solution can be singled out by considering viscosity solutions, which provide a consistent definition of a weak solution of Eq (2.1). Thus, we want to design a neural network that approximates the viscosity solution of HJ equations from small training data. Here we draw motivation from the PINN algorithm [6,8,9] and from neural network architectures developed for certain HJ PDEs [43]. We construct a PINN training method for HJ PDEs with different kinds of Hamiltonians and boundary conditions, and then optimize the weight coefficient of each term in the loss function so that their gradients during back-propagation are similar in magnitude. The problem of solving a PDE is thereby converted into a multi-objective optimization problem in which physical constraints are imposed while minimizing the loss functional.
We consider $\mathcal{N}^L : \mathbb{R}^{D_i} \to \mathbb{R}^{D_o}$ to be a fully connected feed-forward neural network with $L$ layers and $N_k$ neurons in the $k$th layer ($N_0 = D_i$ and $N_L = D_o$). The input vector is denoted by $z \in \mathbb{R}^{D_i}$ and the output of the $k$th layer by $\mathcal{N}^k(z)$, with $\mathcal{N}^0(z) = z$. Each hidden layer receives the output $\mathcal{N}^{k-1}(z) \in \mathbb{R}^{N_{k-1}}$ of the previous layer, to which an affine transformation of the form

$$
\mathcal{N}^k(z) := W^k \mathcal{N}^{k-1}(z) + b^k
\tag{3.1}
$$

is applied. The weight matrix and bias vector in the $k$th layer ($1 \le k \le L$) are denoted by $W^k \in \mathbb{R}^{N_k \times N_{k-1}}$ and $b^k \in \mathbb{R}^{N_k}$, respectively, and are initialized from independent and identically distributed samples. Such a linear model is simple to train but limited in its capacity to solve complex problems. We therefore use a smooth nonlinear activation function, so that arbitrary-order derivatives can be computed efficiently during back-propagation; in this study we choose the hyperbolic tangent (tanh). The nonlinear activation function $\sigma(\cdot)$ is applied componentwise to the transformed vector before it is passed to the next layer. The $L$-layer fully connected feed-forward neural network is then defined as

$$
\mathcal{N}^k(z) = \sigma\!\left(W^k \mathcal{N}^{k-1}(z) + b^k\right), \quad 1 \le k \le L-1,
\tag{3.2}
$$

with the identity activation in the output layer. Taking $\theta = \{W^k, b^k\}$ as the collection of all weights and biases, which are the trainable parameters of the network, we write the neural network as

$$
\tilde{\varphi}(z) := \mathcal{N}^L(z; \theta).
\tag{3.3}
$$
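As a minimal illustration of Eqs (3.1)-(3.3), the forward pass of such a network can be sketched in a few lines of Python; the layer sizes, the NumPy-based implementation and the Glorot-type initialization scale below are illustrative assumptions, not the exact code used in this work.

```python
import numpy as np

def init_params(layer_sizes, rng=np.random.default_rng(0)):
    # layer_sizes = [D_i, N_1, ..., N_{L-1}, D_o]; weights W_k and biases b_k as in Eq (3.1)
    return [(rng.standard_normal((n_out, n_in)) * np.sqrt(2.0 / (n_in + n_out)),  # Glorot-type scale
             np.zeros(n_out))
            for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(params, z):
    # Hidden layers: affine map followed by tanh, Eq (3.2); output layer: identity, Eq (3.3)
    for W, b in params[:-1]:
        z = np.tanh(W @ z + b)
    W, b = params[-1]
    return W @ z + b

params = init_params([2, 100, 100, 100, 100, 1])   # input (x, t), four hidden layers, scalar output
print(forward(params, np.array([0.5, 0.1])))        # surrogate value at (x, t) = (0.5, 0.1)
```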
The distribution of the training points has a certain impact on the flexibility of PINN. In unsupervised learning for solving PDEs, the training data consist only of the initial and boundary conditions and the locations of the residual points in the domain; no information from the true solution is used. Figure 1 shows two different ways to select the residual point locations in the domain. Lattice-like training points coincide with finite difference grid points, equispaced in the spatio-temporal domain. Scattered training points can be taken from quasi-random sequences, such as Sobol sequences, or from Latin hypercube sampling, as sketched below. For practical problems, available sets of measurements can also be used to train the neural network model.
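For concreteness, scattered residual points can be drawn by Latin hypercube sampling on a space-time domain; the snippet below uses scipy.stats.qmc and the domain [0, 2π] × [0, 1] as assumed, illustrative choices rather than the exact setup of this paper.

```python
import numpy as np
from scipy.stats import qmc

# Scattered collocation points (x, t) in [0, 2*pi] x [0, 1] via Latin hypercube sampling
sampler = qmc.LatinHypercube(d=2, seed=0)
xt_scattered = qmc.scale(sampler.random(n=2000), l_bounds=[0.0, 0.0], u_bounds=[2 * np.pi, 1.0])

# Lattice-like alternative: equispaced grid points, as in a finite difference discretization
x, t = np.meshgrid(np.linspace(0.0, 2 * np.pi, 50), np.linspace(0.0, 1.0, 40))
xt_lattice = np.column_stack([x.ravel(), t.ravel()])
```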
Let F(x,t) denote the left-hand side of the first equation in (2.1), i.e.,

$$
F(x,t) := \varphi_t + H(\nabla_x \varphi(x,t)).
\tag{3.4}
$$

Following the original idea of PINN in [6], we approximate $\varphi(x,t)$ by a neural network denoted by $\tilde{\varphi}(x,t)$, whose parameters $\theta$ consist of the weights $W^k$ and biases $b^k$. A schematic diagram of the neural network with multiple hidden layers is shown in Figure 2. The residual of (2.1) is defined as

$$
r(x,t;\theta) := \frac{\partial}{\partial t}\tilde{\varphi}(x,t) + H(\nabla_x \tilde{\varphi}(x,t)),
\tag{3.5}
$$

where the partial derivatives of the neural network with respect to the space and time coordinates can be readily computed by automatic differentiation [44]. To learn a good set of candidate parameters $\theta$ of the neural network $\tilde{\varphi}(x,t)$, we minimize the mean squared error via gradient descent for a composite loss function of the general form

$$
\mathcal{L}(\theta) = \lambda_r \mathcal{L}_r(\theta) + \sum_{i=1}^{M} \lambda_i \mathcal{L}_i(\theta),
\tag{3.6}
$$

where $\lambda_r$ and $\lambda_i$ are hyper-parameters used to balance the interplay between the different loss terms. Here, $\mathcal{L}_r(\theta)$ is a loss term that penalizes the PDE residual, and $\mathcal{L}_i(\theta)$, $i=1,\ldots,M$, correspond to data-fit terms (e.g., measurements, initial or boundary conditions). For a typical initial and boundary value problem, these loss functions take the specific form
$$
\mathcal{L}_0(\theta) = \frac{1}{N_0}\sum_{i=1}^{N_0}\left|\tilde{\varphi}(x_0^i,0) - h(x_0^i)\right|^2,
\tag{3.7}
$$

$$
\mathcal{L}_b(\theta) = \frac{1}{N_b}\sum_{i=1}^{N_b}\left|\tilde{\varphi}(x_b^i,t_b^i) - g(x_b^i,t_b^i)\right|^2,
\tag{3.8}
$$

$$
\mathcal{L}_r(\theta) = \frac{1}{N_r}\sum_{i=1}^{N_r}\left|r(x_r^i,t_r^i;\theta)\right|^2,
\tag{3.9}
$$

where $\{(x_0^i,0), h(x_0^i)\}_{i=1}^{N_0}$ denotes the initial data, $\{(x_b^i,t_b^i), g(x_b^i,t_b^i)\}_{i=1}^{N_b}$ denotes the boundary data, and $\{(x_r^i,t_r^i)\}_{i=1}^{N_r}$ denotes a set of collocation points placed randomly inside the domain $\Omega$ in order to minimize the PDE residual. Consequently, $\mathcal{L}_r$ penalizes the equation for not being satisfied on a finite set of collocation points, which constitutes the physics-informed part of the network. The loss terms $\mathcal{L}_0(\theta)$ and $\mathcal{L}_b(\theta)$ correspond to the initial and boundary data, which must be satisfied by the neural network solution.
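To make Eqs (3.5)-(3.9) concrete, the sketch below assembles the residual and data-fit terms with TensorFlow automatic differentiation for a one-dimensional Hamiltonian; the simple Keras model and the example choice H(p) = ½(p+1)² are illustrative assumptions, not the exact implementation of this paper.

```python
import tensorflow as tf

# Surrogate phi_tilde(x, t): 4 tanh hidden layers of width 100, linear output
model = tf.keras.Sequential(
    [tf.keras.layers.Dense(100, activation="tanh") for _ in range(4)]
    + [tf.keras.layers.Dense(1)])

def residual(x, t):
    # r(x, t; theta) = phi_t + H(phi_x), Eq (3.5), with the example Hamiltonian H(p) = 0.5*(p+1)^2
    with tf.GradientTape(persistent=True) as tape:
        tape.watch([x, t])                       # x, t are column tensors of shape (N, 1)
        phi = model(tf.concat([x, t], axis=1))
    phi_x = tape.gradient(phi, x)
    phi_t = tape.gradient(phi, t)
    return phi_t + 0.5 * (phi_x + 1.0) ** 2

def total_loss(x0, h0, xb, tb, gb, xr, tr, lam0=1.0, lamb=1.0):
    # L_0, L_b, L_r from Eqs (3.7)-(3.9), combined as in Eq (3.6) with lambda_r = 1
    l0 = tf.reduce_mean((model(tf.concat([x0, tf.zeros_like(x0)], axis=1)) - h0) ** 2)
    lb = tf.reduce_mean((model(tf.concat([xb, tb], axis=1)) - gb) ** 2)
    lr = tf.reduce_mean(residual(xr, tr) ** 2)
    return lr + lam0 * l0 + lamb * lb, (l0, lb, lr)
```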
The resulting optimization problem leads to finding the minimum of a loss function by optimizing the parameters, i.e., we seek to find
$$
\theta^{*} = \arg\min_{\theta} \mathcal{L}(\theta).
\tag{3.10}
$$
One can solve this minimization problem by the stochastic gradient descent (SGD) algorithm which is widely used in the machine learning community, i.e.,
$$
\theta_{n+1} = \theta_n - \eta\,\nabla_\theta \mathcal{L}(\theta_n)
= \theta_n - \eta\,\lambda_r \nabla_\theta \mathcal{L}_r(\theta_n) - \eta\sum_{i=1}^{M}\lambda_i \nabla_\theta \mathcal{L}_i(\theta_n).
\tag{3.11}
$$

Here $\eta > 0$ is the learning rate and $\mathcal{L}(\theta_n)$ is the loss function at the $n$th iteration; the SGD method is initialized with some starting value $\theta_0$. We see that the constants $\lambda_r$ and $\lambda_i$ effectively rescale the learning rate associated with each loss term. In particular, the weights are updated as

$$
W_{n+1} = W_n - \eta\,\lambda_r \nabla_W \mathcal{L}_r(W_n) - \eta\sum_{i=1}^{M}\lambda_i \nabla_W \mathcal{L}_i(W_n).
\tag{3.12}
$$

An appropriate optimizer such as Adam [45], AdaGrad [46] or L-BFGS [47] can be chosen according to the features of the network.
The weight coefficients of the loss function play an important role in improving the trainability of the neural network; they can be user-defined or tuned automatically. One can obtain the λi by a trial-and-error procedure, yet such manual hyperparameter tuning may not produce satisfactory results. Moreover, the optimal weights have to be reconstructed for each governing equation, which means there is no fixed empirical formula that transfers across different problems. Most importantly, the loss function needs to be tailored to the form of the PDE, and it is impractical to set optimal weights for the different loss terms without sufficient prior knowledge. Wang et al. made some frontier explorations of adaptive weights by utilizing the statistics of the back-propagated gradients [8] and by exploiting the eigenvalues of the NTK during training [9]. Meer et al. introduced a scaling parameter as a loss weight that balances the relative importance of the different constraints [48].
Here we draw motivation from the above works and from the Adam algorithm [45] to derive an adaptive estimate for choosing the weights during training. The Adam algorithm adaptively tunes the learning rate associated with each parameter in the vector θ based on the first- and second-order moments of the back-propagated gradients during training. Following a similar idea, our goal is to adaptively assign to each loss term a weight such that the gradients of the terms are similar in magnitude during back-propagation. We set λr = 1 so that the residual loss generally dominates the other loss terms. For the given initial and boundary loss terms Li, find λ̂i satisfying

$$
\hat{\lambda}_i \,\operatorname{mean}\{|\nabla_{\theta_n} \mathcal{L}_i(\theta_n)|\} = \max\{|\nabla_{\theta_n} \mathcal{L}_r(\theta_n)|\}, \quad i = 0, b,
\tag{3.13}
$$

where $|\cdot|$ denotes the elementwise absolute value and $\operatorname{mean}$ denotes the average of all elementwise values of $|\nabla_{\theta_n} \mathcal{L}_i(\theta_n)|$. It follows that

$$
\hat{\lambda}_i = \frac{\max\{|\nabla_{\theta_n} \mathcal{L}_r(\theta_n)|\}}{\operatorname{mean}\{|\nabla_{\theta_n} \mathcal{L}_i(\theta_n)|\}}, \quad i = 0, b.
\tag{3.14}
$$
Then the weight coefficients λi are updated using the logarithmic mean [49] of the form

$$
\lambda_i^{n+1} = \frac{\lambda_i^{n} - \hat{\lambda}_i}{\ln(\lambda_i^{n}) - \ln(\hat{\lambda}_i)}, \quad i = 0, b,
\tag{3.15}
$$

which admits a numerically stable implementation whose details are given in the Appendix. Moreover, this updating method avoids introducing an additional hyperparameter. This choice of λr and λi leads to the adaptively weighted loss function

$$
\mathcal{L}(\theta) = \frac{1}{N_r}\sum_{i=1}^{N_r}\left|r(x_r^i,t_r^i;\theta)\right|^2
+ \frac{\lambda_0}{N_0}\sum_{i=1}^{N_0}\left|\tilde{\varphi}(x_0^i,0) - h(x_0^i)\right|^2
+ \frac{\lambda_b}{N_b}\sum_{i=1}^{N_b}\left|\tilde{\varphi}(x_b^i,t_b^i) - g(x_b^i,t_b^i)\right|^2.
\tag{3.16}
$$
Thus, we propose the physics-informed neural network based on an adaptively weighted loss function, which assigns an appropriate weight to each term in the loss function during model training; an illustrative schematic is shown in Figure 2. Moreover, the logarithmic mean is employed in updating the weights, which avoids the additional hyperparameter required by the learning rate annealing for PINN (LRA-PINN) [8].
Algorithm 1: AW-PINN algorithm
Step 1: Specify the training set over the whole domain.
  Initial and boundary training data: $\{x_\varphi^i, t_\varphi^i, \varphi^i\}_{i=1}^{N_\varphi}$.
  Residual training points: $\{x_r^i, t_r^i\}_{i=1}^{N_r}$.
Step 2: Construct the neural network $NN(x,t;\theta)$ with an initialization of the parameters $\theta$.
Step 3: Construct the residual network $r$ by substituting the surrogate $\tilde{\varphi}$ into the governing equation using automatic differentiation.
Step 4: Specify the adaptively weighted loss function as in Eq (3.16) and initialize the weight coefficients $\lambda_i$ to 1. Then use a gradient descent algorithm to update the parameters $\theta$ as:
for n=1,…,S do |
(a) Compute the weights ˆλi by (3.14). |
(b) Update the adaptive weight coefficients λi using the logarithmic mean (3.15). |
(c) Update the parameters θ via gradient descent Eq (3.11). |
end for |
As summarized in Algorithm 1, the proposed AW-PINN algorithm assigns an appropriate weight to each term in the loss function, so that the learning rate is adaptively tuned as shown in Eq (3.11) and the gradients of the loss terms are similar in magnitude. The AW-PINN algorithm is a modification of the original PINN algorithm [6] and of the learning rate annealing for PINN [8]. We remark that the adaptive weights in Eqs (3.14) and (3.15) can be updated either at every iteration of the gradient descent loop or at a frequency specified by the user. The algorithm extends readily to loss functions consisting of multiple terms, such as multiple boundary conditions for multivariate problems, since only the gradient statistics in Eqs (3.14) and (3.15) need to be calculated. Moreover, the AW-PINN algorithm can be used to compute the solution of HJ PDEs with different kinds of Hamiltonian H, initial data and boundary conditions, which further confirms the generality of physics-informed neural networks. In particular, if the adaptive weights λr and λi in (3.6) are all taken to be 1, then AW-PINN reduces to the conventional PINN, i.e.,
$$
\mathcal{L}(\theta) = \mathcal{L}_r(\theta) + \sum_{i=1}^{M} \mathcal{L}_i(\theta).
\tag{3.17}
$$
Here we also explore the generality of the PINN algorithm for solving Hamilton-Jacobi equations by exactly embedding Dirichlet or periodic boundary conditions and physical constraints in the neural network architecture.
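To make the weight update of Eqs (3.14) and (3.15) concrete, a sketch of one AW-PINN training step is given below; it reuses the hypothetical model and total_loss helpers from the earlier sketch and is a schematic reading of Algorithm 1, not the authors' released code.

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
lam = {"0": tf.Variable(1.0, trainable=False), "b": tf.Variable(1.0, trainable=False)}

def grad_stats(loss, variables, tape):
    # Max and mean of the elementwise absolute back-propagated gradients of one loss term
    grads = [tf.abs(g) for g in tape.gradient(loss, variables) if g is not None]
    flat = tf.concat([tf.reshape(g, [-1]) for g in grads], axis=0)
    return tf.reduce_max(flat), tf.reduce_mean(flat)

def train_step(data):
    # data = (x0, h0, xb, tb, gb, xr, tr), the initial, boundary and residual training points
    with tf.GradientTape(persistent=True) as tape:
        loss, (l0, lb, lr) = total_loss(*data, lam0=lam["0"], lamb=lam["b"])
    max_r, _ = grad_stats(lr, model.trainable_variables, tape)
    for key, li in (("0", l0), ("b", lb)):
        _, mean_i = grad_stats(li, model.trainable_variables, tape)
        lam_hat = max_r / (mean_i + 1e-16)                      # Eq (3.14)
        # Eq (3.15): logarithmic mean of the old weight and lam_hat
        # (naive form; the numerically stable variant is given in the Appendix)
        lam[key].assign((lam[key] - lam_hat) /
                        (tf.math.log(lam[key]) - tf.math.log(lam_hat)))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```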
In this section, we provide a series of numerical examples to capture the viscosity solution and to illustrate the capability of the proposed AW-PINN algorithm for solving HJ equations with both convex and nonconvex Hamiltonians. The original method introduced in [6] is also included in our experiments for comparison. For the sake of generality, only fully connected feedforward neural networks are considered. Unless otherwise stated, the proposed AW-PINN algorithm uses the following hyper-parameter setup: 4 hidden layers with 100 neurons in each layer, and the optimization procedure is the Adam optimizer with an initial learning rate of 0.001 for 50000 iterations, followed by the L-BFGS-B optimizer, whose training stops when the relative error between two neighboring training steps is less than $\varepsilon = 10^{-8}$. The additional L-BFGS-B training is used to accelerate the convergence rate. Moreover, the AW-PINN algorithm is initialized using the Glorot scheme [50] and implemented in TensorFlow.
For the one-dimensional case, unless otherwise stated, all randomly sampled collocation points inside the computational region are generated using a space-filling Latin hypercube sampling strategy, as shown in Figure 3 for Example 4.1.1.
Example 4.1.1 The variable-coefficient linear equation with a periodic boundary condition is given by

$$
\begin{cases}
\varphi_t + \sin(x)\,\varphi_x = 0, & x \in [0, 2\pi],\ t \in [0, 1],\\
\varphi(x,0) = \sin(x), &\\
\varphi(0,t) = \varphi(2\pi,t). &
\end{cases}
\tag{4.1}
$$

The exact solution is given by

$$
\varphi(x,t) = \sin\!\left(2\arctan\!\left(e^{-t}\tan\!\left(\tfrac{x}{2}\right)\right)\right).
\tag{4.2}
$$
Our goal here is to use this canonical benchmark problem to systematically analyze the performance of the AW-PINN algorithm. A neural network $\tilde{\varphi}$ approximating the solution of (4.1) can now be trained by minimizing the mean squared error loss

$$
\mathcal{L}(\theta) = \mathcal{L}_r(\theta) + \lambda_0 \mathcal{L}_0(\theta) + \lambda_b \mathcal{L}_b(\theta),
\tag{4.3}
$$

where $\mathcal{L}_0(\theta)$ and $\mathcal{L}_r(\theta)$ are defined by Eqs (3.7) and (3.9). The periodic boundary condition imposes the periodicity requirement on the function and its derivatives up to a finite order. Thus, the boundary loss term $\mathcal{L}_b(\theta)$ for the periodic boundary condition can be defined as

$$
\mathcal{L}_b(\theta) = \frac{1}{N_b}\left(\sum_{i=1}^{N_b}\left|\tilde{\varphi}(0,t_b^i) - \tilde{\varphi}(2\pi,t_b^i)\right|^2
+ \sum_{i=1}^{N_b}\left|\frac{\partial}{\partial t}\tilde{\varphi}(0,t_b^i) - \frac{\partial}{\partial t}\tilde{\varphi}(2\pi,t_b^i)\right|^2\right),
\tag{4.4}
$$
where $\{x_b^i, t_b^i\}_{i=1}^{N_b}$ denotes the boundary training data. The training set consists of Nb = 50 boundary data randomly sampled from a uniform grid with spacing δt = 0.01 in [0, 1], N0 = 50 initial data randomly sampled from a uniform grid with spacing δx = 0.01 in [0, 2π], and Nr = 2000 randomly sampled collocation points to enforce Eq (4.1) inside the solution domain. Figure 4 presents the approximate solutions obtained by the proposed AW-PINN algorithm with 50000 gradient descent iterations. The predicted solution obtained by the AW-PINN algorithm shown in Figure 4(b) agrees well with the exact solution in Figure 4(a). From the comparison of the exact, PINN, LRA-PINN and AW-PINN solutions at time t = 1 shown in Figure 4(c), the predicted solutions of all three algorithms agree well with the exact one. However, after 50000 Adam iterations and 9 L-BFGS-B iterations in about 1667.092 seconds, the relative error defined by $\|\varphi(x,t)-\tilde{\varphi}(x,t)\|_{L^2}/\|\varphi(x,t)\|_{L^2}$ for AW-PINN is 6.1164e-04, which is smaller than the 1.4195e-02 obtained by the traditional PINN after 50000 Adam iterations and 278 L-BFGS-B iterations in about 852.0633 seconds, as shown in Table 1. The LRA-PINN algorithm achieves a relative error of 1.4967e-03 after 50000 Adam iterations and 31 L-BFGS-B iterations in about 1994.6314 seconds. Figure 4(d) shows the loss history over the number of iterations, where the loss of AW-PINN decreases faster than the others. It can be deduced that the learning capability of AW-PINN is better, as it greatly improves the convergence rate (accelerating the training), especially at the early stage. It is easy to see from Figure 5(a) that the weight coefficients λ0 and λb change adaptively with the iterations. Besides, the evolution of the loss terms L0(θ), Lb(θ), Lr(θ) over the number of iterations shown in Figure 5(b) indicates that all of them gradually decrease and that the PDE residual loss term is dominant. Table 2 summarizes the results obtained with the different methods and different neural network architectures. Evidently, the relative L2 errors of AW-PINN are smaller than those of the conventional PINN and LRA-PINN. Moreover, the proposed AW-PINN algorithm appears to be more robust with respect to the network architecture.
|  | Conventional PINN | LRA-PINN | AW-PINN |
|---|---|---|---|
| Iteration times (Adam; L-BFGS-B) | 50000; 278 | 50000; 31 | 50000; 9 |
| Relative $L^2$ error | 1.4195e-02 | 1.4967e-03 | 6.1164e-04 |
| Relative $L^\infty$ error | 3.9156e-02 | 3.4360e-03 | 1.9243e-03 |
| Network architecture | Conventional PINN | LRA-PINN | AW-PINN |
|---|---|---|---|
| 30 neurons / 2 hidden layers | 2.8067e-03 | 2.4658e-03 | 1.1468e-03 |
| 60 neurons / 2 hidden layers | 1.3770e-03 | 3.3325e-03 | 1.6954e-03 |
| 120 neurons / 2 hidden layers | 1.8578e-03 | 3.9267e-03 | 1.8192e-03 |
| 30 neurons / 4 hidden layers | 1.5972e-03 | 9.1343e-04 | 1.1114e-03 |
| 60 neurons / 4 hidden layers | 7.1538e-03 | 6.3312e-04 | 3.3524e-04 |
| 120 neurons / 4 hidden layers | 1.3858e-02 | 1.6538e-03 | 9.3797e-04 |
| 30 neurons / 6 hidden layers | 2.6337e-03 | 3.2978e-03 | 2.0124e-03 |
| 60 neurons / 6 hidden layers | 1.2910e-02 | 1.7492e-03 | 1.2216e-03 |
| 120 neurons / 6 hidden layers | 2.4746e-02 | 2.6474e-03 | 8.3961e-04 |
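The relative errors reported in the tables follow the definition given in the text; a small helper like the one below, evaluated on a dense reference grid, is one way they could be computed (an assumed post-processing step, with the common normalization of the L∞ error by the maximum of the exact solution, not specified in the original source).

```python
import numpy as np

def relative_errors(phi_exact, phi_pred):
    # Relative L2 and L-infinity errors of the prediction against the exact solution
    err = phi_pred - phi_exact
    rel_l2 = np.linalg.norm(err) / np.linalg.norm(phi_exact)
    rel_linf = np.max(np.abs(err)) / np.max(np.abs(phi_exact))
    return rel_l2, rel_linf
```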
Example 4.1.2 The strictly convex Hamiltonian is given by
$$
\begin{cases}
\varphi_t + \frac{1}{2}(\varphi_x + 1)^2 = 0, & x \in [0, 2],\ t \in [0, 1.5/\pi^2],\\
\varphi(x,0) = -\cos(\pi x), &\\
\varphi(0,t) = \varphi(2,t), &
\end{cases}
\tag{4.5}
$$
with periodic boundary conditions. A singularity forms at about t = 1/π². The change of variables v = φx + 1 transforms Eq (4.5) into a conservation law, which can be easily solved via the method of characteristics [51]. Here we randomly choose an initial training subset with N0 = 50 from a uniform grid with nx = 201 points in the space domain, a boundary training subset with Nb = 50 from a uniform grid with nt = 101 points in the time domain, and Nr = 2000 collocation points using a space-filling Latin hypercube sampling strategy. Figure 6 compares the exact solution with the solutions of the strictly convex HJ PDE (4.5) predicted by the conventional PINN, LRA-PINN and AW-PINN algorithms. Compared with the exact solution given in Figure 6(a), the network trained by our AW-PINN algorithm, presented in Figure 6(b), predicts the solution very well. The comparison of the exact, conventional PINN, LRA-PINN and AW-PINN solutions after the formation of the singularity, at t = 1.5/π², is given in Figure 6(c); all three algorithms accurately capture the singularity of the solution. Table 3 lists the iteration counts and relative errors of the approximations. Figure 6(d) compares the loss history of the conventional PINN, LRA-PINN and AW-PINN algorithms; the loss of the AW-PINN algorithm converges faster to the global minimum after 20000 iterations, without the occasional large spikes. Figure 7(a) provides a more detailed view of the evolution of the adaptive weights λ0 and λb used to scale the initial and boundary condition losses during AW-PINN training. Figure 7(b) shows the evolution of the mean squared error loss terms L0(θ), Lb(θ), Lr(θ) over the number of iterations, where all loss terms converge gradually. Evidently, the residual loss term dominates the others.
|  | Conventional PINN | LRA-PINN | AW-PINN |
|---|---|---|---|
| Iteration times (Adam; L-BFGS-B) | 50000; 6349 | 50000; 40 | 50000; 22 |
| Relative $L^2$ error | 4.7554e-03 | 3.6710e-03 | 6.7551e-03 |
| Relative $L^\infty$ error | 2.6565e-02 | 1.7579e-02 | 2.2131e-02 |
Example 4.1.3 The one-dimensional Eikonal equation is given by
$$
\begin{cases}
\varphi_t + |\varphi_x| = 0, & x \in [0, 2\pi],\ t \in [0, 1],\\
\varphi(x,0) = \sin(x), &\\
\varphi(0,t) = \varphi(2\pi,t), &
\end{cases}
\tag{4.6}
$$
with periodic boundary conditions. Despite its origins in geometric optics and wave propagation theory, the Eikonal equation is widely applied in science and engineering: it is used to infer 3D surface shapes, to compute distance fields, for image denoising and segmentation in image processing, and for optimal path planning in robotics. It is also used to compute geodesic distances in computer graphics, to calculate travel-time fields in seismology, and for etching, deposition and lithography simulations in semiconductor manufacturing, so studying this equation is of great practical significance. The viscosity solution of this equation has a shock forming in φx at x = π/2 and a rarefaction wave at x = 3π/2. The exact solution can be obtained via the Lax-Hopf formula [52]. Our training data consist of N0 = 50 initial data randomly sampled from a uniform grid with nx = 201 points in [0, 2π], Nb = 50 boundary points randomly sampled from a uniform grid with nt = 101 points in the time domain, and Nr = 2000 collocation points generated with a space-filling Latin hypercube sampling strategy. The top row of Figure 8 shows contour plots of the solution of the Eikonal equation on the x-t domain. Figure 8(c) provides a more detailed visual comparison of the exact, conventional PINN, LRA-PINN and AW-PINN predicted solutions at time t = 1. One can observe that the AW-PINN algorithm is able to capture the kink formed at π/2 and the rarefaction wave at 3π/2. After 50000 Adam iterations and various numbers of L-BFGS-B iterations, the relative prediction error of AW-PINN is 7.6622e-03, improved by one order of magnitude compared to the conventional PINN (2.2495e-02), as shown in Table 4. Figure 8(d) shows that the loss of the AW-PINN algorithm converges faster towards the global minimum, which indicates that the proposed algorithm can train a superior solution more quickly and also has good stability. Figure 9(a) shows the evolution of the adaptive weights λ0 and λb used to scale the initial and boundary condition loss terms during training; the weight λ0 is larger than λb throughout the iterations, and both change more slowly after 20000 iterations. Figure 9(b) shows the mean squared error loss terms L0(θ), Lb(θ), Lr(θ) over the number of iterations. Compared with the results obtained by the conventional PINN and LRA-PINN, the loss terms of AW-PINN converge to a small deterministic value as the iterations increase. The proposed AW-PINN training algorithm can properly balance the interplay between the initial, boundary and residual loss terms, and avoids oscillations at extreme points.
|  | Conventional PINN | LRA-PINN | AW-PINN |
|---|---|---|---|
| Iteration times (Adam; L-BFGS-B) | 50000; 3636 | 50000; 19 | 50000; 16 |
| Relative $L^2$ error | 2.2495e-02 | 3.8931e-02 | 7.6622e-03 |
| Relative $L^\infty$ error | 1.2236e-01 | 1.8352e-01 | 2.9233e-02 |
Example 4.1.4 The nonconvex Hamiltonian is given by
$$
\begin{cases}
\varphi_t - \cos(\varphi_x + 1) = 0, & x \in [-1, 1],\ t \in [0, 1.5/\pi^2],\\
\varphi(x,0) = -\cos(\pi x), &\\
\varphi(-1,t) = \varphi(1,t), &
\end{cases}
\tag{4.7}
$$
with periodic boundary conditions. We note that the exact solution can be given by φ(x,t) = −cos(πx0) + t[(v − 1)sin v + cos v], with v = 1 + π sin(πx0), using the method of characteristics. We approximate the solution by a fully connected neural network NN(x,t;θ) with 8 hidden layers of 100 neurons each. Here we choose randomly sampled training points with N0 = 50, Nb = 50 and Nr = 2000 as the training set. The contour plot of the solution obtained by the AW-PINN algorithm, shown in Figure 10(b), agrees well with the exact solution in Figure 10(a). Figure 10(c) provides a more detailed visual comparison of the exact, conventional PINN, LRA-PINN and AW-PINN predicted solutions at time t = 1.5/π². The predicted solution of the AW-PINN training algorithm outperforms the other two, and its relative L2 error of 3.7065e-03 is an order of magnitude smaller than that of the conventional PINN (5.9719e-02). The numbers of iterations and the relative errors of the three training algorithms are given in Table 5; the proposed algorithm trains the solution more accurately with fewer iterations. The loss of AW-PINN decreases faster and converges more quickly, as shown in Figure 10(d). Figure 11 illustrates the evolution of the adaptive weights λ0 and λb and of the mean squared error loss terms L0(θ), Lb(θ), Lr(θ). We observe that the loss terms converge and that the residual loss term dominates the others. This example shows that the AW-PINN algorithm can approximate the solution even when the Hamiltonian is not convex.
|  | Conventional PINN | LRA-PINN | AW-PINN |
|---|---|---|---|
| Iteration times (Adam; L-BFGS-B) | 50000; 19342 | 50000; 17106 | 50000; 9402 |
| Relative $L^2$ error | 5.9719e-02 | 3.8731e-02 | 3.7065e-03 |
| Relative $L^\infty$ error | 2.6706e-01 | 2.0716e-01 | 2.6123e-02 |
Table 6 displays the relative L2 error of the AW-PINN method for the different examples and for different network architectures obtained by varying the number of hidden layers and the number of neurons per layer. The proposed AW-PINN algorithm appears to be more robust with respect to the network architecture and shows a consistent trend of improving prediction accuracy as the number of hidden layers and neurons is increased. As can be seen from the loss histories in Figures 4(d)-10(d), the loss at the first training step is very large, possibly because of the Xavier initialization; however, the loss decreases to a stable interval within a few steps. The numerical examples verify that our AW-PINN algorithm is stable and that the random initialization has little effect on the numerical results.
| Network architecture | Example 4.1.1 | Example 4.1.2 | Example 4.1.3 | Example 4.1.4 |
|---|---|---|---|---|
| 30 neurons / 2 hidden layers | 1.1468e-03 | 3.6212e-02 | 6.8436e-03 | 4.8704e-02 |
| 60 neurons / 2 hidden layers | 1.6954e-03 | 1.1317e-02 | 1.6412e-02 | 9.2931e-03 |
| 120 neurons / 2 hidden layers | 1.8192e-03 | 1.4429e-02 | 1.0010e-02 | 2.4058e-02 |
| 30 neurons / 4 hidden layers | 1.1114e-03 | 5.4070e-03 | 1.3936e-02 | 8.9544e-02 |
| 60 neurons / 4 hidden layers | 3.3524e-04 | 2.6192e-03 | 5.8579e-03 | 1.5446e-02 |
| 120 neurons / 4 hidden layers | 9.3797e-04 | 3.0318e-03 | 9.4430e-03 | 5.4431e-02 |
| 30 neurons / 6 hidden layers | 2.0124e-03 | 8.7984e-03 | 4.5297e-03 | 1.2776e-02 |
| 60 neurons / 6 hidden layers | 1.2216e-03 | 2.1817e-03 | 7.4221e-03 | 3.4884e-03 |
| 120 neurons / 6 hidden layers | 8.3961e-04 | 4.0882e-03 | 3.3320e-03 | 3.8391e-02 |
Example 4.2.1 The two-dimensional convex Hamiltonian is given by
$$
\begin{cases}
\varphi_t + \frac{1}{2}(\varphi_x + \varphi_y + 1)^2 = 0, & (x,y) \in \Omega,\ t \in [0, 1.5/\pi^2],\\
\varphi(x,y,0) = -\cos\!\left(\pi(x+y)/2\right), & (x,y) \in \Omega,
\end{cases}
\tag{4.8}
$$
with periodic boundary conditions on the domain Ω = [0, 2π] × [0, 2π]. This problem can be reduced to a one-dimensional problem via the coordinate transformation ζ = (x + y)/2, η = (x − y)/2, so we can use the one-dimensional exact solution to analyze the predicted results. We approximate the solution by a fully connected neural network NN(x,y,t;θ) with 5 hidden layers of 200 neurons each. The small data set consists of uniformly spaced grid points in the domain Ω, with nx = ny = 41 space grid points and nt = 41 time grid points. Here we choose randomly sampled points with N0 = 400, Nb = 4000 and Nr = 30000. We present the contour plot of the solution after the singularity formation in Figure 12. Using the same neural architecture hyperparameters, the solutions trained by the AW-PINN and the conventional PINN algorithms are both consistent with the exact one; however, the relative L2 error of 9.8475e-03 for AW-PINN is smaller than the 1.2731e-02 of the conventional PINN. What is more, the loss decreases faster and convergence is accelerated, as shown in Figure 12(d). Finally, Figure 13 presents the evolution of the adaptive weights λ0 and λb and of the mean squared error loss terms L0(θ), Lb(θ), Lr(θ) over 50000 Adam iterations. It can be seen from Figure 13(a) that although the weights λ0 and λb are very large at the first training step, they decrease to a stable interval within a few steps. Figure 13(b) shows that the residual loss term plays a dominant role in the total loss.
Example 4.2.2 The two-dimensional nonlinear equation is given by
$$
\begin{cases}
\varphi_t + \varphi_x \varphi_y = 0, & (x,y) \in \Omega,\ t \in [0, 0.5],\\
\varphi(x,y,0) = \sin(x) + \cos(y), & (x,y) \in \Omega,
\end{cases}
\tag{4.9}
$$
with Dirichlet boundary conditions on the domain Ω = [−π, π]². This is a genuinely nonlinear problem with a nonconvex Hamiltonian. The exact solution is given implicitly by φ(x,y,t) = −cos(q)sin(r) + sin(q) + cos(r), where x = q − t sin(r) and y = r + t cos(q). We approximate the solution by a fully connected neural network NN(x,y,t;θ) with 5 hidden layers of 200 neurons each. The small data set consists of uniformly spaced grid points in the domain Ω, where the numbers of grid points are nx = ny = 41 and nt = 41. Here we choose N0 = 1681, Nb = 4000 and Nr = 30000 randomly sampled collocation points. Figure 14 shows a detailed visual comparison of the exact solution and the solutions predicted by the conventional PINN and AW-PINN at time t = 0.5. Compared with the conventional PINN, the loss of the AW-PINN training algorithm decreases faster and convergence is accelerated after 30000 iterations, as shown in Figure 14(d). Moreover, the relative L2 error is 1.2702e-01, which is smaller than the 1.2937e-01 of the conventional PINN. Finally, Figure 15 shows the evolution of the adaptive weights λ0 and λb and of the mean squared error loss terms L0(θ), Lb(θ), Lr(θ) over 50000 iterations. The variation range of the weights becomes smaller as the number of iterations increases, and the residual loss term dominates the others.
Example 4.2.3 The shape-from-shading problem is given by
$$
\begin{cases}
\varphi_t + I(x,y)\sqrt{1 + \varphi_x^2 + \varphi_y^2} - 1 = 0, & (x,y) \in \Omega,\ t \in [0, 1],\\
\varphi(x,y,0) = \dfrac{4096}{9}\,xy(1-x)(1-y)\left(x - \tfrac{1}{2}\right)\left(y - \tfrac{1}{2}\right), & (x,y) \in \Omega,
\end{cases}
\tag{4.10}
$$
with φ(x,y,t) = 0 at the boundary of the domain Ω = [0,1]². Here I(x,y) is the brightness value, with 0 < I(x,y) ≤ 1; specifically, we take
$$
I(x,y) = \frac{1}{\sqrt{1 + \left(2\pi\cos(2\pi x)\sin(2\pi y)\right)^2 + \left(2\pi\sin(2\pi x)\cos(2\pi y)\right)^2}}.
\tag{4.11}
$$
The steady-state solution of Eq (4.10) is the shape function that has brightness I under vertical lighting; see [53]. According to [54], the shape-from-shading problem (4.10)-(4.11) admits multiple solutions, all of which satisfy the homogeneous boundary condition. Similar to traditional numerical methods [55], additional "boundary conditions" are therefore introduced at the points where I(x,y) = 1. In our problem, we impose the following "boundary conditions":
$$
\varphi\!\left(\tfrac{1}{4},\tfrac{1}{4}\right) = \varphi\!\left(\tfrac{3}{4},\tfrac{3}{4}\right) = 1, \quad
\varphi\!\left(\tfrac{1}{4},\tfrac{3}{4}\right) = \varphi\!\left(\tfrac{3}{4},\tfrac{1}{4}\right) = -1, \quad
\varphi\!\left(\tfrac{1}{2},\tfrac{1}{2}\right) = 0,
\tag{4.12}
$$
so that the exact solution is φ(x,y) = sin(2πx)sin(2πy). We approximate the solution by a fully connected neural network NN(x,y,t;θ) with 5 hidden layers of 200 neurons each. The small data set consists of uniformly spaced grid points in the domain Ω, where nx = ny = 41 and nt = 161 are the numbers of space and time grid points. Here we take randomly sampled collocation points with N0 = 1681, Nb = 4000 and Nr = 30000. Figure 16 shows a detailed visual comparison of the exact solution and the solutions predicted by the conventional PINN and the AW-PINN at time t = 1. The loss of the AW-PINN training algorithm decreases faster and convergence is accelerated after 30000 iterations, as shown in Figure 16(d), and the relative L2 error of 1.7686e-02 is improved by one order of magnitude compared to the conventional PINN (1.5982e-01). Finally, Figure 17 presents the evolution of the adaptive weights λ0 and λb and of the mean squared error loss terms L0(θ), Lb(θ), Lr(θ) over 50000 iterations. For this problem, the residual loss term dominates the others as usual; however, to make the total loss decrease as the number of iterations increases, the weights of the loss terms corresponding to the initial and boundary conditions change more frequently here.
There are some valuable works on developing specific neural network architectures to solve certain classes of HJ PDEs [42,43] whose Hamiltonian is convex and whose viscosity solution can be defined by the Lax-Oleinik formula. However, the performance of the physics-informed neural network algorithm had not been fully investigated for time-dependent HJ PDEs, such as equations with nonconvex Hamiltonians whose solutions contain shocks and rarefaction waves. The original PINN formulation [6] is trained to solve unsupervised learning tasks by minimizing a mean squared error loss with a physics-informed constraint. This training method is well suited to certain types of problems, such as those affected by the curse of dimensionality, but the original PINN also has difficulties in accurately approximating solutions with nonconvex Hamiltonians. To further improve the predictive accuracy, we have proposed the AW-PINN algorithm, in which the weights of the different loss terms are adaptively updated so that the residual loss term keeps dominating the others. This approach improves on the learning rate annealing for physics-informed neural networks of Wang et al. [8]: it updates the weights using the logarithmic mean and provides better predictive accuracy for HJ PDEs. We examined the proposed training algorithm on time-dependent HJ PDEs, including a variable-coefficient linear equation, a strictly convex Hamiltonian, the Eikonal equation, a nonconvex Hamiltonian, a two-dimensional Burgers-type Hamiltonian, a two-dimensional nonlinear equation with a nonconvex Hamiltonian, and the shape-from-shading problem. The series of numerical experiments shows that the proposed algorithm effectively achieves noticeable improvements in predictive accuracy and increases the convergence rate. Although the numerical results verify that this training algorithm can learn accurate approximate solutions of HJ PDEs and yields practical improvements, a theoretical analysis of the proposed AW-PINN algorithm deserves further research. We will investigate critical applications in computational science and engineering to better understand PINN, and these numerical results may provide some inspiration for subsequent theoretical work. We also note that the AW-PINN algorithm can enforce periodic boundary conditions exactly by imposing the periodicity requirement on the function and its derivatives, and that the neural network architecture can be modified to exactly impose Dirichlet and periodic boundary conditions.
This work is supported by National Natural Science Foundation of China (Grant Nos. 11871399, 11901460) and China Postdoctoral Science Foundation (Grant No. 2022M712600), which are gratefully acknowledged.
The authors declare there is no conflict of interest.
The logarithmic mean of $a_l$ and $a_r$ is defined as

$$
a^{\ln}(a_l, a_r) = \frac{a_l - a_r}{\ln(a_l) - \ln(a_r)}.
$$

However, this expression is not numerically well behaved when $a_l \to a_r$. To overcome this, we write the logarithmic mean in another form. Let $\zeta = a_l / a_r$, so that

$$
a^{\ln}(a_l, a_r) = \frac{a_l + a_r}{\ln \zeta}\,\frac{\zeta - 1}{\zeta + 1},
$$

where the expansion

$$
\ln(\zeta) = 2\left(\frac{\zeta-1}{\zeta+1} + \frac{1}{3}\frac{(\zeta-1)^3}{(\zeta+1)^3} + \frac{1}{5}\frac{(\zeta-1)^5}{(\zeta+1)^5} + \frac{1}{7}\frac{(\zeta-1)^7}{(\zeta+1)^7} + O\!\left(\frac{(\zeta-1)^9}{(\zeta+1)^9}\right)\right)
$$

is used to obtain a numerically well-behaved logarithmic mean. The subroutine for computing the logarithmic mean is as follows: let

$$
\zeta = \frac{a_l}{a_r}, \qquad f = \frac{\zeta - 1}{\zeta + 1}, \qquad u = f \cdot f,
$$

and

$$
F =
\begin{cases}
1.0 + u/3.0 + u\cdot u/5.0 + u\cdot u\cdot u/7.0, & \text{if } u < \varepsilon,\\
\ln(\zeta)/(2.0\,f), & \text{otherwise},
\end{cases}
$$

so that $a^{\ln}(a_l, a_r) = \dfrac{a_l + a_r}{2F}$, with $\varepsilon = 10^{-2}$.
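A direct transcription of this subroutine into Python might look like the following (an illustrative sketch of the formulas above, not code taken from the paper):

```python
import numpy as np

def log_mean(a_l, a_r, eps=1e-2):
    # Numerically stable logarithmic mean a^ln(a_l, a_r) = (a_l - a_r) / (ln a_l - ln a_r)
    zeta = a_l / a_r
    f = (zeta - 1.0) / (zeta + 1.0)
    u = f * f
    if u < eps:
        # Truncated series for ln(zeta) / (2 f), used near zeta = 1
        F = 1.0 + u / 3.0 + u * u / 5.0 + u * u * u / 7.0
    else:
        F = np.log(zeta) / (2.0 * f)
    return (a_l + a_r) / (2.0 * F)
```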
[1] |
R. Müller, M. Vette, A. Geenen, Skill-based dynamic task allocation in human-robot-cooperation with the example of welding application, Procedia Manuf., 11 (2017), 13–21. https://doi.org/10.1016/j.promfg.2017.07.113 doi: 10.1016/j.promfg.2017.07.113
![]() |
[2] |
E. Olaiz, J. Zulaika, F. Veiga, M. Puerto, A. Gorrotxategi, Adaptive fixturing system for the smart and flexible positioning of large volume workpieces in the wind-power sector, Procedia CIRP, 21 (2014), 183–188. https://doi.org/10.1016/j.procir.2014.03.193 doi: 10.1016/j.procir.2014.03.193
![]() |
[3] | L. Joyanes, Industria 4.0: La Cuarta Revolución Industrial, 1st edition, Alfaomega Grupo Editor, 2017. |
[4] |
V. Ivanov, F. Botko, I. Dehtiarov, M. Kočiško, A. Evtuhov, I. Pavlenko, et al., Development of flexible fixtures with incomplete locating, Machines, 10 (2022), 493. https://doi.org/10.3390/machines10070493 doi: 10.3390/machines10070493
![]() |
[5] | M. Jones, L. Zarzycki, G. Murray, Does industry 4.0 pose a challenge for the SME machine builder? A case study and reflection of readiness for a UK SME, in Precision Assembly in the Digital Age, Springer, (2019), 183–197. https://doi.org/10.1007/978-3-030-05931-6_17 |
[6] |
Y. Chen, A. Klingler, K. Fu, L. Ye, 3D printing and modelling of continuous carbon fibre reinforced composite grids with enhanced shear modulus, Eng. Struct., 286 (2023), 116165. https://doi.org/10.1016/j.engstruct.2023.116165 doi: 10.1016/j.engstruct.2023.116165
![]() |
[7] | C. Bodenstein, H. M. Sauer, F. Fernandes, E. Dörsam, Assessing and improving edge roughness in pad-printing by using outlines in a one-step exposure process for the printing form, J. Print Media Technol. Res., 8 (2019), 19–27. |
[8] |
T. S. S. Saikumar, Bhanumurthysoppari, C. R. Bandaru, Design and simulation of automated pad printing machine using automation studio, Mater. Today Proc., 45 (2021), 2871–2877. https://doi.org/10.1016/j.matpr.2020.11.813 doi: 10.1016/j.matpr.2020.11.813
![]() |
[9] |
E. Hrušková, M. Matúšová, Š. Václav, Design of construction and controlling of automation technics in order to improve skills of students, Multidiscip. Aspects Prod. Eng., 4 (2021), 120–131. https://doi.org/10.2478/mape-2021-0011 doi: 10.2478/mape-2021-0011
![]() |
[10] | H. Hashemi, A. M. Shaharoun, S. Izman, B. Ganji, Z. Namazian, S. Shojaei, Fixture design automation and optimization techniques: Review and future trends, Int. J. Eng. Trans. B, 27 (2014), 1787–1794. |
[11] |
A. F. Casas Pulido, O. Bohórquez, O. A. González-Estrada, J. Quiroga, A. Pertuz, Adhesive joints for composite materials produced by additive manufacturing, J. Phys. Conf. Ser., 1386 (2019), 012005. https://doi.org/10.1088/1742-6596/1386/1/012005 doi: 10.1088/1742-6596/1386/1/012005
![]() |
[12] |
I. A. Daniyan, A. O. Adeodu, B. I. Oladapo, O. L. Daniyan, O. R. Ajetomobi, Development of a reconfigurable fixture for low weight machining operations, Cogent Eng., 6 (2019), 1579455. https://doi.org/10.1080/23311916.2019.1579455
[13] A. Gameros, S. Lowth, D. Axinte, A. Nagy-Sochacki, O. Craig, H. R. Siller, State of the art in fixture systems for the manufacture and assembly of rigid components: A review, Int. J. Mach. Tools Manuf., 123 (2017), 1–21. https://doi.org/10.1016/j.ijmachtools.2017.07.004
[14] Y. Kılıçarslan, Modular Fixture Design for CNC Machining Centers, Master thesis, Middle East Technical University, Ankara, Turkey, 2019.
[15] P. Raval, N. P. Maniar, S. Thaker, P. Thanki, Industry 4.0 technology: Design and manufacturing of modular fixture, in Recent Advances in Mechanical Infrastructure, Springer, (2021), 411–417. https://doi.org/10.1007/978-981-33-4176-0_35
[16] H. Tohidi, T. AlGeddawy, Change management in modular assembly systems to correspond to product geometry change, Int. J. Prod. Res., 57 (2019), 6048–6060. https://doi.org/10.1080/00207543.2018.1559374
[17] A. Stornelli, S. Ozcan, C. Simms, Advanced manufacturing technology adoption and innovation: A systematic literature review on barriers, enablers, and innovation types, Res. Policy, 50 (2021), 104229. https://doi.org/10.1016/j.respol.2021.104229
[18] A. Sachdeva, R. Agrawal, C. Chaudhary, D. Siddhpuria, D. Kashyap, S. Timung, Sustainability of 3D printing in industry 4.0: A brief review, in 3D Printing Technology for Water Treatment Applications, Elsevier, (2023), 229–251. https://doi.org/10.1016/B978-0-323-99861-1.00010-2
[19] D. R. Harish, T. Gowtham, A. Arunachalam, M. S. Narassima, D. Lamy, M. Thenarasu, Productivity improvement by application of simulation and lean approaches in an multimodel assembly line, Proc. Inst. Mech. Eng. Part B: J. Eng. Manuf., 2023. https://doi.org/10.1177/09544054231182264
[20] G. Schuh, G. Bergweiler, F. Fiedler, V. Slawik, C. Ahues, A review of data-based methods for the development of an adaptive engineering change system for automotive body shop, in Proceedings of the Conference on Production Systems and Logistics, Hannover: publish-Ing., (2021), 359–369. https://doi.org/10.15488/11295
[21] H. Radhwan, M. S. M. Effendi, M. F. Rosli, Z. Shayfull, K. N. Nadia, Design and analysis of jigs and fixtures for manufacturing process, IOP Conf. Ser.: Mater. Sci. Eng., 551 (2019), 012028. https://doi.org/10.1088/1757-899X/551/1/012028
[22] N. Ma, H. Huang, H. Murakawa, Effect of jig constraint position and pitch on welding deformation, J. Mater. Process. Technol., 221 (2015), 154–162. https://doi.org/10.1016/j.jmatprotec.2015.02.022
[23] H. C. Pandit, Jigs and fixtures in manufacturing, Int. J. Eng. Res. Appl., 12 (2022), 50–55.
[24] S. Weckx, S. Robyns, J. Baake, E. Kikken, R. De Geest, M. Birem, et al., A cloud-based digital twin for monitoring of an adaptive clamping mechanism used for high performance composite machining, Procedia Comput. Sci., 200 (2022), 227–236. https://doi.org/10.1016/j.procs.2022.01.221
[25] K. Ju, C. Duan, J. Kong, Y. Chen, Y. Sun, Clamping deformation of thin circular workpiece with complex boundary in vacuum fixture system, Thin-Walled Struct., 171 (2022), 108777. https://doi.org/10.1016/j.tws.2021.108777
[26] S. Mousavi, M. Guskov, J. Duchemin, P. Lorong, Clamping modeling in automotive flexible workpieces machining, Procedia CIRP, 101 (2021), 134–137. https://doi.org/10.1016/j.procir.2021.04.004
[27] H. Tohidi, T. AlGeddawy, Planning of modular fixtures in a robotic assembly system, Procedia CIRP, 41 (2016), 252–257. https://doi.org/10.1016/j.procir.2015.12.090
[28] M. Matejic, B. Tadic, M. Lazarevic, M. Misic, D. Vukelic, Modelling and simulation of a novel modular fixture for a flexible manufacturing system, Int. J. Simul. Model., 17 (2018), 18–29. https://doi.org/10.2507/IJSIMM17(1)407
[29] K. Yu, S. Wang, Y. Wang, Z. Yang, A flexible fixture design method research for similar automotive body parts of different automobiles, Adv. Mech. Eng., 10 (2018). https://doi.org/10.1177/1687814018761272
[30] W. T. Seloane, K. Mpofu, B. I. Ramatsetse, D. Modungwa, Conceptual design of intelligent reconfigurable welding fixture for rail car manufacturing industry, Procedia CIRP, 91 (2020), 583–593. https://doi.org/10.1016/j.procir.2020.02.217
[31] J. Villena Toro, A. Wiberg, M. Tarkian, Application of optimized convolutional neural network to fixture layout in automotive parts, Int. J. Adv. Manuf. Technol., 126 (2023), 339–353. https://doi.org/10.1007/s00170-023-10995-0
[32] L. Gong, H. Söderlund, L. Bogojevic, X. Chen, A. Berce, Å. Fast-Berglund, et al., Interaction design for multi-user virtual reality systems: An automotive case study, Procedia CIRP, 93 (2020), 1259–1264. https://doi.org/10.1016/j.procir.2020.04.036
[33] K. J. Jonsson, R. Stolt, F. Elgh, A case-based reasoning method including tooling function for case retrieval and reuse in stamping tooling design, Comput. Aided Des. Appl., 20 (2023), 839–855. https://doi.org/10.14733/cadaps.2023.839-855
[34] J. P. Cardona, J. J. Leal, J. U. Castellanos, J. E. Ustariz, Soluciones analíticas y numéricas de esfuerzos mecánicos en placas rectangulares isotrópicas [Analytical and numerical solutions of mechanical stresses in isotropic rectangular plates], Inf. Tecnol., 32 (2021), 13–24. https://doi.org/10.4067/S0718-07642021000600013
[35] L. Croppi, N. Grossi, A. Scippa, G. Campatelli, Fixture optimization in turning thin-wall components, Machines, 7 (2019), 68. https://doi.org/10.3390/machines7040068
[36] B. Zhu, Z. Mu, W. He, L. Fan, G. Zhao, Y. Yang, Research on clamping action control technology for floating fixtures, Materials, 15 (2022), 5571. https://doi.org/10.3390/ma15165571
[37] A. R. Aderiani, M. Hallmann, K. Wärmefjord, B. Schleich, R. Söderberg, S. Wartzack, Integrated tolerance and fixture layout design for compliant sheet metal assemblies, Appl. Sci., 11 (2021), 1646. https://doi.org/10.3390/app11041646
[38] Y. Chen, K. Fu, B. Jiang, Modelling localised progressive failure of composite sandwich panels under in-plane compression, Thin-Walled Struct., 184 (2023), 110552. https://doi.org/10.1016/j.tws.2023.110552
[39] M. M. Rashid, A. A. Khan, M. Usman, M. Ayub, B. Rustam, Optimization of modular fixture layout by minimizing work-piece deformation, Pak. J. Eng. Technol., 4 (2021), 43–51. https://doi.org/10.51846/vol4iss1pp43-51
[40] F. Veiga, T. Bhujangrao, A. Suárez, E. Aldalur, I. Goenaga, D. Gil-Hernandez, Validation of the mechanical behavior of an aeronautical fixing turret produced by a design for additive manufacturing (DfAM), Polymers, 14 (2022), 2177. https://doi.org/10.3390/polym14112177
[41] Y. Xu, W. Zhang, X. Wei, B. Huang, Research on image quality control technology of pad printing, in China Academic Conference on Printing and Packaging, Springer, (2023), 163–169. https://doi.org/10.1007/978-981-19-9024-3_22
[42] K. C. Arredondo-Soto, J. Blanco-Fernández, M. A. Miranda-Ackerman, M. M. Solís-Quinteros, A. Realyvásquez-Vargas, J. L. García-Alcaraz, A plan-do-check-act based process improvement intervention for quality improvement, IEEE Access, 9 (2021), 132779–132790. https://doi.org/10.1109/ACCESS.2021.3112948
[43] A. Al Aboud, E. Dörsam, D. Spiehl, Investigation of printing pad geometry by using FEM simulation, J. Print Media Technol. Res., 9 (2020), 81–93.
[44] E. Pessot, A. Zangiacomi, C. Battistella, V. Rocchi, A. Sala, M. Sacco, What matters in implementing the factory of the future: Insights from a survey in European manufacturing regions, J. Manuf. Technol. Manag., 32 (2021), 795–819. https://doi.org/10.1108/JMTM-05-2019-0169
[45] T. Masood, P. Sonntag, Industry 4.0: Adoption challenges and benefits for SMEs, Comput. Ind., 121 (2020), 103261. https://doi.org/10.1016/j.compind.2020.103261
[46] M. Sanchez, E. Exposito, J. Aguilar, Industry 4.0: Survey from a system integration perspective, Int. J. Comput. Integr. Manuf., 33 (2020), 1017–1041. https://doi.org/10.1080/0951192X.2020.1775295
[47] S. H. Moon, Industry 4.0 for advanced manufacturing and its implementation, Eurasian J. Anal. Chem., 13 (2018), 491–497.
[48] B. Tjahjono, C. Esplugues, E. Ares, G. Pelaez, What does industry 4.0 mean to supply chain, Procedia Manuf., 13 (2017), 1175–1182. https://doi.org/10.1016/j.promfg.2017.09.191
[49] X. Li, S. Yu, Y. Lei, N. Li, B. Yang, Intelligent machinery fault diagnosis with event-based camera, IEEE Trans. Ind. Inform., 2023. https://doi.org/10.1109/TII.2023.3262854
[50] K. T. Ulrich, S. D. Eppinger, M. C. Yang, Product Design and Development, 7th edition, McGraw-Hill, 2019.
[51] V. Balachandran, Design of Jigs, Fixtures and Press Tools, Notion Press, 2015.
[52] Protel 99 SE Training Manual, PCB Design, 2001. Available from: https://www.mikrocontroller.net/attachment/17909/Protel_99_SE_Traning_Manual_PCB_Design.pdf.
[53] A. Corrado, W. Polini, Tolerance analysis tools for fixture design: A comparison, Procedia CIRP, 92 (2020), 112–117. https://doi.org/10.1016/j.procir.2020.05.174
[54] MatWeb, Materials Information Resource, 2023. Available from: https://matweb.com/.
[55] M. Dixit, V. Mathur, S. Gupta, M. Baboo, K. Sharma, N. S. Saxena, Morphology, miscibility and mechanical properties of PMMA/PC blends, Phase Transitions, 82 (2009), 866–878. https://doi.org/10.1080/01411590903478304
[56] Y. Chen, L. Ye, H. Dong, Lightweight 3D carbon fibre reinforced composite lattice structures of high thermal-dimensional stability, Compos. Struct., 304 (2023), 116471. https://doi.org/10.1016/j.compstruct.2022.116471
[57] R. C. Hibbeler, Mechanics of Materials, 11th edition, Pearson, 2022.
[58] S. Jazouli, W. Luo, F. Brémand, T. Vu-Khanh, Nonlinear creep behavior of viscoelastic polycarbonate, J. Mater. Sci., 41 (2006), 531–536. https://doi.org/10.1007/s10853-005-2276-1