The Boolean satisfiability (SAT) problem is a classical issue in computational complexity theory and has been a significant research subject in computer science and artificial intelligence since the 1970s [1]. In 1971, S. A. Cook [2] proved that SAT is the first NP-complete problem, meaning that any problem in NP can be reduced to SAT in polynomial time. The SAT problem serves as a benchmark for the difficulty of an entire category of problems and lies at the core of the NP-complete problems. It plays a crucial role in various areas of computer science, including theoretical computer science, complexity theory, cryptosystems, and artificial intelligence [3,4,5,6]. With advances in computer hardware performance and algorithm design, traditional SAT solvers have become effective in many practical applications [7,8,9,10]. However, as problems grow in size and complexity, traditional methods often face challenges such as inefficiency and high consumption of computational resources. This has prompted researchers to explore new solution methods and techniques. Among these, the discrete Hopfield neural network (DHNN) [11], a classical neural network model, has shown significant potential and effectiveness in solving combinatorial optimization problems since its inception. Hopfield [11] demonstrated the stability of the network dynamics, highlighting that the evolution of network states is essentially a process of energy minimization: when the association weights are symmetric, the system reaches a stable state. This stable equilibrium point aligns with the correct storage state, providing a clear physical explanation for associative memory. By emphasizing the collective function of neurons from a systems perspective, the network offers preliminary insights into the nature of associative memory. Due to its robust memory capabilities and parallel processing power, the DHNN is particularly effective in addressing combinatorial optimization problems such as SAT [12,13,14].
In the study of SAT problems, the 3-satisfiability (3SAT) logic has received significant attention from researchers because higher-order Boolean SAT can be converted or reduced to 3SAT form [15]. In the 3SAT problem, each clause contains three literals, making it more complex and closer to practical logic constraint problems. To address the 3SAT problem, researchers have mapped the variables and clauses of a Boolean formula into the neurons and energy functions of a discrete Hopfield network, in which each variable and clause is encoded in the neuron states and connection weights [16,17,18]. A satisfying solution to the Boolean formula is then found by adjusting the neuron states to minimize the energy function. This method of solving the 3SAT problem in a DHNN is referred to as the DHNN-3SAT model. The DHNN-3SAT model has garnered extensive attention and research interest due to its significant improvement in solving ability and effectiveness on 3SAT problems [19,20]. Early research efforts focused on basic discrete Hopfield network structures, utilizing simple connection weights and update rules. As research progressed, scholars proposed various improvement and optimization strategies to enhance the network's performance and efficiency. In 1992, Wan Abdullah successfully integrated special logic programming as symbolic rules into a DHNN [21], and in 2011, Sathasivam and Abdullah extended this approach and formally named it the Wan Abdullah method (WA method) [22]. In 2014, Sathasivam et al. embedded higher-order SAT into the DHNN [23]. Kasihmuddin et al. [24] applied k-satisfiability logic programming in the DHNN. In 2017, Mansor et al. [25] demonstrated a hybrid of the DHNN and an artificial immune system for the 3SAT problem. Subsequently, Kasihmuddin et al. [26] proposed a genetic algorithm for k-satisfiability logic programming based on the DHNN. In 2021, Mansor and Sathasivam [12] proposed an optimal performance index for DHNN-3SAT. In 2023, Azizan and Sathasivam [27] proposed a fuzzy-logic 3SAT model of the DHNN. However, as researchers delved deeper into the DHNN-SAT model, they found that its computational efficiency is not optimal for large-scale problems due to the inherent limitations of the DHNN, including a tendency to fall into local minima. To address these issues, researchers have been integrating heuristic algorithms into the optimization process [28,29,30,31] to enhance the accuracy of the DHNN-SAT model. Currently, these research methods achieve high global minimum ratios in DHNN-SAT models with few neurons. By adjusting the structure and parameters of the neural network, researchers [32] have been exploring various model variations and optimization strategies to further enhance the performance and generalizability of the model. These efforts not only offer a new perspective and approach to understanding and solving SAT problems but also make significant contributions and provide inspiration for the application of DHNNs in combinatorial optimization and discrete problem-solving.
Although the DHNN-3SAT model has been successful in addressing certain problems, it still has some challenges and limitations. First, the model's computational complexity and storage requirements may increase significantly with the problem size, leading to performance issues when dealing with large-scale SAT problems. Second, the model's training and optimization process may be sensitive to parameter tuning and initialization, necessitating more experimental validation and tuning. Third, it may take a long time to reach a stable solution on complex problems, which can impact its practical application in engineering and other fields. Lastly, in real-life scenarios, the constraints of SAT problems often change over time; as large-scale constraints are added, removed, and updated, the network must be redesigned and a large number of redundant computations are generated, ultimately limiting the traditional DHNN-3SAT model's performance.
To address the changing constraints of the SAT problem and the increasing size and complexity of the network, this paper proposes a WA method based on basic logical clauses. This method reuses information about the synaptic weights of the original SAT problem in the DHNN, leading to significant savings in repetitive calculations. In addition, to tackle the rapidly expanding solution space caused by increasing Boolean variables and logical clauses, and the traditional DHNN-WA model's proneness to oscillations and local minima, this paper introduces a DHNN-3SAT model based on genetic-algorithm-optimized K-modes clustering. This approach uses the genetically optimized K-modes clustering algorithm to cluster the initial space, reducing the retrieval space and avoiding repeated searches, thus improving retrieval efficiency.
The paper is organized as follows: Section 2 introduces the background of the research, including 3SAT and the DHNN. Section 3 details the implementation and workflow for determining the synaptic weights of the DHNN-3SAT model using the WA method; to address the large number of redundant computations caused by the changing constraints of the 3SAT problem, the basic logic clause-based WA (BLC-WA) method is proposed. Section 4 introduces the K-modes clustering algorithm optimized by a genetic algorithm. Section 5 details the implementation steps and development process of the DHNN-3SAT model based on genetic-algorithm-optimized K-modes clustering. Section 6 presents an experimental comparison of the proposed DHNN-3SAT model based on the genetically optimized K-modes clustering algorithm (DHNN-3SAT-GAKM) with three other models: the DHNN-3SAT model using the WA method (DHNN-3SAT-WA), the DHNN-3SAT-WA model optimized by a genetic algorithm (DHNN-3SAT-GA), and the DHNN-3SAT-WA model optimized by the imperialist competitive algorithm (DHNN-3SAT-ICA), evaluated comprehensively using four evaluation metrics. Finally, Section 7 summarizes the work presented in this paper.
Definition 2.1. 3SAT is a satisfiability problem for a set of logical clauses, each consisting of exactly three literals. 3SAT problems can be expressed in conjunctive normal form (CNF). Let the set of Boolean variables be $\{S_1,S_2,\ldots,S_n\}$ and the set of logical clauses be $\{C_1,C_2,\ldots,C_m\}$; then the general form of a CNF 3SAT formula $P$ containing $n$ Boolean variables and $m$ logical clauses is defined as:
$P=\bigwedge_{k=1}^{m}C_k$, (1)
where the clause $C_k$ consists of three literals connected by the classical or operator ($\vee$): $C_k=Z_{(k,1)}\vee Z_{(k,2)}\vee Z_{(k,3)}$, and each literal is either a positive variable or the negation of a positive variable, i.e., $Z_{(k,i)}=S_j$ or $Z_{(k,i)}=\neg S_j$, $1\le k\le m$, $1\le i\le 3$, $1\le j\le n$. Each variable takes a binary discrete value in $\{1,-1\}$, where 1 denotes true and $-1$ denotes false. Each clause in 3SAT contains distinct variables, meaning the same variable (or its negation) does not appear twice in a clause $C_k$. Additionally, no logical clause is repeated within a logical rule.
The 3SAT problem can be formally described as follows: given a 3SAT formula, determine whether there is an assignment of the Boolean variables that makes the entire formula true. In particular, every clause in the formula must contain at least one true literal for the whole formula to be true.
Instance. Suppose a given 3SAT problem converts to the following CNF 3SAT formula:
$P=(S_1\vee S_2\vee S_3)\wedge(\neg S_1\vee S_2\vee S_3)\wedge(S_1\vee\neg S_2\vee S_3)\wedge(S_1\vee S_2\vee\neg S_3)\wedge(\neg S_1\vee\neg S_2\vee S_3)\wedge(\neg S_1\vee S_2\vee\neg S_3)\wedge(S_1\vee\neg S_2\vee\neg S_3)\wedge(S_1\vee S_2\vee S_4)$. (2)
In Eq (2), $P$ is satisfiable if there exists an assignment of the variables $S_1,S_2,S_3,S_4$ such that $P=1$; otherwise, $P$ is unsatisfiable.
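To make the satisfiability check concrete, the following minimal Python sketch (our own illustration; the paper's experiments were run in MATLAB) enumerates all bipolar assignments and tests Eq (2) clause by clause. The encoding of each literal as a (variable, sign) pair is an assumed convention, not the paper's interface.

```python
from itertools import product

# Clauses of Eq (2); each literal is (variable_index, sign): sign = +1 for S_i, -1 for ~S_i.
clauses = [
    [(1, 1), (2, 1), (3, 1)],  [(1, -1), (2, 1), (3, 1)],
    [(1, 1), (2, -1), (3, 1)], [(1, 1), (2, 1), (3, -1)],
    [(1, -1), (2, -1), (3, 1)], [(1, -1), (2, 1), (3, -1)],
    [(1, 1), (2, -1), (3, -1)], [(1, 1), (2, 1), (4, 1)],
]

def satisfied(assignment, clauses):
    """A CNF formula is true iff every clause has at least one true literal."""
    return all(any(assignment[v] == s for v, s in c) for c in clauses)

# Enumerate all bipolar assignments of S1..S4 (1 = true, -1 = false).
models = [a for a in product([1, -1], repeat=4)
          if satisfied({i + 1: a[i] for i in range(4)}, clauses)]
print(models)  # non-empty, so P in Eq (2) is satisfiable
```

Running the sketch shows that the only satisfying assignments set $S_1=S_2=S_3=1$, with $S_4$ free, consistent with the structure of the seven clauses over $S_1,S_2,S_3$.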
The problem regarding 3SAT is a fundamental issue in computational complexity theory. Its NP-completeness and wide range of applications make it a crucial subject of research in both theoretical and practical contexts. Through a thorough examination of the 3SAT problem, a better understanding of computational complexity theory can be achieved, and effective tools and methods for solving practical problems can be provided. This study contributes to the advancement of computer science by exploring solution methods for 3SAT problems.
Neural networks can be divided into two types based on the flow of information: feed-forward and feedback neural networks. The output of a feed-forward neural network depends only on the current input vector and weight matrix, independent of the network's previous states; the commonly used back-propagation (BP) neural network is an example. In 1982, the physicist J. J. Hopfield proposed [11] a single-layer feedback neural network, later called the Hopfield neural network. This network comes in two types: the continuous Hopfield neural network (CHNN) and the discrete Hopfield neural network (DHNN) [33,34]. The DHNN has garnered significant attention due to its concise network structure and powerful memory function, and it holds practical value in image recovery and optimization problems [35,36,37]. Figure 1 depicts the topology of a DHNN with $n$ neurons. Each neuron is functionally identical and connected to every other neuron. The neurons are represented by the set $O=\{o_1,o_2,\ldots,o_n\}$ and their corresponding states by the vector $X=(x_1,x_2,\ldots,x_n)$, where each $x_i$ takes binary discrete values, typically $\{-1,1\}$ or $\{0,1\}$. The state of the network at time $t$ is described as $X(t)=(x_1(t),x_2(t),\ldots,x_n(t))$, and the DHNN is stimulated by an external input to start its evolution. Before the final state is reached, each neuron generates a local field output. For second-order connections, the local field is:
$h_i(t)=\sum_j w_{ij}x_j(t)-w_i$, (3)
where $w_i$ denotes a predefined threshold. For higher-order connections, the local field is given by Eq (4), as proposed by Mansor et al. [22].
$h_i(t)=\cdots+\sum_j\sum_k w_{ijk}x_j(t)x_k(t)+\sum_{j=1}^{n}w_{ij}x_j(t)-w_i$. (4)
The output state of neuron $o_i$ at time $t+1$ is denoted as:
$x_i(t+1)=\operatorname{sgn}(h_i(t))=\begin{cases}1,&h_i(t)\ge 0,\\-1,&h_i(t)<0,\end{cases}$ (5)
where sgn denotes the sign function and $w_{ij}$ denotes the connection weight between neurons $o_i$ and $o_j$, with the weights specified as follows:
$w_{ij}=\begin{cases}w_{ji},&i\neq j,\\0,&i=j.\end{cases}$ (6)
In the network training phase, the Hebbian rule is usually used to calculate the weights wij as:
$w_{ij}=\sum_{s=1}^{m}(2x_i^s-1)(2x_j^s-1)$, (7)
where m denotes the number of samples to be memorized.
The DHNN is essentially a nonlinear dynamical system. The network starts evolving from an initial state and is considered stable when its state no longer changes after a finite number of iterations. In a DHNN, stability is determined by introducing a Lyapunov function as the energy function, which serves as an indicator of stability [38]. The system reaches stability when the energy function settles at an invariant minimum point. The energy function of the DHNN is defined as:
$E(X)=\cdots-\frac{1}{3}\sum_i\sum_j\sum_k w_{ijk}x_ix_jx_k-\frac{1}{2}\sum_i\sum_j w_{ij}x_ix_j-\sum_i w_ix_i$. (8)
In 1983, Cohen and Grossberg showed that a DHNN evolves with a decreasing energy function and that a stable state of the network corresponds to a minimum of the energy function. Consequently, for each stable state, we can check whether it represents a global minimum by determining whether the energy function has reached its minimum [28]. If Eq (9) is satisfied, the stable state is considered a global minimum; otherwise, it is a local minimum.
$|E(X)-E_{\min}|<\delta$, (9)
where $E_{\min}$ denotes the minimum value of the energy function and $\delta$ is a user-defined tolerance value.
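The dynamics of Eqs (3), (5), (6), (8), and (9) can be sketched in Python as below. This is a minimal second-order illustration, not the paper's implementation; note that Eq (3) writes the first-order term as a threshold $-w_i$ while the energy (8) carries $-\sum_i w_i x_i$, so here we treat w1 as the first-order weight and choose the field sign so that each asynchronous flip decreases the energy (8). Third-order terms from Eqs (4) and (8) would be handled analogously.

```python
import numpy as np

def dhnn_evolve(x, W2, w1, seed=0, max_sweeps=1000):
    """Asynchronous DHNN evolution with the sgn rule of Eq (5).
    W2 is the symmetric, zero-diagonal second-order weight matrix of Eq (6);
    w1 holds the first-order weights (an assumed sign convention, see above)."""
    rng = np.random.default_rng(seed)
    x = x.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(x)):       # asynchronous update order
            new = 1 if (W2[i] @ x + w1[i]) >= 0 else -1
            if new != x[i]:
                x[i], changed = new, True
        if not changed:                         # steady state: no flips in a full sweep
            break
    return x

def energy(x, W2, w1):
    """First- and second-order terms of the energy function, Eq (8)."""
    return -0.5 * x @ W2 @ x - w1 @ x
```

Because W2 is symmetric with zero diagonal, each flip changes the energy by a non-positive amount, so the evolution converges to a steady state whose energy can then be tested against Eq (9).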
In this section, we start by determining the synaptic weights of the DHNN-3SAT model using the WA method [21], a computational approach that derives the synaptic weights of the network by comparing the cost function with the DHNN energy function. Our study acknowledges some challenges in this comparative method, particularly as the number of variables and logical clauses increases; additionally, the addition, deletion, and updating of logical clauses result in a large number of redundant computations. To tackle these issues, this section outlines the cost functions of the basic logical clauses and computes the network synaptic weights by establishing the basic logical clauses of the CNF 3SAT formula. This allows for a more adaptable computation of the network synaptic weights when a 3SAT formula is incorporated into a DHNN; the method is termed the BLC-WA method. Furthermore, the detailed calculation process using the BLC-WA method is demonstrated with specific examples as logical clauses are added, deleted, and updated.
The WA method introduces a cost function based on propositional logic rules for the first time. It derives the synaptic weights of the network by comparing the cost function with the DHNN energy function, presenting a novel approach to using DHNN for solving the SAT problem. In this study, the WA method is used to incorporate the 3SAT problem into the DHNN for computing the network synaptic weights. The flowchart illustrating the implementation of the WA method is shown in Figure 2. The specific steps are as follows:
Step 1. Given any 3SAT problem, transform it into a CNF 3SAT formula P. Suppose the formula P contains n Boolean variables and m logical clauses.
Step 2. The 3SAT formula P is embedded into the DHNN, and each Boolean variable is assigned a unique neuron. At time t, the states of these neurons are denoted by $\{S_1^t,S_2^t,\ldots,S_n^t\}$.
Step 3. Apply De Morgan's law to obtain $\neg P$. $\neg P=0$ corresponds to a consistent interpretation of P; $\neg P=1$ corresponds to at least one clause of P being unsatisfied.
Step 4. Derive the cost function $E_P$. Each literal in $\neg P$ is represented by $\frac{1}{2}(1-S_i)$ when it is $\neg S_i$ and by $\frac{1}{2}(1+S_i)$ when it is $S_i$; literals within a logical clause are connected by multiplication, and logical clauses are connected by addition. This yields the cost function $E_P$, whose magnitude corresponds to the degree to which the logical clauses are satisfied: $E_P=0$ represents a consistent interpretation of P, while a larger value of $E_P$ represents a larger number of unsatisfied logical clauses.
Step 5. By comparing the cost function $E_P$ with the energy function $E(X)$, the DHNN synaptic weight matrix W corresponding to the 3SAT formula P is obtained.
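Steps 3-5 can be made concrete with a short symbolic expansion. The following Python sketch (our own illustration, not the paper's implementation) expands each clause's cost term, sums the terms into $E_P$, and reads off the weights by coefficient comparison with Eq (8). The (variable, sign) clause encoding and function names are assumptions; running it on Eq (2) reproduces the weights of Table 1 below.

```python
from collections import defaultdict
from itertools import product

def clause_cost(clause):
    """Step 4 for one clause, given as [(variable, sign), ...] with sign = +1
    for S_v and -1 for ~S_v. Its cost term is the product of factors
    (1 - sign*S_v)/2, which equals 1 exactly when every literal is false.
    Returns {sorted_variable_tuple: coefficient} of the expanded polynomial."""
    poly = defaultdict(float)
    for picks in product([0, 1], repeat=len(clause)):
        coeff, vars_ = 1.0, []
        for (v, s), take in zip(clause, picks):
            if take:                 # take the -s*S_v/2 part of the factor
                coeff *= -s / 2
                vars_.append(v)
            else:                    # take the constant 1/2 part
                coeff *= 0.5
        poly[tuple(sorted(vars_))] += coeff
    return poly

def wa_weights(cnf):
    """Sum the clause costs into E_P and compare with the energy (8):
    w_i = -c_i and w_ij = -c_ij (the double sum counts each pair twice),
    w_ijk = -c_ijk / 2 (the triple sum counts each monomial 3! times)."""
    E_P = defaultdict(float)
    for clause in cnf:
        for mono, c in clause_cost(clause).items():
            E_P[mono] += c
    return {m: (-c / 2 if len(m) == 3 else -c) for m, c in E_P.items() if m}

# Eq (2): e.g. (~S1 v S2 v S3) is encoded as [(1,-1),(2,1),(3,1)].
P = [[(1,1),(2,1),(3,1)], [(1,-1),(2,1),(3,1)], [(1,1),(2,-1),(3,1)],
     [(1,1),(2,1),(3,-1)], [(1,-1),(2,-1),(3,1)], [(1,-1),(2,1),(3,-1)],
     [(1,1),(2,-1),(3,-1)], [(1,1),(2,1),(4,1)]]
print(wa_weights(P))  # matches Table 1 (zero-valued entries may appear explicitly)
```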
In this section, we use the problem of Eq (2) in Section 2.1 as an example to illustrate how the WA method embeds logical clauses into the DHNN and computes the synaptic weights.
To determine whether Eq (2) is satisfiable, De Morgan's law is applied to the negation of Eq (2), which yields:
$\neg P=(\neg S_1\wedge\neg S_2\wedge\neg S_3)\vee(S_1\wedge\neg S_2\wedge\neg S_3)\vee(\neg S_1\wedge S_2\wedge\neg S_3)\vee(\neg S_1\wedge\neg S_2\wedge S_3)\vee(S_1\wedge S_2\wedge\neg S_3)\vee(S_1\wedge\neg S_2\wedge S_3)\vee(\neg S_1\wedge S_2\wedge S_3)\vee(\neg S_1\wedge\neg S_2\wedge\neg S_4)$. (10)
Since seeking a consistent interpretation of Eq (2) is equivalent to minimizing the inconsistent interpretations of Eq (10), the cost function can be defined as follows:
$E_P=\frac{1}{2}(1-S_1)\frac{1}{2}(1-S_2)\frac{1}{2}(1-S_3)+\frac{1}{2}(1+S_1)\frac{1}{2}(1-S_2)\frac{1}{2}(1-S_3)+\frac{1}{2}(1-S_1)\frac{1}{2}(1+S_2)\frac{1}{2}(1-S_3)+\frac{1}{2}(1-S_1)\frac{1}{2}(1-S_2)\frac{1}{2}(1+S_3)+\frac{1}{2}(1+S_1)\frac{1}{2}(1+S_2)\frac{1}{2}(1-S_3)+\frac{1}{2}(1+S_1)\frac{1}{2}(1-S_2)\frac{1}{2}(1+S_3)+\frac{1}{2}(1-S_1)\frac{1}{2}(1+S_2)\frac{1}{2}(1+S_3)+\frac{1}{2}(1-S_1)\frac{1}{2}(1-S_2)\frac{1}{2}(1-S_4)$
$=-\frac{1}{8}S_1S_2S_3-\frac{1}{8}S_1S_2S_4-\frac{1}{8}S_1S_3+\frac{1}{8}S_1S_4-\frac{1}{8}S_2S_3+\frac{1}{8}S_2S_4-\frac{1}{4}S_1-\frac{1}{4}S_2-\frac{1}{8}S_3-\frac{1}{8}S_4+1$. (11)
When the 3SAT formula in Eq (2) is satisfied, the cost function $E_P$ reaches its minimum value of 0. At this point, the energy function of the corresponding DHNN converges to the global minimum, so both the cost function and the energy function reach their minimum values. The network's synaptic weight matrix $W_P$, embedded in the DHNN by Eq (2), is derived by comparing the cost function (11) with the energy function (8) using the WA method; the results are shown in Table 1.
| Weights | $w_{123}$ | $w_{124}$ | $w_{134}$ | $w_{234}$ | $w_{12}$ | $w_{13}$ | $w_{14}$ | $w_{23}$ | $w_{24}$ | $w_{34}$ | $w_1$ | $w_2$ | $w_3$ | $w_4$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| P | 1/16 | 1/16 | 0 | 0 | 0 | 1/8 | −1/8 | 1/8 | −1/8 | 0 | 1/4 | 1/4 | 1/8 | 1/8 |
Definition 3.1. Any CNF formula containing n Boolean variables and m logical clauses can be viewed as consisting of the following eight basic logical clauses:
$C^1=(S_i\vee S_j\vee S_k)$, $C^2=(\neg S_i\vee S_j\vee S_k)$,
$C^3=(S_i\vee\neg S_j\vee S_k)$, $C^4=(S_i\vee S_j\vee\neg S_k)$,
$C^5=(\neg S_i\vee\neg S_j\vee S_k)$, $C^6=(\neg S_i\vee S_j\vee\neg S_k)$,
$C^7=(S_i\vee\neg S_j\vee\neg S_k)$, $C^8=(\neg S_i\vee\neg S_j\vee\neg S_k)$, (12)
where the superscript $l\in\{1,2,\ldots,8\}$ denotes the index by which the basic logical clause is looked up.
Applying De Morgan's law to the eight basic logical clauses in Eq (12) yields the corresponding negated basic logical clauses:
$\neg C^1=(\neg S_i\wedge\neg S_j\wedge\neg S_k)$, $\neg C^2=(S_i\wedge\neg S_j\wedge\neg S_k)$,
$\neg C^3=(\neg S_i\wedge S_j\wedge\neg S_k)$, $\neg C^4=(\neg S_i\wedge\neg S_j\wedge S_k)$,
$\neg C^5=(S_i\wedge S_j\wedge\neg S_k)$, $\neg C^6=(S_i\wedge\neg S_j\wedge S_k)$,
$\neg C^7=(\neg S_i\wedge S_j\wedge S_k)$, $\neg C^8=(S_i\wedge S_j\wedge S_k)$. (13)
Seeking a consistent interpretation of each basic logical clause in Eq (12) is equivalent to minimizing the inconsistency of the corresponding negated basic logical clause in Eq (13). The cost function of each basic logical clause is defined as follows:
$E_{C^1}=\frac{1}{2}(1-S_i)\frac{1}{2}(1-S_j)\frac{1}{2}(1-S_k)$, $E_{C^2}=\frac{1}{2}(1+S_i)\frac{1}{2}(1-S_j)\frac{1}{2}(1-S_k)$,
$E_{C^3}=\frac{1}{2}(1-S_i)\frac{1}{2}(1+S_j)\frac{1}{2}(1-S_k)$, $E_{C^4}=\frac{1}{2}(1-S_i)\frac{1}{2}(1-S_j)\frac{1}{2}(1+S_k)$,
$E_{C^5}=\frac{1}{2}(1+S_i)\frac{1}{2}(1+S_j)\frac{1}{2}(1-S_k)$, $E_{C^6}=\frac{1}{2}(1+S_i)\frac{1}{2}(1-S_j)\frac{1}{2}(1+S_k)$,
$E_{C^7}=\frac{1}{2}(1-S_i)\frac{1}{2}(1+S_j)\frac{1}{2}(1+S_k)$, $E_{C^8}=\frac{1}{2}(1+S_i)\frac{1}{2}(1+S_j)\frac{1}{2}(1+S_k)$. (14)
Each basic logical clause is embedded into a DHNN separately. When a basic logical clause is satisfied, the corresponding DHNN converges to the global minimum, at which point the clause's cost function and energy function both reach their minimum values. By comparing the cost functions (14) of the basic logical clauses with the energy function (8), the basic-logical-clause weight matrix of the 3SAT formula can be derived. This weight matrix is abbreviated as 3SAT-BLCWM, and the results are shown in Table 2.
| Weights | $C^1$ | $C^2$ | $C^3$ | $C^4$ | $C^5$ | $C^6$ | $C^7$ | $C^8$ |
|---|---|---|---|---|---|---|---|---|
| $w_i$ | 1/8 | −1/8 | 1/8 | 1/8 | −1/8 | −1/8 | 1/8 | −1/8 |
| $w_j$ | 1/8 | 1/8 | −1/8 | 1/8 | −1/8 | 1/8 | −1/8 | −1/8 |
| $w_k$ | 1/8 | 1/8 | 1/8 | −1/8 | 1/8 | −1/8 | −1/8 | −1/8 |
| $w_{ij}$ | −1/8 | 1/8 | 1/8 | −1/8 | −1/8 | 1/8 | 1/8 | −1/8 |
| $w_{ik}$ | −1/8 | −1/8 | 1/8 | 1/8 | 1/8 | −1/8 | 1/8 | −1/8 |
| $w_{jk}$ | −1/8 | 1/8 | −1/8 | 1/8 | 1/8 | −1/8 | 1/8 | −1/8 |
| $w_{ijk}$ | 1/16 | −1/16 | −1/16 | −1/16 | 1/16 | 1/16 | 1/16 | −1/16 |
Any 3SAT formula can be seen as composed of the basic logical clauses in Eq (12), with each logical clause in the 3SAT formula corresponding to a basic logical clause. So, when the DHNN learns a new logical clause, it only needs to identify the corresponding basic logical clause, look it up in Table 2, and combine the weight values to derive the network synaptic weights after the new clause is added. Figure 3 illustrates the flowchart of calculating network synaptic weights using the BLC-WA method. The specific steps are as follows:
Step 1. Given any 3SAT problem, transform it into a CNF 3SAT formula P, assumed to contain n Boolean variables and m logical clauses;
Step 2. Establish the 3SAT-BLCWM using the WA method (Table 2);
Step 3. Analyze the CNF 3SAT formula P and map each of its logical clauses to a basic logical clause;
Step 4. Based on Table 2 (3SAT-BLCWM), look up the weights corresponding to each logical clause of the 3SAT formula P and assemble them column-wise into the indexed result weight matrix W′;
Step 5. Obtain the weight matrix W of the 3SAT formula P by merging and summing the columns of the result weight matrix W′.
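A hedged sketch of the BLC-WA lookup follows. Instead of re-expanding every clause, it precomputes the eight basic weight columns once over placeholder positions (0, 1, 2), i.e., the 3SAT-BLCWM, here keyed by the clause's sign pattern, which is equivalent to the lookup index l of Definition 3.1; it then assembles W by lookup and column-wise summation per Steps 3-5. The data layout and helper names are our own assumptions.

```python
from collections import defaultdict
from itertools import product

def basic_clause_weights(signs):
    """Expand the basic clause cost prod_p (1 - s_p * S_p)/2 over placeholder
    positions 0,1,2 and convert cost coefficients to weights (w = -c for
    singles/pairs, -c/2 for the triple), recomputing one column on the fly."""
    poly = defaultdict(float)
    for picks in product([0, 1], repeat=3):
        coeff, positions = 1.0, []
        for p, (s, take) in enumerate(zip(signs, picks)):
            coeff *= -s / 2 if take else 0.5
            if take:
                positions.append(p)
        poly[tuple(positions)] += coeff
    return {pos: (-c / 2 if len(pos) == 3 else -c)
            for pos, c in poly.items() if pos}

# Precompute the eight basic columns once (the 3SAT-BLCWM lookup table).
BLCWM = {signs: basic_clause_weights(signs)
         for signs in product([1, -1], repeat=3)}

def blc_wa_weights(cnf):
    """BLC-WA Steps 3-5: map each clause to its basic type via its sign
    pattern, look up the generic weights, place them on the clause's actual
    variables, and sum column-wise into W."""
    W = defaultdict(float)
    for clause in cnf:                       # clause: [(variable, sign), ...]
        vars_ = [v for v, _ in clause]
        signs = tuple(s for _, s in clause)
        for positions, w in BLCWM[signs].items():
            mono = tuple(sorted(vars_[p] for p in positions))
            W[mono] += w
    return dict(W)
```

The key saving is that the polynomial expansion is done only eight times in total, after which every clause, whether newly learned, added, or removed, costs a single table lookup.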
Next, we compute the 3SAT instance of Section 2.1 using the BLC-WA method. Writing $C_g^l$ for the basic clause of type $l$ over the $g$-th variable combination ($g=1$: $\{S_1,S_2,S_3\}$; $g=2$: $\{S_1,S_2,S_4\}$), Eq (2) can be written in correspondence with the basic logical clauses as:
$P=C_1^1\wedge C_1^2\wedge C_1^3\wedge C_1^4\wedge C_1^5\wedge C_1^6\wedge C_1^7\wedge C_2^1$. (15)
According to Table 2 (3SAT-BLCWM), the network synaptic weights corresponding to each logical clause of formula P are found and assembled column-wise into the indexed result weight matrix W′; the results are shown in Table 3. The network synaptic weight matrix $W_P$ of the 3SAT formula P is then obtained by merging and summing the indexed results column by column, as also shown in Table 3.
| Weights | $C_1^1$ | $C_1^2$ | $C_1^3$ | $C_1^4$ | $C_1^5$ | $C_1^6$ | $C_1^7$ | $C_2^1$ | P |
|---|---|---|---|---|---|---|---|---|---|
| $w_1$ | 1/8 | −1/8 | 1/8 | 1/8 | −1/8 | −1/8 | 1/8 | 1/8 | 1/4 |
| $w_2$ | 1/8 | 1/8 | −1/8 | 1/8 | −1/8 | 1/8 | −1/8 | 1/8 | 1/4 |
| $w_3$ | 1/8 | 1/8 | 1/8 | −1/8 | 1/8 | −1/8 | −1/8 | 0 | 1/8 |
| $w_4$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1/8 | 1/8 |
| $w_{12}$ | −1/8 | 1/8 | 1/8 | −1/8 | −1/8 | 1/8 | 1/8 | −1/8 | 0 |
| $w_{13}$ | −1/8 | −1/8 | 1/8 | 1/8 | 1/8 | −1/8 | 1/8 | 0 | 1/8 |
| $w_{14}$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | −1/8 | −1/8 |
| $w_{23}$ | −1/8 | 1/8 | −1/8 | 1/8 | 1/8 | −1/8 | 1/8 | 0 | 1/8 |
| $w_{24}$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | −1/8 | −1/8 |
| $w_{34}$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| $w_{123}$ | 1/16 | −1/16 | −1/16 | −1/16 | 1/16 | 1/16 | 1/16 | 0 | 1/16 |
| $w_{124}$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1/16 | 1/16 |
| $w_{234}$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
In SAT problems, the constraints often change: they are added, removed, and updated over time. Faced with this type of problem, the traditional DHNN-SAT method requires redesigning the weights of the SAT problem and constructing a new DHNN, which makes no use of the original SAT problem's information. As constraints are added to the original SAT problem, logical clauses are added to the corresponding CNF formula, leading to a large number of redundant computations for large-scale clause sets. This severely limits the effectiveness of the traditional DHNN-SAT method on problems with large-scale changing constraints. To address this issue, this study proposes the BLC-WA method, a new design method for SAT problems with changing constraints. This method reuses the synaptic weight information of the original SAT problem in the DHNN, saving a significant amount of repeated calculation. In the following subsections, the network synaptic weight design for SAT problems with added, removed, and updated constraints is introduced, providing a new approach for solving SAT problems with constantly changing constraints.
The addition of constraints to the original SAT problem is equivalent to adding logical clauses to the CNF SAT formula.
Suppose a 3SAT problem translates into the CNF 3SAT formula:
$P=C_1^{l_1}\wedge C_2^{l_2}\wedge\cdots\wedge C_m^{l_m}$. (16)
When r logical clauses are added, the original CNF 3SAT formula becomes:
$P_{add}=C_1^{l_1}\wedge C_2^{l_2}\wedge\cdots\wedge C_m^{l_m}\wedge C_{m+1}^{l_{m+1}}\wedge C_{m+2}^{l_{m+2}}\wedge\cdots\wedge C_{m+r}^{l_{m+r}}$. (17)
Figure 4 depicts the flowchart of the BLC-WA method for solving the original 3SAT problem with additional constraints. This method is implemented as follows:
Step 1. Let the original CNF 3SAT formula P become $P_{add}$ by adding r logical clauses (Eq (17));
Step 2. Map each added logical clause to its basic logical clause and determine its synaptic weights from Table 2 (3SAT-BLCWM);
Step 3. Calculate the synaptic weights of the CNF 3SAT formula $P_{add}$ after adding the r logical clauses using Eq (18):
$W_{add}=W_P+W_{add}^{(1)}+W_{add}^{(2)}+\cdots+W_{add}^{(r)}$. (18)
The following is a concrete demonstration of the implementation process using the 3SAT instance from Section 2.1.
Suppose the logical clauses $C_2^2=\neg S_1\vee S_2\vee S_4$ and $C_2^3=S_1\vee\neg S_2\vee S_4$ are added to Eq (2). The new CNF formula, denoted $P_{add}$, is:
$P_{add}=C_1^1\wedge C_1^2\wedge C_1^3\wedge C_1^4\wedge C_1^5\wedge C_1^6\wedge C_1^7\wedge C_2^1\wedge C_2^2\wedge C_2^3$. (19)
In Section 3.3, the synaptic weights $W_P$ of formula P were obtained using the BLC-WA method. The synaptic weights of the newly added logical clauses ($C_2^2$ and $C_2^3$) are found by looking them up in Table 2 (3SAT-BLCWM) and are then merged and summed with $W_P$ to obtain the new synaptic weights $W_{add}$ of the CNF 3SAT formula $P_{add}$. The calculation results are shown in Table 4.
| Weights | P | $C_2^2$ | $C_2^3$ | $P_{add}$ | $C_1^2$ | $C_1^3$ | $P_{dec}$ | $C_1^7$ | $C_2^1$ | $C_1^8$ | $C_3^1$ | $P_{upd}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| $w_1$ | 1/4 | −1/8 | 1/8 | 1/4 | −1/8 | 1/8 | 1/4 | 1/8 | 1/8 | −1/8 | 0 | −1/8 |
| $w_2$ | 1/4 | 1/8 | −1/8 | 1/4 | 1/8 | −1/8 | 1/4 | −1/8 | 1/8 | −1/8 | 1/8 | 1/4 |
| $w_3$ | 1/8 | 0 | 0 | 1/8 | 1/8 | 1/8 | −1/8 | −1/8 | 0 | −1/8 | 1/8 | 1/4 |
| $w_4$ | 1/8 | 1/8 | 1/8 | 3/8 | 0 | 0 | 1/8 | 0 | 1/8 | 0 | 1/8 | 1/8 |
| $w_{12}$ | 0 | 1/8 | 1/8 | 1/4 | 1/8 | 1/8 | −1/4 | 1/8 | −1/8 | −1/8 | 0 | −1/8 |
| $w_{13}$ | 1/8 | 0 | 0 | 1/8 | −1/8 | 1/8 | 1/8 | 1/8 | 0 | −1/8 | 0 | −1/8 |
| $w_{14}$ | −1/8 | −1/8 | 1/8 | −1/8 | 0 | 0 | −1/8 | 0 | −1/8 | 0 | 0 | 0 |
| $w_{23}$ | 1/8 | 0 | 0 | 1/8 | 1/8 | −1/8 | 1/8 | 1/8 | 0 | −1/8 | −1/8 | −1/4 |
| $w_{24}$ | −1/8 | 1/8 | −1/8 | −1/8 | 0 | 0 | −1/8 | 0 | −1/8 | 0 | −1/8 | −1/8 |
| $w_{34}$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | −1/8 | −1/8 |
| $w_{123}$ | 1/16 | 0 | 0 | 1/16 | −1/16 | −1/16 | 3/16 | 1/16 | 0 | −1/16 | 0 | −1/16 |
| $w_{124}$ | 1/16 | −1/16 | −1/16 | −1/16 | 0 | 0 | 1/16 | 0 | 1/16 | 0 | 0 | 0 |
| $w_{234}$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1/16 | 1/16 |
Suppose d logical clauses are removed from the original CNF 3SAT formula (16); it then becomes:
$P_{dec}=C_1^{l_1}\wedge C_2^{l_2}\wedge\cdots\wedge C_{m-d}^{l_{m-d}}$. (20)
The flowchart for solving the original 3SAT problem with reduced constraints based on the BLC-WA method is also shown in Figure 4. It is implemented as follows:
Step 1. The original CNF 3SAT formula P is reduced by d logical clauses to $P_{dec}$;
Step 2. Map each removed logical clause to its basic logical clause and determine its synaptic weights from Table 2 (3SAT-BLCWM);
Step 3. Calculate the synaptic weights of the CNF 3SAT formula $P_{dec}$ after removing the d logical clauses using Eq (21):
$W_{dec}=W_P-W_{dec}^{(1)}-W_{dec}^{(2)}-\cdots-W_{dec}^{(d)}$. (21)
The following is a concrete demonstration of the implementation process using the 3SAT instance from Section 2.1.
Suppose the logical clauses $C_1^2=\neg S_1\vee S_2\vee S_3$ and $C_1^3=S_1\vee\neg S_2\vee S_3$ are removed from Eq (2). The new CNF formula, denoted $P_{dec}$, is:
$P_{dec}=C_1^1\wedge C_1^4\wedge C_1^5\wedge C_1^6\wedge C_1^7\wedge C_2^1$. (22)
First, the synaptic weights of the removed logical clauses $C_1^2$ and $C_1^3$ are found by looking them up in Table 2 (3SAT-BLCWM). Next, they are subtracted from the synaptic weights $W_P$ of the original 3SAT formula P. This yields the synaptic weights $W_{dec}$ of the new CNF 3SAT formula $P_{dec}$. The computational results are also displayed in Table 4.
When u logical clauses of the original CNF formula (16) are updated, this can be regarded as removing u logical clauses from the original formula and adding u new ones. The updated CNF formula is:
$P_{upd}=C_1^{l_1}\wedge C_2^{l_2}\wedge\cdots\wedge C_{m-u}^{l_{m-u}}\wedge C_{m-u+1}^{l'_{m-u+1}}\wedge C_{m-u+2}^{l'_{m-u+2}}\wedge\cdots\wedge C_m^{l'_m}$. (23)
The flowchart when updating the constraints based on the BLC-WA method is also shown in Figure 4, which is implemented as follows:
Step 1. The original CNF 3SAT formula P is updated with u logical clauses to $P_{upd}$;
Step 2. Map the removed and added logical clauses to their basic logical clauses and determine their synaptic weights from Table 2 (3SAT-BLCWM);
Step 3. Calculate the synaptic weights of the CNF 3SAT formula $P_{upd}$ after updating the u logical clauses using Eq (24):
$W_{upd}=W_P-W_{dec}^{(1)}-W_{dec}^{(2)}-\cdots-W_{dec}^{(u)}+W_{add}^{(1)}+W_{add}^{(2)}+\cdots+W_{add}^{(u)}$. (24)
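Given per-clause weight matrices from the lookup table, the three operations in Eqs (18), (21), and (24) reduce to signed accumulation. A minimal sketch, assuming the sparse dictionary layout of the BLC-WA snippet above:

```python
def combine(W, deltas, sign=+1):
    """Apply Eqs (18), (21), (24): add (sign=+1) or subtract (sign=-1) the
    per-clause weight matrices W^(i) to the stored weights W, in place."""
    for delta in deltas:
        for mono, w in delta.items():
            W[mono] = W.get(mono, 0.0) + sign * w
    return W

# W_add = combine(W_P, added_clause_weights, +1)                        Eq (18)
# W_dec = combine(W_P, removed_clause_weights, -1)                      Eq (21)
# W_upd = combine(combine(W_P, removed, -1), added, +1)                 Eq (24)
```

The point of the design is visible here: none of the three operations touches the weights contributed by the unchanged clauses, which is exactly the repeated computation the BLC-WA method saves.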
The following is a concrete demonstration of the implementation process using the 3SAT instance from Section 2.1.
Suppose the logical clauses $C_1^7=S_1\vee\neg S_2\vee\neg S_3$ and $C_2^1=S_1\vee S_2\vee S_4$ in the original CNF 3SAT formula P are updated to $C_1^8=\neg S_1\vee\neg S_2\vee\neg S_3$ and $C_3^1=S_2\vee S_3\vee S_4$. The updated CNF 3SAT formula, denoted $P_{upd}$, is:
$P_{upd}=C_1^1\wedge C_1^2\wedge C_1^3\wedge C_1^4\wedge C_1^5\wedge C_1^6\wedge C_1^8\wedge C_3^1$. (25)
First, the synaptic weights of the logical clauses $C_1^7$, $C_2^1$, $C_1^8$, and $C_3^1$ are found by looking them up in Table 2 (3SAT-BLCWM). Then, the synaptic weights of $C_1^7$ and $C_2^1$ are subtracted from those of the original 3SAT formula P. Finally, the synaptic weights of $C_1^8$ and $C_3^1$ are added to obtain the network synaptic weights $W_{upd}$ of the updated 3SAT formula $P_{upd}$. The results of the computation are also displayed in Table 4.
The K-modes clustering algorithm is a method specifically designed for handling discrete data [39,40,41,42]. It extends the traditional K-means algorithm, which is mainly used for datasets with continuous attributes. The K-modes algorithm uses the Hamming distance as its metric [43]: the distance counts the number of attribute values on which two sample points differ and thus measures the degree of difference between a given sample and a cluster center. Each sample is assigned to the cluster whose center it differs from least. The clustering process of the K-modes algorithm is shown in Figure 5.
Let $X=\{X_1,X_2,\ldots,X_m\}$ represent the set of samples to be clustered, where each $X_i=(x_1,x_2,\ldots,x_n)$ is an n-dimensional vector with discrete-valued components. Let $Z=\{Z_1,Z_2,\ldots,Z_k\}$ represent the cluster centers, with $Z_j=(z_1,z_2,\ldots,z_n)$, $j=1,2,\ldots,k$. The objective function of the K-modes clustering algorithm is defined as:
$F(\Phi,Z)=\sum_{j=1}^{k}\sum_{i=1}^{m}\varphi_{ij}D(X_i,Z_j)$, (26)
where $\varphi_{ij}\in\{0,1\}$, $\sum_{j=1}^{k}\varphi_{ij}=1$, $1\le i\le m$; $\Phi=[\varphi_{ij}]$ is the 0-1 partition matrix; $k$ denotes the number of clusters; $\varphi_{ij}=1$ if the $i$-th object is assigned to the $j$-th class and $\varphi_{ij}=0$ otherwise; and $Z_j$ is the center of the $j$-th class. $D(X_i,Z_j)$ denotes the Hamming distance between $X_i$ and $Z_j$:
$D(X_i,Z_j)=\sum_{p=1}^{n}d(x_p,z_p)$, (27)
where $d(x_p,z_p)=\begin{cases}0,&x_p=z_p,\\1,&x_p\neq z_p.\end{cases}$
The classification process must satisfy the following conditions: (1) every cluster must contain at least one sample; (2) each sample must belong to one and only one class. The fundamental steps of the K-modes clustering algorithm are as follows:
Step 1. Randomly select k cluster centers $Z_1,Z_2,\ldots,Z_k$.
Step 2. For each sample $X_i$ ($i=1,2,\ldots,m$) in the dataset, calculate its Hamming distance to each of the k cluster centers using Eq (27), and assign $X_i$ to the class of the nearest center.
Step 3. After all samples have been assigned to clusters, recalculate each cluster center $Z_j$, updating each component of the center to the mode of the corresponding attribute within the cluster.
Step 4. Repeat the process of Steps 2 and 3 above until the objective function F no longer changes.
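A minimal, self-contained sketch of Steps 1-4 follows (our own illustration; the input is assumed to be a list of equal-length tuples of discrete values, and the function names are not the paper's interface):

```python
import random
from collections import Counter

def hamming(x, z):
    """Eq (27): the number of positions at which two discrete vectors differ."""
    return sum(a != b for a, b in zip(x, z))

def k_modes(X, k, max_iter=100, seed=0):
    """Minimal K-modes sketch of Steps 1-4; returns (centers, clusters)."""
    rng = random.Random(seed)
    centers = rng.sample(X, k)                          # Step 1: random centers
    clusters = [[] for _ in range(k)]
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]
        for x in X:                                     # Step 2: nearest center
            j = min(range(k), key=lambda j: hamming(x, centers[j]))
            clusters[j].append(x)
        new_centers = []
        for j, members in enumerate(clusters):          # Step 3: component-wise mode
            if members:
                new_centers.append(tuple(Counter(col).most_common(1)[0][0]
                                         for col in zip(*members)))
            else:
                new_centers.append(centers[j])          # keep an empty cluster's center
        if new_centers == centers:                      # Step 4: converged
            break
        centers = new_centers
    return centers, clusters
```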
To address the limitations of the K-modes clustering algorithm, namely the difficulty of determining the optimal number of clusters and the tendency to get stuck at a local optimum, researchers have incorporated a genetic algorithm with adaptive global-optimization search capabilities into the K-modes clustering algorithm [44,45]. This involves using a fitness function to carry out genetic operations, primarily mutation, to automatically learn the cluster centers for the K-modes algorithm. Figure 6 shows the workflow of the genetically optimized K-modes clustering algorithm, which proceeds in the following steps:
Step 1. Parameter initialization. Set the relevant parameters: initial cluster number k, population size m, crossover probability $p_c$, mutation probability $p_m$, and maximum number of iterations t.
Step 2. Randomly generate the initial population. Randomly generate k initial clustering centers Z1,Z2,⋯,Zk as initial population individuals.
Step 3. Take the population individual Z1,Z2,⋯,Zk as the clustering center and use K-modes clustering algorithm for clustering.
Step 4. Calculate the fitness value of individuals in the population. Here the fitness function is defined as follows:
$f=\frac{D_{\min}}{\overline{D}(X)}$, (28)
where $D_{\min}$ is the minimum between-class distance and $\overline{D}(X)$ is the average within-class distance, defined as follows:
$D_{\min}=\min_{i\neq j}D(Z_i,Z_j)$. (29)
$\overline{D}(X)=\frac{1}{k}\sum_{j=1}^{k}\frac{1}{m_j}\sum_{i=1}^{m_j}D(X_i,Z_j)$. (30)
This fitness function is based on the idea that between-class separation should be maximized while within-class spacing should be minimized: the goal is to maximize the distance between classes ($D_{\min}$) and minimize the variability within classes ($\overline{D}(X)$). Throughout the evolutionary process, each individual encodes k cluster centers. If k is smaller than the optimal number of clusters, increasing k decreases both $D_{\min}$ and $\overline{D}(X)$, but the clustering division is not yet optimal; since the decrease in $\overline{D}(X)$ is more significant than that of $D_{\min}$, the fitness value increases. Conversely, if k exceeds the optimal number of clusters, $\overline{D}(X)$ changes little, while secondary clustering makes the spacing between some centers very small, so $D_{\min}$ becomes very small and the overall fitness value decreases. This fitness function can therefore guide k toward the optimal number of clusters as the initial cluster centers are optimized (a minimal sketch of this fitness computation follows the steps below).
Step 5. Perform selection, crossover, and mutation operations to generate a new generation population.
Step 6. Repeat Step 3 to Step 5 until the maximum number of iterations is reached.
Step 7. Calculate the fitness value of each individual in the population and output the individual with the highest fitness value.
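Under the same assumptions as the K-modes sketch above (the hamming() helper and the (centers, clusters) layout, with k ≥ 2 and non-empty clusters), the fitness of Eqs (28)-(30) can be computed as follows:

```python
def fitness(centers, clusters):
    """Eq (28): f = D_min / D_bar(X)."""
    k = len(centers)
    d_min = min(hamming(centers[i], centers[j])              # Eq (29)
                for i in range(k) for j in range(i + 1, k))
    d_bar = sum(sum(hamming(x, centers[j]) for x in clusters[j]) / len(clusters[j])
                for j in range(k) if clusters[j]) / k        # Eq (30)
    return d_min / d_bar if d_bar else float("inf")
```

A higher fitness means the centers are far apart relative to how tightly each cluster wraps its members, which is exactly the trade-off the GA uses to steer k toward the optimal number of clusters.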
The conventional DHNN-3SAT-WA model uses an exhaustive search (ES) in the retrieval phase [46], randomly searching among candidate solutions for a consistent interpretation that satisfies the 3SAT formula. Some researchers have optimized the traditional DHNN-3SAT-WA model with heuristic algorithms such as the genetic algorithm (GA) and the imperialist competitive algorithm (ICA), yielding DHNN-3SAT-GA [26] and DHNN-3SAT-ICA [38]. These methods can expedite the search for global or feasible solutions. However, unguided random initial assignment of candidate solutions produces numerous repeated, invalid solutions that fail to converge, and the DHNN evolution often falls into a local optimum. Furthermore, as the number of Boolean variables and logical clauses increases, the size and logical complexity of the network expand and the solution space grows rapidly, making the model susceptible to oscillations and more likely to land in local minima. Reducing the DHNN-3SAT-WA model's search time and preventing it from falling into local minima is therefore a significant challenge in this field. The implementation process of the traditional DHNN-3SAT-WA, DHNN-3SAT-GA, and DHNN-3SAT-ICA models is depicted in Figure 7.
This study proposes a new solution to these challenges: the DHNN-3SAT model based on the genetically optimized K-modes clustering algorithm, referred to as DHNN-3SAT-GAKM. In this model, candidate solutions in the allocation space are clustered using the K-modes clustering algorithm, and the initial allocation is drawn by random search from each class. By reducing repeated initial candidate solutions and avoiding local optima to some extent, this process accelerates the search for the global minimum and improves the efficiency of global-minimum retrieval. To determine the optimal number of clusters for the K-modes algorithm, a genetic algorithm with adaptive global-optimization search capability is introduced: the number of clusters is determined through the value of the constructed fitness function, further enhancing global search capability.
The DHNN-3SAT-GAKM model aims to find a consistent set of Boolean variable values for the 3SAT problem. During the model's initialization phase, each neuron in the DHNN is connected to a specific Boolean variable in the CNF, and the connection weights represent the relationship between the variable and the clause. A WA method using basic logical clauses will be employed to determine the cost during the learning phase. In the retrieval phase, the DHNN is utilized to evolve, update, and iterate until the network reaches a stable equilibrium state, signified by a minimal energy function value. The energy function's primary purpose is to indicate whether this stable state corresponds to a global minimum of the 3SAT problem, which in turn represents a consistent interpretation of the CNF. Please see Figure 8 for the flowchart of the DHNN-3SAT-GAKM model development, and the implementation steps are summarized as follows:
Step 1. Model Preparation. For a given 3SAT problem, transform it into the corresponding CNF formulation, denoted as P. Assume it contains n Boolean variables and m logical clauses. Initialize the optimization algorithm parameters.
Step 2. Each Boolean variable of the 3SAT formula is uniquely assigned a Hopfield neuron in the DHNN design, which consists of n neurons $O=\{o_1,o_2,\ldots,o_n\}$ whose state at time t is denoted $X(t)=(x_1(t),x_2(t),\ldots,x_n(t))$.
Step 3. The BLC-WA method is used to calculate the synaptic weights of the 3SAT formula P and derive its cost function $E_P$. When $P=1$, $E_P=0$, at which point the energy function reaches its minimum value $E_{\min}=-\frac{m}{8}$.
Step 4. Generate an initial candidate solution space by randomly creating m initial candidate solutions {X1(t),X2(t),⋯,Xm(t)}.
Step 5. The initial candidate solution {X1(t),X2(t),⋯,Xm(t)} is clustered using the K-modes clustering algorithm based on genetic optimization to obtain the optimal number of clusters k.
Step 6. Determine the candidate subset for retrieval, denoted $\{Y_1(t),Y_2(t),\ldots,Y_c(t)\}$, where $c=m/k$ and $Y_l(t)=(y_1(t),y_2(t),\ldots,y_n(t))$, $l=1,2,\ldots,c$; $y_i(t)$ corresponds to the state of neuron $o_i$ at time t.
Step 7. DHNN evolution. For $Y_l(t)=(y_1(t),y_2(t),\ldots,y_n(t))$, starting from $l=1$ and $t=0$, state updates are performed using Eq (5) until a stable state is reached: if $Y_l(t+1)\neq Y_l(t)$, set $t=t+1$; if $Y_l(t+1)=Y_l(t)$, the network has reached a steady state. Proceed to the next step.
Step 8. Retrieval phase. Check whether the energy of the steady state satisfies $|E-E_{\min}|<\delta$. If it does, store the steady state $Y_l(t)$ as a global minimum. If it does not, treat it as a local minimum, set $l=l+1$, and go back to Step 6.
Step 9. Model evaluation. The model is assessed using the global minimum ratio (GMR), mean CPU time (MCT), mean minimum Hamming distance (MMHD), and mean logical satisfiability ratio (MLSR) defined in Section 6.
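The retrieval loop of Steps 6-8 can be sketched as follows; `evolve` and `energy` are assumed callables in the spirit of the DHNN sketch in Section 2.2, not the paper's stated interface:

```python
def retrieve(candidates, evolve, energy, e_min, delta=0.001):
    """A sketch of Steps 6-8: evolve each clustered candidate to a steady
    state and classify it via the energy test of Eq (9)."""
    global_minima, local_minima = [], []
    for y in candidates:
        steady = evolve(y)                       # Step 7: DHNN evolution
        if abs(energy(steady) - e_min) < delta:  # Step 8: global-minimum test
            global_minima.append(steady)
        else:
            local_minima.append(steady)
    return global_minima, local_minima
```

Because the candidates come from distinct K-modes clusters, each pass through this loop probes a different region of the assignment space rather than repeatedly re-evolving near-duplicate initial states.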
To thoroughly evaluate the performance of the DHNN-3SAT-GAKM model and its ability to solve real-world application problems, this section examines its performance alongside the conventional DHNN-3SAT-WA, DHNN-3SAT-GA, and DHNN-3SAT-ICA models on a benchmark dataset. Experimental analyses were conducted to compare their performance and demonstrate the superiority of the DHNN-3SAT-GAKM model proposed in this study. The experiments were carried out using MATLAB R2023b on a laptop computer running the Windows 10 operating system, equipped with an AMD Ryzen 5 3500U processor and 8 GB of RAM.
This study utilizes the DIMACS Benchmark Instances AIM dataset from SATLIB, provided by Kazuo Iwama et al. (https://www.cs.ubc.ca/~hoos/SATLIB/benchm.html). The AIM dataset comprises 48 instances, 24 satisfiable and 24 unsatisfiable. To create a representative set, 12 of the satisfiable instances are chosen for this study. Each clause in an instance contains three literals; specific descriptions of the instances are given in Table 5.
| No. | Instance | Variables | Clauses | No. | Instance | Variables | Clauses |
|---|---|---|---|---|---|---|---|
| 1 | aim-50-1_6-yes1-1 | 50 | 80 | 7 | aim-100-3_4-yes1-1 | 100 | 340 |
| 2 | aim-50-2_0-yes1-1 | 50 | 100 | 8 | aim-100-6_0-yes1-1 | 100 | 600 |
| 3 | aim-50-3_4-yes1-1 | 50 | 170 | 9 | aim-200-1_6-yes1-1 | 200 | 320 |
| 4 | aim-50-6_0-yes1-1 | 50 | 300 | 10 | aim-200-2_0-yes1-1 | 200 | 400 |
| 5 | aim-100-1_6-yes1-1 | 100 | 160 | 11 | aim-200-3_4-yes1-1 | 200 | 680 |
| 6 | aim-100-2_0-yes1-1 | 100 | 200 | 12 | aim-200-6_0-yes1-1 | 200 | 1200 |
In the search phase, the traditional DHNN-3SAT-WA model directly examines 10,000 different combinations of initial neuron assignments [46]. DHNN-3SAT-GA and DHNN-3SAT-ICA guide the search among these 10,000 combinations using a genetic algorithm and an imperialist competitive algorithm, respectively. The DHNN-3SAT-GAKM model introduced in this paper uses genetically optimized K-modes clustering to preprocess these 10,000 initial neuron assignment combinations and then selects a candidate subset for the search. This approach reduces the actual search space and minimizes repeated local searches, helping to avoid local minima and improving the efficiency of retrieving the global minimum. The tolerance value for the conventional DHNN-3SAT-WA model follows Sathasivam's work [16], and the CPU time threshold follows Zamri's settings [47]; these parameter settings are listed in Table 6. The parameter settings for the DHNN-3SAT-GA model follow Kasihmuddin's work [26] and are listed in Table 7. The parameter settings for the DHNN-3SAT-ICA model remain consistent with Shazli's work [38], as shown in Table 8. Table 9 details the parameter settings of the model in this paper, with the relevant parameters optimized through iterative tuning.
| Parameter | Value | Parameter | Value |
|---|---|---|---|
| Initial assigned amount | 10,000 | Tolerance value $\delta$ | 0.001 |
| CPU time threshold | 24 hours | - | - |
| Parameter | Value | Parameter | Value |
|---|---|---|---|
| Initial assigned amount | 10,000 | Mutation probability $p_m$ | 0.05 |
| Population size | 50 | Maximum iterations t | 100 |
| Crossover probability $p_c$ | 0.6 | - | - |
| Parameter | Value | Parameter | Value |
|---|---|---|---|
| Initial assigned amount | 10,000 | Revolutionary rate $\alpha$ | 0.3 |
| Population size | 50 | Maximum iterations t | 100 |
| Parameter | Value | Parameter | Value |
|---|---|---|---|
| Initial assigned amount | 10,000 | Crossover probability $p_c$ | 0.6 |
| Population size | 50 | Mutation probability $p_m$ | 0.05 |
| Initial number of clusters | 3 | Maximum iterations t | 100 |
We use the global minimum ratio (GMR) [16] and the mean CPU time (MCT) to assess the models' performance. To provide a more comprehensive evaluation of each model's ability to find the global minimum, we introduce two new evaluation metrics: the mean minimum Hamming distance (MMHD) and the mean logical satisfiability ratio (MLSR). In total, four evaluation metrics are used, as detailed in Table 10. The reported values are averages over 100 repeated runs for each instance, and the results are displayed in Tables 11 and 12. Figures 9 to 12 compare our DHNN-3SAT-GAKM model with the DHNN-3SAT-WA, DHNN-3SAT-GA, and DHNN-3SAT-ICA models across the four evaluation metrics.
| Indicator | Calculation formula | Description |
|---|---|---|
| GMR | $GMR=\frac{N_{GM}}{T}$ | GMR represents the ratio of global-minimum solutions to the total number of runs [16] and is an effective metric for assessing algorithm efficiency; a model is considered robust when its GMR is close to 1 [23]. Here, $N_{GM}$ is the number of runs that converge to the global minimum and $T$ is the total number of runs. |
| MCT | $MCT=\frac{1}{N_{GM}}\sum_{i=1}^{N_{GM}}NT_i$ | MCT is the average time each model needs to reach the global minimum; a smaller MCT indicates a more efficient model. $NT_i$ is the CPU time needed to find the global minimum in the $i$-th retrieval, and $N_{GM}$ is the number of runs that converge to the global minimum. |
| MMHD | $MMHD=\frac{1}{T}\sum_{i=1}^{T}\min_j D(X_i,Z_j)$ | MMHD is the mean minimum Hamming distance, the average of the smallest bit difference between each run's retrieval result and a global minimum; an MMHD closer to 0 indicates retrieval results closer to the global minimum. $D(X_i,Z_j)$ is the Hamming distance between the retrieval result of the $i$-th run and the $j$-th global minimum, and $T$ is the total number of runs. |
| MLSR | $MLSR=\frac{1}{T}\sum_{i=1}^{T}\frac{N_{sat}(i)}{m}$ | MLSR indicates the average proportion of clauses satisfied by the retrieval results; the closer MLSR is to 1, the closer the retrieval results are to the global minimum. $N_{sat}(i)$ is the number of satisfied clauses in the $i$-th retrieval result, and $m$ is the total number of clauses. |
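Given per-run records, the four metrics can be computed as in the following sketch; the record keys are our own assumed bookkeeping, not the paper's data format:

```python
def evaluate(runs, m):
    """Compute the four Table 10 metrics from per-run records. Each record is
    an assumed dict with keys 'is_global' (bool), 'cpu_time', 'min_hamming'
    (min_j D(X_i, Z_j)), and 'n_sat' (satisfied clauses); m is the clause count."""
    T = len(runs)
    hits = [r for r in runs if r["is_global"]]
    return {
        "GMR":  len(hits) / T,
        "MCT":  sum(r["cpu_time"] for r in hits) / len(hits) if hits else float("inf"),
        "MMHD": sum(r["min_hamming"] for r in runs) / T,
        "MLSR": sum(r["n_sat"] for r in runs) / (T * m),
    }
```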
In the following table, WA denotes DHNN-3SAT-WA and GA denotes DHNN-3SAT-GA.

| No. | GMR (WA) | GMR (GA) | MCT (WA) | MCT (GA) | MMHD (WA) | MMHD (GA) | MLSR (WA) | MLSR (GA) |
|---|---|---|---|---|---|---|---|---|
| 1 | 1.0000 | 1.0000 | 4.83 | 2.51 | 0.0000 | 0.0000 | 1 | 1 |
| 2 | 0.9123 | 0.9323 | 16.67 | 11.70 | 1.1000 | 1.0900 | 0.9744 | 0.9745 |
| 3 | 0.8234 | 0.8456 | 26.67 | 22.83 | 3.6000 | 3.4900 | 0.9355 | 0.9578 |
| 4 | 0.5802 | 0.6204 | 48.82 | 45.80 | 3.7200 | 3.7100 | 0.8923 | 0.8966 |
| 5 | 1.0000 | 1.0000 | 11.72 | 6.31 | 0.0000 | 0.0000 | 1 | 1 |
| 6 | 0.8812 | 0.9011 | 46.51 | 16.01 | 2.5000 | 2.4000 | 0.9433 | 0.9533 |
| 7 | 0.5467 | 0.6041 | 113.28 | 35.50 | 3.6200 | 3.6100 | 0.9288 | 0.9363 |
| 8 | 0.2018 | 0.2188 | 440.39 | 171.23 | 4.7400 | 4.2300 | 0.9139 | 0.9231 |
| 9 | 0.4114 | 0.4222 | 537.92 | 431.04 | 4.8600 | 4.4500 | 0.9835 | 0.9844 |
| 10 | 0.2261 | 0.2352 | 978.78 | 773.75 | 5.9800 | 5.6700 | 0.9312 | 0.9474 |
| 11 | 0.1616 | 0.1653 | 1369.44 | 1100.94 | 7.1000 | 6.8900 | 0.8537 | 0.8775 |
| 12 | 0.1413 | 0.1518 | 1566.18 | 1198.85 | 8.2200 | 8.1100 | 0.8234 | 0.8641 |
In the following table, ICA denotes DHNN-3SAT-ICA and GAKM denotes DHNN-3SAT-GAKM.

| No. | GMR (ICA) | GMR (GAKM) | MCT (ICA) | MCT (GAKM) | MMHD (ICA) | MMHD (GAKM) | MLSR (ICA) | MLSR (GAKM) |
|---|---|---|---|---|---|---|---|---|
| 1 | 1.0000 | 1.0000 | 2.49 | 2.21 | 0.0000 | 0.0000 | 1 | 1 |
| 2 | 0.9441 | 0.9658 | 10.64 | 8.29 | 1.0700 | 1.0400 | 0.9761 | 0.9783 |
| 3 | 0.8542 | 0.9126 | 20.45 | 15.08 | 3.2700 | 1.1400 | 0.9662 | 0.9751 |
| 4 | 0.6356 | 0.7229 | 40.18 | 26.87 | 3.3700 | 2.9700 | 0.8978 | 0.9354 |
| 5 | 1.0000 | 1.0000 | 5.49 | 4.68 | 0.0000 | 0.0000 | 1 | 1 |
| 6 | 0.9124 | 0.9256 | 14.02 | 11.06 | 2.2000 | 2.1000 | 0.9644 | 0.9881 |
| 7 | 0.6205 | 0.6898 | 30.59 | 22.03 | 3.2000 | 2.9300 | 0.9375 | 0.9523 |
| 8 | 0.2291 | 0.3211 | 141.54 | 74.60 | 3.9800 | 3.7600 | 0.9251 | 0.9336 |
| 9 | 0.4301 | 0.4503 | 435.12 | 396.82 | 4.0800 | 3.8900 | 0.9851 | 0.9928 |
| 10 | 0.2488 | 0.2632 | 752.20 | 678.91 | 5.0800 | 4.7200 | 0.9488 | 0.9557 |
| 11 | 0.1689 | 0.2203 | 1108.03 | 935.02 | 6.0800 | 5.5500 | 0.8809 | 0.9028 |
| 12 | 0.1612 | 0.2097 | 1160.96 | 982.28 | 7.0800 | 6.3800 | 0.8732 | 0.8822 |
Tables 11 and 12 show the computational results of this paper's DHNN-3SAT-GAKM model and of the DHNN-3SAT-WA, DHNN-3SAT-GA, and DHNN-3SAT-ICA models in terms of GMR, MCT, MMHD, and MLSR, calculated with the formulas in Table 10. Figures 9 to 12 visualize the performance differences between the DHNN-3SAT-GAKM model and the other three models on the four evaluation metrics using radar charts. The DHNN-3SAT-GAKM model outperforms the DHNN-3SAT-WA, DHNN-3SAT-GA, and DHNN-3SAT-ICA models.
Figure 9 shows the GMR of each model when solving 3SAT instances of varying complexity. The DHNN-3SAT-GAKM model proposed in this paper achieved the highest GMR values, indicating superior global retrieval ability and a capacity to avoid falling into local minima to some extent. The DHNN-3SAT-GA and DHNN-3SAT-ICA models also improved, to some extent, on the traditional DHNN-3SAT-WA model's ability to retrieve the global minimum. As the complexity of the SAT problems grows with the number of Boolean variables and logical clauses, the GMR of every model decreases rapidly. Hence, further optimization of the algorithms and architectures of the DHNN-3SAT models is needed to enhance their performance and efficiency on large-scale, complex SAT problems.
Figure 10 shows the average time each model takes to reach the global minimum. The MCT value of this paper's DHNN-3SAT-GAKM model is the smallest, indicating that it is the most efficient at finding the global minimum. This is because the model first clusters the allocation space using the K-modes clustering algorithm, enabling it to escape local minima more quickly and avoid repeatedly retrieving the same local minima; a large number of redundant calculations is thereby eliminated, improving the efficiency of converging to the global minimum. The traditional DHNN-3SAT-WA, by contrast, is more prone to getting stuck in local minima, especially as the number of local-minimum solutions increases, resulting in substantial repeated evolution and computation that degrade the efficiency of converging to the global minimum. While the DHNN-3SAT-GA and DHNN-3SAT-ICA models also use heuristic algorithms for guided search, which helps reduce the search space and speed up retrieval of the global minimum, the rapidly expanding search space of increasingly complex SAT problems can lead to long retrieval times or even search failure. Consequently, for large-scale SAT problems, further improving the efficiency of searching for the global minimum is a priority for future work.
When dealing with large-scale SAT problems, the increasing logical complexity makes satisfying solutions an ever smaller fraction of the search space, so finding a global minimum solution within a limited timeframe is very challenging, and most attempts end at a local minimum solution. The goal of model optimization at this stage is to bring each retrieval result as close as possible to the global minimum solution. To evaluate this proximity, two new evaluation criteria are introduced in this study: the MMHD and the MLSR, which together reflect how close the model's retrieval results are to the global optimum. A lower MMHD value and a higher MLSR value indicate retrieval results closer to the global minimum solution. Figures 11 and 12 show the MMHD and MLSR values of each model. The DHNN-3SAT-GAKM model has the smallest MMHD value and the largest MLSR value, indicating that its retrieval results are, overall, closer to the global minimum solution than those of the DHNN-3SAT-WA, DHNN-3SAT-GA, and DHNN-3SAT-ICA models. Similarly, the retrieval results of the DHNN-3SAT-GA and DHNN-3SAT-ICA models are closer to the global minimum than those of the conventional DHNN-3SAT-WA.
Taken together, these analyses show that the DHNN-3SAT-GAKM model introduced in this paper improves significantly on the traditional DHNN-3SAT-WA model, as well as on the DHNN-3SAT-GA and DHNN-3SAT-ICA models, which apply heuristic algorithms directly for guided retrieval. This demonstrates the superior performance of the DHNN-3SAT-GAKM model in retrieving global minima for the SAT problem and highlights its potential for practical applications.
This paper introduces a method for designing network synaptic weights from basic logical clauses, so that dynamic changes to the constraints of a SAT problem can be handled by reusing synaptic weight information efficiently, reducing repetitive weight calculations in the DHNN. It also proposes a DHNN-3SAT model based on genetic-algorithm-optimized K-modes clustering to address the tendency of the traditional DHNN-3SAT-WA to become trapped in local minima: the new model clusters the initial assignment space, effectively reducing the retrieval space and improving retrieval efficiency. Experimental results show that the DHNN-3SAT-GAKM model outperforms DHNN-3SAT-WA, DHNN-3SAT-GA, and DHNN-3SAT-ICA on all the evaluation metrics considered, namely GMR, MCT, MMHD, and MLSR. This study not only extends the application of DHNNs to the SAT problem but also offers valuable insights for future research.
The DHNN-3SAT model is an innovative approach to using deep learning technology to solve SAT problems, offering insights and potential for future research and applications. Several directions remain for future work: first, optimizing the algorithm and architecture of the DHNN-3SAT model to improve its performance on large-scale, complex SAT problems; second, extending the model to other NP-complete problems to demonstrate its versatility and applicability; and finally, studying the model in specific domains and practical applications to further promote the use of deep learning techniques in combinatorial optimization and decision-making. In summary, the proposal and study of the DHNN-3SAT model not only enriches the methods available for SAT solving but also provides new ideas and tools for tackling complex problems with deep learning techniques. With continued technological and theoretical progress, the application of deep learning to combinatorial optimization is expected to see broader development and to deliver effective solutions for complex real-world problems.
Xiaojun Xie: Writing-review & editing, Writing-original draft, Methodology, Formal analysis, Conceptualization. Saratha Sathasivam: Writing-review & editing, Methodology, Funding acquisition, Conceptualization, Validation, Supervision. Hong Ma: Writing-review & editing, Writing-original draft, Methodology, Investigation, Formal analysis, Conceptualization. All authors have read and approved the final version of the manuscript for publication.
This research was supported by the Ministry of Higher Education Malaysia (MOHE) through the Fundamental Research Grant Scheme (FRGS), FRGS/1/2022/STG06/USM/02/11, and Universiti Sains Malaysia.
All authors declare no conflicts of interest in this paper.
[1] W. Xu, J. Cao, M. Xiao, D. W. Ho, G. Wen, A new framework for analysis on stability and bifurcation in a class of neural networks with discrete and distributed delays, IEEE Trans. Cybern., 45 (2015), 2224–2236. https://doi.org/10.1109/TCYB.2014.2367591
[2] L. Wang, T. Dong, M. F. Ge, Finite-time synchronization of memristor chaotic systems and its application in image encryption, Appl. Math. Comput., 347 (2019), 293–305. https://doi.org/10.1016/j.amc.2018.11.017
[3] F. C. Hoppensteadt, E. M. Izhikevich, Pattern recognition via synchronization in phase-locked loop neural networks, IEEE Trans. Neural Netw., 11 (2000), 734–738. https://doi.org/10.1109/72.846744
[4] H. Shen, Y. Zhu, L. Zhang, J. H. Park, Extended dissipative state estimation for Markov jump neural networks with unreliable links, IEEE Trans. Neural Netw. Learn. Syst., 28 (2016), 346–358. https://doi.org/10.1109/TNNLS.2015.2511196
[5] C. Xu, M. Liao, P. Li, Y. Guo, Q. Xiao, S. Yuan, Influence of multiple time delays on bifurcation of fractional-order neural networks, Appl. Math. Comput., 361 (2019), 565–582. https://doi.org/10.1016/j.amc.2019.05.057
[6] P. Liu, Z. Zeng, J. Wang, Asymptotic and finite-time cluster synchronization of coupled fractional-order neural networks with time delay, IEEE Trans. Neural Netw. Learn. Syst., 31 (2020), 4956–4967. https://doi.org/10.1109/TNNLS.2019.2962006
[7] A. Pratap, R. Raja, J. Alzabut, J. Cao, G. Rajchakit, C. Huang, Mittag-Leffler stability and adaptive impulsive synchronization of fractional order neural networks in quaternion field, Math. Methods Appl. Sci., 43 (2020), 6223–6253. https://doi.org/10.1002/mma.6367
[8] L. Chua, Memristor-the missing circuit element, IEEE Trans. Circuit Theory, 18 (1971), 507–519. https://doi.org/10.1109/TCT.1971.1083337
[9] J. Zhou, X. Ma, Z. Yan, S. Arik, Non-fragile output-feedback control for time-delay neural networks with persistent dwell time switching: A system mode and time scheduler dual-dependent design, Neural Netw., 169 (2024), 733–743. https://doi.org/10.1016/j.neunet.2023.11.007
[10] J. W. Smith, Complex-valued neural networks for data-driven signal processing and signal understanding, arXiv: 2309.07948, 2023. https://doi.org/10.48550/arXiv.2309.07948
[11] R. Trabelsi, I. Jabri, F. Melgani, F. Smach, N. Conci, A. Bouallegue, Indoor object recognition in RGB-D images with complex-valued neural networks for visually-impaired people, Neurocomputing, 330 (2019), 94–103. https://doi.org/10.1016/j.neucom.2018.11.032
[12] M. Z. Khan, A. Sarkar, A. Noorwali, Memristive hyperchaotic system-based complex-valued artificial neural synchronization for secured communication in industrial internet of things, Eng. Appl. Artif. Intell., 123 (2023), 106357. https://doi.org/10.1016/j.engappai.2023.106357
[13] H. Zhang, J. Cheng, H. Zhang, W. Zhang, J. Cao, Quasi-uniform synchronization of Caputo type fractional neural networks with leakage and discrete delays, Chaos Solitons Fractals, 152 (2021), 111432. https://doi.org/10.1016/j.chaos.2021.111432
[14] S. Yang, J. Yu, C. Hu, H. Jiang, Quasi-projective synchronization of fractional-order complex-valued recurrent neural networks, Neural Netw., 104 (2018), 104–113. https://doi.org/10.1016/j.neunet.2018.04.007
[15] H. Zhang, Y. Cheng, H. Zhang, W. Zhang, J. Cao, Hybrid control design for Mittag-Leffler projective synchronization on FOQVNNs with multiple mixed delays and impulsive effects, Math. Comput. Simul., 197 (2022), 341–357. https://doi.org/10.1016/j.matcom.2022.02.022
[16] H. L. Li, J. Cao, H. Jiang, A. Alsaedi, Finite-time synchronization of fractional-order complex networks via hybrid feedback control, Neurocomputing, 320 (2018), 69–75. https://doi.org/10.1016/j.neucom.2018.09.021
[17] H. L. Li, J. Cao, H. Jiang, A. Alsaedi, Finite-time synchronization and parameter identification of uncertain fractional-order complex networks, Physica A, 533 (2019), 122027. https://doi.org/10.1016/j.physa.2019.122027
[18] H. Yan, Y. Qiao, L. Duan, J. Miao, New results of quasi-projective synchronization for fractional-order complex-valued neural networks with leakage and discrete delays, Chaos Solitons Fractals, 159 (2022), 112121. https://doi.org/10.1016/j.chaos.2022.112121
[19] H. L. Li, C. Hu, J. Cao, H. Jiang, A. Alsaedi, Quasi-projective and complete synchronization of fractional-order complex-valued neural networks with time delays, Neural Netw., 118 (2019), 102–109. https://doi.org/10.1016/j.neunet.2019.06.008
[20] X. Li, X. Liu, F. Wang, Anti-synchronization of fractional-order complex-valued neural networks with a leakage delay and time-varying delays, Chaos Solitons Fractals, 174 (2023), 113754. https://doi.org/10.1016/j.chaos.2023.113754
[21] Y. Cheng, T. Hu, W. Xu, X. Zhang, S. Zhong, Fixed-time synchronization of fractional-order complex-valued neural networks with time-varying delay via sliding mode control, Neurocomputing, 505 (2022), 339–352. https://doi.org/10.1016/j.neucom.2022.07.015
[22] X. Song, X. Sun, J. Man, S. Song, Q. Wu, Synchronization of fractional-order spatiotemporal complex-valued neural networks in finite-time interval and its application, J. Franklin Inst., 358 (2021), 8207–8225. https://doi.org/10.1016/j.jfranklin.2021.08.016
[23] K. Udhayakumar, R. Rakkiyappan, F. A. Rihan, S. Banerjee, Projective multi-synchronization of fractional-order complex-valued coupled multi-stable neural networks with impulsive control, Neurocomputing, 467 (2022), 392–405. https://doi.org/10.1016/j.neucom.2021.10.003
[24] J. Yang, H. L. Li, L. Zhang, C. Hu, H. Jiang, Quasi-projective and finite-time synchronization of delayed fractional-order BAM neural networks via quantized control, Math. Methods Appl. Sci., 46 (2023), 197–214. https://doi.org/10.1002/mma.8504
[25] N. Yao, M. Hui, J. Zhang, J. Yan, W. Wu, Complete synchronization of delayed fractional-order complex-valued neural networks via adaptive control, In: 2022 5th International Conference on Artificial Intelligence and Big Data, 2022, 173–178. https://doi.org/10.1109/ICAIBD55127.2022.9820317
[26] A. A. Kilbas, H. M. Srivastava, J. J. Trujillo, Theory and applications of fractional differential equations, New York: Elsevier, 2006.
[27] X. Yang, J. Cao, Finite-time stochastic synchronization of complex networks, Appl. Math. Model., 34 (2010), 3631–3641. https://doi.org/10.1016/j.apm.2010.03.012
[28] L. Feng, J. Yu, C. Hu, C. Yang, H. Jiang, Nonseparation method-based finite/fixed-time synchronization of fully complex-valued discontinuous neural networks, IEEE Trans. Cybern., 51 (2020), 3212–3223. https://doi.org/10.1109/TCYB.2020.2980684
[29] Q. Gan, L. Li, J. Yang, Y. Qin, M. Meng, Improved results on fixed-/preassigned-time synchronization for memristive complex-valued neural networks, IEEE Trans. Neural Netw. Learn. Syst., 33 (2021), 5542–5556. https://doi.org/10.1109/TNNLS.2021.3070966
[30] B. Zheng, C. Hu, J. Yu, H. Jiang, Finite-time synchronization of fully complex-valued neural networks with fractional-order, Neurocomputing, 373 (2020), 70–80. https://doi.org/10.1016/j.neucom.2019.09.048
[31] A. A. Kilbas, M. Saigo, R. K. Saxena, Generalized Mittag-Leffler function and generalized fractional calculus operators, Integral Transform. Spec. Funct., 15 (2004), 31–49. https://doi.org/10.1080/10652460310001600717
[32] J. P. Aubin, A. Cellina, Differential inclusions: Set-valued maps and viability theory, Berlin, Heidelberg: Springer, 2012. https://doi.org/10.1007/978-3-642-69512-4
[33] A. F. Filippov, Differential equations with discontinuous righthand sides: Control systems, Dordrecht: Springer, 2013. https://doi.org/10.1007/978-94-015-7793-9
[34] G. Zhang, Z. Zeng, D. Ning, Novel results on synchronization for a class of switched inertial neural networks with distributed delays, Inf. Sci., 511 (2020), 114–126. https://doi.org/10.1016/j.ins.2019.09.048
[35] D. Baleanu, S. Sadati, R. Ghaderi, A. Ranjbar, T. Abdeljawad, F. Jarad, Razumikhin stability theorem for fractional systems with delay, Abstr. Appl. Anal., 2010 (2010), 124812. https://doi.org/10.1155/2010/124812
[36] J. Jia, Z. Zeng, LMI-based criterion for global Mittag-Leffler lag quasi-synchronization of fractional-order memristor-based neural networks via linear feedback pinning control, Neurocomputing, 412 (2020), 226–243. https://doi.org/10.1016/j.neucom.2020.05.074
[37] Y. Fan, X. Huang, Z. Wang, Y. Li, Improved quasi-synchronization criteria for delayed fractional-order memristor-based neural networks via linear feedback control, Neurocomputing, 306 (2018), 68–79. https://doi.org/10.1016/j.neucom.2018.03.060
[38] Y. Shen, S. Zhu, X. Liu, S. Wen, Multiple Mittag-Leffler stability of fractional-order complex-valued memristive neural networks with delays, IEEE Trans. Cybern., 53 (2022), 5815–5825. https://doi.org/10.1109/TCYB.2022.3194059
[39] M. Syed Ali, G. Narayanan, Z. Orman, V. Shekher, S. Arik, Finite time stability analysis of fractional-order complex-valued memristive neural networks with proportional delays, Neural Process. Lett., 51 (2020), 407–426. https://doi.org/10.1007/s11063-019-10097-7
[40] J. Zhou, J. Dong, S. Xu, Asynchronous dissipative control of discrete-time fuzzy Markov jump systems with dynamic state and input quantization, IEEE Trans. Fuzzy Syst., 31 (2023), 3906–3920. https://doi.org/10.1109/TFUZZ.2023.3271348
Weights | w123 | w124 | w134 | w234 | w12 | w13 | w14 | w23 | w24 | w34 | w1 | w2 | w3 | w4 |
P | 1/16 | 1/16 | 0 | 0 | 0 | 1/8 | −1/8 | 1/8 | −1/8 | 0 | 1/4 | 1/4 | 1/8 | 1/8
Weights | Cl1 | Cl2 | Cl3 | Cl4 | Cl5 | Cl6 | Cl7 | Cl8 |
wi | 1/8 | −1/8 | 1/8 | 1/8 | −1/8 | −1/8 | 1/8 | −1/8 |
wj | 1/8 | 1/8 | −1/8 | 1/8 | −1/8 | 1/8 | −1/8 | −1/8 |
wk | 1/8 | 1/8 | 1/8 | −1/8 | 1/8 | −1/8 | −1/8 | −1/8 |
wij | −1/8 | 1/8 | 1/8 | −1/8 | −1/8 | 1/8 | 1/8 | −1/8 |
wik | −1/8 | −1/8 | 1/8 | 1/8 | 1/8 | −1/8 | 1/8 | −1/8 |
wjk | −1/8 | 1/8 | −1/8 | 1/8 | 1/8 | −1/8 | 1/8 | −1/8 |
wijk | 1/16 | −1/16 | −1/16 | −1/16 | 1/16 | 1/16 | 1/16 | −1/16
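Entries of this kind arise from expanding a clause's inconsistency cost, in the style of Wan Abdullah's logic programming method for Hopfield networks. The sketch below is our own illustration under assumed conventions (bipolar states, cost (1/8)·∏(1 − s·S) that equals 1 only on the falsifying assignment, and a third-order energy whose triple term carries a factor of 2); it reproduces, for example, the Cl1 column above, while the sign patterns of the remaining columns follow the paper's own clause ordering.

```python
from itertools import combinations
from fractions import Fraction

def clause_weights(signs):
    """Weights contributed by one 3-SAT clause over bipolar neurons S in {-1,+1}.

    signs[i] = +1 if literal i appears positively, -1 if negated.
    The clause's inconsistency cost is (1/8) * prod_i (1 - signs[i]*S_i).
    Expanding the product gives a coefficient for every neuron subset;
    matching the assumed energy
        E = -sum_i w_i S_i - sum_{i<j} w_ij S_i S_j - 2*sum_{i<j<k} w_ijk S_i S_j S_k
    yields w = -coef for first/second order and w = -coef/2 for third order.
    """
    weights = {}
    for r in (1, 2, 3):
        for subset in combinations(range(3), r):
            coef = Fraction(1, 8)
            for i in subset:
                coef *= -signs[i]   # each chosen factor contributes -signs[i]*S_i
            weights[subset] = -coef / (2 if r == 3 else 1)
    return weights

# All-positive clause (x_i OR x_j OR x_k), i.e., the Cl1 column:
print(clause_weights((+1, +1, +1)))
# -> singletons 1/8, pairs -1/8, triple 1/16, matching Cl1 above.
```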
Weights | C11 | C12 | C13 | C14 | C15 | C16 | C17 | C21 | P |
w1 | 1/8 | −1/8 | 1/8 | 1/8 | −1/8 | −1/8 | 1/8 | 1/8 | 1/4 |
w2 | 1/8 | 1/8 | −1/8 | 1/8 | −1/8 | 1/8 | −1/8 | 1/8 | 1/4 |
w3 | 1/8 | 1/8 | 1/8 | −1/8 | 1/8 | −1/8 | −1/8 | 0 | 1/8 |
w4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1/8 | 1/8 |
w12 | −1/8 | 1/8 | 1/8 | −1/8 | −1/8 | 1/8 | 1/8 | −1/8 | 0 |
w13 | −1/8 | −1/8 | 1/8 | 1/8 | 1/8 | −1/8 | 1/8 | 0 | 1/8 |
w14 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | −1/8 | −1/8 |
w23 | −1/8 | 1/8 | −1/8 | 1/8 | 1/8 | −1/8 | 1/8 | 0 | 1/8 |
w24 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | −1/8 | −1/8 |
w34 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
w123 | 1/16 | −1/16 | −1/16 | −1/16 | 1/16 | 1/16 | 1/16 | 0 | 1/16 |
w124 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1/16 | 1/16 |
w234 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Weights | P | C22 | C23 | Padd | C12 | C13 | Pdec | C17 | C21 | C18 | C31 | Pupd |
w1 | 1/4 | −1/8 | 1/8 | 1/4 | −1/8 | 1/8 | 1/4 | 1/8 | 1/8 | −1/8 | 0 | −1/8 |
w2 | 1/4 | 1/8 | −1/8 | 1/4 | 1/8 | −1/8 | 1/4 | −1/8 | 1/8 | −1/8 | 1/8 | 1/4 |
w3 | 1/8 | 0 | 0 | 1/8 | 1/8 | 1/8 | −1/8 | −1/8 | 0 | −1/8 | 1/8 | 1/4 |
w4 | 1/8 | 1/8 | 1/8 | 3/8 | 0 | 0 | 1/8 | 0 | 1/8 | 0 | 1/8 | 1/8 |
w12 | 0 | 1/8 | 1/8 | 1/4 | 1/8 | 1/8 | −1/4 | 1/8 | −1/8 | −1/8 | 0 | −1/8 |
w13 | 1/8 | 0 | 0 | 1/8 | −1/8 | 1/8 | 1/8 | 1/8 | 0 | −1/8 | 0 | −1/8 |
w14 | −1/8 | −1/8 | 1/8 | −1/8 | 0 | 0 | −1/8 | 0 | −1/8 | 0 | 0 | 0 |
w23 | 1/8 | 0 | 0 | 1/8 | 1/8 | −1/8 | 1/8 | 1/8 | 0 | −1/8 | −1/8 | −1/4 |
w24 | −1/8 | 1/8 | −1/8 | −1/8 | 0 | 0 | −1/8 | 0 | −1/8 | 0 | −1/8 | −1/8 |
w34 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | −1/8 | −1/8 |
w123 | 1/16 | 0 | 0 | 1/16 | −1/16 | −1/16 | 3/16 | 1/16 | 0 | −1/16 | 0 | −1/16 |
w124 | 1/16 | −1/16 | −1/16 | −1/16 | 0 | 0 | 1/16 | 0 | 1/16 | 0 | 0 | 0 |
w234 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1/16 | 1/16 |
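Because the network energy is a sum of per-clause costs, the synaptic weights are additive over clauses, which is exactly what the Padd, Pdec, and Pupd columns above record: adding, deleting, or updating clauses only adds or subtracts the affected clauses' weight contributions rather than recomputing all weights. A minimal sketch of this bookkeeping follows; the dictionary keys and sample values are taken from the w1 and w4 rows of the table, and the helper names are ours.

```python
from collections import defaultdict
from fractions import Fraction

def apply_clause(weights, clause_contrib, sign=+1):
    """Add (sign=+1) or subtract (sign=-1) one clause's weight contribution."""
    for key, value in clause_contrib.items():
        weights[key] += sign * value

# Example mirroring the Padd column above: P_add = P + C22 + C23.
P = defaultdict(Fraction, {"w1": Fraction(1, 4), "w4": Fraction(1, 8)})
C22 = {"w1": Fraction(-1, 8), "w4": Fraction(1, 8)}
C23 = {"w1": Fraction(1, 8), "w4": Fraction(1, 8)}
for clause in (C22, C23):
    apply_clause(P, clause, +1)
print(P["w1"], P["w4"])   # 1/4 and 3/8, matching the Padd column
```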
No. | Instance | Variables | Clauses | No. | Instance | Variables | Clauses
1 | aim-50-1_6-yes1-1 | 50 | 80 | 7 | aim-100-3_4-yes1-1 | 100 | 340 |
2 | aim-50-2_0-yes1-1 | 50 | 100 | 8 | aim-100-6_0-yes1-1 | 100 | 600 |
3 | aim-50-3_4-yes1-1 | 50 | 170 | 9 | aim-200-1_6-yes1-1 | 200 | 320 |
4 | aim-50-6_0-yes1-1 | 50 | 300 | 10 | aim-200-2_0-yes1-1 | 200 | 400 |
5 | aim-100-1_6-yes1-1 | 100 | 160 | 11 | aim-200-3_4-yes1-1 | 200 | 680 |
6 | aim-100-2_0-yes1-1 | 100 | 200 | 12 | aim-200-6_0-yes1-1 | 200 | 1200 |
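The aim-* benchmarks above are distributed in DIMACS CNF format; a minimal reader, assuming the standard "p cnf <vars> <clauses>" header and zero-terminated clause lines (the function name is ours), might look like this:

```python
def read_dimacs(path):
    """Parse a DIMACS CNF file (e.g., SATLIB's aim-50-1_6-yes1-1.cnf).

    Returns (num_vars, clauses), where each clause is a list of signed
    integers: 3 means x3 and -3 means NOT x3.
    """
    num_vars, clauses = 0, []
    with open(path) as f:
        for line in f:
            tok = line.split()
            if not tok or tok[0] in ("c", "%"):   # comments / SATLIB footer
                continue
            if tok[0] == "p":                     # header: p cnf <vars> <clauses>
                num_vars = int(tok[2])
            else:
                lits = [int(t) for t in tok]
                if lits and lits[-1] == 0:        # clauses are 0-terminated
                    lits = lits[:-1]
                if lits:
                    clauses.append(lits)
    return num_vars, clauses

# n, cnf = read_dimacs("aim-50-1_6-yes1-1.cnf")
# assert n == 50 and len(cnf) == 80   # matches row 1 of the table above
```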
Parameter | Value | Parameter | Value
Number of initial assignments | 10000 | Tolerance value δ | 0.001
CPU time threshold | 24 hours | - | -
Parameter | Value | Parameter | Value
Number of initial assignments | 10000 | Mutation probability pm | 0.05
Population size | 50 | Maximum iterations t | 100
Crossover probability pc | 0.6 | - | -
Parameter | Value | Parameter | Value
Number of initial assignments | 10000 | Revolution rate α | 0.3
Population size | 50 | Maximum iterations t | 100
Parameter | Value | Parameter | Value
Number of initial assignments | 10000 | Crossover probability pc | 0.6
Population size | 50 | Mutation probability pm | 0.05
Initial number of clusters | 3 | Maximum iterations t | 100
Indicator | Formula | Description
GMR | GMR = N_GM / T | GMR is the ratio of the number of runs that converge to the global minimum to the total number of runs [16], and is an effective metric for assessing the efficiency of an algorithm; a model is considered robust when its GMR is close to 1 [23]. Here, N_GM is the number of runs that converge to the global minimum, and T is the total number of runs.
MCT | MCT = (1/N_GM) ∑_{i=1}^{N_GM} NT_i | MCT is the average CPU time a model needs to reach the global minimum; a smaller MCT indicates that the model finds the global minimum more efficiently. NT_i is the CPU time of the i-th run that reaches the global minimum, and N_GM is the number of such runs.
MMHD | MMHD = (1/T) ∑_{i=1}^{T} min_j D(X_i, Z_j) | MMHD is the mean minimum Hamming distance: the average over all runs of the smallest bit difference between each run's retrieval result and a global minimum. The closer MMHD is to 0, the closer the retrieved results are to the global minimum. D(X_i, Z_j) is the Hamming distance between the i-th run's result X_i and the j-th global minimum Z_j, and T is the total number of runs.
MLSR | MLSR = (1/T) ∑_{i=1}^{T} N_sat(i)/m | MLSR is the average proportion of clauses satisfied by the retrieval results; the closer MLSR is to 1, the closer the results are to the global minimum. N_sat(i) is the number of clauses satisfied by the i-th run's result, and m is the total number of clauses.
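To make the four indicators concrete, here is a small sketch that evaluates them exactly as defined above; the helper names are ours, and a run record is assumed to carry its CPU time, its result's minimum Hamming distance to the known global minima, and its satisfied-clause count.

```python
def evaluate(runs, total_clauses):
    """Compute GMR, MCT, MMHD, and MLSR from a list of run records.

    Each run is a dict with:
      'hit'      - True if the run converged to a global minimum
      'cpu_time' - CPU time of the run (contributes to MCT when hit)
      'min_hd'   - min Hamming distance from the result to any global minimum
      'n_sat'    - number of clauses satisfied by the result
    """
    T = len(runs)
    hits = [r for r in runs if r["hit"]]
    N_GM = len(hits)
    gmr = N_GM / T
    mct = sum(r["cpu_time"] for r in hits) / N_GM if N_GM else float("inf")
    mmhd = sum(r["min_hd"] for r in runs) / T
    mlsr = sum(r["n_sat"] / total_clauses for r in runs) / T
    return gmr, mct, mmhd, mlsr
```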
No. | GMR (DHNN-3SAT-WA) | GMR (DHNN-3SAT-GA) | MCT (DHNN-3SAT-WA) | MCT (DHNN-3SAT-GA) | MMHD (DHNN-3SAT-WA) | MMHD (DHNN-3SAT-GA) | MLSR (DHNN-3SAT-WA) | MLSR (DHNN-3SAT-GA)
1 | 1.0000 | 1.0000 | 4.83 | 2.51 | 0.0000 | 0.0000 | 1 | 1 |
2 | 0.9123 | 0.9323 | 16.67 | 11.70 | 1.1000 | 1.0900 | 0.9744 | 0.9745 |
3 | 0.8234 | 0.8456 | 26.67 | 22.83 | 3.6000 | 3.4900 | 0.9355 | 0.9578 |
4 | 0.5802 | 0.6204 | 48.82 | 45.80 | 3.7200 | 3.7100 | 0.8923 | 0.8966 |
5 | 1.0000 | 1.0000 | 11.72 | 6.31 | 0.0000 | 0.0000 | 1 | 1 |
6 | 0.8812 | 0.9011 | 46.51 | 16.01 | 2.5000 | 2.4000 | 0.9433 | 0.9533 |
7 | 0.5467 | 0.6041 | 113.28 | 35.50 | 3.6200 | 3.6100 | 0.9288 | 0.9363 |
8 | 0.2018 | 0.2188 | 440.39 | 171.23 | 4.7400 | 4.2300 | 0.9139 | 0.9231 |
9 | 0.4114 | 0.4222 | 537.92 | 431.04 | 4.8600 | 4.4500 | 0.9835 | 0.9844 |
10 | 0.2261 | 0.2352 | 978.78 | 773.75 | 5.9800 | 5.6700 | 0.9312 | 0.9474 |
11 | 0.1616 | 0.1653 | 1369.44 | 1100.94 | 7.1000 | 6.8900 | 0.8537 | 0.8775 |
12 | 0.1413 | 0.1518 | 1566.18 | 1198.85 | 8.2200 | 8.1100 | 0.8234 | 0.8641 |
No. | GMR (DHNN-3SAT-ICA) | GMR (DHNN-3SAT-GAKM) | MCT (DHNN-3SAT-ICA) | MCT (DHNN-3SAT-GAKM) | MMHD (DHNN-3SAT-ICA) | MMHD (DHNN-3SAT-GAKM) | MLSR (DHNN-3SAT-ICA) | MLSR (DHNN-3SAT-GAKM)
1 | 1.0000 | 1.0000 | 2.49 | 2.21 | 0.0000 | 0.0000 | 1 | 1 |
2 | 0.9441 | 0.9658 | 10.64 | 8.29 | 1.0700 | 1.0400 | 0.9761 | 0.9783 |
3 | 0.8542 | 0.9126 | 20.45 | 15.08 | 3.2700 | 1.1400 | 0.9662 | 0.9751 |
4 | 0.6356 | 0.7229 | 40.18 | 26.87 | 3.3700 | 2.9700 | 0.8978 | 0.9354 |
5 | 1.0000 | 1.0000 | 5.49 | 4.68 | 0.0000 | 0.0000 | 1 | 1 |
6 | 0.9124 | 0.9256 | 14.02 | 11.06 | 2.2000 | 2.1000 | 0.9644 | 0.9881 |
7 | 0.6205 | 0.6898 | 30.59 | 22.03 | 3.2000 | 2.9300 | 0.9375 | 0.9523 |
8 | 0.2291 | 0.3211 | 141.54 | 74.60 | 3.9800 | 3.7600 | 0.9251 | 0.9336 |
9 | 0.4301 | 0.4503 | 435.12 | 396.82 | 4.0800 | 3.8900 | 0.9851 | 0.9928 |
10 | 0.2488 | 0.2632 | 752.20 | 678.91 | 5.0800 | 4.7200 | 0.9488 | 0.9557 |
11 | 0.1689 | 0.2203 | 1108.03 | 935.02 | 6.0800 | 5.5500 | 0.8809 | 0.9028 |
12 | 0.1612 | 0.2097 | 1160.96 | 982.28 | 7.0800 | 6.3800 | 0.8732 | 0.8822 |