
Currently, the discrete Hopfield neural network faces challenges related to a limited search space and memory capacity. To address these issues, we propose integrating logical rules into the neural network to regulate neuron connections. This approach requires a logic framework that ensures the network consistently reaches the lowest global energy state. In this context, a novel logic called major 1,3 satisfiability is introduced, which places greater emphasis on third-order clauses than on first-order clauses. The proposed logic is trained by the exhaustive search algorithm, which aims to minimize the cost function toward zero. To evaluate the effectiveness of the proposed model, we compare its learning and retrieval errors with those of existing non-systematic logical structures, which rely primarily on first-order clauses. A similarity index measures the agreement between the benchmark neuron state and the final states of the existing and proposed models through extensive simulation studies. Notably, the major random 1,3 satisfiability model exhibited a more extensive solution space when the ratio of third-order clauses to first-order clauses exceeded 0.7. Comparison with other state-of-the-art models showed that the proposed model achieves significant results in capturing the overall neuron state. These findings underscore the notable enhancements in the performance and capabilities of the discrete Hopfield neural network.
Citation: Gaeithry Manoharam, Azleena Mohd Kassim, Suad Abdeen, Mohd Shareduwan Mohd Kasihmuddin, Nur 'Afifah Rusdi, Nurul Atiqah Romli, Nur Ezlin Zamri, Mohd. Asyraf Mansor. Special major 1, 3 satisfiability logic in discrete Hopfield neural networks[J]. AIMS Mathematics, 2024, 9(5): 12090-12127. doi: 10.3934/math.2024591
Artificial neural networks (ANN), drawing inspiration from the complex operations of the human brain, have triggered a global revolution in the field of artificial intelligence [1]. They act as a powerful tool across various sectors, including medicine [2], economics [3], transportation [4], education [5], and business [6], facilitating the development of advanced systems that solve complex intelligent computations. These networks, resembling the neurons in the human brain, consist of interconnected neurons that "learn" from examples and generalize beyond the learning data. Each neuron acts as a simple processing unit, with weighted inputs determining the strength of each connection [7,8]. Each neuron then combines all its inputs and generates an output, following the fundamental principle established by [9]. By employing a specific rule or activation function, the neuron produces an output based on the principle of Hebbian learning, which emphasizes the significance of recurrent activation patterns in strengthening neural connections. The Hopfield neural network (HNN) is a prominent example of an ANN characterized as an associative learning model. Within this network, data is stored in a bipolar representation and linked by synaptic weights. Over time, great improvements have been made to this network to address optimization challenges. Through performance metric tasks, ANN practitioners craft the most effective structure for the HNN. Although substantial progress has been made in recent years, further work is needed on the optimal configuration of the HNN [10]. Therefore, identifying the most suitable symbolic arrangement to govern the HNN requires special attention, especially when dealing with limited capacity and the challenge of local minimum energy in resolving complex optimization problems.
Boolean satisfiability (SAT) is the task of determining an assignment that satisfies a given Boolean formula [11,12]. A Boolean formula involves operations on Boolean variables whose outcomes are either TRUE or FALSE, often represented as {–1, 1}. For example, as described by [13], a conjunctive normal form (CNF) formula is composed of multiple clauses, each consisting of a set of combined literals. Satisfying a CNF formula requires setting the Boolean variables so that all clauses are satisfied simultaneously. In their study, [14] exploited the capabilities of the discrete Hopfield neural network (DHNN) to implement a structured SAT logical rule referred to as 3-satisfiability (3SAT), in which each clause consists of exactly three literals. That study explored pattern satisfiability with a specific focus on 3SAT and its integration with the DHNN. Furthermore, [15] introduced an evolved iteration of 3SAT called maximum k satisfiability (MAXkSAT) within the structure of the DHNN. The research was enhanced through the application of the clonal selection algorithm within the DHNN, achieving a notably superior global minima ratio. However, in the retrieval phase of the DHNN, the inclusion of non-redundant literals in MAXkSAT led to imperfect outcomes. Since then, various non-systematic logical rules that allow clauses of different orders have been introduced, and systematic logical rules are increasingly being overtaken by their non-systematic counterparts. Subsequently, the research by [16] utilized a hybrid technique to optimize the 2SAT logical rule with the DHNN. It is important to note that the DHNN was used to minimize logical inconsistencies when interpreting the logic clauses. The introduction of 2SAT brought about a revolution in logical rules, paving the way for the development of maximum 2 satisfiability (MAX2SAT). This framework adopts a systematic logic structure with negative literals, offering exciting possibilities and outcomes. Additionally, the DHNN has the remarkable ability to reproduce MAX2SAT behavior across various performance measurements. These developments are revolutionizing the field of systematic logic. The work by [17] presented noteworthy research exploring the implementation of the 3SAT logical rule in the DHNN. This work employed a modified clonal selection algorithm (CSA) to improve the effectiveness of 3SAT, successfully evaluating real data sets using the reverse analysis method. This demonstrates the power of systematic SAT in accurately representing and analyzing real-world data sets. The 2SAT strategy of [18] incorporated a mutation operator within the retrieval phase of the DHNN, adding another layer of complexity and effectiveness to the methodology. The exhaustive search (ES) algorithm indicates that utilizing an optimizer during the retrieval phase can greatly enhance the ability to uncover additional solutions within various search areas. Observations from previous studies show that the global minima ratio deteriorates sharply as the number of neurons increases, whether the logic is systematic or non-systematic. Therefore, structuring the logical rules is a crucial step toward obtaining a global solution for the network. The logical rule may be systematic or non-systematic, with clauses containing positive or negative literals, which can have a substantial impact on the network.
The exploration by [13] proposed using logical structure as a paradigm for the DHNN rather than merely a tool for optimization. Logic was adopted because of its capacity to state information explicitly in a systematic way and to incorporate resolution techniques. These attributes greatly assist the DHNN in providing evidence toward the desired objective of the network. The integration of logic as a symbolic rule within the DHNN has paved the way for various other representations of logic. One noteworthy example is the Abdullah method [13], which enables the identification of the optimal synaptic weights linked to the embedded logical rule and shows how logic-driven approaches continue to advance neural network technologies. In their study, [19] introduced the use of 2SAT logic in the DHNN. To enhance the efficiency of 2SAT, they utilized a genetic algorithm during the learning phase, an optimization technique that proved highly effective in their research. The study of [20] developed a unique approach called random 2 satisfiability (RAN2SAT) that combines first-order and second-order clauses as non-systematic logical rules. Another exploration in non-systematic logic, called weighted random 2 satisfiability (r2SAT) in the DHNN, was introduced by [21]. The logic phase in this logical structure ensures that the appropriate ratio of negated literals is imposed on RAN2SAT, an innovation with significant implications for enhancing problem-solving capabilities. However, while bipolar neurons in a DHNN offer simplicity, the use of continuous values within the range of –1 to 1 can result in precision issues and inaccuracies.
The research by [22] explored ways to expand the solution space of neural networks. Building upon this work, [22] went a step further by introducing third-order clauses, allowing the creation of higher-order non-systematic logic and opening exciting possibilities for advancing logic in the DHNN. The resulting novel logical rule, known as random 3 satisfiability (RAN3SAT), merges 3SAT logic with the established RAN2SAT concept and represents an exciting leap forward in this field of research. Integrating RAN3SAT into the DHNN revealed that it surpasses RAN2SAT in terms of the global minima ratio. However, it is important to note that RAN3SAT struggles to handle a lower number of neurons. To address this issue, [23,24] proposed an election algorithm (EA) specifically designed to enhance the learning phase of RAN3SAT by optimizing the model for maximum efficiency and effectiveness. Incorporating the EA overcame the limitations previously faced in this area and enables the DHNN to achieve an optimal final neuron state with higher neuron variation and a low similarity index. The EA also enhances the storage capacity of the DHNN through a non-systematic logical rule. Another study by [25] introduced a variant in logical structure called major 2 satisfiability (MAJ2SAT), leading to further advancement in logical rules. This approach incorporates a large proportion of 2SAT clauses relative to the total number of clauses, which enhances problem-solving capabilities. The increased presence of 2SAT clauses in non-systematic logical rules significantly impacts the final neuron state, creating a wider dimension of possibilities. In this regard, MAJ2SAT surpasses the current systematic logical rule by achieving a higher ratio of global minima and greater diversity while utilizing an equivalent number of neurons, demonstrating its superior performance and efficiency. In a recent study, [26] explored simulation data in the domain of non-systematic logical rule structures. They merged the characteristics of systematic and non-systematic logic to introduce Y random 2 satisfiability (YRAN2SAT), a novel approach that offers promising new insights into logical rules. This method combines logical rules with symbolic instruction, significantly improving the effectiveness of the DHNN; both techniques together enhance its overall performance. In their research, [27] introduced a new higher-order non-systematic logic called G random k satisfiability (GRAN3SAT), which randomly generates clauses of first, second, and third order. The integration of GRAN3SAT into a DHNN shows wide storage capacity and the capability to handle high-dimensional problems. Another innovative non-systematic logic, developed by [28] in the S-type random k satisfiability model, introduces a unique approach by assigning negative literals in the logical structure. This method relies on statistical parameters to ensure an accurate and efficient neural network.
The introduction of a probability-based system enhances the overall logical analysis process and improves the results. This integration aims to guide the logical rule toward a zero-cost function. The S-type random k satisfiability also involves identifying the synaptic weight configuration of the DHNN that yields a cost function equivalent to the fulfilled δ2SAT. While new approaches in non-systematic logic have been developed to explore the storage capacity of neural networks, little research has been conducted on utilizing major higher-order clauses within neural networks. These major higher-order clauses have the potential to significantly expand the search space, allowing for increased storage capacity.
In this research study, we propose the application of major higher-order clauses instead of first-order clauses, which has not been thoroughly explored in the previous literature. The performance of the DHNN under a combination of major higher-order and minimal lower-order clauses has received limited attention. This research examines how such an integration can reduce errors in both the learning and retrieval phases, seeking novel findings on a potential solution for improving overall performance. The major random 1, 3 satisfiability (MR1, 3SAT) is an innovative approach that combines major higher-order and minimal lower-order non-systematic logical rules within the DHNN. By emphasizing the higher-order logical rule, MR1, 3SAT achieves a wider storage capacity, while the lower-order logical rule aids in making precise predictions and classifications. By presenting novel findings in this unexplored area, we contribute insights that can potentially improve the effectiveness and accuracy of future learning and retrieval processes in the DHNN. Our work represents an initial effort to combine clauses as a logical rule within a DHNN that encompasses all previously proposed sets of logical rules. The main objectives of this study are stated below:
(a) To formulate a new non-systematic logical rule named MR1, 3SAT by combining major higher-order with minimal first-order logical rules into a single formula. This logic involves the selection of major third-order clauses based on a ratio and accommodates non-redundant variables.
(b) To implement MR1, 3SAT in a discrete Hopfield neural network by finding the minimum cost function of the satisfiable sub-logical rule. In this approach, each neuron represents a literal in the logical rule, and the optimal synaptic weights are determined by comparing the cost function with the Lyapunov energy function.
(c) To evaluate the performance of the proposed hybrid network, comprising MR1, 3SAT, the ES algorithm, and the discrete Hopfield neural network, using simulated datasets. This evaluates how well the logical structure performs, with low error, in the learning and retrieval phases.
(d) To compare the performance of the newly proposed MR1, 3SAT approach with the existing non-systematic logical rules during both the learning and retrieval phases, using the similarity index to measure the similarity between the final neuron state and the benchmark neuron state, where lower similarity indicates greater neuron variation.
The organization of this paper is outlined as follows: Section 2 presents the motivation for this research. Progressing to Section 3, we delve into the complete integration of MR1, 3SAT into a DHNN. The specifics of the experimental setup and the metrics utilized to assess performance throughout the simulation are expressed in Section 4. The exploration of the behavior and effectiveness of DHNN-MR1, 3SAT across parameters and stages along with a comparative analysis against established logical rules is conducted in Section 5. Finally, Section 6 contains the discussion of the results obtained in this research, and Section 7 concludes and discusses the future work of this research.
In this section, we discuss the motivation behind our work. Each motivation addresses a new exploration of existing works and how the proposed logical structure can fill the gaps in the field. This study aims to introduce a new way of organizing and understanding information in the field of discrete Hopfield neural networks.
The establishment of the logical rule facilitates the transition of DHNN neuron performance from the initial state to the final state. The absence of this rule exposes the DHNN to the risk of becoming trapped in a cycle of trial and error, lacking a clear bias toward converging to a state of absolute minimal energy. In such scenarios, the initial neuron state attained through a specific DHNN configuration presents a potential solution to an optimization problem. Previous research by [14,15] has effectively integrated the k-satisfiability logical rule into the DHNN. Nevertheless, in these works the final neuron state is governed by a fixed number of dimensions per clause. To illustrate, [14] generated a final neuron state that approaches a hundred percent of the local minima ratio. However, their approach is limited to two dimensions, which restricts the formulation of variables in the defined clauses and consequently limits the capacity of the logical rules. This constraint becomes more visible when higher-order logical rules, as demonstrated by [15], are integrated into the DHNN. The satisfiability of the logical structure increases as neuron connections across different clauses grow with the overall neuron count.
Following the findings of [26], they introduced a novel strategy of combining random second-order and first-order clauses to address issues related to overfitting. Their methodology diverges notably from that of random k satisfiability (RANkSAT), which imposes constraints on clause utilization. Similarly, [27] followed a comparable approach by incorporating randomly generated third-order, second-order, and first-order clauses within the DHNN to amplify the adaptability of the logical framework during the retrieval phase. In our current study, we present a logical rule that directly addresses these limitations. Our approach entails a major random generator which produces third-order clauses as well as minimal first-order clauses. This innovation empowers the DHNN to proficiently represent non-systematic logical rules, thereby transcending the previously mentioned problems.
In the DHNN, higher-order logical rules take into account connections among three variables at once, making them better at representing complex patterns. This increased capability of the logical structure has the potential to enhance performance in tasks that demand capturing complex relationships. Non-systematic logic departs structurally from systematic logic by allowing the number of variables per clause to vary randomly within the specific logical formula. Typically, non-systematic logical rules can be classified into two overarching viewpoints: those that utilize first-order clauses and those that do not. However, the involvement of first-order clauses within non-systematic logic has been identified as presenting challenges when it comes to achieving satisfaction, because first-order clauses admit only a limited set of satisfying interpretations, making it harder for a logical rule containing them to minimize the cost function. An illustrative case is RAN2SAT [24], which integrates first-order clauses; this approach was deemed ineffective during the learning phase, as it requires a greater number of iterations to attain the desired cost function. When the HNN fails to reach the absolute minimum cost function during the learning process, the network can retrieve inaccurate synaptic weights, which in turn increases the probability of becoming trapped within local minimum energy states. Conversely, the second perspective of non-systematic logic entails the exclusion of first-order clauses, which bears directly on the principal logic.
The authors in [22] proposed a variant of RAN3SAT that combines second-order and third-order clauses. This new logical formulation was successfully integrated into the HNN and resulted in the highest global minima ratio among all RAN3SAT variants. The introduction of third-order clauses into the formulation aids in augmenting the storage capacity of the HNN. As a result, the DHNN exhibits heightened efficiency in recovering neuron states across various logical orders. While the capacity to satisfy the cost function with potential interpretations surpasses that of the non-systematic logic of RANkSAT, accurate interpretations remain confined to the predetermined count of first-, second-, or third-order clauses allocated to the logical formulation. This situation prompted our study to adopt the strategy of randomly generating major third-order clauses, an approach that addresses the limitation observed in RANkSAT, which confines the storage capacity for retrieving diverse solution states that fulfill the cost function. This complication can be effectively addressed by proposing a mixed-order logical rule that includes both third-order and first-order clauses, thereby enabling the retrieval of an expanded range of final neuron states that achieve the global minimum energy.
The work in [25] introduced MAJ2SAT as a logical structure integrated into the DHNN. The aim of implementing MAJ2SAT is to emphasize the incorporation of more features from 2SAT rather than 3SAT into the DHNN. By constructing an efficient MAJ2SAT, we can expand our exploration of a broad solution space. In another attempt, [29] introduced major 3 satisfiability, building on the previous work of [30]. Their study incorporates a multi-objective election algorithm into the learning phase of major 3 satisfiability logic. This addition aims to improve the learning process by utilizing an exploration and exploitation mechanism that replaces the ES. As a result, major higher-order clauses have the potential to increase the storage capacity, accuracy, and diversity of the DHNN. To achieve this, it becomes imperative to contemplate the integration of higher content addressable memory (CAM). The CAM is greatly influenced by the availability and quality of the retrieval phase, and it is crucial to have ample and accurate final neuron states to achieve optimal results. Sufficient and diverse learning examples that cover the range of logical rules and their combinations can help the network learn effective weight configurations. Adequate representation of both major third-order and lower first-order clauses in the retrieval phase is crucial for the network to generalize well to the unseen behavior of the final neuron state.
The major random k satisfiability (MRkSAT) is a useful non-systematic logical structure presented in the easily understandable CNF. It consists of a sequence of clauses with random literals, where the numbers of clauses and literals are randomly determined. MRkSAT is composed of k-SAT clauses, where k is either 1 or 3. In the context of k-SAT, there is a collection of p literals and q clauses. Each literal can take a value of either TRUE or FALSE, denoted as {–1, 1}. The well-defined structure of MR1, 3SAT is as follows:
(a) A set of p variables, $A_1, A_2, A_3, \ldots, A_p$.
(b) A set of clauses, denoted as Z, defined as $Z = \{NC_1, NC_2, NC_3, \ldots, NC_t\}$, whereby
$NC_i = [m_i \; n_i]^T, \quad i \in [1, t]$, (1)
$\sigma = \dfrac{m_i}{m_i + n_i}, \quad \sigma \geqslant 0.7$, (2)
$q = \sigma m_i + n_i, \quad m_i > n_i$, (3)
where $m_i$ represents the number of third-order clauses and $n_i$ represents the number of first-order clauses. For example, with $m_i = 3$ and $n_i = 1$, Eq (2) gives $\sigma = 3/4 = 0.75 \geqslant 0.7$.
(c) The total number of clauses is $q = m_i + n_i$, where $m_i > n_i$.
(d) A set of literals, where each literal is either a variable $A$ or the negation of a variable, $\neg A$.
(e) A set of q distinct clauses, interconnected by the logical AND ($\wedge$) operator. Each of the $m_i$ third-order clauses consists of exactly three literals forming a k-SAT clause, each of the $n_i$ first-order clauses consists of a single literal, and the literals within a clause are linked by the logical OR ($\vee$) operator. The general formula for the MR1, 3SAT expression is
$L_{MR1,3SAT} = \bigwedge_{i=1}^{m} C_i^{3} \wedge \bigwedge_{i=1}^{n} C_i^{1}$, (4)
where
$C_i^k = \begin{cases} (A_i \vee B_i \vee C_i), & k = 3, \\ (D_i), & k = 1, \end{cases}$ (5)
where $C_i^3 \in \{A_i, \neg A_i, B_i, \neg B_i, C_i, \neg C_i\}$ and $C_i^1 \in \{D_i, \neg D_i\}$ are the variables in $L_{MR1,3SAT}$. Here, $C_i^3$ refers to the third-order clauses and $C_i^1$ refers to the first-order clauses. Possible examples of the formulation of the logical structure, in which third-order clauses form the majority, are
$L_{MR1,3SAT} = (A_1 \vee B_1 \vee C_1) \wedge (A_2 \vee B_2 \vee C_2) \wedge (A_3 \vee B_3 \vee C_3) \wedge (D_1) \wedge (D_2)$, (6)
$L_{MR1,3SAT} = (A_1 \vee B_1 \vee C_1) \wedge (A_2 \vee B_2 \vee C_2) \wedge (D_1)$. (7)
As presented in Eq (6), $L_{MR1,3SAT}$ is satisfiable when the neuron state $(A_1, B_1, C_1, \ldots, D_2)$ is $\{-1, 1, 1, 1, -1, 1, 1, 1, -1, 1, 1\}$, in which every clause evaluates to TRUE; any assignment that leaves at least one clause FALSE, for example one with $D_1 = -1$, does not satisfy the formula. Note that the major third-order logical structure does not consider redundant literals within a clause.
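To make the structure concrete, the following minimal sketch (hypothetical Python, not from the paper) encodes the formula of Eq (6) as a list of clauses over bipolar variables and checks whether a candidate assignment satisfies every clause; the encoding and the helper name `is_satisfied` are illustrative assumptions.

```python
# Eq (6): (A1∨B1∨C1) ∧ (A2∨B2∨C2) ∧ (A3∨B3∨C3) ∧ (D1) ∧ (D2).
# Variables indexed 0..10: A1,B1,C1, A2,B2,C2, A3,B3,C3, D1, D2.
# Each literal is (variable_index, sign): sign = +1 for A, -1 for ¬A.
clauses = [[(0, 1), (1, 1), (2, 1)],
           [(3, 1), (4, 1), (5, 1)],
           [(6, 1), (7, 1), (8, 1)],
           [(9, 1)],
           [(10, 1)]]

def is_satisfied(clauses, state):
    """A clause is TRUE if at least one literal agrees with the bipolar state."""
    return all(any(state[v] == s for v, s in clause) for clause in clauses)

state = [-1, 1, 1, 1, -1, 1, 1, 1, -1, 1, 1]       # the satisfying example above
print(is_satisfied(clauses, state))                 # True
print(is_satisfied(clauses, state[:9] + [-1, 1]))   # False: unit clause (D1) fails
```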
A DHNN is a computational model consisting of interconnected neurons without any hidden layers. Unlike traditional neural networks, the neurons in a DHNN are updated asynchronously, meaning they are updated one at a time rather than simultaneously. This asynchronous updating helps eliminate the possibility of neuron oscillations, ensuring stable and reliable computations [25]. The neurons in a DHNN take bipolar values of –1 and 1 [19]. The neuron update rule in the DHNN is as follows:
$S_i = \begin{cases} 1, & \text{if } \sum_{i}^{n} W_{ijk} S_j S_k \geqslant \partial, \\ -1, & \text{otherwise.} \end{cases}$ (8)
Here, $S_i$ represents the neuron state, $W_{ijk}$ is the synaptic weight from neuron state $i$ to $k$, and $\partial$ is the threshold value of the neuron state. The strength of the DHNN lies in its parallel computing capabilities and quick convergence, which give it an effective capacity for CAM. This property makes the DHNN suitable for tasks that involve pattern recognition, associative memory, and information retrieval. The synaptic weights have no self-interconnection, i.e., $W_{ijk} = W_{kij} = 0$. When a first-order clause connection is added to Eq (8), any repeated neuron connection results in a synaptic weight of zero. $L_{MR1,3SAT}$ is assigned as logic to the DHNN, with each neuron representing a variable. The variables in $L_{MR1,3SAT}$ are randomly assigned to clauses as in Eq (6) until the total number of neurons is reached. The cost function $C_{L_{MR1,3SAT}}$ of embedding the logical rule into the DHNN is given below in Eq (9):
$C_{L_{MR1,3SAT}} = \sum_{i=1}^{NC} \bigcap_{j=1}^{m+n} L_{ij}$, (9)
where NC is the number of clauses and $m + n$ is the number of variables in $L_{MR1,3SAT}$. The inconsistency of $L_{MR1,3SAT}$ is denoted as per Eq (10):
$L_{ij} = \begin{cases} \frac{1}{2}(1 - S_A), & \text{if } A, \\ \frac{1}{2}(1 + S_A), & \text{if } \neg A. \end{cases}$ (10)
In order to successfully incorporate $L_{MR1,3SAT}$ into the DHNN model, it is essential that the associated cost function $C_{L_{MR1,3SAT}}$ is minimized to zero. The DHNN is designed to search for an interpretation that results in a cost function value of zero. By minimizing the cost function, the network aims to find a solution in which all constraints are satisfied and the objective is achieved, ensuring that the DHNN produces an optimal outcome with valuable insights and accurate results. By identifying a consistent interpretation, it becomes possible to determine the optimal synaptic weights for $L_{MR1,3SAT}$ [25]. However, if $C_{L_{MR1,3SAT}} \neq 0$, $L_{MR1,3SAT}$ cannot be considered satisfiable, and the resulting synaptic weights are nearly random. To ensure accurate retrieval of the final neuron state in the DHNN, it is crucial to employ effective learning methods during the learning phase, with the primary objective of achieving $C_{L_{MR1,3SAT}} = 0$.
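A minimal sketch (assumed Python, reusing the clause encoding from the earlier sketch) of how Eqs (9) and (10) can be evaluated for a candidate interpretation, with the intersection in Eq (9) realized as a product over the literal terms, as in the Wan Abdullah formulation; the helper name `cost_function` is illustrative.

```python
def cost_function(clauses, state):
    """Eq (9): sum over clauses of the product of the literal terms of Eq (10).
    A positive literal A contributes 1/2(1 - S_A); a negated literal ¬A
    contributes 1/2(1 + S_A). The total is zero iff every clause is satisfied."""
    total = 0.0
    for clause in clauses:
        term = 1.0
        for v, sign in clause:
            term *= 0.5 * (1 - sign * state[v])
        total += term
    return total

# For the satisfying state of Eq (6) above, cost_function(clauses, state) == 0.0.
```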
In this study, the weights for DHNN-MR1, 3SAT are obtained using the Wan Abdullah method, which relies on identifying logical inconsistencies [13]. Each neuron is assigned a truth value, and the objective is to minimize the cost function by maximizing the number of satisfied clauses. In the retrieval stage, before the final state of a neuron is established, the local field plays a significant role in fine-tuning the obtained output [31]. The neuron states are asynchronously modified by applying the local field equation, given in Eq (11):
$h_i(t) = \sum_{k \neq i}^{n} \sum_{j \neq i}^{n} W_{ijk} S_j S_k + \sum_{j \neq i}^{n} W_{ij} S_j + W_i$, (11)
where $S_i$ represents the initial and updated state of neuron $i$, and $W_{ijk}$ and $W_i$ denote the synaptic weights for the third- and first-order connections of the DHNN, respectively. The primary objective of incorporating $L_{MR1,3SAT}$ into the DHNN is to retrieve, during the retrieval phase, final states that adhere to the logical rule. For example, Eq (11) is utilized to connect the final neuron state with more third-order clauses than first-order clauses, as per Eq (6) or (7). The local field plays a crucial role in determining the effectiveness of the final neuron states generated by the DHNN, and its impact on overall performance cannot be underestimated. Subsequently, the retrieved final states are interpreted to determine whether the final solution is overfitted. This interpretation is carried out based on the updating equation, which can be described by
$S_i(t+1) = \begin{cases} 1, & \text{if } \tanh(h_i) \geqslant 0, \\ -1, & \text{otherwise,} \end{cases}$ (12)
$\tanh(h_i) = \dfrac{e^{h_i} - e^{-h_i}}{e^{h_i} + e^{-h_i}}$. (13)
$h_i$ is the local field of the network, and the hyperbolic tangent function (tanh) applied to $h_i$ is the hyperbolic tangent activation function (HTAF), a squashing function that ensures the neural network operates efficiently. During the learning phase, the cost function is compared with the Lyapunov energy function of the DHNN to determine the synaptic weights. Consequently, the magnitude of the final neuron state can be assessed using the Lyapunov energy function $\Upsilon_{L_{MR1,3SAT}}$, given in Eq (14), whose minimum value $\Upsilon_{L_{MR1,3SAT}}^{min}$ is given in Eq (15):
$\Upsilon_{L_{MR1,3SAT}} = -\frac{1}{3} \sum_{i}^{n} \sum_{j \neq i}^{n} \sum_{k \neq i,j}^{n} W_{ijk} S_i S_j S_k - \frac{1}{2} \sum_{j \neq i}^{n} W_{ij} S_i S_j - \sum_{i}^{n} W_i S_i$, (14)
$\Upsilon_{L_{MR1,3SAT}}^{min} = \dfrac{m_i}{8} + \dfrac{n_i}{2}$. (15)
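Pulling Eqs (11)–(15) together, the sketch below (assumed Python; `W3`, `W2`, and `W1` are assumed nested-list weight arrays for the third-, second-, and first-order connections, with `W2` identically zero for MR1, 3SAT since it has no second-order clauses) shows one asynchronous retrieval update and the energy computation.

```python
import math

def local_field(i, state, W3, W2, W1):
    """Eq (11): sum of third-, second- and first-order weighted contributions."""
    n = len(state)
    h = W1[i]
    for j in range(n):
        if j == i:
            continue
        h += W2[i][j] * state[j]
        for k in range(n):
            if k not in (i, j):
                h += W3[i][j][k] * state[j] * state[k]
    return h

def update_neuron(i, state, W3, W2, W1):
    """Eqs (12)-(13): squash the local field with tanh (HTAF), then take
    the bipolar decision."""
    state[i] = 1 if math.tanh(local_field(i, state, W3, W2, W1)) >= 0 else -1

def lyapunov_energy(state, W3, W2, W1):
    """Eq (14), assuming symmetric weight arrays."""
    n = len(state)
    e3 = sum(W3[i][j][k] * state[i] * state[j] * state[k]
             for i in range(n) for j in range(n) for k in range(n)
             if len({i, j, k}) == 3)
    e2 = sum(W2[i][j] * state[i] * state[j]
             for i in range(n) for j in range(n) if i != j)
    e1 = sum(W1[i] * state[i] for i in range(n))
    return -e3 / 3.0 - e2 / 2.0 - e1
```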
After the DHNN-MR1, 3SAT learning phase is finished, the synaptic weights derived from it are applied during the retrieval phase. It is essential to highlight that network relaxation is paramount to reaching an accurate final state [21]. As the number of neurons increases, more interconnected neurons are involved in storing and retrieving information. However, an inefficient relaxation mechanism can result in numerous local minima solutions. To achieve a state of relaxation and stability within the network, the neuron updates are executed using the Sathasivam relaxation method [25]. The exchange of information between neurons is computed using the formula
$\dfrac{dh_i^{new}}{dt} = R \dfrac{dh_i}{dt}$. (16)
In Eq (16), $h_i^{new}$ is the new local field after relaxation, $h_i$ represents the local field of the network, and $R$ denotes the relaxation rate. In our research, we utilize a relaxation rate of $R = 3$. It is crucial to differentiate between the global minimum solution and the local minimum solution to ensure that the convergence property of the proposed MR1, 3SAT is satisfied. Additionally, the final neuron state must satisfy
$\left| \Upsilon_{L_{MR1,3SAT}} - \Upsilon_{L_{MR1,3SAT}}^{min} \right| \leqslant Tol$, (17)
where Tol represents a pre-established tolerance value of MR1, 3SAT. Eq (17) determines whether the final neuron state satisfies the constraints of $L_{MR1,3SAT}$.
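A small sketch (assumed Python) of the relaxation step of Eq (16), read as a discrete update of the local field, and the global-minimum test of Eqs (15) and (17); $R = 3$ and $Tol = 0.001$ follow the values used in this study.

```python
def relax(h, dh_dt, R=3, dt=1.0):
    """Eq (16), discretized: the relaxed local field changes R times faster."""
    return h + R * dh_dt * dt

def is_global_minimum(energy, m, n_first, tol=0.001):
    """Eq (17): the final state is a global minimum when its Lyapunov energy
    lies within Tol of the minimum energy of Eq (15)."""
    energy_min = m / 8.0 + n_first / 2.0
    return abs(energy - energy_min) <= tol
```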
Figure 1 illustrates the schematic diagram of the DHNN-MR1, 3SAT implementation. The diagram is divided into two primary phases: the learning phase and the retrieval phase. During the learning phase, the random arrangement of clauses for $L_{MR1,3SAT}$ is determined and then translated into a Boolean algebra representation, with each variable within a clause associated with a neuron. In this phase, the objective of MR1, 3SAT is to assign the neuron states that minimize the cost function described in Eq (9). By achieving the optimal neuron assignment, we can calculate the optimal synaptic weights, which are later employed in the retrieval phase. The retrieval phase involves utilizing the obtained synaptic weights to retrieve the desired information.
Figure 2 illustrates the methodological flowcharts employed in this paper. The green-coloured border signifies the learning phase processes conducted in this study. The clear presentation demonstrates how MR1, 3SAT is initialized and progresses through the learning phase to attain optimal synaptic weights using the respective equations. Conversely, the red-coloured border indicates the retrieval phase of MR1, 3SAT within the DHNN. This section also distinctly illustrates the sequence of steps involved in achieving minimum energy at the conclusion of the DHNN process in MR1, 3SAT. Algorithm 1 shows the pseudocode for the DHNN employed in the context of MR1, 3SAT in this study.
Algorithm 1. Pseudocode of DHNN-MR1, 3SAT
Input: Parameters, COMBMAX, trial number, relaxation rate, and tolerance value
Output: The final neuron state and global minimum solutions
[The pseudocode body appears as an image in the original and is not reproduced here.]
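Since the listing itself is an image, the following high-level sketch (our reconstruction from the flow described in the text and Figures 1 and 2, not the authors' exact pseudocode) summarizes the two phases; it reuses `cost_function`, `update_neuron`, `lyapunov_energy`, and `is_global_minimum` from the sketches above, and `wan_abdullah_weights` is a hypothetical helper that derives the synaptic weights by comparing the cost function with the energy function.

```python
import random

def dhnn_mr13sat(clauses, num_neurons, m, n_first,
                 trials=100, max_learn=100, tol=0.001):
    # --- Learning phase: exhaustive search (ES) for a consistent interpretation.
    for _ in range(max_learn):
        state = [random.choice([-1, 1]) for _ in range(num_neurons)]
        if cost_function(clauses, state) == 0:            # Eq (9) driven to zero
            break
    W3, W2, W1 = wan_abdullah_weights(clauses, num_neurons)  # hypothetical helper

    # --- Retrieval phase: asynchronous updates, then the energy test.
    global_count = 0
    for _ in range(trials):
        state = [random.choice([-1, 1]) for _ in range(num_neurons)]
        for i in random.sample(range(num_neurons), num_neurons):
            update_neuron(i, state, W3, W2, W1)           # Eqs (11)-(13)
        energy = lyapunov_energy(state, W3, W2, W1)       # Eq (14)
        if is_global_minimum(energy, m, n_first, tol):    # Eqs (15) and (17)
            global_count += 1
    return global_count / trials                          # global minima ratio
```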
This section explains the proposed logic output and its assessment through multiple evaluation metrics across all phases. The aim is to verify the efficacy of integrating a statistical parameter into MR1, 3SAT, which is designed for logic generation. Furthermore, detailed explanations are given for the simulation platform, parameter assignment, and performance metrics. In all models, the ES algorithm was utilized, and we employed a trial-and-error approach to minimize the cost function [25]. Table 1 provides a summary of the parameters involved in MR1, 3SAT.
| Parameter | Parameter value |
|---|---|
Number of learning (v) | 100 [18] |
Number of combination (n) | 100 [18] |
Number of trials (φ) | 100 [32] |
Number of neurons (NN) | 1<NN<130 |
Tolerance value (Tol) | 0.001 [11] |
Synaptic weight method | Abdullah [13] |
Relaxation rate (R) | 3 [11] |
Threshold CPU time | 24 hours [20] |
Learning iteration (ϕ) | ϕ ⩽ v [26] |
Initialization of neuron states | Random [27] |
Learning algorithm | ES |
Threshold constraint of DHNN (θ) | 0 [18] |
Activation function | HTAF [22] |
Order of clauses | First and third-order logic |
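For reference, the settings of Table 1 can be collected into a single configuration object; the sketch below is a hypothetical structure (the keys are our naming, the values are taken from the table).

```python
# Simulation settings from Table 1 (hypothetical structure, values from the table).
config = {
    "num_learning": 100,         # v
    "num_combinations": 100,     # n
    "num_trials": 100,           # φ
    "neurons_range": (1, 130),   # 1 < NN < 130
    "tolerance": 0.001,          # Tol
    "relaxation_rate": 3,        # R
    "threshold": 0,              # θ
    "activation": "HTAF",        # hyperbolic tangent activation function
    "learning_algorithm": "ES",  # exhaustive search
    "clause_orders": (1, 3),     # first- and third-order logic
}
```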
The experiments were conducted using Dev C++ Version 5.11 and executed using open-source software and a 64-bit Windows 10 operating system. To ensure unbiased interpretation of the findings, the simulations were performed on a single personal computer with an Intel Core i5 processor. To ensure impartiality, the same medium specifications were employed. Figure 2 illustrates each configuration of DHNN-MR1, 3SAT, where the green blocks denote the learning phase, while the red blocks signify the retrieval phase.
Third-order clauses in the DHNN play a vital role in capturing complex non-linear relationships among variables, extending beyond two-way interactions. These clauses facilitate a high capacity for complex data patterns, which can enhance the precision of models [33]. Third-order clauses introduce heightened interactions among three variables, opening an augmented realm of logical combinations and relationships. Consequently, the network's logical structure becomes more complex, involving a more advanced simulation strategy. Nonetheless, the incorporation of major third-order clauses augments network complexity, demanding larger network structures, expanded learning simulation data, and prolonged training durations. The information gathered from major third-order clauses introduces specific solutions through ensemble techniques, which combine multiple models to improve overall performance without falling into the overfitting trap. Complexity grows in the higher-order model in terms of interpretability, yet it is able to reduce unsatisfied models. Strategies such as error analysis and neuron similarity analysis can predict the performance of major third-order clauses in every mechanism of the DHNN.
In this section, we provide an explanation of the performance metrics employed for evaluating the efficacy of the proposed DHNN-MR1, 3SAT in comparison to other established approaches. The performance assessment focuses on the two essential phases of the DHNN: the learning phase and the retrieval phase. In the learning phase, the DHNN model aims to minimize inconsistency, which is represented by the network cost function. The retrieval phase of the DHNN plays a vital role in assessing the proposed MR1, 3SAT approach. It consists of evaluating the final state that achieves the absolute minimum energy, and it also considers the similarity of the neuron states. Analyzing this phase is essential, as it provides a broad understanding of how flexible and effective the proposed MR1, 3SAT is.
This subsection introduces a range of performance metrics for evaluation, which include root mean square error (RMSE), mean absolute error (MAE), sum of squared error (SSE), and mean absolute percentage error (MAPE). These metrics offer robust measures to accurately assess the performance and effectiveness of the proposed models or systems. These metrics have been selected based on previous studies [22,25,27] and are deemed relevant for this study. Eqs (18)–(25) represent the performance metrics for both the learning and retrieval phases. A zero value for these errors indicates an optimal learning and retrieval phase. In the learning process, we utilized a string of 130 neurons for learning iterations, as well as the same number of neurons for learning samples. For this process, it is important to note that the learning iterations and the number of learning samples can vary depending on the number of neurons in the neural network, as different numbers of neurons result in different numbers of iterations.
In this analysis, eight performance metrics assess the effectiveness of both the learning and retrieval phases of the suggested network. During the learning phase, we evaluate the network's ability to minimize the cost function associated with the logical rule; during the retrieval phase, we evaluate whether the network achieves the ideal final neuron state through convergence. In the learning phase, the RMSE and MAE measure the gap between the maximum clauses and the satisfied clauses in the proposed network model [34], while in the retrieval phase they measure the accuracy achieved by the proposed model. The RMSE in the learning phase is described in Eq (18) as
$RMSE_{Learn} = \sum_{i=1}^{\vartheta} \sqrt{\frac{1}{\vartheta} (\Gamma_{max} - \Gamma_i)^2}$, (18)
where ϑ is the number of neuron combinations, Γmax refers to the maximum clauses in the model, and Γi represents the total number of satisfied clauses in the model. In the retrieval phase, the RMSE equation is given by
$RMSE_{Retrieve} = \sum_{i=1}^{\vartheta} \sqrt{\frac{(G_{L_{MR1,3SAT}} - L_{L_{MR1,3SAT}})^2}{\omega \rho}}$, (19)
where $G_{L_{MR1,3SAT}}$ is the number of global minimum solutions, and $L_{L_{MR1,3SAT}}$ is the number of local minimum solutions. The metric $\omega$ is the number of combinations, while $\rho$ is the number of trials in the DHNN-MR1, 3SAT model, which evaluates the quality of the proposed model. The MAE is defined as the average of the absolute gap between the maximum clauses and the satisfied clauses. The formulas for calculating MAE in the learning and retrieval phases are
$MAE_{Learn} = \sum_{i=1}^{\vartheta} \frac{1}{\vartheta} \left| \Gamma_{max} - \Gamma_i \right|$, (20)
$MAE_{Retrieve} = \sum_{i=1}^{\vartheta} \frac{\left| G_{L_{MR1,3SAT}} - L_{L_{MR1,3SAT}} \right|}{\omega \rho}$. (21)
The MAPE measures the magnitude of the error as a percentage in the learning and retrieval phases, evaluating the current performance of the DHNN-MR1, 3SAT model by measuring the percentage difference between the maximum number of clauses and the actual satisfied clauses [35]. The formulas for MAPE in both phases are given by
$MAPE_{Learn} = \frac{100}{\vartheta} \sum_{i=1}^{\vartheta} \frac{\left| \Gamma_{max} - \Gamma_i \right|}{\left| \Gamma_i \right|}$, (22)
$MAPE_{Retrieve} = \frac{100}{\vartheta} \sum_{i=1}^{\vartheta} \frac{\left| G_{L_{MR1,3SAT}} - L_{L_{MR1,3SAT}} \right|}{\left| \omega \rho \right|}$. (23)
The SSE used in this research evaluates and quantifies the overall error or discrepancy between the maximum number of clauses and the total satisfied clauses by the proposed DHNN-MR1, 3SAT model. The SSE provides a single and easily interpretable number that represents the total error [36]. Equation (24) in the learning phase and Eq (25) in the retrieval phase are used to calculate the SSE in this work.
$SSE_{Learn} = \sum_{i=1}^{\vartheta} (\Gamma_{max} - \Gamma_i)^2$, (24)
$SSE_{Retrieve} = \sum_{i=1}^{\vartheta} (G_{L_{MR1,3SAT}} - L_{L_{MR1,3SAT}})^2$. (25)
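As an illustration, the learning-phase versions of these four metrics can be computed as in the sketch below (assumed Python; `gamma_max` is the number of clauses in the formula, `gamma[i]` the number of satisfied clauses in combination i, assumed nonzero per Eq (22), and the function name is our own). The retrieval-phase variants substitute the global/local minimum counts and $\omega\rho$ as in Eqs (19), (21), (23), and (25).

```python
import math

def learning_errors(gamma, gamma_max):
    v = len(gamma)                          # number of neuron combinations ϑ
    diffs = [gamma_max - g for g in gamma]
    rmse = sum(math.sqrt(d ** 2 / v) for d in diffs)                      # Eq (18)
    mae = sum(abs(d) / v for d in diffs)                                  # Eq (20)
    mape = (100 / v) * sum(abs(d) / abs(g) for g, d in zip(gamma, diffs)) # Eq (22)
    sse = sum(d ** 2 for d in diffs)                                      # Eq (24)
    return rmse, mae, mape, sse
```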
To facilitate the energy analysis, we have employed the approach of [17,30], utilizing both global and local minima ratios. In the retrieval phase, the energy minimization achieved by DHNN-MR1, 3SAT was recorded in the same manner as the previous performance metrics. The energy analysis is determined using Eqs (26)–(28):
$Q_{Global} = \sum_{i=1}^{\vartheta} G_{L_{MR1,3SAT}}$, (26)
$Q_{Local} = \sum_{i=1}^{\vartheta} L_{L_{MR1,3SAT}}$, (27)
$SSE_{Energy} = (\Upsilon_{L_{MR1,3SAT}} - \Upsilon_{L_{MR1,3SAT}}^{min})^2$. (28)
As stated in [37], the initial neuron state of the DHNN may introduce bias into the retrieval phase. This bias emerges because the DHNN learns to memorize the ultimate neuron state outright. To tackle this concern, we implement a systematic approach: all neuron states are generated using Eq (29), which helps minimize any potential positive (PP) and potential negative (PN) biases. The similarity index serves as a metric for quantifying the correlation between the ultimate neuron state and the desired neuron state in the retrieval phase of MR1, 3SAT. The optimal neuron state, denoted as $S_i^{optimal}$, is defined as
$S_i^{optimal} = \begin{cases} 1, & \text{if } A, \\ -1, & \text{if } \neg A. \end{cases}$ (29)
The equation includes $A$, which represents a positive literal, and $\neg A$, which represents a negative literal present in every clause of MR1, 3SAT. Thereby, $S_i^{optimal}$ represents the ideal neuron state. It is important to note that Eq (29) takes into account the final neuron state that achieves the global minimum energy. In this context, the quality of the final neuron state is evaluated using the Jaccard index, $S_{Jaccard}$ [24]:
$S_{Jaccard} = \dfrac{e}{e + f + g}$. (30)
Moreover, it is crucial to note that during this experiment, instances of false positives and false negatives may result in a trap at the local minima [38]. As a result, an examination of the similarity index (SI) defined in Eq (30) is conducted, with random clauses and literals generated in the $L_{MR1,3SAT}$ logic. Additionally, Eqs (31) and (32) represent the total variation (TV) of DHNN-MR1, 3SAT. As mentioned previously, the similarity analysis compares the final neuron states obtained with the benchmark neuron states listed in Table 2.
$TV = \sum_{i=1}^{\vartheta} \sum_{n=1}^{\omega \rho} (J_i)_n$, (31)
$(J_i)_n = \begin{cases} -1, & \text{if } (L_{MR1,3SAT})_n = (L_{MR1,3SAT})_{n+1}, \\ 1, & \text{if } (L_{MR1,3SAT})_n \neq (L_{MR1,3SAT})_{n+1}. \end{cases}$ (32)
| Variable | $S_i^{max}$ | $S_i$ |
|---|---|---|
e | 1 | 1 |
f | 1 | –1 |
g | –1 | 1 |
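The counts e, f, and g of Table 2 and the Jaccard index of Eq (30) can be computed as in the sketch below (assumed Python; the function name is illustrative), comparing the benchmark state $S_i^{max}$ with a retrieved final state $S_i$.

```python
def jaccard_index(s_max, s):
    """Eq (30) with the Table 2 definitions: e counts (1, 1) pairings,
    f counts (1, -1), and g counts (-1, 1)."""
    e = sum(1 for a, b in zip(s_max, s) if a == 1 and b == 1)
    f = sum(1 for a, b in zip(s_max, s) if a == 1 and b == -1)
    g = sum(1 for a, b in zip(s_max, s) if a == -1 and b == 1)
    return e / (e + f + g)

# Example: e=2, f=1, g=0 -> 2/3; identical states give an index of 1.
print(jaccard_index([1, -1, 1, 1], [1, -1, -1, 1]))
```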
Given that the main objective of this paper is to evaluate the performance of the logic structure by measuring the effectiveness of major third-order clauses, a comparison is made with existing logics in the DHNN. This comparison focuses on three aspects: logic structure, retrieval phase, and quality of the solution. By examining these behaviors of the logic, we can gain insights into their effectiveness and determine their impact on the overall performance of the system [39].
To examine the impact of a majority of third-order clauses in a non-systematic logic structure, we investigate how manipulating the majority of clauses influences the behavior and performance of the logic. To evaluate the capability of the logical rule in controlling lower-order clauses and accurately reflecting the behavior of the dataset, we determine how effectively the major clauses can control the lower-order clauses and ensure that the logic aligns with the behavior of the given dataset. These investigations and comparisons provide insights into the performance and effectiveness of the non-systematic major higher-order logical structure.
After the implementation of MR1, 3SAT in the DHNN, it is necessary to assess its performance by comparing the quality of the final neuron state with baseline models. Furthermore, an evaluation is conducted to analyze the neuron variations in the retrieval phase, which can identify the presence of global minima solutions and the level of neuron variation. By conducting these evaluations and comparisons, we can obtain a deeper understanding of how well the selected logic in the DHNN performs in comparison to the chosen recent logics with non-systematic structures.
(a) RAN2SAT, as introduced by [24], is a logical rule that combines second- and first-order clauses and was implemented in the DHNN as the first non-systematic logic. The logic under consideration, MR1, 3SAT, exhibits structural differences compared to RAN2SAT while incorporating major higher-order clauses. RAN2SAT is known to provide a wider range of synaptic weights due to the connection of the first-order clause. In this approach, each literal state is randomly defined, but the number of clauses for each order can be predetermined. Specifically, the number of neurons ranges over $3 < NN < 50$.
(b) RAN3SAT, by [22], extended the previous research of [24] by incorporating higher-order logic, specifically 3SAT clauses, 2SAT clauses, and first-order clauses, within a non-systematic SAT structure. The goal was to address the interpretability limitations of existing non-systematic SAT structures by increasing the number of neurons per clause. While the number of clauses for each order was chosen randomly and the literal states were explicitly defined, the number of neurons was limited to the range $6 < NN < 50$.
(c) RAN3, 1SAT, by [22], features a different combination from RAN3SAT, comparing different clause orders within the logical structure. This approach extended the non-systematic SAT structure by incorporating higher-order logic, specifically 3SAT and first-order clauses exclusively. The primary objective of RAN3, 1SAT was to explore the potential of non-systematic SAT structures by increasing the allocation of neurons per clause. While the number of clauses of each order was randomly chosen, in contrast to MR1, 3SAT there was a limitation in the selection of clauses. Moreover, the number of neurons of RAN3, 1SAT was limited to the range $6 < NN < 50$.
(d) YRAN2SAT, introduced by [26], is a special logical rule referred to as Y-type random 2 satisfiability. YRAN2SAT stands out due to its unique approach of randomly generating first- and second-order clauses, combining both systematic and non-systematic logic into a novel framework. YRAN2SAT offers significant flexibility in exploring the search space, with high potential for diverse solutions by incorporating features from both types of clauses. In this approach, the logical structure defines the total number of clauses while the literal states are randomly assigned. The number of neurons is specified in the range $1 < NN < 50$.
(e) GRAN3SAT was introduced by [27] as an innovative logical framework referred to as G-type RAN3SAT. This framework entails the random creation of first-order, second-order, and third-order satisfiability logical rules. The involvement of third-order clauses contributes to enhancing the expressive power of this proposed logic. The effectiveness of this logic was evaluated through four distinct simulation scenarios differing in clause counts, ratios of positive and negative literals, learning attempts, and iterations. The number of neurons is specified in the range $6 < NN < 150$.
This paper proposes a model that generates bipolar interpretations from simulated datasets in a random manner with a particular focus on major higher-order clauses. The logical representation employed in the simulations serves as the foundational structure for the generated simulated data. Simulated datasets are frequently employed to model and analyze the effectiveness of SAT logical structure as demonstrated in the works of [22,24,25,27].
In this section, we present the proposed logical output and measure its effectiveness through diverse evaluation metrics in each phase. The objective is to clarify the efficacy of integrating the MR1, 3SAT structure. Furthermore, we discuss the simulation platform, parameter assignments, and metric performance. The simulations were executed with a designated number of neurons, denoted NN. To be precise, the experiments were conducted up to NN = 100 to analyze training error, testing error, and energy; for the similarity index metric, the simulations were extended to NN = 130, where the simulation reached a 100% local minima ratio.
This section focuses on assessing the errors of all models during the learning phase of the DHNN. In the learning phase, the ES plays a crucial role by verifying clause satisfaction and ensuring the model achieves a zero-cost function. This section provides insight into the proposed model's synaptic weight management for all logical combinations of MR1, 3SAT.
In this section, we employed RMSE, MAE, MAPE, and SSE as metrics to quantify the error in the learning phase. These metrics evaluate the capability of SAT structures to be learned in the DHNN by assessing the fitness of the exhaustive search. Figures 3–6 display the errors attained by the various DHNN models in the restricted learning environment with NN = 100. In Tables 3–6, the "+" symbol indicates cases where the proposed MR1, 3SAT logic outperforms the existing non-systematic logic, the "–" symbol indicates cases where it performs worse, and the "Ð" column gives the difference in error between the proposed MR1, 3SAT and the existing non-systematic logics.
| NN | MR1, 3SAT RMSE | RAN2SAT RMSE | RAN2SAT Ð | RAN3SAT RMSE | RAN3SAT Ð | RAN3, 1SAT RMSE | RAN3, 1SAT Ð | YRAN2SAT RMSE | YRAN2SAT Ð | GRAN3SAT RMSE | GRAN3SAT Ð |
|---|---|---|---|---|---|---|---|---|---|---|---|
1-10 | 1.97 | 2.84 | -0.88 | 1.29 | 0.67 | 2.56 | -0.60 | 5.07 | -3.11 | 2.14 | -0.18 |
11-20 | 6.05 | 9.63 | -3.58 | 8.10 | -2.05 | 7.17 | -1.11 | 10.65 | -4.60 | 5.53 | 0.52 |
21-30 | 9.48 | 15.85 | -6.37 | 11.44 | -1.96 | 11.54 | -2.06 | 13.66 | -4.18 | 11.44 | -1.96 |
31-40 | 11.67 | 23.88 | -12.21 | 17.85 | -6.18 | 15.90 | -4.23 | 20.90 | -9.22 | 14.57 | -2.90 |
41-50 | 16.34 | 29.85 | -13.51 | 20.88 | -4.54 | 21.88 | -5.54 | 30.85 | -14.51 | 21.89 | -5.55 |
51-60 | 20.67 | 35.82 | -15.15 | 26.85 | -6.17 | 25.87 | -5.20 | 32.84 | -12.16 | 25.86 | -5.19 |
61-70 | 24.72 | 41.79 | -17.07 | 32.84 | -8.11 | 31.84 | -7.12 | 37.81 | -13.09 | 30.85 | -6.12 |
71-80 | 31.79 | 49.75 | -17.96 | 38.81 | -7.02 | 35.82 | -4.03 | 52.74 | -20.95 | 35.82 | -4.03 |
81-90 | 32.83 | 55.72 | -22.89 | 41.79 | -8.96 | 41.79 | -8.96 | 55.72 | -22.89 | 36.81 | -3.97 |
91-100 | 36.79 | 61.69 | -24.90 | 47.76 | -10.97 | 45.77 | -8.98 | 59.70 | -22.91 | 43.78 | -6.99 |
101-110 | 37.65 | 67.66 | -30.01 | 50.75 | -13.10 | 51.74 | -14.09 | 68.66 | -31.01 | 45.77 | -8.12 |
111-120 | 45.77 | 75.62 | -29.85 | 56.72 | -10.95 | 55.72 | -9.95 | 73.63 | -27.86 | 48.76 | -2.99 |
121-130 | 46.77 | 83.58 | -36.82 | 62.69 | -15.92 | 61.69 | -14.93 | 77.61 | -30.85 | 57.71 | -10.95 |
+/=/- | 13/0/0 | 12/0/0 | 13/0/0 | 13/0/0 | 12/0/1 | ||||||
Avg | 24.81 | 42.59 | 32.14 | 31.49 | 41.53 | 29.30 | |||||
Min | 1.97 | 2.84 | 1.29 | 2.56 | 5.07 | 2.14 | |||||
Max | 46.77 | 83.58 | 62.69 | 61.69 | 77.61 | 57.71 |
NN | MR1, 3SAT | RAN2SAT | RAN3SAT | RAN3, 1SAT | YRAN2SAT | GRAN3SAT
| MAE | MAE | Ð | MAE | Ð | MAE | Ð | MAE | Ð | MAE | Ð
1-10 | 1.50 | 2.43 | -0.92 | 0.94 | 0.56 | 2.14 | -0.63 | 4.57 | -3.07 | 1.69 | -0.19 |
11-20 | 5.10 | 9.40 | -4.30 | 7.53 | -2.43 | 6.63 | -1.53 | 10.44 | -5.34 | 4.80 | 0.30 |
21-30 | 8.69 | 15.72 | -7.03 | 11.06 | -2.37 | 11.25 | -2.56 | 13.49 | -4.80 | 11.06 | -2.37 |
31-40 | 10.90 | 23.76 | -12.87 | 17.71 | -6.82 | 15.81 | -4.91 | 20.79 | -9.89 | 14.18 | -3.28 |
41-50 | 15.90 | 29.70 | -13.81 | 20.77 | -4.87 | 21.76 | -5.87 | 30.69 | -14.80 | 21.78 | -5.88 |
51-60 | 20.37 | 35.64 | -15.27 | 26.69 | -6.32 | 25.74 | -5.37 | 32.67 | -12.30 | 25.72 | -5.36 |
61-70 | 24.46 | 41.58 | -17.12 | 32.67 | -8.21 | 31.68 | -7.22 | 37.62 | -13.16 | 30.69 | -6.23 |
71-80 | 31.58 | 49.50 | -17.92 | 38.61 | -7.03 | 35.64 | -4.06 | 52.48 | -20.89 | 35.64 | -4.06 |
81-90 | 32.67 | 55.45 | -22.78 | 41.58 | -8.92 | 41.58 | -8.92 | 55.45 | -22.78 | 36.61 | -3.95 |
91-100 | 36.59 | 61.39 | -24.80 | 47.52 | -10.94 | 45.54 | -8.96 | 59.41 | -22.82 | 43.56 | -6.98 |
101-110 | 37.37 | 67.33 | -29.96 | 50.50 | -13.13 | 51.49 | -14.12 | 68.32 | -30.95 | 45.54 | -8.18 |
111-120 | 45.54 | 75.25 | -29.70 | 56.44 | -10.89 | 55.45 | -9.90 | 73.27 | -27.72 | 48.51 | -2.97 |
121-130 | 46.53 | 83.17 | -36.63 | 62.38 | -15.84 | 61.39 | -14.85 | 77.23 | -30.69 | 57.43 | -10.89 |
+/=/- | 13/0/0 | 13/0/0 | 13/0/0 | 13/0/0 | 12/0/1 | ||||||
Avg | 24.40 | 42.33 | 31.88 | 31.24 | 41.26 | 29.02 | |||||
Min | 1.50 | 2.43 | 0.94 | 2.14 | 4.57 | 1.69 | |||||
Max | 46.53 | 83.17 | 62.38 | 61.39 | 77.23 | 57.43 |
NN | MR1, 3SAT | RAN2SAT | RAN3SAT | RAN3, 1SAT | YRAN2SAT | GRAN3SAT
| MAPE | MAPE | Ð | MAPE | Ð | MAPE | Ð | MAPE | Ð | MAPE | Ð
1-10 | 37.62 | 60.72 | -23.10 | 31.34 | 6.28 | 53.42 | -15.80 | 76.24 | -38.62 | 42.28 | -4.66 |
11-20 | 63.73 | 93.99 | -30.26 | 83.70 | -19.97 | 82.83 | -19.09 | 94.87 | -31.13 | 68.50 | -4.77 |
21-30 | 79.00 | 98.26 | -19.25 | 92.16 | -13.15 | 93.77 | -14.77 | 96.37 | -17.37 | 92.19 | -13.18 |
31-40 | 83.82 | 99.01 | -15.19 | 98.41 | -14.58 | 98.79 | -14.96 | 99.01 | -15.19 | 94.50 | -10.68 |
41-50 | 93.51 | 99.01 | -5.50 | 98.89 | -5.37 | 98.92 | -5.41 | 99.01 | -5.50 | 99.00 | -5.48 |
51-60 | 96.99 | 99.01 | -2.02 | 98.87 | -1.87 | 99.01 | -2.02 | 99.01 | -2.02 | 98.94 | -1.94 |
61-70 | 97.86 | 99.01 | -1.15 | 99.01 | -1.15 | 99.01 | -1.15 | 99.01 | -1.15 | 99.01 | -1.15 |
71-80 | 98.70 | 99.01 | -0.31 | 99.01 | -0.31 | 99.01 | -0.31 | 99.01 | -0.31 | 99.01 | -0.31 |
81-90 | 98.98 | 99.01 | -0.03 | 99.01 | -0.03 | 99.01 | -0.03 | 99.01 | -0.03 | 98.96 | 0.03 |
91-100 | 98.88 | 99.01 | -0.13 | 99.01 | -0.13 | 99.01 | -0.13 | 99.01 | -0.13 | 99.01 | -0.13 |
101-110 | 98.34 | 99.01 | -0.67 | 99.01 | -0.67 | 99.01 | -0.67 | 99.01 | -0.67 | 99.01 | -0.67 |
111-120 | 99.01 | 99.01 | 0.00 | 99.01 | 0.00 | 99.01 | 0.00 | 99.01 | 0.00 | 99.01 | 0.00 |
121-130 | 99.01 | 99.01 | 0.00 | 99.01 | 0.00 | 99.01 | 0.00 | 99.01 | 0.00 | 99.01 | 0.00 |
+/=/- | 11/2/0 | 10/2/1 | 11/2/0 | 11/2/0 | 11/2/0 | ||||||
Avg | 88.11 | 95.62 | 92.03 | 93.83 | 96.74 | 91.42 | |||||
Min | 37.62 | 60.72 | 31.34 | 53.42 | 76.24 | 42.28 | |||||
Max | 99.01 | 99.01 | 99.01 | 99.01 | 99.01 | 99.01 |
NN | MR1, 3SAT | RAN2SAT | RAN3SAT | RAN3, 1SAT | YRAN2SAT | GRAN3SAT
| SSE | SSE | Ð | SSE | Ð | SSE | Ð | SSE | Ð | SSE | Ð
1-10 | 32 | 119 | -87 | 16 | 16 | 77 | -45 | 481 | -450 | 44 | -12 | |||
11-20 | 514 | 7279 | -6765 | 2347 | -1833 | 1677 | -1164 | 9633 | -9119 | 511 | 3 | |||
21-30 | 2585 | 24589 | -22004 | 8408 | -5824 | 9841 | -7256 | 17652 | -15067 | 8453 | -5868 | |||
31-40 | 5183 | 57600 | -52417 | 30683 | -25500 | 24809 | -19626 | 44100 | -38917 | 14621 | -9437 | |||
41-50 | 18184 | 90000 | -71816 | 43306 | -25122 | 47819 | -29635 | 96100 | -77916 | 48308 | -30124 | |||
51-60 | 36308 | 129600 | -93292 | 71989 | -35681 | 67600 | -31292 | 108900 | -72592 | 66829 | -30522 | |||
61-70 | 57656 | 176400 | -118744 | 108900 | -51244 | 102400 | -44744 | 144400 | -86744 | 96100 | -38444 | |||
71-80 | 99430 | 250000 | -150570 | 152100 | -52670 | 129600 | -30170 | 280900 | -181470 | 129600 | -30170 | |||
81-90 | 108639 | 313600 | -204961 | 176400 | -67761 | 176400 | -67761 | 313600 | -204961 | 135955 | -27317 | |||
91-100 | 134408 | 384400 | -249992 | 230400 | -95992 | 211600 | -77192 | 360000 | -225592 | 193600 | -59192 | |||
101-110 | 142653 | 462400 | -319747 | 260100 | -117447 | 270400 | -127747 | 476100 | -333447 | 211600 | -68947 | |||
111-120 | 211600 | 577600 | -366000 | 324900 | -113300 | 313600 | -102000 | 547600 | -336000 | 240100 | -28500 | |||
121-130 | 220900 | 705600 | -484700 | 396900 | -176000 | 384400 | -163500 | 608400 | -387500 | 336400 | -115500 | |||
+/=/- | 13/0/0 | 13/0/0 | 13/0/0 | 13/0/0 | 12/0/1 | |||||||||
Avg | 79853 | 244553 | 138958 | 133863 | 231374 | 114009 | ||||||||
Min | 32 | 119 | 16 | 77 | 481 | 44 | ||||||||
Max | 220900 | 705600 | 396900 | 384400 | 608400 | 336400 |
ES enables the learning phase to verify the fulfilment of clauses. Table 3 and Figure 3 show how the proposed model manages synaptic weights for various logical combinations, including both the major higher-order and the first-order clauses. The results indicate that the RMSE learning values were at their lowest when more higher-order clauses were generated in the MR1, 3SAT formula. This suggests that accurate synaptic weights can be obtained by incorporating more third-order clauses in the MR1, 3SAT logical rules. Moreover, the MR1, 3SAT model outperformed the others as NN increased in the DHNN, without requiring additional learning iterations for the cost function. A low RMSE value can also indicate the ability of the model to avoid overfitting, meaning that the model has not fit the learning data too closely and can still generalize to new data. This leads to obtaining optimal synaptic weights.
Table 4 and Figure 4 below show the impressive results achieved by MR1, 3SAT in terms of MAE during the learning phase. It is worth noting that when MAE is minimized during learning, a pattern similar to that of the existing logical structures can be observed. This further highlights the effectiveness and reliability of MR1, 3SAT in producing accurate outcomes. With the help of the DHNN, the ideal final neuron state aligns with the performance of the MR1, 3SAT model. This dynamic capability ensures that optimal results are achieved, unlocking the maximum potential of MR1, 3SAT.
Table 5 reveals the MAPE values for MR1, 3SAT and the existing methods. In these findings, specifically within the range 11⩽NN⩽130, there is an observable trend: the MAPE value of the proposed model increases and tends to converge towards 100%. A MAPE value of 100% signifies a notable gap between the maximum fitness and the achieved neuron fitness. By contrast, research by [40] indicates that a lower MAPE percentage signifies a heightened level of accuracy. Thus, it can be deduced that a smaller MAPE value is indicative of superior forecasting performance. Figure 5 is a variation graph representing the results from Table 5.
Additionally, Table 6 and Figure 6 highlight a crucial point: the formulation of SSE in the learning phase. The values from Table 6 are plotted in Figure 6 on a logarithmic y-axis (scaled by 10,000). These values record the overall performance errors throughout the learning phase. The results confirm the accuracy and effectiveness of the learning process, with the proposed model attaining the minimum error in the learning phase; the minimum SSE error in the DHNN-MR1, 3SAT learning phase reduced the magnitude of the errors.
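To make the four learning-phase metrics concrete, the sketch below computes them for a hypothetical learning trace, assuming the residual at each learning iteration is the gap between the maximum clause fitness and the attained fitness, which is the usual convention for this family of models (see, e.g., [14]); the function name and the example values are ours.

```python
import math

def learning_errors(f_max, fitness_trace):
    """RMSE, MAE, MAPE, and SSE of a learning run, where each residual is
    the gap between the maximum fitness f_max and the attained fitness."""
    n = len(fitness_trace)
    residuals = [f_max - f for f in fitness_trace]
    sse = sum(r * r for r in residuals)
    rmse = math.sqrt(sse / n)
    mae = sum(abs(r) for r in residuals) / n
    mape = (100.0 / n) * sum(abs(r) / f_max for r in residuals)
    return {"RMSE": rmse, "MAE": mae, "MAPE": mape, "SSE": sse}

# Hypothetical run: a formula with f_max = 10 clauses and five learning
# iterations; the final two iterations reach full clause satisfaction.
print(learning_errors(10, [6, 7, 9, 10, 10]))
```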
This section focuses on evaluating the performance of various logical structures during the retrieval phase of the DHNN. It is divided into two parts: an analysis of retrieval errors and an assessment of the quality of the final neuron states. The quality analysis includes measures such as total variation and similarity analysis.
Retrieval error analysis holds great significance in investigating the behavior of MR1, 3SAT concerning synaptic weight management and the relationship with global or local minima solutions. Once the DHNN completes the verification of clause satisfaction, involving the minimization of the cost function, the synaptic weights are generated using the Abdullah method [13]. When the cost function reaches zero, it signifies the retrieval of optimal synaptic weights during the retrieval phase, leading to a global minimum solution. In Tables 7–10, the "+" symbol represents a surplus of the proposed MR1, 3SAT logic when compared with the existing non-systematic logic, while the "–" symbol represents a loss relative to the existing methods. The Ð column indicates the difference in error between the proposed MR1, 3SAT and the existing non-systematic logics.
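To illustrate the Abdullah method [13] mentioned above, the following sympy sketch expands the cost term of a single third-order clause; equating the expanded coefficients with the corresponding terms of the Lyapunov energy then yields the first-, second-, and third-order synaptic weights. The 1/8 normalization follows the usual bipolar formulation and is an assumption here, not a quotation of the paper's derivation.

```python
import sympy as sp

# Bipolar neuron symbols for a single third-order clause (A or B or C).
SA, SB, SC = sp.symbols('S_A S_B S_C')

# Cost of the clause: nonzero only when all three literals are false (S = -1),
# using one bipolar factor (1 - S)/2 per positive literal.
E_P = sp.Rational(1, 8) * (1 - SA) * (1 - SB) * (1 - SC)

# Expanding exposes one coefficient per product of neurons; matching each
# with the corresponding term of the Lyapunov energy fixes the synaptic
# weights (up to the normalization used in the energy convention).
for monomial, coeff in sp.expand(E_P).as_coefficients_dict().items():
    print(monomial, '->', coeff)
```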
NN | MR1, 3SAT | RAN2SAT | RAN3SAT | RAN3, 1SAT | YRAN2SAT | GRAN3SAT
| RMSE | RMSE | Ð | RMSE | Ð | RMSE | Ð | RMSE | Ð | RMSE | Ð
1-10 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | |
11-20 | 0.00 | 52.00 | -52.00 | 2.00 | -2.00 | 3.00 | -3.00 | 67.00 | -67.00 | 0.00 | 0.00 | |
21-30 | 0.00 | 94.00 | -94.00 | 33.00 | -33.00 | 47.00 | -47.00 | 84.00 | -84.00 | 27.00 | -27.00 | |
31-40 | 3.00 | 100.00 | -97.00 | 89.00 | -86.00 | 93.00 | -90.00 | 100.00 | -97.00 | 43.00 | -40.00 | |
41-50 | 41.00 | 100.00 | -59.00 | 96.95 | -55.95 | 98.00 | -57.00 | 99.80 | -58.80 | 99.00 | -58.00 | |
51-60 | 68.00 | 100.00 | -32.00 | 98.00 | -30.00 | 100.00 | -32.00 | 100.00 | -32.00 | 98.00 | -30.00 | |
61-70 | 87.00 | 99.90 | -12.90 | 100.00 | -13.00 | 100.00 | -13.00 | 100.00 | -13.00 | 100.00 | -13.00 | |
71-80 | 95.00 | 100.00 | -5.00 | 100.00 | -5.00 | 100.00 | -5.00 | 100.00 | -5.00 | 100.00 | -5.00 | |
81-90 | 98.00 | 100.00 | -2.00 | 100.00 | -2.00 | 100.00 | -2.00 | 100.00 | -2.00 | 99.00 | -1.00 | |
91-100 | 97.00 | 100.00 | -3.00 | 100.00 | -3.00 | 100.00 | -3.00 | 100.00 | -3.00 | 100.00 | -3.00 | |
101-110 | 98.00 | 99.99 | -1.99 | 100.00 | -2.00 | 100.00 | -2.00 | 100.00 | -2.00 | 100.00 | -2.00 | |
111-120 | 99.56 | 99.92 | -0.36 | 99.94 | -0.38 | 100.00 | -0.44 | 100.00 | -0.44 | 100.00 | -0.44 | |
121-130 | 100.00 | 99.95 | 0.05 | 100.00 | 0.00 | 99.80 | 0.20 | 99.97 | 0.03 | 99.97 | 0.03 | |
+/=/- | 11/1/1 | 11/2/0 | 11/1/1 | 11/1/1 | 10/2/1 | |||||||
Avg | 60.50 | 88.14 | 78.38 | 80.06 | 88.52 | 74.31 | ||||||
Min | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | ||||||
Max | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
NN | MR1, 3SAT | RAN2SAT | RAN3SAT | RAN3, 1SAT | YRAN2SAT | GRAN3SAT
| MAE | MAE | Ð | MAE | Ð | MAE | Ð | MAE | Ð | MAE | Ð
1-10 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
11-20 | 0.00 | 0.52 | -0.52 | 0.02 | -0.02 | 0.03 | -0.03 | 0.67 | -0.67 | 0.00 | 0.00 |
21-30 | 0.00 | 0.94 | -0.94 | 0.33 | -0.33 | 0.47 | -0.47 | 0.84 | -0.84 | 0.27 | -0.27 |
31-40 | 0.03 | 1.00 | -0.97 | 0.89 | -0.86 | 0.93 | -0.90 | 1.00 | -0.97 | 0.43 | -0.40 |
41-50 | 0.41 | 1.00 | -0.59 | 0.97 | -0.56 | 0.98 | -0.57 | 1.00 | -0.59 | 0.99 | -0.58 |
51-60 | 0.68 | 1.00 | -0.32 | 0.98 | -0.30 | 1.00 | -0.32 | 1.00 | -0.32 | 0.98 | -0.30 |
61-70 | 0.87 | 1.00 | -0.13 | 1.00 | -0.13 | 1.00 | -0.13 | 1.00 | -0.13 | 1.00 | -0.13 |
71-80 | 0.95 | 1.00 | -0.05 | 1.00 | -0.05 | 1.00 | -0.05 | 1.00 | -0.05 | 1.00 | -0.05 |
81-90 | 0.98 | 1.00 | -0.02 | 1.00 | -0.02 | 1.00 | -0.02 | 1.00 | -0.02 | 0.99 | -0.01 |
91-100 | 0.97 | 1.00 | -0.03 | 1.00 | -0.03 | 1.00 | -0.03 | 1.00 | -0.03 | 1.00 | -0.03 |
101-110 | 0.98 | 1.00 | -0.02 | 1.00 | -0.02 | 1.00 | -0.02 | 1.00 | -0.02 | 1.00 | -0.02 |
111-120 | 1.00 | 1.00 | 0.00 | 1.00 | 0.00 | 1.00 | 0.00 | 1.00 | 0.00 | 1.00 | 0.00 |
121-130 | 1.00 | 1.00 | 0.00 | 1.00 | 0.00 | 1.00 | 0.00 | 1.00 | 0.00 | 1.00 | 0.00 |
+/=/- | 10/3/0 | 10/3/0 | 10/3/0 | 10/3/0 | 9/4/0 | ||||||
Avg | 0.61 | 0.88 | 0.78 | 0.80 | 0.89 | 0.74 | |||||
Min | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | |||||
Max | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
NN | MR1, 3SAT | RAN2SAT | RAN3SAT | RAN3, 1SAT | YRAN2SAT | GRAN3SAT
| MAPE | MAPE | Ð | MAPE | Ð | MAPE | Ð | MAPE | Ð | MAPE | Ð
1-10 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
11-20 | 0.00 | 0.01 | -0.01 | 0.00 | 0.00 | 0.00 | 0.00 | 0.01 | -0.01 | 0.00 | 0.00 |
21-30 | 0.00 | 0.01 | -0.01 | 0.00 | 0.00 | 0.00 | 0.00 | 0.01 | -0.01 | 0.00 | 0.00 |
31-40 | 0.00 | 0.01 | -0.01 | 0.01 | -0.01 | 0.01 | -0.01 | 0.01 | -0.01 | 0.00 | 0.00 |
41-50 | 0.00 | 0.01 | -0.01 | 0.01 | -0.01 | 0.01 | -0.01 | 0.01 | -0.01 | 0.01 | -0.01 |
51-60 | 0.01 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 |
61-70 | 0.01 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 |
71-80 | 0.01 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 |
81-90 | 0.01 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 |
91-100 | 0.01 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 |
101-110 | 0.01 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 |
111-120 | 0.01 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 |
121-130 | 0.01 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 |
+/=/- | 4/9/0 | 2/10/0 | 2/11/0 | 4/9/0 | 1/12/0 | ||||||
Avg | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | |||||
Min | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | |||||
Max | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 |
NN | MR1, 3SAT | RAN2SAT | RAN3SAT | RAN3, 1SAT | YRAN2SAT | GRAN3SAT
| SSE | SSE | Ð | SSE | Ð | SSE | Ð | SSE | Ð | SSE | Ð
1-10 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 |
11-20 | 0.0E+00 | 2.7E+07 | -2.7E+07 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 |
21-30 | 0.0E+00 | 8.8E+07 | -8.8E+07 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 7.3E+06 | -7.3E+06 |
31-40 | 9.0E+04 | 1.0E+08 | -1.0E+08 | 0.0E+00 | 9.0E+04 | 0.0E+00 | 9.0E+04 | 0.0E+00 | 9.0E+04 | 1.8E+07 | -1.8E+07 |
41-50 | 1.7E+07 | 1.0E+08 | -8.3E+07 | 2.4E+01 | 1.7E+07 | 0.0E+00 | 1.7E+07 | 4.0E+02 | 1.7E+07 | 9.8E+07 | -8.1E+07 |
51-60 | 4.6E+07 | 1.0E+08 | -5.4E+07 | 0.0E+00 | 4.6E+07 | 0.0E+00 | 4.6E+07 | 0.0E+00 | 4.6E+07 | 9.6E+07 | -5.0E+07 |
61-70 | 7.6E+07 | 1.0E+08 | -2.4E+07 | 0.0E+00 | 7.6E+07 | 0.0E+00 | 7.6E+07 | 0.0E+00 | 7.6E+07 | 1.0E+08 | -2.4E+07 |
71-80 | 9.0E+07 | 1.0E+08 | -9.8E+06 | 0.0E+00 | 9.0E+07 | 0.0E+00 | 9.0E+07 | 0.0E+00 | 9.0E+07 | 1.0E+08 | -9.8E+06 |
81-90 | 9.6E+07 | 1.0E+08 | -4.0E+06 | 0.0E+00 | 9.6E+07 | 0.0E+00 | 9.6E+07 | 0.0E+00 | 9.6E+07 | 9.8E+07 | -2.0E+06 |
91-100 | 9.4E+07 | 1.0E+08 | -5.9E+06 | 0.0E+00 | 9.4E+07 | 0.0E+00 | 9.4E+07 | 0.0E+00 | 9.4E+07 | 1.0E+08 | -5.9E+06 |
101-110 | 9.6E+07 | 1.0E+08 | -3.9E+06 | 0.0E+00 | 9.6E+07 | 0.0E+00 | 9.6E+07 | 0.0E+00 | 9.6E+07 | 1.0E+08 | -4.0E+06 |
111-120 | 9.9E+07 | 1.0E+08 | -7.2E+05 | 3.2E+01 | 9.9E+07 | 0.0E+00 | 9.9E+07 | 0.0E+00 | 9.9E+07 | 1.0E+08 | -8.8E+05 |
121-130 | 1.0E+08 | 1.0E+08 | 1.0E+05 | 0.0E+00 | 1.0E+08 | 4.0E+02 | 1.0E+08 | 8.0E+00 | 1.0E+08 | 1.0E+08 | 6.0E+04 |
+/=/- | 11/1/1 | 0/13/0 | 0/13/0 | 0/13/0 | 10/1/2 | ||||||
Avg | 5.5E+07 | 8.6E+07 | 4.31 | 30.77 | 31.38 | 7.1E+07 | |||||
Min | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | |||||
Max | 1.0E+08 | 1.0E+08 | 32.00 | 400.00 | 400.00 | 1.0E+08 |
Based on Figure 7 and Table 7, MR1, 3SAT demonstrated minimal errors in the range 1⩽NN⩽130. Meanwhile, within this interval, GRAN3SAT also shows excellent performance in its retrieval phase, generating global minimum solutions comparable to the proposed logic.
When a small number of neurons is used for learning and retrieval, the model experiences underfitting in learning and retrieving the neuron variations, which is why the number of neurons must be increased. Interestingly, the retrieval errors show that, after NN = 40, the proposed model also achieved lower RMSE retrieval errors than the existing models. This is evidence that the existence of major third-order clauses enables the network to retrieve the ideal final neuron state by exploring more of the solution space.
Conversely, in the range 1⩽NN⩽30, Table 8 and Figure 8 show that MR1, 3SAT experienced an optimal retrieval phase, yielding the lowest errors compared with the existing logical structures. The optimal and suboptimal retrieval phases can be assessed by examining the number of global and local minima solutions generated by the proposed model. In summary, when considering all the non-systematic logic discussed, the combinations of RANkSAT exhibit a similar energy profile characterized by a continuous decrease toward the equilibrium neuron state [41]. This suggests that at NN=130, the proposed model generates a comparable number of global minima solutions.
The choice of the search algorithm plays a critical role in determining the quality of the solution for synaptic weight management. However, it is important to note that the "trial and error" nature of ES can impact the minimization of the cost function for such searching algorithms [18]. If ES fails to identify the optimal synaptic weights, it may have a negative impact on the retrieval phase, potentially leading to a local minimum solution. Alternatively, increasing the number of learning iterations can assist the proposed MR1, 3SAT in achieving an optimal retrieval phase, as shown in Figures 8–10, because a higher number of iterations has the potential to yield a global minimum solution. One might question why the local minima solution is considered "bad" in this approach. In our simulation, the local minima solution is deemed insignificant because a higher number of local solutions can disrupt the measurement of the similarity in the final states of the neurons. Therefore, this experimental setup aids the network in avoiding several local minima solutions.
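To ground this discussion, the sketch below shows an asynchronous retrieval loop restricted to second-order weights for brevity (a third-order local-field term would be added analogously). The hyperbolic tangent activation (HTAF) thresholding and the fixed-point stopping rule follow the usual DHNN convention, and the weights in the example are invented to exhibit a local minimum trap.

```python
import math
import random

def retrieve(W2, bias, state, max_sweeps=100, rng=random):
    """Asynchronous retrieval for a DHNN with symmetric second-order weights
    W2 (zero diagonal) and first-order weights `bias`. Each local field is
    squashed through tanh (HTAF) and thresholded to a bipolar value; updating
    stops once a full sweep changes no neuron."""
    n = len(state)
    for _ in range(max_sweeps):
        changed = False
        for i in rng.sample(range(n), n):            # random update order
            h = sum(W2[i][j] * state[j] for j in range(n)) + bias[i]
            new_s = 1 if math.tanh(h) >= 0 else -1
            if new_s != state[i]:
                state[i], changed = new_s, True
        if not changed:
            return state                             # stable final state
    return state

def energy(W2, bias, state):
    """Second-order Lyapunov energy; it never increases under the
    asynchronous update above."""
    n = len(state)
    quad = sum(W2[i][j] * state[i] * state[j] for i in range(n) for j in range(n))
    return -0.5 * quad - sum(b * s for b, s in zip(bias, state))

# Invented example: the global minimum is (+1, +1) with energy -0.7, yet
# starting from (-1, -1) the network stays trapped in a local minimum.
W2, bias = [[0.0, 0.5], [0.5, 0.0]], [0.1, 0.1]
final = retrieve(W2, bias, [-1, -1])
print(final, energy(W2, bias, final))        # [-1, -1] with energy -0.3
```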
It is worth noting that different combinations of non-systematic logical structures yield varying results. Specifically, RAN2SAT exhibits the highest errors compared to the other combinations. Conversely, GRAN3SAT demonstrates greater stability and produces fewer errors in terms of synaptic weight management, but only in the range 1<NN<30. This discrepancy can be attributed to the logical structure of SAT with different orders. The involvement of third-order logic proves to be the most effective combination, as the probability of obtaining a satisfied interpretation for third-order logic is higher than for first-order logic. On the other hand, the logical structure of first-order logic can disrupt the process of retrieving correct synaptic weights, which can consequently result in higher retrieval errors. In fact, MR1, 3SAT restricts the existence of first-order clauses.
This section delves into a detailed exploration of the energy profile and the types of solutions, whether global or local, generated by MR1, 3SAT. Tables 11 and 12 show the analysis of the energy profile, which involved examining the differences in energy, via the values of RMSE, MAE, MAPE, and SSE, between the minimum energy and the final energy. These comparisons can be observed in Figures 11 and 12. As NN increases for all non-systematic logical structures, the number of global minimum solutions decreases. In the minimum energy analysis, it is important to note that logical structures with more literals correspond to the presence of more clauses and require a greater number of iterations to generate feasible solutions. Figure 11 provides further insight, demonstrating that higher-order logical structures with major clauses achieve more consistent global minima solutions. Another perspective is that logical structures containing first-order clauses have the lowest probability of obtaining satisfying interpretations compared to randomly selected clauses in non-systematic logic.
NN | MR1, 3SAT | RAN2SAT | RAN3SAT | RAN3, 1SAT | YRAN2SAT | GRAN3SAT
| Zm | Lm | Zm | Lm | Zm | Lm | Zm | Lm | Zm | Lm | Zm | Lm
1-10 | 10000 | 0 | 8400 | 1600 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 |
11-20 | 10000 | 0 | 0 | 10000 | 9400 | 600 | 10000 | 0 | 2100 | 7900 | 10000 | 0 |
21-30 | 8700 | 1300 | 100 | 9900 | 8100 | 1900 | 7000 | 3000 | 3000 | 7000 | 7900 | 2100 |
31-40 | 5600 | 4400 | 0 | 10000 | 3100 | 6900 | 1200 | 8800 | 0 | 10000 | 7800 | 2200 |
41-50 | 3300 | 6700 | 0 | 10000 | 500 | 9500 | 0 | 10000 | 100 | 9900 | 6600 | 3400 |
51-60 | 1000 | 9000 | 80 | 9920 | 100 | 9900 | 300 | 9700 | 0 | 10000 | 0 | 10000 |
61-70 | 203 | 9797 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 200 | 9800 |
71-80 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 6 | 9994 |
81-90 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 |
91-100 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 |
101-110 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 58 | 9942 |
111-120 | 0 | 10000 | 1 | 9999 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 |
121-130 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 16 | 9984 |
Avg Zm | 2984.85 | 660.08 | 2400.00 | 2192.31 | 1169.23 | 3275.38 |
NN | MR1, 3SAT | RAN2SAT | RAN3SAT | RAN3, 1SAT | YRAN2SAT | GRAN3SAT
| SSE | SSE | Ð | SSE | Ð | SSE | Ð | SSE | Ð | SSE | Ð
1-10 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
11-20 | 0.00 | 0.24 | -0.24 | 0.01 | -0.01 | 0.02 | -0.02 | 0.81 | -0.81 | 0.00 | 0.00 |
21-30 | 0.00 | 0.54 | -0.54 | 0.24 | -0.24 | 0.23 | -0.23 | 0.55 | -0.55 | 0.21 | -0.21 |
31-40 | 0.01 | 1.20 | -1.19 | 0.85 | -0.84 | 0.59 | -0.58 | 0.76 | -0.75 | 0.41 | -0.40 |
41-50 | 0.31 | 1.35 | -1.05 | 1.09 | -0.79 | 0.78 | -0.48 | 1.28 | -0.98 | 0.91 | -0.60 |
51-60 | 0.61 | 1.66 | -1.05 | 1.01 | -0.40 | 0.89 | -0.28 | 1.63 | -1.02 | 1.06 | -0.45 |
61-70 | 0.93 | 1.68 | -0.75 | 1.39 | -0.45 | 1.05 | -0.12 | 1.83 | -0.90 | 1.29 | -0.36 |
71-80 | 1.34 | 1.91 | -0.58 | 1.74 | -0.40 | 1.13 | 0.21 | 1.72 | -0.39 | 1.51 | -0.17 |
81-90 | 1.17 | 2.27 | -1.10 | 1.76 | -0.59 | 1.20 | -0.03 | 1.92 | -0.75 | 1.85 | -0.68 |
91-100 | 1.49 | 2.31 | -0.82 | 1.92 | -0.43 | 1.30 | 0.19 | 2.41 | -0.93 | 2.22 | -0.74 |
101-110 | 1.64 | 2.21 | -0.57 | 2.15 | -0.51 | 1.45 | 0.19 | 2.56 | -0.92 | 2.65 | -1.01 |
111-120 | 1.65 | 2.30 | -0.65 | 2.54 | -0.89 | 1.49 | 0.16 | 2.68 | -1.03 | 2.47 | -0.82 |
121-130 | 1.87 | 3.14 | -1.27 | 2.79 | -0.93 | 1.63 | 0.23 | 2.55 | -0.69 | 2.62 | -0.75 |
+/=/- | 12/1/0 | 12/0/1 | 8/1/4 | 12/1/0 | 11/2/0 | ||||||
Avg | 0.85 | 1.60 | 1.34 | 0.90 | 1.59 | 1.32 | |||||
Min | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | |||||
Max | 1.87 | 3.14 | 2.79 | 1.63 | 2.68 | 2.65 |
This indicates that non-systematic logical structures are more susceptible to neuron oscillations. Additionally, an interesting finding is observed in Figure 11: within the range 1⩽NN⩽50, MR1, 3SAT can retrieve the highest number of global minimum solutions without any difference in energy. This can be attributed to optimal synaptic weight management, which facilitates the optimal retrieval phase and ensures consistent final neuron states. The significance of the energy profile lies in showing how well the model has performed in the neural network. The major higher-order clauses significantly improve the solution space in the neural network and enable a greater storage capacity.
The findings of [18] suggest that the Lyapunov energy function is bounded and plays a crucial role in determining the dynamics of the Hopfield neural network. In the case of DHNN-MR1, 3SAT, the energy function serves as an indicator of the optimality of the produced solutions. This finding is consistent with the research conducted by [42], which explores the quality of solutions in other types of Hopfield networks, including the kernel Hopfield neural network (KHNN) and the mean field theory Hopfield neural network (MFTHNN). According to their study, the Lyapunov energy function is a significant factor in observing the convergence of the DHNN. In Table 12 and Figure 13, it can be observed that the SSE energy increases with NN. This phenomenon occurs due to a lower probability of obtaining a satisfying interpretation for the minimum cost function, which causes a higher level of energy. The minimum energy of non-systematic logic of various orders can be influenced by the existence and manipulation of different SAT clauses, which affect the constraints and behavior of the logic circuit, ultimately impacting its energy consumption or efficiency.
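For reference, the Lyapunov energy used throughout this family of DHNN models (see, e.g., [13,18]) takes the standard form sketched below, with the usual index restrictions (i ≠ j ≠ k); a retrieved state is counted as a global minimum solution when its final energy lies within the tolerance Tol of the minimum energy:

$$H_{P}=-\frac{1}{3}\sum_{i}\sum_{j}\sum_{k}W^{(3)}_{ijk}S_{i}S_{j}S_{k}-\frac{1}{2}\sum_{i}\sum_{j}W^{(2)}_{ij}S_{i}S_{j}-\sum_{i}W^{(1)}_{i}S_{i},\qquad \left|H_{P}-H_{P}^{\min}\right|\leqslant \mathrm{Tol}.$$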
The findings of this paper suggest that MR1, 3SAT demonstrates the lowest energy difference among the various non-systematic logic orders. This observation may explain why this particular logic is capable of achieving minimum energy compared to the other orders. The study employed a bipolar neuron representation with the values –1 and 1. Interestingly, the presence of a zero value eliminates a specific coefficient, which can result in zero energy. However, the study emphasizes the significance of the Lyapunov energy function, which demonstrates the process of energy minimization by the proposed model. The focus of this study was on major clauses, which were restricted in the learning phase. In a similar study conducted by [18], multiple DHNN models were examined in a non-restricted learning environment that signified a lower energy profile. Therefore, the conclusion drawn is that a higher number of learning iterations can lead to the attainment of a global minimum energy. It is possible to make modifications to enhance the quality of the solutions provided by non-systematic logic in the DHNN.
The similarity index (SI) serves as a metric that allows for the assessment of the similarity or dissimilarity between different data items. The study by [24] proposed the use of SI measures, including the Jaccard index, Sokal–Sneath, Dice, and the TV parameter, to evaluate the performance and characteristics of the DHNN model when applied to logical satisfiability. These indexing parameters help analyze the similarity, dissimilarity, and variations within the neural network, providing valuable information about the effectiveness and behavior of the model in solving SAT instances [43].
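Following the variable convention listed at the end of this article (e counts positions where both the benchmark Smax_i and the retrieved S_i equal 1, f counts (1, –1) pairs, and g counts (–1, 1) pairs), these indices can be computed as in the sketch below. The formulas are the standard Jaccard, Sokal–Sneath, and Dice coefficients, and the example states are invented.

```python
def similarity_indices(benchmark, retrieved):
    """Similarity between the benchmark state S^max and a retrieved final
    state S, via the pair counts e, f, g. Assumes e + f + g > 0, i.e. at
    least one neuron pair differs from (-1, -1)."""
    e = sum(1 for a, b in zip(benchmark, retrieved) if a == 1 and b == 1)
    f = sum(1 for a, b in zip(benchmark, retrieved) if a == 1 and b == -1)
    g = sum(1 for a, b in zip(benchmark, retrieved) if a == -1 and b == 1)
    return {
        "Jaccard":      e / (e + f + g),
        "Sokal-Sneath": e / (e + 2 * (f + g)),
        "Dice":         2 * e / (2 * e + f + g),
    }

# Example with a 6-neuron benchmark and an invented retrieved state.
print(similarity_indices([1, 1, 1, -1, -1, 1], [1, -1, 1, -1, 1, 1]))
```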
According to Figure 14, the MR1, 3SAT model exhibits the lowest Jaccard similarity index (JSI) within the range NN = 1 to NN = 130. This suggests a significant deviation from the benchmark final states. An elevated JSI value would indicate that a model is prone to overfitting; by contrast, the MR1, 3SAT model produces detectable differences in the final states, as pictured in Figure 14. However, there are no values between NN = 60 and NN = 100 for RAN2SAT, RAN3SAT, RAN3, 1SAT, and YRAN2SAT, indicating that the generated final neuron states vary as the number of neurons increases [18]. The decrease in JSI is attributed to the fewer benchmark neurons generated during the retrieval phase by the proposed model.
Beyond that, the JSI ceases to provide meaningful values because all the solutions retrieved by the network are local solutions. This is attributed to the nature of the ES algorithm, which operates based on trial and error and may hinder the minimization of the cost function. As a result, ES fails to produce the optimal synaptic weights during the learning phase, subsequently impacting the final neuron states generated by the model at the end of the computation.
According to [44], if the total neuron variation (TV) increases during the retrieval phase, it could mean that the model is converging well or experiencing stability. This can help avoid overfitting, where the model becomes too specialized to the learning data and does not generalize well as the number of neurons increases. In contrast, if the total neuron variation increases during the learning phase, it could indicate that the model is learning to generalize better and is making progress toward minimizing the error in the learning data.
Referring to Figure 15, for NN≥81 it can be observed that the value of TV for MR1, 3SAT reached its lowest point. This indicates that all the final states of neurons converged to local solutions, resulting in a TV value of 0. Furthermore, MR1, 3SAT exhibited the highest TV value before NN≤90 compared with the other existing models, because the major higher-order clauses were able to retrieve a wider range of solutions. The existing model GRAN3SAT achieves the highest TV at smaller NN due to the higher probabilities of the third-order and second-order clauses in that model. Conversely, the effectiveness of the other non-systematic models declined as the global minimum energy decreased (as shown in Figure 11). Consequently, this leads to a decrease in the TV value, and eventually TV reached 0 since no final neuron states achieved the global minimum solution.
In this research, a preliminary analysis examined the effectiveness of a novel non-systematic logical rule called MR1, 3SAT integrated with the DHNN. The DHNN-MR1, 3SAT approach possesses several notable strengths, which can be elaborated upon as follows: First, the combination of first-order and third-order clauses demonstrated that MR1, 3SAT is capable of generating a wider capacity of neuron variations as the ratio of major third-order clauses increases. This observation suggests that the model has the potential to enhance the self-learning process, as indicated by [22].
Second, the DHNN-MR1, 3SAT approach exhibits the ability to retrieve accurate synaptic weights, leading to the attainment of global minimum solutions. This is particularly noteworthy because it remains effective even within a restrictive learning environment. In other words, the model can adapt and find optimal solutions regardless of the limitations imposed during learning. Finally, when comparing different ratios of MR1, 3SAT (with specific total clauses), the results show fewer and more stable errors, a greater variety of neuron variations, and overall energy minimization. Specifically, the DHNN-MR1, 3SAT approach outperformed the existing non-systematic logic consisting of first-order logical combinations in terms of error reduction when the number of learning sets was set to 100.
In conclusion, our research highlights the significant strengths of the DHNN-MR1, 3SAT approach. By formulating the combination of third-order and first-order clauses, the model can provide a wider range of neuron variations, enhance the self-learning process, retrieve accurate synaptic weights, and achieve global minimum solutions. Additionally, the major third-order clauses of MR1, 3SAT offer improved error reduction, stable errors, diverse neuron variations, and energy minimization. The novel logical structure MR1, 3SAT, through the existence of the major higher-order clauses, holds the potential to advance pattern recognition in real-world data analysis. This advancement can benefit applications in image processing, natural language processing, and other domains that rely on sophisticated data interpretation. It is crucial not only for these applications, but also for sectors such as finance, healthcare, and manufacturing, where rapid and precise decision-making is paramount. These findings contribute to our understanding of efficient logical rule integration and its potential applications in the field.
In future work, with MR1, 3SAT we will conduct an examination using metaheuristics in the learning and retrieval phases to optimize diversity and achieve global solutions. The application of metaheuristics in both phases is advantageous for generating diverse solutions. During the learning phase, synaptic weight analysis can be employed to address the impact of the energy function and global solutions on the synaptic weights. Additionally, according to the existing work proposed by [44,45,46], fractional-order systems can capture complex behaviors of neural networks. These studies also showed that integer-order systems could be advantageous in modeling the dynamics of neural networks.
Notably, the resilient architecture of artificial neural networks combined with our proposed logic provides a solid foundation for practical applications, such as predicting natural disasters. In this context, each neuron represents data attributes such as rainfall trends, river levels, drainage, and ground conditions. These attributes are embedded into the logic-mining approach suggested by [32], leading to the development of induced logic with predictive and classificatory capabilities. This logic mining also emphasizes the integration of fractional-order dynamics and bifurcation control mechanisms to enhance the representation of complex dynamics [46,47,48] and better capture real-world behaviors within neural networks. We will evaluate their applicability and effectiveness in stabilizing and controlling neural network dynamics, considering potential applications in areas such as pattern recognition and information processing.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This research was supported by the Ministry of Higher Education Malaysia through the Transdisciplinary Research Grant Scheme (TRGS) with Project Code: TRGS/1/2022/USM/02/3/3.
All authors declare no conflicts of interest in this paper.
[1] M. Soori, B. Arezoo, R. Dastres, Artificial intelligence, machine learning and deep learning in advanced robotics, Cognit. Rob., 3 (2023), 54–70. https://doi.org/10.1016/j.cogr.2023.04.001
[2] J. L. Patel, R. K. Goyal, Applications of artificial neural networks in medical science, Curr. Clin. Pharmacol., 2 (2007), 217–226. https://doi.org/10.2174/157488407781668811
[3] L. Feng, J. Zhang, Application of artificial neural networks in tendency forecasting of economic growth, Econ. Modell., 40 (2014), 76–80. https://doi.org/10.1016/j.econmod.2014.03.024
[4] A. Nikitas, K. Michalakopoulou, E. T. Njoya, D. Karampatzakis, Artificial intelligence, transport and the smart city: definitions and dimensions of a new mobility era, Sustainability, 12 (2020), 2789. https://doi.org/10.3390/su12072789
[5] M. Ozbey, M. Kayri, Investigation of factors affecting transactional distance in E-learning environment with artificial neural networks, Educ. Inf. Technol., 28 (2023), 4399–4427. https://doi.org/10.1007/s10639-022-11346-4
[6] M. Tkac, R. Verner, Artificial neural networks in business: two decades of research, Appl. Soft Comput., 38 (2016), 788–804. https://doi.org/10.1016/j.asoc.2015.09.040
[7] N. J. Nilsson, Probabilistic logic, Artif. Intell., 28 (1986), 71–87.
[8] C. Nebauer, Evaluation of convolutional neural networks for visual recognition, IEEE Trans. Neural Networks, 9 (1998), 685–696. https://doi.org/10.1109/72.701181
[9] D. O. Hebb, The organization of behavior: a neuropsychological theory, John Wiley and Sons, Inc., 1949.
[10] S. F. Ahmed, M. S. B. Alam, M. Hassan, Deep learning modelling techniques: current progress, applications, advantages, and challenges, Artif. Intell. Rev., 56 (2023), 13521–13617. https://doi.org/10.1007/s10462-023-10466-8
[11] S. Sathasivam, Upgrading logic programming in Hopfield network, Sains Malays., 39 (2010), 115–118.
[12] S. Sathasivam, W. A. T. W. Abdullah, Logic mining in neural network: reverse analysis method, Computing, 91 (2011), 119–133. https://doi.org/10.1007/s00607-010-0117-9
[13] W. A. T. W. Abdullah, Logic programming on a neural network, Int. J. Intell. Syst., 7 (1992), 513–519. https://doi.org/10.1002/int.4550070604
[14] M. A. Mansor, S. Sathasivam, Optimal performance evaluation metrics for satisfiability logic representation in discrete Hopfield neural network, Comput. Sci., 16 (2021), 963–976.
[15] M. A. Mansor, M. S. M. Kasihmuddin, S. Sathasivam, Robust artificial immune system in the Hopfield network for maximum k-satisfiability, Int. J. Interact. Multimedia Artif. Intell., 4 (2017), 63. https://doi.org/10.9781/ijimai.2017.448
[16] M. S. M. Kasihmuddin, M. A. Mansor, S. Sathasivam, Discrete Hopfield neural network in restricted maximum k-satisfiability logic programming, Sains Malays., 47 (2018), 1327–1335. https://doi.org/10.17576/JSM-2018-4706-30
[17] N. E. Zamri, M. A. Mansor, M. S. M. Kasihmuddin, A. Alway, M. S. Z. Jamaludin, S. A. Alzaeemi, Amazon employees resources access data extraction via clonal selection algorithm and logic mining approach, Entropy, 22 (2020), 596. https://doi.org/10.3390/e22060596
[18] M. S. M. Kasihmuddin, M. A. Mansor, M. F. M. Basir, S. Sathasivam, Discrete mutation Hopfield neural network in propositional satisfiability, Mathematics, 7 (2019), 1133. https://doi.org/10.3390/math7111133
[19] M. S. M. Kasihmuddin, M. A. Mansor, S. Sathasivam, Hybrid genetic algorithm in the Hopfield neural network for logic satisfiability problem, Pertanika J. Sci. Technol., 25 (2017), 139–152.
[20] S. Sathasivam, M. A. Mansor, A. I. M. Ismail, S. Z. M. Jamaludin, M. S. M. Kasihmuddin, M. Mamat, Novel random k satisfiability for k≤2 in Hopfield neural network, Sains Malays., 49 (2020), 2847–2857. https://doi.org/10.17576/jsm-2020-4911-23
[21] N. E. Zamri, S. A. Azhar, M. A. Mansor, A. Alway, M. S. M. Kasihmuddin, Weighted random k satisfiability for k = 1, 2 (r2SAT) in discrete Hopfield neural network, Appl. Soft Comput., 126 (2022), 109312. https://doi.org/10.1016/j.asoc.2022.109312
[22] S. A. Karim, N. E. Zamri, A. Alway, M. S. M. Kasihmuddin, A. I. M. Ismail, M. A. Mansor, et al., Random satisfiability: a higher-order logical approach in discrete Hopfield neural network, IEEE Access, 9 (2021), 50831–50845. https://doi.org/10.1109/ACCESS.2021.3068998
[23] M. M. Bazuhair, S. Z. M. Jamaludin, N. E. Zamri, M. S. M. Kasihmuddin, M. Mansor, A. Alway, et al., Novel Hopfield neural network model with election algorithm for random 3 satisfiability, Processes, 9 (2021), 1292. https://doi.org/10.3390/pr9081292
[24] S. Sathasivam, M. A. Mansor, M. S. M. Kasihmuddin, H. Abubakar, Election algorithm for random k satisfiability in the Hopfield neural network, Processes, 8 (2020), 568. https://doi.org/10.3390/pr8050568
[25] A. Alway, N. E. Zamri, S. A. Karim, M. A. Mansor, M. S. M. Kasihmuddin, M. M. Bazuhair, Major 2 satisfiability logic in discrete Hopfield neural network, Int. J. Comput. Math., 99 (2022), 924–948. https://doi.org/10.1080/00207160.2021.1939870
[26] Y. Guo, M. S. M. Kasihmuddin, Y. Gao, M. A. Mansor, H. A. Wahab, N. E. Zamri, et al., YRAN2SAT: a novel flexible random satisfiability logical rule in discrete Hopfield neural network, Adv. Eng. Software, 171 (2022), 103169. https://doi.org/10.1016/j.advengsoft.2022.103169
[27] Y. Gao, Y. Guo, N. A. Romli, M. S. M. Kasihmuddin, W. Chen, M. A. Mansor, et al., GRAN3SAT: creating flexible higher-order logic satisfiability in the discrete Hopfield neural network, Mathematics, 10 (2022), 1899. https://doi.org/10.3390/math10111899
[28] S. Abdeen, M. S. M. Kasihmuddin, N. E. Zamri, G. Manoharam, M. A. Mansor, N. Alshehri, S-type random k satisfiability logic in discrete Hopfield neural network using probability distribution: performance optimization and analysis, Mathematics, 11 (2023), 984. https://doi.org/10.3390/math11040984
[29] M. A. F. Roslan, N. E. Zamri, M. A. Mansor, M. S. M. Kasihmuddin, Major 3 satisfiability logic in discrete Hopfield neural network integrated with multi-objective election algorithm, AIMS Math., 8 (2023), 22447–22482. https://doi.org/10.3934/math.20231145
[30] S. A. Karim, M. S. M. Kasihmuddin, S. Sathasivam, M. A. Mansor, S. Z. M. Jamaludin, M. R. Amin, A novel multi-objective hybrid election algorithm for higher-order random satisfiability in discrete Hopfield neural network, Mathematics, 10 (2022), 1963. https://doi.org/10.3390/math10121963
[31] T. Fukai, S. Tanaka, A simple neural network exhibiting selective activation of neuronal ensembles: from winner-take-all to winners-share-all, Neural Comput., 9 (1997), 77–97. https://doi.org/10.1162/neco.1997.9.1.77
[32] M. S. M. Kasihmuddin, S. Z. M. Jamaludin, M. A. Mansor, H. A. Wahab, S. M. S. Ghadzi, Supervised learning perspective in logic mining, Mathematics, 10 (2022), 915. https://doi.org/10.3390/math10060915
[33] O. I. Abiodun, A. Jantan, A. E. Omolara, K. V. Dada, N. A. Mohamed, H. Arshad, State-of-the-art in artificial neural network applications: a survey, Heliyon, 4 (2018), e00938. https://doi.org/10.1016/j.heliyon.2018.e00938
[34] C. J. Willmott, K. Matsuura, Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance, Climate Res., 30 (2005), 79–82. https://doi.org/10.3354/CR030079
[35] A. de Myttenaere, B. Golden, B. L. Grand, F. Rossi, Mean absolute percentage error for regression models, Neurocomputing, 192 (2016), 38–48. https://doi.org/10.1016/j.neucom.2015.12.114
[36] M. Bilal, S. Masud, S. Athar, FPGA design for statistics-inspired approximate sum-of-squared-error computation in multimedia applications, IEEE Trans. Circuits Syst. II, 59 (2012), 506–510. https://doi.org/10.1109/TCSII.2012.2204841
[37] P. Ong, Z. Zainuddin, Optimizing wavelet neural networks using modified cuckoo search for multi-step ahead chaotic time series prediction, Appl. Soft Comput., 80 (2019), 374–386. https://doi.org/10.1016/j.asoc.2019.04.016
[38] P. Bergmann, K. Batzner, M. Fauser, D. Sattlegger, C. Steger, The MVTec anomaly detection dataset: a comprehensive real-world dataset for unsupervised anomaly detection, Int. J. Comput. Vision, 129 (2021), 1038–1059. https://doi.org/10.1007/s11263-020-01400-4
[39] V. Lopez, A. Fernandez, S. Garcia, V. Palade, F. Herrera, An insight into classification with imbalanced data: empirical results and current trends on using data intrinsic characteristics, Inf. Sci., 250 (2013), 113–141. https://doi.org/10.1016/j.ins.2013.07.007
[40] B. H. Goh, Forecasting residential construction demand in Singapore: a comparative study of the accuracy of time series, regression, and artificial neural network techniques, Eng. Constr. Archit. Manage., 5 (1998), 261–275. https://doi.org/10.1046/j.1365-232X.1998.00048.x
[41] J. J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci., 79 (1982), 2554–2558. https://doi.org/10.1073/pnas.79.8.2554
[42] V. Veerasamy, N. I. A. Wahab, R. Ramachandran, M. L. Othman, H. Hizam, A. X. R. Irudayaraj, et al., A Hankel matrix based reduced order model for stability analysis of hybrid power system using PSO-GSA optimized cascade PI-PD controller for automatic load frequency control, IEEE Access, 8 (2020), 71422–71446. https://doi.org/10.1109/ACCESS.2020.2987387
[43] X. Fan, W. Zhang, C. Zhang, A. Chen, F. An, SOC estimation of Li-ion battery using convolutional neural network with U-Net architecture, Energy, 256 (2022), 124612. https://doi.org/10.1016/j.energy.2022.124612
[44] J. Bruck, J. W. Goodman, A generalized convergence theorem for neural networks, IEEE Trans. Inf. Theory, 34 (1988), 1089–1092. https://doi.org/10.1109/18.21239
[45] P. Li, X. Peng, C. Xu, L. Han, S. Shi, Novel extended mixed controller design for bifurcation control of fractional-order Myc/E2F/miR-17-92 network model concerning delay, Math. Methods Appl. Sci., 46 (2023), 18878–18898. https://doi.org/10.1002/mma.9597
[46] Y. Zhang, P. Li, C. Xu, X. Peng, R. Qiao, Investigating the effects of a fractional operator on the evolution of the ENSO model: bifurcations, stability and numerical analysis, Fractal Fract., 7 (2023), 602. https://doi.org/10.3390/fractalfract7080602
[47] C. Xu, Z. Liu, P. Li, J. Yan, L. Yao, Bifurcation mechanism for fractional-order three-triangle multi-delayed neural networks, Neural Process. Lett., 55 (2023), 6125–6151. https://doi.org/10.1007/s11063-022-11130-y
[48] C. Xu, Y. Zhao, J. Lin, Y. Pang, Z. Liu, J. Shen, et al., Mathematical exploration on control of bifurcation for a plankton-oxygen dynamical model owning delay, J. Math. Chem., 2023. https://doi.org/10.1007/s10910-023-01543-y
Algorithm 1. Pseudocode of DHNN-MR1, 3SAT (the full pseudocode is rendered as an image in the source; only its caption, input, and output are reproduced here).
Input: Parameters, COMBMAX, trial number, relaxation rate, and tolerance value.
Output: The final neuron state and global minimum solutions.
Parameter | Parameter values |
Number of learning (v) | 100 [18] |
Number of combination (n) | 100 [18] |
Number of trials (φ) | 100 [32] |
Number of neurons (NN) | 1<NN<130 |
Tolerance value (Tol) | 0.001 [11] |
Synaptic weight method | Abdullah [13] |
Relaxation rate (R) | 3 [11] |
Threshold CPU time | 24 hours [20] |
Learning iteration (ϕ) | ϕ ⩽ υ [26]
Initialization of neuron states | Random [27] |
Learning algorithm | ES |
Threshold constraint of DHNN (θ) | 0 [18] |
Activation function | HTAF [22] |
Order of clauses | First and third-order logic |
Variable | S^max_i | S_i
e | 1 | 1 |
f | 1 | –1 |
g | –1 | 1 |
+/=/- | 11/1/1 | 0/13/0 | 0/13/0 | 0/13/0 | 10/1/2 | ||||||
Avg | 5.5E+07 | 8.6E+07 | 4.31 | 30.77 | 31.38 | 7.1E+07 | |||||
Min | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | |||||
Max | 1.0E+08 | 1.0E+08 | 32.00 | 400.00 | 400.00 | 1.0E+08 |
NN | MR1, 3SAT | RAN 2SAT | RAN3SAT | RAN3, 1SAT | YRAN2SAT | GRAN3SAT | ||||||
Zm | Lm | Zm | Lm | Zm | Lm | Zm | Lm | Zm | Lm | Zm | Lm | |
1-10 | 10000 | 0 | 8400 | 1600 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 |
11-20 | 10000 | 0 | 0 | 10000 | 9400 | 600 | 10000 | 0 | 2100 | 7900 | 10000 | 0 |
21-30 | 8700 | 1300 | 100 | 9900 | 8100 | 1900 | 7000 | 3000 | 3000 | 7000 | 7900 | 2100 |
31-40 | 5600 | 4400 | 0 | 10000 | 3100 | 6900 | 1200 | 8800 | 0 | 10000 | 7800 | 2200 |
41-50 | 3300 | 6700 | 0 | 10000 | 500 | 9500 | 0 | 10000 | 100 | 9900 | 6600 | 3400 |
51-60 | 1000 | 9000 | 80 | 9920 | 100 | 9900 | 300 | 9700 | 0 | 10000 | 0 | 10000 |
61-70 | 203 | 9797 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 200 | 9800 |
71-80 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 6 | 9994 |
81-90 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 |
91-100 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 |
101-110 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 58 | 9942 |
111-120 | 0 | 10000 | 1 | 9999 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 |
121-130 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 16 | 9984 |
Avg Zm | 2984.85 | 660.08 | 2400.00 | 2192.31 | 1169.23 | 3275.38 |
NN | MR1, 3SAT | RAN2SAT | RAN3SAT | RAN3, 1SAT | YRAN2SAT | GRAN3SAT | |||||
SSE | SSE | Ð | SSE | Ð | SSE | Ð | SSE | Ð | SSE | Ð | |
1-10 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
11-20 | 0.00 | 0.24 | -0.24 | 0.01 | -0.01 | 0.02 | -0.02 | 0.81 | -0.81 | 0.00 | 0.00 |
21-30 | 0.00 | 0.54 | -0.54 | 0.24 | -0.24 | 0.23 | -0.23 | 0.55 | -0.55 | 0.21 | -0.21 |
31-40 | 0.01 | 1.20 | -1.19 | 0.85 | -0.84 | 0.59 | -0.58 | 0.76 | -0.75 | 0.41 | -0.40 |
41-50 | 0.31 | 1.35 | -1.05 | 1.09 | -0.79 | 0.78 | -0.48 | 1.28 | -0.98 | 0.91 | -0.60 |
51-60 | 0.61 | 1.66 | -1.05 | 1.01 | -0.40 | 0.89 | -0.28 | 1.63 | -1.02 | 1.06 | -0.45 |
61-70 | 0.93 | 1.68 | -0.75 | 1.39 | -0.45 | 1.05 | -0.12 | 1.83 | -0.90 | 1.29 | -0.36 |
71-80 | 1.34 | 1.91 | -0.58 | 1.74 | -0.40 | 1.13 | 0.21 | 1.72 | -0.39 | 1.51 | -0.17 |
81-90 | 1.17 | 2.27 | -1.10 | 1.76 | -0.59 | 1.20 | -0.03 | 1.92 | -0.75 | 1.85 | -0.68 |
91-100 | 1.49 | 2.31 | -0.82 | 1.92 | -0.43 | 1.30 | 0.19 | 2.41 | -0.93 | 2.22 | -0.74 |
101-110 | 1.64 | 2.21 | -0.57 | 2.15 | -0.51 | 1.45 | 0.19 | 2.56 | -0.92 | 2.65 | -1.01 |
111-120 | 1.65 | 2.30 | -0.65 | 2.54 | -0.89 | 1.49 | 0.16 | 2.68 | -1.03 | 2.47 | -0.82 |
121-130 | 1.87 | 3.14 | -1.27 | 2.79 | -0.93 | 1.63 | 0.23 | 2.55 | -0.69 | 2.62 | -0.75 |
+/=/- | 12/1/0 | 12/0/1 | 8/1/4 | 12/1/0 | 11/2/0 | ||||||
Avg | 0.85 | 1.60 | 1.34 | 0.90 | 1.59 | 1.32 | |||||
Min | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | |||||
Max | 1.87 | 3.14 | 2.79 | 1.63 | 2.68 | 2.65 |
Algorithm 1. Pseudocode of DHNN-MR1, 3SAT |
Input: Parameters COMBMAX, trial number, relaxation rate, and tolerance value;
Output: The final neuron state and global minimum solutions.
[Pseudocode body not recoverable: it appears only as an image in the source article.]
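Since the body of Algorithm 1 survives only as an image, the following is a minimal Python sketch of the pipeline it outlines, under stated assumptions: the function and variable names (`random_mr13sat`, `cost`, `exhaustive_search`, `retrieve`) are ours, not the paper's; clause generation follows the MR1,3SAT idea of a majority of third-order clauses plus first-order clauses over non-repeating variables; and a plain Hebbian-style storage rule with a tanh threshold stands in for the paper's Abdullah weight derivation and HTAF update.

```python
import random

import numpy as np

def random_mr13sat(n_third, n_first):
    """Build an MR1,3SAT-style formula: n_third third-order clauses plus
    n_first first-order clauses over fresh, non-repeating variables.
    A literal is (neuron index, sign) with sign in {+1, -1}."""
    clauses, v = [], 0
    for _ in range(n_third):
        clauses.append([(v + i, random.choice([1, -1])) for i in range(3)])
        v += 3
    for _ in range(n_first):
        clauses.append([(v, random.choice([1, -1]))])
        v += 1
    return clauses, v  # v = total number of neurons

def cost(state, clauses):
    """Cost function: number of unsatisfied clauses; 0 means satisfied."""
    return sum(all(state[i] != s for i, s in clause) for clause in clauses)

def exhaustive_search(clauses, n, max_iter=100):
    """ES learning phase: resample bipolar states until the cost is zero."""
    for _ in range(max_iter):
        state = np.random.choice([-1, 1], size=n)
        if cost(state, clauses) == 0:
            return state
    return None  # learning failed within the iteration budget

def retrieve(weights, state, n_updates=1000):
    """Retrieval phase: asynchronous updates squashed by tanh, then
    thresholded back to bipolar values (HTAF-style stand-in)."""
    state = state.copy()
    for _ in range(n_updates):
        i = random.randrange(len(state))
        local_field = weights[i] @ state
        state[i] = 1 if np.tanh(local_field) >= 0 else -1
    return state

clauses, n = random_mr13sat(n_third=8, n_first=3)  # third-order majority
learned = exhaustive_search(clauses, n)
if learned is not None:
    W = np.outer(learned, learned).astype(float)   # Hebbian-style storage
    np.fill_diagonal(W, 0.0)                       # no self-connections
    final = retrieve(W, np.random.choice([-1, 1], size=n))
```

The sketch stops at a single learn-retrieve pass; the full experiment repeats it over many trials and logic combinations and then applies the energy test discussed with the global-minima table below.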
Parameter | Parameter values |
Number of learning (υ) | 100 [18]
Number of combination (n) | 100 [18] |
Number of trials (φ) | 100 [32] |
Number of neurons (NN) | 1 < NN < 130
Tolerance value (Tol) | 0.001 [11] |
Synaptic weight method | Abdullah [13] |
Relaxation rate (R) | 3 [11] |
Threshold CPU time | 24 hours [20] |
Learning iteration (ϕ) | ϕ ⩽ υ [26]
Initialization of neuron states | Random [27] |
Learning algorithm | ES |
Threshold constraint of DHNN (θ) | 0 [18] |
Activation function | HTAF [22] |
Order of clauses | First- and third-order clauses
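Continuing the sketch above, a single run under these settings would be driven by roughly the following configuration (an illustrative dictionary; the relaxation rate and tolerance only matter once the energy comparison of the retrieval phase is in place):

```python
# Illustrative settings mirroring the parameter table above.
CONFIG = {
    "learning_events": 100,   # υ
    "combinations": 100,      # n (COMBMAX)
    "trials": 100,            # φ
    "tolerance": 0.001,       # Tol, used in the global-minimum energy test
    "relaxation_rate": 3,     # R
    "theta": 0,               # threshold constraint of the DHNN
}

for _ in range(CONFIG["trials"]):
    clauses, n = random_mr13sat(n_third=8, n_first=3)
    learned = exhaustive_search(clauses, n,
                                max_iter=CONFIG["learning_events"])
```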
Variable | Smax_i | S_i
e | 1 | 1
f | 1 | -1
g | -1 | 1
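The counts e, f, and g feed the similarity comparison between the benchmark state Smax and the retrieved state S. A minimal sketch, assuming the Jaccard-style index common in this line of work (the paper's exact index may weight the counts differently):

```python
def similarity_index(s_max, s):
    """Jaccard-style similarity between the benchmark state s_max and a
    retrieved state s (bipolar sequences of equal length). The counts
    follow the variable table above: e = (1, 1), f = (1, -1),
    g = (-1, 1); (-1, -1) pairs do not contribute."""
    e = sum(1 for a, b in zip(s_max, s) if (a, b) == (1, 1))
    f = sum(1 for a, b in zip(s_max, s) if (a, b) == (1, -1))
    g = sum(1 for a, b in zip(s_max, s) if (a, b) == (-1, 1))
    return e / (e + f + g) if (e + f + g) else 0.0

# Example: e = 2, f = 1, g = 0, so the index is 2/3.
print(similarity_index([1, -1, 1, 1], [1, -1, 1, -1]))
```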
NN | MR1, 3SAT | RAN2SAT | RAN3SAT | RAN3, 1SAT | YRAN2SAT | GRAN3SAT
RMSE | RMSE | Δ | RMSE | Δ | RMSE | Δ | RMSE | Δ | RMSE | Δ
1-10 | 1.97 | 2.84 | -0.88 | 1.29 | 0.67 | 2.56 | -0.60 | 5.07 | -3.11 | 2.14 | -0.18
11-20 | 6.05 | 9.63 | -3.58 | 8.10 | -2.05 | 7.17 | -1.11 | 10.65 | -4.60 | 5.53 | 0.52
21-30 | 9.48 | 15.85 | -6.37 | 11.44 | -1.96 | 11.54 | -2.06 | 13.66 | -4.18 | 11.44 | -1.96
31-40 | 11.67 | 23.88 | -12.21 | 17.85 | -6.18 | 15.90 | -4.23 | 20.90 | -9.22 | 14.57 | -2.90
41-50 | 16.34 | 29.85 | -13.51 | 20.88 | -4.54 | 21.88 | -5.54 | 30.85 | -14.51 | 21.89 | -5.55
51-60 | 20.67 | 35.82 | -15.15 | 26.85 | -6.17 | 25.87 | -5.20 | 32.84 | -12.16 | 25.86 | -5.19
61-70 | 24.72 | 41.79 | -17.07 | 32.84 | -8.11 | 31.84 | -7.12 | 37.81 | -13.09 | 30.85 | -6.12
71-80 | 31.79 | 49.75 | -17.96 | 38.81 | -7.02 | 35.82 | -4.03 | 52.74 | -20.95 | 35.82 | -4.03
81-90 | 32.83 | 55.72 | -22.89 | 41.79 | -8.96 | 41.79 | -8.96 | 55.72 | -22.89 | 36.81 | -3.97
91-100 | 36.79 | 61.69 | -24.90 | 47.76 | -10.97 | 45.77 | -8.98 | 59.70 | -22.91 | 43.78 | -6.99
101-110 | 37.65 | 67.66 | -30.01 | 50.75 | -13.10 | 51.74 | -14.09 | 68.66 | -31.01 | 45.77 | -8.12
111-120 | 45.77 | 75.62 | -29.85 | 56.72 | -10.95 | 55.72 | -9.95 | 73.63 | -27.86 | 48.76 | -2.99
121-130 | 46.77 | 83.58 | -36.82 | 62.69 | -15.92 | 61.69 | -14.93 | 77.61 | -30.85 | 57.71 | -10.95
+/=/- | 13/0/0 | 12/0/1 | 13/0/0 | 13/0/0 | 12/0/1
Avg | 24.81 | 42.59 | 32.14 | 31.49 | 41.53 | 29.30
Min | 1.97 | 2.84 | 1.29 | 2.56 | 5.07 | 2.14
Max | 46.77 | 83.58 | 62.69 | 61.69 | 77.61 | 57.71
NN | MR1, 3SAT | RAN2SAT | RAN3SAT | RAN3, 1SAT | YRAN2SAT | GRAN3SAT
MAE | MAE | Δ | MAE | Δ | MAE | Δ | MAE | Δ | MAE | Δ
1-10 | 1.50 | 2.43 | -0.92 | 0.94 | 0.56 | 2.14 | -0.63 | 4.57 | -3.07 | 1.69 | -0.19
11-20 | 5.10 | 9.40 | -4.30 | 7.53 | -2.43 | 6.63 | -1.53 | 10.44 | -5.34 | 4.80 | 0.30
21-30 | 8.69 | 15.72 | -7.03 | 11.06 | -2.37 | 11.25 | -2.56 | 13.49 | -4.80 | 11.06 | -2.37
31-40 | 10.90 | 23.76 | -12.87 | 17.71 | -6.82 | 15.81 | -4.91 | 20.79 | -9.89 | 14.18 | -3.28
41-50 | 15.90 | 29.70 | -13.81 | 20.77 | -4.87 | 21.76 | -5.87 | 30.69 | -14.80 | 21.78 | -5.88
51-60 | 20.37 | 35.64 | -15.27 | 26.69 | -6.32 | 25.74 | -5.37 | 32.67 | -12.30 | 25.72 | -5.36
61-70 | 24.46 | 41.58 | -17.12 | 32.67 | -8.21 | 31.68 | -7.22 | 37.62 | -13.16 | 30.69 | -6.23
71-80 | 31.58 | 49.50 | -17.92 | 38.61 | -7.03 | 35.64 | -4.06 | 52.48 | -20.89 | 35.64 | -4.06
81-90 | 32.67 | 55.45 | -22.78 | 41.58 | -8.92 | 41.58 | -8.92 | 55.45 | -22.78 | 36.61 | -3.95
91-100 | 36.59 | 61.39 | -24.80 | 47.52 | -10.94 | 45.54 | -8.96 | 59.41 | -22.82 | 43.56 | -6.98
101-110 | 37.37 | 67.33 | -29.96 | 50.50 | -13.13 | 51.49 | -14.12 | 68.32 | -30.95 | 45.54 | -8.18
111-120 | 45.54 | 75.25 | -29.70 | 56.44 | -10.89 | 55.45 | -9.90 | 73.27 | -27.72 | 48.51 | -2.97
121-130 | 46.53 | 83.17 | -36.63 | 62.38 | -15.84 | 61.39 | -14.85 | 77.23 | -30.69 | 57.43 | -10.89
+/=/- | 13/0/0 | 12/0/1 | 13/0/0 | 13/0/0 | 12/0/1
Avg | 24.40 | 42.33 | 31.88 | 31.24 | 41.26 | 29.02
Min | 1.50 | 2.43 | 0.94 | 2.14 | 4.57 | 1.69
Max | 46.53 | 83.17 | 62.38 | 61.39 | 77.23 | 57.43
NN | MR1, 3SAT | RAN2SAT | RAN3SAT | RAN3, 1SAT | YRAN2SAT | GRAN3SAT
MAPE | MAPE | Δ | MAPE | Δ | MAPE | Δ | MAPE | Δ | MAPE | Δ
1-10 | 37.62 | 60.72 | -23.10 | 31.34 | 6.28 | 53.42 | -15.80 | 76.24 | -38.62 | 42.28 | -4.66
11-20 | 63.73 | 93.99 | -30.26 | 83.70 | -19.97 | 82.83 | -19.09 | 94.87 | -31.13 | 68.50 | -4.77
21-30 | 79.00 | 98.26 | -19.25 | 92.16 | -13.15 | 93.77 | -14.77 | 96.37 | -17.37 | 92.19 | -13.18
31-40 | 83.82 | 99.01 | -15.19 | 98.41 | -14.58 | 98.79 | -14.96 | 99.01 | -15.19 | 94.50 | -10.68
41-50 | 93.51 | 99.01 | -5.50 | 98.89 | -5.37 | 98.92 | -5.41 | 99.01 | -5.50 | 99.00 | -5.48
51-60 | 96.99 | 99.01 | -2.02 | 98.87 | -1.87 | 99.01 | -2.02 | 99.01 | -2.02 | 98.94 | -1.94
61-70 | 97.86 | 99.01 | -1.15 | 99.01 | -1.15 | 99.01 | -1.15 | 99.01 | -1.15 | 99.01 | -1.15
71-80 | 98.70 | 99.01 | -0.31 | 99.01 | -0.31 | 99.01 | -0.31 | 99.01 | -0.31 | 99.01 | -0.31
81-90 | 98.98 | 99.01 | -0.03 | 99.01 | -0.03 | 99.01 | -0.03 | 99.01 | -0.03 | 98.96 | 0.03
91-100 | 98.88 | 99.01 | -0.13 | 99.01 | -0.13 | 99.01 | -0.13 | 99.01 | -0.13 | 99.01 | -0.13
101-110 | 98.34 | 99.01 | -0.67 | 99.01 | -0.67 | 99.01 | -0.67 | 99.01 | -0.67 | 99.01 | -0.67
111-120 | 99.01 | 99.01 | 0.00 | 99.01 | 0.00 | 99.01 | 0.00 | 99.01 | 0.00 | 99.01 | 0.00
121-130 | 99.01 | 99.01 | 0.00 | 99.01 | 0.00 | 99.01 | 0.00 | 99.01 | 0.00 | 99.01 | 0.00
+/=/- | 11/2/0 | 10/2/1 | 11/2/0 | 11/2/0 | 10/2/1
Avg | 88.11 | 95.62 | 92.03 | 93.83 | 96.74 | 91.42
Min | 37.62 | 60.72 | 31.34 | 53.42 | 76.24 | 42.28
Max | 99.01 | 99.01 | 99.01 | 99.01 | 99.01 | 99.01
NN | MR1, 3SAT | RAN2SAT | RAN3SAT | RAN3, 1SAT | YRAN2SAT | GRAN3SAT
SSE | SSE | Δ | SSE | Δ | SSE | Δ | SSE | Δ | SSE | Δ
1-10 | 32 | 119 | -87 | 16 | 16 | 77 | -45 | 481 | -450 | 44 | -12
11-20 | 514 | 7279 | -6765 | 2347 | -1833 | 1677 | -1164 | 9633 | -9119 | 511 | 3
21-30 | 2585 | 24589 | -22004 | 8408 | -5824 | 9841 | -7256 | 17652 | -15067 | 8453 | -5868
31-40 | 5183 | 57600 | -52417 | 30683 | -25500 | 24809 | -19626 | 44100 | -38917 | 14621 | -9437
41-50 | 18184 | 90000 | -71816 | 43306 | -25122 | 47819 | -29635 | 96100 | -77916 | 48308 | -30124
51-60 | 36308 | 129600 | -93292 | 71989 | -35681 | 67600 | -31292 | 108900 | -72592 | 66829 | -30522
61-70 | 57656 | 176400 | -118744 | 108900 | -51244 | 102400 | -44744 | 144400 | -86744 | 96100 | -38444
71-80 | 99430 | 250000 | -150570 | 152100 | -52670 | 129600 | -30170 | 280900 | -181470 | 129600 | -30170
81-90 | 108639 | 313600 | -204961 | 176400 | -67761 | 176400 | -67761 | 313600 | -204961 | 135955 | -27317
91-100 | 134408 | 384400 | -249992 | 230400 | -95992 | 211600 | -77192 | 360000 | -225592 | 193600 | -59192
101-110 | 142653 | 462400 | -319747 | 260100 | -117447 | 270400 | -127747 | 476100 | -333447 | 211600 | -68947
111-120 | 211600 | 577600 | -366000 | 324900 | -113300 | 313600 | -102000 | 547600 | -336000 | 240100 | -28500
121-130 | 220900 | 705600 | -484700 | 396900 | -176000 | 384400 | -163500 | 608400 | -387500 | 336400 | -115500
+/=/- | 13/0/0 | 12/0/1 | 13/0/0 | 13/0/0 | 12/0/1
Avg | 79853 | 244553 | 138958 | 133863 | 231374 | 114009
Min | 32 | 119 | 16 | 77 | 481 | 44
Max | 220900 | 705600 | 396900 | 384400 | 608400 | 336400
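For reference, the learning-error metrics tabulated in this subsection reduce to simple aggregates of the gap between the total clause count f_max and the fitness f_i recorded at each learning event. A minimal sketch, assuming the conventional definitions used in this literature:

```python
import numpy as np

def learning_errors(f_max, f_i):
    """Learning-error metrics from per-event fitness values: f_max is
    the total number of clauses and f_i holds the satisfied-clause
    counts recorded over the learning events."""
    f_i = np.asarray(f_i, dtype=float)
    gap = f_max - f_i                 # unsatisfied clauses per event
    n = gap.size
    return {
        "RMSE": float(np.sqrt(np.sum(gap ** 2) / n)),
        "MAE": float(np.sum(np.abs(gap)) / n),
        "MAPE": float(100.0 * np.sum(np.abs(gap) / f_max) / n),
        "SSE": float(np.sum(gap ** 2)),
    }
```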
NN | MR1, 3SAT | RAN2SAT | RAN3SAT | RAN3, 1SAT | YRAN2SAT | GRAN3SAT
RMSE | RMSE | Δ | RMSE | Δ | RMSE | Δ | RMSE | Δ | RMSE | Δ
1-10 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
11-20 | 0.00 | 52.00 | -52.00 | 2.00 | -2.00 | 3.00 | -3.00 | 67.00 | -67.00 | 0.00 | 0.00
21-30 | 0.00 | 94.00 | -94.00 | 33.00 | -33.00 | 47.00 | -47.00 | 84.00 | -84.00 | 27.00 | -27.00
31-40 | 3.00 | 100.00 | -97.00 | 89.00 | -86.00 | 93.00 | -90.00 | 100.00 | -97.00 | 43.00 | -40.00
41-50 | 41.00 | 100.00 | -59.00 | 96.95 | -55.95 | 98.00 | -57.00 | 99.80 | -58.80 | 99.00 | -58.00
51-60 | 68.00 | 100.00 | -32.00 | 98.00 | -30.00 | 100.00 | -32.00 | 100.00 | -32.00 | 98.00 | -30.00
61-70 | 87.00 | 99.90 | -12.90 | 100.00 | -13.00 | 100.00 | -13.00 | 100.00 | -13.00 | 100.00 | -13.00
71-80 | 95.00 | 100.00 | -5.00 | 100.00 | -5.00 | 100.00 | -5.00 | 100.00 | -5.00 | 100.00 | -5.00
81-90 | 98.00 | 100.00 | -2.00 | 100.00 | -2.00 | 100.00 | -2.00 | 100.00 | -2.00 | 99.00 | -1.00
91-100 | 97.00 | 100.00 | -3.00 | 100.00 | -3.00 | 100.00 | -3.00 | 100.00 | -3.00 | 100.00 | -3.00
101-110 | 98.00 | 99.99 | -1.99 | 100.00 | -2.00 | 100.00 | -2.00 | 100.00 | -2.00 | 100.00 | -2.00
111-120 | 99.56 | 99.92 | -0.36 | 99.94 | -0.38 | 100.00 | -0.44 | 100.00 | -0.44 | 100.00 | -0.44
121-130 | 100.00 | 99.95 | 0.05 | 100.00 | 0.00 | 99.80 | 0.20 | 99.97 | 0.03 | 99.97 | 0.03
+/=/- | 11/1/1 | 11/2/0 | 11/1/1 | 11/1/1 | 10/2/1
Avg | 60.50 | 88.14 | 78.38 | 80.06 | 88.52 | 74.31
Min | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Max | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
NN | MR1, 3SAT | RAN2SAT | RAN3SAT | RAN3, 1SAT | YRAN2SAT | GRAN3SAT
MAE | MAE | Δ | MAE | Δ | MAE | Δ | MAE | Δ | MAE | Δ
1-10 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
11-20 | 0.00 | 0.52 | -0.52 | 0.02 | -0.02 | 0.03 | -0.03 | 0.67 | -0.67 | 0.00 | 0.00 |
21-30 | 0.00 | 0.94 | -0.94 | 0.33 | -0.33 | 0.47 | -0.47 | 0.84 | -0.84 | 0.27 | -0.27 |
31-40 | 0.03 | 1.00 | -0.97 | 0.89 | -0.86 | 0.93 | -0.90 | 1.00 | -0.97 | 0.43 | -0.40 |
41-50 | 0.41 | 1.00 | -0.59 | 0.97 | -0.56 | 0.98 | -0.57 | 1.00 | -0.59 | 0.99 | -0.58 |
51-60 | 0.68 | 1.00 | -0.32 | 0.98 | -0.30 | 1.00 | -0.32 | 1.00 | -0.32 | 0.98 | -0.30 |
61-70 | 0.87 | 1.00 | -0.13 | 1.00 | -0.13 | 1.00 | -0.13 | 1.00 | -0.13 | 1.00 | -0.13 |
71-80 | 0.95 | 1.00 | -0.05 | 1.00 | -0.05 | 1.00 | -0.05 | 1.00 | -0.05 | 1.00 | -0.05 |
81-90 | 0.98 | 1.00 | -0.02 | 1.00 | -0.02 | 1.00 | -0.02 | 1.00 | -0.02 | 0.99 | -0.01 |
91-100 | 0.97 | 1.00 | -0.03 | 1.00 | -0.03 | 1.00 | -0.03 | 1.00 | -0.03 | 1.00 | -0.03 |
101-110 | 0.98 | 1.00 | -0.02 | 1.00 | -0.02 | 1.00 | -0.02 | 1.00 | -0.02 | 1.00 | -0.02 |
111-120 | 1.00 | 1.00 | 0.00 | 1.00 | 0.00 | 1.00 | 0.00 | 1.00 | 0.00 | 1.00 | 0.00 |
121-130 | 1.00 | 1.00 | 0.00 | 1.00 | 0.00 | 1.00 | 0.00 | 1.00 | 0.00 | 1.00 | 0.00 |
+/=/- | 10/3/0 | 10/3/0 | 10/3/0 | 10/3/0 | 9/4/0
Avg | 0.61 | 0.88 | 0.78 | 0.80 | 0.89 | 0.74
Min | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Max | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
NN | MR1, 3SAT | RAN2SAT | RAN3SAT | RAN3, 1SAT | YRAN2SAT | GRAN3SAT
MAPE | MAPE | Δ | MAPE | Δ | MAPE | Δ | MAPE | Δ | MAPE | Δ
1-10 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
11-20 | 0.00 | 0.01 | -0.01 | 0.00 | 0.00 | 0.00 | 0.00 | 0.01 | -0.01 | 0.00 | 0.00 |
21-30 | 0.00 | 0.01 | -0.01 | 0.00 | 0.00 | 0.00 | 0.00 | 0.01 | -0.01 | 0.00 | 0.00 |
31-40 | 0.00 | 0.01 | -0.01 | 0.01 | -0.01 | 0.01 | -0.01 | 0.01 | -0.01 | 0.00 | 0.00 |
41-50 | 0.00 | 0.01 | -0.01 | 0.01 | -0.01 | 0.01 | -0.01 | 0.01 | -0.01 | 0.01 | -0.01 |
51-60 | 0.01 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 |
61-70 | 0.01 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 |
71-80 | 0.01 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 |
81-90 | 0.01 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 |
91-100 | 0.01 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 |
101-110 | 0.01 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 |
111-120 | 0.01 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 |
121-130 | 0.01 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 |
+/=/- | 4/9/0 | 2/11/0 | 2/11/0 | 4/9/0 | 1/12/0
Avg | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01
Min | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Max | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01
NN | MR1, 3SAT | RAN2SAT | RAN3SAT | RAN3, 1SAT | YRAN2SAT | GRAN3SAT
SSE | SSE | Δ | SSE | Δ | SSE | Δ | SSE | Δ | SSE | Δ
1-10 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 |
11-20 | 0.0E+00 | 2.7E+07 | -2.7E+07 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 |
21-30 | 0.0E+00 | 8.8E+07 | -8.8E+07 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 0.0E+00 | 7.3E+06 | -7.3E+06 |
31-40 | 9.0E+04 | 1.0E+08 | -1.0E+08 | 0.0E+00 | 9.0E+04 | 0.0E+00 | 9.0E+04 | 0.0E+00 | 9.0E+04 | 1.8E+07 | -1.8E+07 |
41-50 | 1.7E+07 | 1.0E+08 | -8.3E+07 | 2.4E+01 | 1.7E+07 | 0.0E+00 | 1.7E+07 | 4.0E+02 | 1.7E+07 | 9.8E+07 | -8.1E+07 |
51-60 | 4.6E+07 | 1.0E+08 | -5.4E+07 | 0.0E+00 | 4.6E+07 | 0.0E+00 | 4.6E+07 | 0.0E+00 | 4.6E+07 | 9.6E+07 | -5.0E+07 |
61-70 | 7.6E+07 | 1.0E+08 | -2.4E+07 | 0.0E+00 | 7.6E+07 | 0.0E+00 | 7.6E+07 | 0.0E+00 | 7.6E+07 | 1.0E+08 | -2.4E+07 |
71-80 | 9.0E+07 | 1.0E+08 | -9.8E+06 | 0.0E+00 | 9.0E+07 | 0.0E+00 | 9.0E+07 | 0.0E+00 | 9.0E+07 | 1.0E+08 | -9.8E+06 |
81-90 | 9.6E+07 | 1.0E+08 | -4.0E+06 | 0.0E+00 | 9.6E+07 | 0.0E+00 | 9.6E+07 | 0.0E+00 | 9.6E+07 | 9.8E+07 | -2.0E+06 |
91-100 | 9.4E+07 | 1.0E+08 | -5.9E+06 | 0.0E+00 | 9.4E+07 | 0.0E+00 | 9.4E+07 | 0.0E+00 | 9.4E+07 | 1.0E+08 | -5.9E+06 |
101-110 | 9.6E+07 | 1.0E+08 | -3.9E+06 | 0.0E+00 | 9.6E+07 | 0.0E+00 | 9.6E+07 | 0.0E+00 | 9.6E+07 | 1.0E+08 | -4.0E+06 |
111-120 | 9.9E+07 | 1.0E+08 | -7.2E+05 | 3.2E+01 | 9.9E+07 | 0.0E+00 | 9.9E+07 | 0.0E+00 | 9.9E+07 | 1.0E+08 | -8.8E+05 |
121-130 | 1.0E+08 | 1.0E+08 | 1.0E+05 | 0.0E+00 | 1.0E+08 | 4.0E+02 | 1.0E+08 | 8.0E+00 | 1.0E+08 | 1.0E+08 | 6.0E+04 |
+/=/- | 11/1/1 | 0/13/0 | 0/13/0 | 0/13/0 | 10/1/2
Avg | 5.5E+07 | 8.6E+07 | 4.31 | 30.77 | 31.38 | 7.1E+07
Min | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Max | 1.0E+08 | 1.0E+08 | 32.00 | 400.00 | 400.00 | 1.0E+08
NN | MR1, 3SAT | RAN2SAT | RAN3SAT | RAN3, 1SAT | YRAN2SAT | GRAN3SAT
Zm | Lm | Zm | Lm | Zm | Lm | Zm | Lm | Zm | Lm | Zm | Lm
1-10 | 10000 | 0 | 8400 | 1600 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 |
11-20 | 10000 | 0 | 0 | 10000 | 9400 | 600 | 10000 | 0 | 2100 | 7900 | 10000 | 0 |
21-30 | 8700 | 1300 | 100 | 9900 | 8100 | 1900 | 7000 | 3000 | 3000 | 7000 | 7900 | 2100 |
31-40 | 5600 | 4400 | 0 | 10000 | 3100 | 6900 | 1200 | 8800 | 0 | 10000 | 7800 | 2200 |
41-50 | 3300 | 6700 | 0 | 10000 | 500 | 9500 | 0 | 10000 | 100 | 9900 | 6600 | 3400 |
51-60 | 1000 | 9000 | 80 | 9920 | 100 | 9900 | 300 | 9700 | 0 | 10000 | 0 | 10000 |
61-70 | 203 | 9797 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 200 | 9800 |
71-80 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 6 | 9994 |
81-90 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 |
91-100 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 |
101-110 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 58 | 9942 |
111-120 | 0 | 10000 | 1 | 9999 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 |
121-130 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 0 | 10000 | 16 | 9984 |
Avg Zm | 2984.85 | 660.08 | 2400.00 | 2192.31 | 1169.23 | 3275.38 |
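Each row above aggregates 10,000 retrieved solutions per network size (100 trials by 100 combinations, per the parameter table): Zm counts runs whose final energy reaches the global minimum and Lm those trapped in local minima, so Zm + Lm = 10,000. A minimal sketch of the usual tolerance-based test, using Tol from the parameter table:

```python
def count_minima(energies, e_min, tol=0.001):
    """Classify final energies: a run counts toward Zm when
    |E - E_min| <= tol; everything else is a local minimum (Lm)."""
    zm = sum(1 for e in energies if abs(e - e_min) <= tol)
    return zm, len(energies) - zm  # (Zm, Lm)

zm, lm = count_minima([-9.25, -9.25, -8.0], e_min=-9.25)  # -> (2, 1)
```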
NN | MR1, 3SAT | RAN2SAT | RAN3SAT | RAN3, 1SAT | YRAN2SAT | GRAN3SAT
SSE | SSE | Δ | SSE | Δ | SSE | Δ | SSE | Δ | SSE | Δ
1-10 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
11-20 | 0.00 | 0.24 | -0.24 | 0.01 | -0.01 | 0.02 | -0.02 | 0.81 | -0.81 | 0.00 | 0.00 |
21-30 | 0.00 | 0.54 | -0.54 | 0.24 | -0.24 | 0.23 | -0.23 | 0.55 | -0.55 | 0.21 | -0.21 |
31-40 | 0.01 | 1.20 | -1.19 | 0.85 | -0.84 | 0.59 | -0.58 | 0.76 | -0.75 | 0.41 | -0.40 |
41-50 | 0.31 | 1.35 | -1.05 | 1.09 | -0.79 | 0.78 | -0.48 | 1.28 | -0.98 | 0.91 | -0.60 |
51-60 | 0.61 | 1.66 | -1.05 | 1.01 | -0.40 | 0.89 | -0.28 | 1.63 | -1.02 | 1.06 | -0.45 |
61-70 | 0.93 | 1.68 | -0.75 | 1.39 | -0.45 | 1.05 | -0.12 | 1.83 | -0.90 | 1.29 | -0.36 |
71-80 | 1.34 | 1.91 | -0.58 | 1.74 | -0.40 | 1.13 | 0.21 | 1.72 | -0.39 | 1.51 | -0.17 |
81-90 | 1.17 | 2.27 | -1.10 | 1.76 | -0.59 | 1.20 | -0.03 | 1.92 | -0.75 | 1.85 | -0.68 |
91-100 | 1.49 | 2.31 | -0.82 | 1.92 | -0.43 | 1.30 | 0.19 | 2.41 | -0.93 | 2.22 | -0.74 |
101-110 | 1.64 | 2.21 | -0.57 | 2.15 | -0.51 | 1.45 | 0.19 | 2.56 | -0.92 | 2.65 | -1.01 |
111-120 | 1.65 | 2.30 | -0.65 | 2.54 | -0.89 | 1.49 | 0.16 | 2.68 | -1.03 | 2.47 | -0.82 |
121-130 | 1.87 | 3.14 | -1.27 | 2.79 | -0.93 | 1.63 | 0.23 | 2.55 | -0.69 | 2.62 | -0.75 |
+/=/- | 12/1/0 | 12/0/1 | 8/1/4 | 12/1/0 | 11/2/0
Avg | 0.85 | 1.60 | 1.34 | 0.90 | 1.59 | 1.32
Min | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Max | 1.87 | 3.14 | 2.79 | 1.63 | 2.68 | 2.65