
Major 3 Satisfiability logic in Discrete Hopfield Neural Network integrated with multi-objective Election Algorithm

  • Received: 20 October 2022 Revised: 24 March 2023 Accepted: 29 March 2023 Published: 14 July 2023
• Discrete Hopfield Neural Networks are widely used to solve various optimization and logic mining problems. Boolean algebra governs the Discrete Hopfield Neural Network so that it produces final neuron states that attain a global minimum energy solution. Non-systematic satisfiability logic is popular because of the flexibility it offers the logical structure compared with systematic satisfiability. Hence, this study proposes a non-systematic majority logic named Major 3 Satisfiability, embedded in the Discrete Hopfield Neural Network. The model is integrated with an evolutionary algorithm, the multi-objective Election Algorithm, in the training phase to improve the optimality of the learning process. More than one content-addressable memory is used to extend the measure of the model's capability. The model is compared with logical combinations of different orders, k=3,2, k=3,2,1 and k=3,1. The performance of these logical combinations is measured by Mean Absolute Error, Global Minimum Energy, Total Neuron Variation, Jaccard Similarity Index, and Gower and Legendre Similarity Index. The results show that k=3,2 has the best overall performance owing to the highest chance for its clauses to be satisfied and the absence of first-order logic. As it is also a non-systematic logical structure, it attains the highest diversity value during the learning phase.

    Citation: Muhammad Aqmar Fiqhi Roslan, Nur Ezlin Zamri, Mohd. Asyraf Mansor, Mohd Shareduwan Mohd Kasihmuddin. Major 3 Satisfiability logic in Discrete Hopfield Neural Network integrated with multi-objective Election Algorithm[J]. AIMS Mathematics, 2023, 8(9): 22447-22482. doi: 10.3934/math.20231145




Let $m$ and $n$ be two positive integers with $m\geq 2$ and $n\geq 2$, let $[n]=\{1,2,\ldots,n\}$, let $\mathbb{R}$ be the set of all real numbers, and let $\mathbb{R}^n$ be the set of all $n$-dimensional real vectors. Let $x=(x_1,x_2,\ldots,x_m)^\top\in\mathbb{R}^m$ and $y=(y_1,y_2,\ldots,y_n)^\top\in\mathbb{R}^n$. If a fourth-order tensor $\mathcal{A}=(a_{ijkl})\in\mathbb{R}^{[m]\times[n]\times[m]\times[n]}$ satisfies the properties

$$a_{ijkl}=a_{kjil}=a_{ilkj}=a_{klij},\qquad i,k\in[m],\ j,l\in[n],$$

then we call $\mathcal{A}$ a partially symmetric tensor.
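The four index symmetries can be checked mechanically. Below is a minimal sketch (our own helper names, not from the paper), assuming the tensor is stored as a NumPy array of shape $(m,n,m,n)$ with 0-based indices:

```python
import numpy as np

def is_partially_symmetric(A, tol=1e-12):
    """True iff a_{ijkl} = a_{kjil} = a_{ilkj} = a_{klij} for all indices.

    A is assumed to be a NumPy array of shape (m, n, m, n).
    """
    return (np.allclose(A, A.transpose(2, 1, 0, 3), atol=tol)       # a_{ijkl} = a_{kjil}
            and np.allclose(A, A.transpose(0, 3, 2, 1), atol=tol)   # a_{ijkl} = a_{ilkj}
            and np.allclose(A, A.transpose(2, 3, 0, 1), atol=tol))  # a_{ijkl} = a_{klij}

def symmetrize(B):
    """Project an arbitrary (m, n, m, n) array onto the partially
    symmetric tensors by averaging over the four index permutations."""
    return (B + B.transpose(2, 1, 0, 3)
              + B.transpose(0, 3, 2, 1)
              + B.transpose(2, 3, 0, 1)) / 4.0
```

Averaging works because the two swaps (first/third and second/fourth slots) generate a group of four permutations, so the average is invariant under each of them.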

It is well known that the tensor of elastic moduli of elastic materials is partially symmetric [11]. The components of a fourth-order partially symmetric tensor $\mathcal{A}$ can be regarded as the coefficients of the following biquadratic homogeneous polynomial optimization problem [6,19]:

$$\max\ f(x,y)\equiv\mathcal{A}xyxy=\sum_{i,k\in[m]}\sum_{j,l\in[n]}a_{ijkl}x_iy_jx_ky_l,\qquad \text{s.t.}\ x^\top x=1,\ y^\top y=1. \tag{1.1}$$

The optimization problem plays an important role in the analysis of nonlinear elastic materials and in the entanglement problem in quantum physics [5,6,8,9,26]. To solve this problem, we will need the following definition:

Definition 1.1. [11,20,21] Let $\mathcal{A}=(a_{ijkl})\in\mathbb{R}^{[m]\times[n]\times[m]\times[n]}$ be a partially symmetric tensor. If there are $\lambda\in\mathbb{R}$, $x\in\mathbb{R}^m\setminus\{0\}$ and $y\in\mathbb{R}^n\setminus\{0\}$ such that

$$\mathcal{A}yxy=\lambda x,\qquad \mathcal{A}xyx=\lambda y,\qquad x^\top x=1,\qquad y^\top y=1, \tag{1.2}$$

where

$$(\mathcal{A}yxy)_i=\sum_{k\in[m]}\sum_{j,l\in[n]}a_{ijkl}y_jx_ky_l,\qquad (\mathcal{A}xyx)_l=\sum_{i,k\in[m]}\sum_{j\in[n]}a_{ijkl}x_iy_jx_k,$$

then we call $\lambda$ an M-eigenvalue of $\mathcal{A}$, and $x$ and $y$ the left and right M-eigenvectors associated with $\lambda$, respectively. Let $\sigma(\mathcal{A})$ be the set of all M-eigenvalues of $\mathcal{A}$ and $\lambda_{\max}(\mathcal{A})$ be the largest M-eigenvalue of $\mathcal{A}$, i.e.,

$$\lambda_{\max}(\mathcal{A})=\max\{|\lambda|:\lambda\in\sigma(\mathcal{A})\}.$$
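Definition 1.1 can be checked numerically. A small helper (ours, not from the paper; NumPy, 0-based indices, $\mathcal{A}$ of shape $(m,n,m,n)$) evaluates the residuals of the two equations in (1.2) for a candidate triple $(\lambda,x,y)$:

```python
import numpy as np

def m_eigen_residuals(A, lam, x, y):
    """Residual norms of (1.2): ||A y x y - lam*x|| and ||A x y x - lam*y||."""
    Ayxy = np.einsum('ijkl,j,k,l->i', A, y, x, y)   # (A y x y)_i
    Axyx = np.einsum('ijkl,i,j,k->l', A, x, y, x)   # (A x y x)_l
    return np.linalg.norm(Ayxy - lam * x), np.linalg.norm(Axyx - lam * y)
```

Both residuals vanish (up to rounding) exactly when $\lambda$ is an M-eigenvalue with unit M-eigenvectors $x$ and $y$.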

In 2009, Wang, Qi and Zhang [24] pointed out that Problem (1.1) can be equivalently transformed into computing the largest M-eigenvalue of a fourth-order partially symmetric tensor. Based on this, they presented an algorithm (the WQZ-algorithm) to find the largest M-eigenvalue of a fourth-order partially symmetric tensor.

WQZ-algorithm [24, Algorithm 4.1]:

Initial step: Input $\mathcal{A}=(a_{ijkl})\in\mathbb{R}^{[m]\times[n]\times[m]\times[n]}$ and unfold it into a matrix $A=(A_{st})\in\mathbb{R}^{[mn]\times[mn]}$ by the mapping $A_{st}=a_{ijkl}$ with $s=n(i-1)+j$, $t=n(k-1)+l$.

Substep 1: Take

$$\tau=\sum_{1\leq s\leq t\leq mn}|A_{st}|, \tag{1.3}$$

and set

$$\overline{\mathcal{A}}=\tau\mathcal{I}+\mathcal{A}, \tag{1.4}$$

where $\mathcal{I}=(\delta_{ijkl})\in\mathbb{R}^{[m]\times[n]\times[m]\times[n]}$ with $\delta_{ijkl}=1$ if $i=k$ and $j=l$, and $\delta_{ijkl}=0$ otherwise. Then unfold $\overline{\mathcal{A}}=(\overline{a}_{ijkl})\in\mathbb{R}^{[m]\times[n]\times[m]\times[n]}$ into a matrix $\overline{A}=(\overline{A}_{st})\in\mathbb{R}^{[mn]\times[mn]}$.

Substep 2: Compute the unit eigenvector $w=(w_k)_{k=1}^{mn}\in\mathbb{R}^{mn}$ of the matrix $\overline{A}$ associated with its largest eigenvalue, and fold the vector $w$ into the matrix $W=(W_{ij})\in\mathbb{R}^{[m]\times[n]}$ in the following way:

$$W_{ij}=w_k,\qquad i=\lceil k/n\rceil,\quad j=(k-1)\bmod n+1,\quad k=1,2,\ldots,mn.$$

Substep 3: Compute the singular vectors $u_1$ and $v_1$ corresponding to the largest singular value $\sigma_1$ of the matrix $W$. Specifically, the singular value decomposition of $W$ is

$$W=U\Sigma V^\top=\sum_{i=1}^{r}\sigma_iu_iv_i^\top,$$

where $\sigma_1\geq\sigma_2\geq\cdots\geq\sigma_r$ and $r$ is the rank of $W$.

Substep 4: Take $x_0=u_1$, $y_0=v_1$, and let $k=0$.

Iterative step: Execute the following procedure alternately until a convergence criterion is satisfied, and output $x^\ast$, $y^\ast$:

$$\overline{x}_{k+1}=\overline{\mathcal{A}}y_kx_ky_k,\quad x_{k+1}=\frac{\overline{x}_{k+1}}{\|\overline{x}_{k+1}\|},\qquad \overline{y}_{k+1}=\overline{\mathcal{A}}x_{k+1}y_kx_{k+1},\quad y_{k+1}=\frac{\overline{y}_{k+1}}{\|\overline{y}_{k+1}\|},\qquad k:=k+1.$$

Final step: Output the largest M-eigenvalue of the tensor $\mathcal{A}$:

$$\lambda_{\max}(\mathcal{A})=f(x^\ast,y^\ast)-\tau,$$

where

$$f(x^\ast,y^\ast)=\sum_{i,k\in[m]}\sum_{j,l\in[n]}\overline{a}_{ijkl}x_i^\ast y_j^\ast x_k^\ast y_l^\ast,$$

and the associated M-eigenvectors $x^\ast$, $y^\ast$.
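The whole procedure can be sketched in Python with NumPy. This is a minimal illustration under our own conventions (0-based indexing via `reshape`, $\mathcal{A}$ stored as an array of shape $(m,n,m,n)$, and a simple change-in-$f$ stopping rule; the function name is ours), not the authors' reference implementation:

```python
import numpy as np

def wqz_largest_m_eigenvalue(A, iters=1000, tol=1e-12):
    """Sketch of the WQZ-algorithm for a partially symmetric tensor
    A of shape (m, n, m, n); returns an approximation of lambda_max."""
    m, n = A.shape[0], A.shape[1]
    # Initial step: unfold into an (mn x mn) matrix; reshape realizes
    # A_st = a_ijkl with s = n(i-1)+j, t = n(k-1)+l (0-based here).
    A_mat = A.reshape(m * n, m * n)
    # Substep 1: spectral shift (1.3)-(1.4), tau = sum_{s<=t} |A_st|.
    tau = np.abs(np.triu(A_mat)).sum()
    I4 = np.einsum('ik,jl->ijkl', np.eye(m), np.eye(n))  # delta_{ik} delta_{jl}
    Abar = A + tau * I4
    Abar_mat = Abar.reshape(m * n, m * n)
    # Substep 2: leading unit eigenvector of the (symmetric) unfolding,
    # folded back into an m x n matrix W.
    _, vecs = np.linalg.eigh(Abar_mat)       # eigenvalues in ascending order
    W = vecs[:, -1].reshape(m, n)
    # Substeps 3-4: leading singular vector pair of W as starting point.
    U, _, Vt = np.linalg.svd(W)
    x, y = U[:, 0], Vt[0, :]
    # Iterative step: alternating normalized tensor-vector products.
    f_old = np.inf
    for _ in range(iters):
        x = np.einsum('ijkl,j,k,l->i', Abar, y, x, y)
        x /= np.linalg.norm(x)
        y = np.einsum('ijkl,i,j,k->l', Abar, x, y, x)
        y /= np.linalg.norm(y)
        f = np.einsum('ijkl,i,j,k,l->', Abar, x, y, x, y)
        if abs(f - f_old) < tol:
            break
        f_old = f
    # Final step: undo the shift.
    return f - tau
```

On a rank-one partially symmetric tensor $a_{ijkl}=u_iv_ju_kv_l$ with unit $u$, $v$, the largest M-eigenvalue is $1$ (attained at $x=u$, $y=v$), which the sketch reproduces.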

The M-eigenvalues of tensors have a close relationship with the strong ellipticity condition in elasticity theory, which guarantees the existence of solutions to the fundamental boundary value problems of elastostatics [3,5,16]. However, when the dimensions $m$ and $n$ are large, it is not easy to calculate all M-eigenvalues. Thus, the problem of M-eigenvalue localization has attracted the attention of many researchers, and many M-eigenvalue localization sets have been given; see [2,4,13,14,15,17,18,23,27].

For this, Wang, Li and Che [23] presented the following M-eigenvalue localization set for a partially symmetric tensor.

Theorem 1.1. [23, Theorem 2.2] Let $\mathcal{A}=(a_{ijkl})\in\mathbb{R}^{[m]\times[n]\times[m]\times[n]}$ be a partially symmetric tensor. Then

$$\sigma(\mathcal{A})\subseteq H(\mathcal{A})=\bigcup_{i\in[m]}\bigcap_{k\in[m],k\neq i}H_{i,k}(\mathcal{A}),$$

where

$$H_{i,k}(\mathcal{A})=\hat{H}_{i,k}(\mathcal{A})\cup\big(\overline{H}_{i,k}(\mathcal{A})\cap\Gamma_i(\mathcal{A})\big),$$
$$\hat{H}_{i,k}(\mathcal{A})=\{z\in\mathbb{C}:|z|<R_i(\mathcal{A})-R_i^k(\mathcal{A}),\ |z|<R_k^k(\mathcal{A})\},$$
$$\overline{H}_{i,k}(\mathcal{A})=\big\{z\in\mathbb{C}:\big(|z|-(R_i(\mathcal{A})-R_i^k(\mathcal{A}))\big)\big(|z|-R_k^k(\mathcal{A})\big)\leq R_i^k(\mathcal{A})\big(R_k(\mathcal{A})-R_k^k(\mathcal{A})\big)\big\},$$
$$\Gamma_i(\mathcal{A})=\{z\in\mathbb{C}:|z|\leq R_i(\mathcal{A})\},\qquad R_i(\mathcal{A})=\sum_{k\in[m]}\sum_{j,l\in[n]}|a_{ijkl}|,\qquad R_i^k(\mathcal{A})=\sum_{j,l\in[n]}|a_{ijkl}|.$$

From the set $H(\mathcal{A})$ in Theorem 1.1, we can obtain an upper bound for the largest M-eigenvalue $\lambda_{\max}(\mathcal{A})$, which can be taken as the parameter $\tau$ in the WQZ-algorithm. From Example 2 in [15], it can be seen that the smaller the upper bound of $\lambda_{\max}(\mathcal{A})$ is, the faster the WQZ-algorithm converges. In view of this, this paper provides a smaller upper bound based on a new inclusion set and takes this new upper bound as the parameter $\tau$ to make the WQZ-algorithm converge to $\lambda_{\max}(\mathcal{A})$ faster.

The remainder of this paper is organized as follows. In Section 2, we provide an M-eigenvalue localization set for a partially symmetric tensor $\mathcal{A}$ and prove that the new set is tighter than some existing M-eigenvalue localization sets. In Section 3, based on the new set, we provide an upper bound for the largest M-eigenvalue of $\mathcal{A}$. As an application, in order to make the sequence generated by the WQZ-algorithm converge to the largest M-eigenvalue of $\mathcal{A}$ faster, we replace the parameter $\tau$ in the WQZ-algorithm with this upper bound. In Section 4, we conclude this article.

In this section, we provide a new M-eigenvalue localization set for a fourth-order partially symmetric tensor and prove that it is tighter than that in Theorem 1.1, i.e., Theorem 2.2 in [23]. Before that, the following conclusions from [1,25] are needed.

Lemma 2.1. Let $x=(x_1,x_2,\ldots,x_n)^\top\in\mathbb{R}^n$ and $y=(y_1,y_2,\ldots,y_n)^\top\in\mathbb{R}^n$. Then

a) if $\|x\|_2=1$, then $|x_i||x_j|\leq\frac{1}{2}$ for $i,j\in[n]$, $i\neq j$ (indeed, $|x_i||x_j|\leq\frac{x_i^2+x_j^2}{2}\leq\frac{1}{2}$);

b) $\Big(\sum_{i\in[n]}x_iy_i\Big)^2\leq\sum_{i\in[n]}x_i^2\sum_{i\in[n]}y_i^2$ (the Cauchy–Schwarz inequality).

Theorem 2.1. Let $\mathcal{A}=(a_{ijkl})\in\mathbb{R}^{[m]\times[n]\times[m]\times[n]}$ be a partially symmetric tensor. Then

$$\sigma(\mathcal{A})\subseteq\Upsilon(\mathcal{A})=\bigcup_{i\in[m]}\bigcap_{s\in[m],s\neq i}\Upsilon_{i,s}(\mathcal{A}),$$

where

$$\Upsilon_{i,s}(\mathcal{A})=\hat{\Upsilon}_{i,s}(\mathcal{A})\cup\big(\tilde{\Upsilon}_{i,s}(\mathcal{A})\cap\overline{\Upsilon}_{i,s}(\mathcal{A})\big),$$
$$\hat{\Upsilon}_{i,s}(\mathcal{A})=\{z\in\mathbb{R}:|z|<\tilde{r}_i^s(\mathcal{A}),\ |z|<r_s^s(\mathcal{A})\},$$
$$\tilde{\Upsilon}_{i,s}(\mathcal{A})=\{z\in\mathbb{R}:(|z|-\tilde{r}_i^s(\mathcal{A}))(|z|-r_s^s(\mathcal{A}))\leq r_i^s(\mathcal{A})\,\tilde{r}_s^s(\mathcal{A})\},$$
$$\overline{\Upsilon}_{i,s}(\mathcal{A})=\{z\in\mathbb{R}:|z|\leq\tilde{r}_i^s(\mathcal{A})+r_i^s(\mathcal{A})\},$$

and, for $t\in[m]$,

$$\tilde{r}_t^s(\mathcal{A})=\frac{1}{2}\sum_{k\in[m],k\neq s}\ \sum_{\substack{j,l\in[n]\\ j\neq l}}|a_{tjkl}|+\sum_{k\in[m],k\neq s}\sqrt{\sum_{l\in[n]}a_{tlkl}^2},\qquad r_t^s(\mathcal{A})=\frac{1}{2}\sum_{\substack{j,l\in[n]\\ j\neq l}}|a_{tjsl}|+\sqrt{\sum_{l\in[n]}a_{tlsl}^2}.$$

Proof. Let $\lambda$ be an M-eigenvalue of $\mathcal{A}$, and let $x\in\mathbb{R}^m\setminus\{0\}$ and $y\in\mathbb{R}^n\setminus\{0\}$ be its left and right M-eigenvectors, respectively. Then $x^\top x=1$. Let $|x_t|=\max_{i\in[m]}|x_i|$. Then $0<|x_t|\leq 1$. For any given $s\in[m]$ with $s\neq t$, the $t$-th equation of (1.2) gives

$$\lambda x_t=\sum_{k\in[m]}\sum_{j,l\in[n]}a_{tjkl}y_jx_ky_l=\sum_{k\in[m],k\neq s}\ \sum_{\substack{j,l\in[n]\\ j\neq l}}a_{tjkl}y_jx_ky_l+\sum_{k\in[m],k\neq s}\sum_{l\in[n]}a_{tlkl}y_lx_ky_l+\sum_{\substack{j,l\in[n]\\ j\neq l}}a_{tjsl}y_jx_sy_l+\sum_{l\in[n]}a_{tlsl}y_lx_sy_l.$$

Taking the modulus of the above equation and using the triangle inequality, Lemma 2.1 and $|x_k|\leq|x_t|$, one has

$$\begin{aligned}
|\lambda||x_t|\leq{}&\sum_{k\in[m],k\neq s}\ \sum_{\substack{j,l\in[n]\\ j\neq l}}|a_{tjkl}||y_j||x_k||y_l|+\sum_{k\in[m],k\neq s}\sum_{l\in[n]}|a_{tlkl}||y_l||x_k||y_l|+\sum_{\substack{j,l\in[n]\\ j\neq l}}|a_{tjsl}||y_j||x_s||y_l|+\sum_{l\in[n]}|a_{tlsl}||y_l||x_s||y_l|\\
\leq{}&\frac{1}{2}\sum_{k\in[m],k\neq s}\ \sum_{\substack{j,l\in[n]\\ j\neq l}}|a_{tjkl}||x_t|+|x_t|\sum_{k\in[m],k\neq s}\sum_{l\in[n]}|a_{tlkl}||y_l|+\frac{1}{2}\sum_{\substack{j,l\in[n]\\ j\neq l}}|a_{tjsl}||x_s|+|x_s|\sum_{l\in[n]}|a_{tlsl}||y_l|\\
\leq{}&\frac{1}{2}\sum_{k\in[m],k\neq s}\ \sum_{\substack{j,l\in[n]\\ j\neq l}}|a_{tjkl}||x_t|+|x_t|\sum_{k\in[m],k\neq s}\sqrt{\sum_{l\in[n]}a_{tlkl}^2}\,\sqrt{\sum_{l\in[n]}|y_l|^2}+\frac{1}{2}\sum_{\substack{j,l\in[n]\\ j\neq l}}|a_{tjsl}||x_s|+|x_s|\sqrt{\sum_{l\in[n]}a_{tlsl}^2}\,\sqrt{\sum_{l\in[n]}|y_l|^2}\\
={}&\Big(\frac{1}{2}\sum_{k\in[m],k\neq s}\ \sum_{\substack{j,l\in[n]\\ j\neq l}}|a_{tjkl}|+\sum_{k\in[m],k\neq s}\sqrt{\sum_{l\in[n]}a_{tlkl}^2}\Big)|x_t|+\Big(\frac{1}{2}\sum_{\substack{j,l\in[n]\\ j\neq l}}|a_{tjsl}|+\sqrt{\sum_{l\in[n]}a_{tlsl}^2}\Big)|x_s|\\
={}&\tilde{r}_t^s(\mathcal{A})|x_t|+r_t^s(\mathcal{A})|x_s|,
\end{aligned}$$

since $\|y\|_2=1$,

i.e.,

$$(|\lambda|-\tilde{r}_t^s(\mathcal{A}))|x_t|\leq r_t^s(\mathcal{A})|x_s|. \tag{2.1}$$

By (2.1) and $|x_s|\leq|x_t|$, we have $(|\lambda|-\tilde{r}_t^s(\mathcal{A}))|x_t|\leq r_t^s(\mathcal{A})|x_t|$, which leads to $|\lambda|\leq\tilde{r}_t^s(\mathcal{A})+r_t^s(\mathcal{A})$, i.e., $\lambda\in\overline{\Upsilon}_{t,s}(\mathcal{A})$.

If $|x_s|>0$, then the $s$-th equation of (1.2) gives

$$\lambda x_s=\sum_{k\in[m]}\sum_{j,l\in[n]}a_{sjkl}y_jx_ky_l=\sum_{k\in[m],k\neq s}\ \sum_{\substack{j,l\in[n]\\ j\neq l}}a_{sjkl}y_jx_ky_l+\sum_{k\in[m],k\neq s}\sum_{l\in[n]}a_{slkl}y_lx_ky_l+\sum_{\substack{j,l\in[n]\\ j\neq l}}a_{sjsl}y_jx_sy_l+\sum_{l\in[n]}a_{slsl}y_lx_sy_l.$$

Taking the modulus of the above equation and proceeding exactly as before (the triangle inequality, Lemma 2.1 and $|x_k|\leq|x_t|$) yield

$$|\lambda||x_s|\leq\tilde{r}_s^s(\mathcal{A})|x_t|+r_s^s(\mathcal{A})|x_s|,$$

i.e.,

$$(|\lambda|-r_s^s(\mathcal{A}))|x_s|\leq\tilde{r}_s^s(\mathcal{A})|x_t|. \tag{2.2}$$

When $|\lambda|\geq\tilde{r}_t^s(\mathcal{A})$ or $|\lambda|\geq r_s^s(\mathcal{A})$, multiplying (2.1) and (2.2) and eliminating $|x_t||x_s|>0$, we have

$$(|\lambda|-\tilde{r}_t^s(\mathcal{A}))(|\lambda|-r_s^s(\mathcal{A}))\leq r_t^s(\mathcal{A})\,\tilde{r}_s^s(\mathcal{A}), \tag{2.3}$$

which implies that

$$\lambda\in\big(\tilde{\Upsilon}_{t,s}(\mathcal{A})\cap\overline{\Upsilon}_{t,s}(\mathcal{A})\big). \tag{2.4}$$

When $|\lambda|<\tilde{r}_t^s(\mathcal{A})$ and $|\lambda|<r_s^s(\mathcal{A})$, it holds that

$$\lambda\in\hat{\Upsilon}_{t,s}(\mathcal{A}). \tag{2.5}$$

It follows from (2.4) and (2.5) that

$$\lambda\in\big[\hat{\Upsilon}_{t,s}(\mathcal{A})\cup\big(\tilde{\Upsilon}_{t,s}(\mathcal{A})\cap\overline{\Upsilon}_{t,s}(\mathcal{A})\big)\big]=\Upsilon_{t,s}(\mathcal{A}). \tag{2.6}$$

If $|x_s|=0$ in (2.1), then $|\lambda|\leq\tilde{r}_t^s(\mathcal{A})$. When $|\lambda|=\tilde{r}_t^s(\mathcal{A})$, (2.3) holds and consequently (2.4) holds. When $|\lambda|<\tilde{r}_t^s(\mathcal{A})$: if $|\lambda|\geq r_s^s(\mathcal{A})$, then (2.3) and (2.4) hold; if $|\lambda|<r_s^s(\mathcal{A})$, then (2.5) holds. Hence, (2.6) holds. By the arbitrariness of $s\in[m]$, $s\neq t$, we have

$$\lambda\in\bigcap_{s\neq t}\Upsilon_{t,s}(\mathcal{A})\subseteq\bigcup_{t\in[m]}\bigcap_{s\neq t}\Upsilon_{t,s}(\mathcal{A});$$

therefore, the assertion is proved.
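The quantities $\tilde{r}_t^s(\mathcal{A})$ and $r_t^s(\mathcal{A})$ appearing in Theorem 2.1 are inexpensive to evaluate: a sum of absolute values over the off-diagonal $(j,l)$ pairs plus a Euclidean norm of a diagonal slice. A sketch (function names ours; 0-based indices $t$, $s$; the tensor indexable as `A[t][j][k][l]`, e.g., nested lists or a NumPy array):

```python
import math

def r_tilde(A, t, s):
    """r~_t^s(A) for a partially symmetric tensor of shape (m, n, m, n)."""
    m, n = len(A), len(A[0])
    off = sum(abs(A[t][j][k][l])
              for k in range(m) if k != s
              for j in range(n) for l in range(n) if j != l)
    diag = sum(math.sqrt(sum(A[t][l][k][l] ** 2 for l in range(n)))
               for k in range(m) if k != s)
    return 0.5 * off + diag

def r(A, t, s):
    """r_t^s(A), same data conventions as r_tilde."""
    n = len(A[0])
    off = sum(abs(A[t][j][s][l]) for j in range(n) for l in range(n) if j != l)
    diag = math.sqrt(sum(A[t][l][s][l] ** 2 for l in range(n)))
    return 0.5 * off + diag
```

For the all-ones $2\times2\times2\times2$ tensor, both quantities equal $1+\sqrt{2}$, which is easy to verify by hand.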

    Next, we give the relationship between the localization set H(A) given in Theorem 1.1 and the set Υ(A) given in Theorem 2.1.

Theorem 2.2. Let $\mathcal{A}=(a_{ijkl})\in\mathbb{R}^{[m]\times[n]\times[m]\times[n]}$ be a partially symmetric tensor. Then

$$\Upsilon(\mathcal{A})\subseteq H(\mathcal{A}).$$

Proof. For any $i,s\in[m]$ with $i\neq s$, it holds that

$$\tilde{r}_i^s(\mathcal{A})=\frac{1}{2}\sum_{k\in[m],k\neq s}\ \sum_{\substack{j,l\in[n]\\ j\neq l}}|a_{ijkl}|+\sum_{k\in[m],k\neq s}\sqrt{\sum_{l\in[n]}a_{ilkl}^2}\leq\sum_{k\in[m],k\neq s}\sum_{j,l\in[n]}|a_{ijkl}|=R_i(\mathcal{A})-R_i^s(\mathcal{A}) \tag{2.7}$$

and

$$r_i^s(\mathcal{A})=\frac{1}{2}\sum_{\substack{j,l\in[n]\\ j\neq l}}|a_{ijsl}|+\sqrt{\sum_{l\in[n]}a_{ilsl}^2}\leq\sum_{j,l\in[n]}|a_{ijsl}|=R_i^s(\mathcal{A}). \tag{2.8}$$

Let $z\in\Upsilon(\mathcal{A})$. By Theorem 2.1, there is an index $i\in[m]$ such that, for any $s\in[m]$ with $s\neq i$, $z\in\Upsilon_{i,s}(\mathcal{A})$, which means that $z\in\hat{\Upsilon}_{i,s}(\mathcal{A})$, or $z\in\tilde{\Upsilon}_{i,s}(\mathcal{A})$ and $z\in\overline{\Upsilon}_{i,s}(\mathcal{A})$.

Let $z\in\hat{\Upsilon}_{i,s}(\mathcal{A})$, i.e., $|z|<\tilde{r}_i^s(\mathcal{A})$ and $|z|<r_s^s(\mathcal{A})$. By (2.7) and (2.8), we have $|z|<R_i(\mathcal{A})-R_i^s(\mathcal{A})$ and $|z|<R_s^s(\mathcal{A})$; therefore, $z\in\hat{H}_{i,s}(\mathcal{A})$.

Let $z\in\tilde{\Upsilon}_{i,s}(\mathcal{A})$ and $z\in\overline{\Upsilon}_{i,s}(\mathcal{A})$, i.e.,

$$(|z|-\tilde{r}_i^s(\mathcal{A}))(|z|-r_s^s(\mathcal{A}))\leq r_i^s(\mathcal{A})\,\tilde{r}_s^s(\mathcal{A}) \tag{2.9}$$

and

$$|z|\leq\tilde{r}_i^s(\mathcal{A})+r_i^s(\mathcal{A}). \tag{2.10}$$

By (2.7), (2.8) and (2.10), one has $|z|\leq\tilde{r}_i^s(\mathcal{A})+r_i^s(\mathcal{A})\leq R_i(\mathcal{A})$, which means that $z\in\Gamma_i(\mathcal{A})$. When $|z|\geq R_i(\mathcal{A})-R_i^s(\mathcal{A})$ and $|z|\geq R_s^s(\mathcal{A})$, by (2.7) and (2.8) we have

$$|z|-\tilde{r}_i^s(\mathcal{A})\geq|z|-(R_i(\mathcal{A})-R_i^s(\mathcal{A}))\geq 0,\qquad |z|-r_s^s(\mathcal{A})\geq|z|-R_s^s(\mathcal{A})\geq 0,$$

then, by (2.9),

$$\big(|z|-(R_i(\mathcal{A})-R_i^s(\mathcal{A}))\big)\big(|z|-R_s^s(\mathcal{A})\big)\leq(|z|-\tilde{r}_i^s(\mathcal{A}))(|z|-r_s^s(\mathcal{A}))\leq r_i^s(\mathcal{A})\,\tilde{r}_s^s(\mathcal{A})\leq R_i^s(\mathcal{A})\big(R_s(\mathcal{A})-R_s^s(\mathcal{A})\big),$$

i.e.,

$$\big(|z|-(R_i(\mathcal{A})-R_i^s(\mathcal{A}))\big)\big(|z|-R_s^s(\mathcal{A})\big)\leq R_i^s(\mathcal{A})\big(R_s(\mathcal{A})-R_s^s(\mathcal{A})\big), \tag{2.11}$$

which means that $z\in\overline{H}_{i,s}(\mathcal{A})$. When $R_i(\mathcal{A})-R_i^s(\mathcal{A})\leq|z|\leq R_s^s(\mathcal{A})$ or $R_s^s(\mathcal{A})\leq|z|\leq R_i(\mathcal{A})-R_i^s(\mathcal{A})$, the left-hand side of (2.11) is nonpositive, so (2.11) also holds. When $|z|<R_i(\mathcal{A})-R_i^s(\mathcal{A})$ and $|z|<R_s^s(\mathcal{A})$, it follows that $z\in\hat{H}_{i,s}(\mathcal{A})$. That is,

$$z\in\big[\hat{H}_{i,s}(\mathcal{A})\cup\big(\overline{H}_{i,s}(\mathcal{A})\cap\Gamma_i(\mathcal{A})\big)\big]=H_{i,s}(\mathcal{A}).$$

From the arbitrariness of $s\in[m]$, $s\neq i$, we have

$$z\in\bigcap_{s\in[m],s\neq i}H_{i,s}(\mathcal{A})\subseteq\bigcup_{i\in[m]}\bigcap_{s\in[m],s\neq i}H_{i,s}(\mathcal{A}),$$

i.e., $z\in H(\mathcal{A})$. Therefore, $\Upsilon(\mathcal{A})\subseteq H(\mathcal{A})$.

    In order to show the validity of the set Υ(A) given in Theorem 2.1, we present a running example.

Example 1. Let $\mathcal{A}=(a_{ijkl})\in\mathbb{R}^{[2]\times[2]\times[2]\times[2]}$ be a partially symmetric tensor with entries

$$\begin{aligned}&a_{1111}=1,\quad a_{1112}=2,\quad a_{1121}=2,\quad a_{1212}=3,\quad a_{1222}=5,\quad a_{1211}=2,\quad a_{1122}=4,\quad a_{1221}=4,\\&a_{2111}=2,\quad a_{2112}=4,\quad a_{2121}=3,\quad a_{2122}=5,\quad a_{2211}=4,\quad a_{2212}=5,\quad a_{2221}=5,\quad a_{2222}=6.\end{aligned}$$

By Theorem 1.1, we have

$$H(\mathcal{A})=\bigcup_{i\in[m]}\bigcap_{k\in[m],k\neq i}H_{i,k}(\mathcal{A})=\{z\in\mathbb{C}:|z|\leq 29.4765\}.$$

By Theorem 2.1, we have

$$\Upsilon(\mathcal{A})=\bigcup_{i\in[m]}\bigcap_{s\in[m],s\neq i}\Upsilon_{i,s}(\mathcal{A})=\{z\in\mathbb{C}:|z|\leq 20.0035\}.$$

It is easy to see that $\Upsilon(\mathcal{A})\subseteq H(\mathcal{A})$ and that all M-eigenvalues are contained in $[-20.0035,\,20.0035]$. In fact, the distinct M-eigenvalues of $\mathcal{A}$ are 1.2765, 0.0710, 0.1242, 0.2765, 0.3437 and 15.2091.

In this section, based on the set in Theorem 2.1, we provide an upper bound for the largest M-eigenvalue of a fourth-order partially symmetric tensor $\mathcal{A}$. As an application, we use this upper bound as the parameter $\tau$ in the WQZ-algorithm to make the sequence generated by the WQZ-algorithm converge to the largest M-eigenvalue of $\mathcal{A}$ faster.

Theorem 3.1. Let $\mathcal{A}=(a_{ijkl})\in\mathbb{R}^{[m]\times[n]\times[m]\times[n]}$ be a partially symmetric tensor and let $\rho(\mathcal{A})$ denote its M-spectral radius. Then

$$\rho(\mathcal{A})\leq\Omega(\mathcal{A})=\max_{i\in[m]}\min_{s\in[m],s\neq i}\Omega_{i,s}(\mathcal{A}),$$

where

$$\Omega_{i,s}(\mathcal{A})=\max\Big\{\min\{\tilde{r}_i^s(\mathcal{A}),r_s^s(\mathcal{A})\},\ \min\{\tilde{r}_i^s(\mathcal{A})+r_i^s(\mathcal{A}),\hat{\Omega}_{i,s}(\mathcal{A})\}\Big\}$$

and

$$\hat{\Omega}_{i,s}(\mathcal{A})=\frac{1}{2}\Big\{\tilde{r}_i^s(\mathcal{A})+r_s^s(\mathcal{A})+\sqrt{\big(r_s^s(\mathcal{A})-\tilde{r}_i^s(\mathcal{A})\big)^2+4\,r_i^s(\mathcal{A})\,\tilde{r}_s^s(\mathcal{A})}\Big\}.$$

Proof. By Theorem 2.1 and $\rho(\mathcal{A})\in\sigma(\mathcal{A})$, it follows that there exists an index $i\in[m]$ such that, for any $s\in[m]$ with $s\neq i$, $\rho(\mathcal{A})\in\hat{\Upsilon}_{i,s}(\mathcal{A})$, or $\rho(\mathcal{A})\in\big(\tilde{\Upsilon}_{i,s}(\mathcal{A})\cap\overline{\Upsilon}_{i,s}(\mathcal{A})\big)$. If $\rho(\mathcal{A})\in\hat{\Upsilon}_{i,s}(\mathcal{A})$, that is, $\rho(\mathcal{A})<\tilde{r}_i^s(\mathcal{A})$ and $\rho(\mathcal{A})<r_s^s(\mathcal{A})$, then

$$\rho(\mathcal{A})<\min\{\tilde{r}_i^s(\mathcal{A}),r_s^s(\mathcal{A})\}. \tag{3.1}$$

If $\rho(\mathcal{A})\in\big(\tilde{\Upsilon}_{i,s}(\mathcal{A})\cap\overline{\Upsilon}_{i,s}(\mathcal{A})\big)$, that is,

$$\rho(\mathcal{A})\leq\tilde{r}_i^s(\mathcal{A})+r_i^s(\mathcal{A}) \tag{3.2}$$

and

$$(\rho(\mathcal{A})-\tilde{r}_i^s(\mathcal{A}))(\rho(\mathcal{A})-r_s^s(\mathcal{A}))\leq r_i^s(\mathcal{A})\,\tilde{r}_s^s(\mathcal{A}). \tag{3.3}$$

Solving inequality (3.3), we have

$$\rho(\mathcal{A})\leq\hat{\Omega}_{i,s}(\mathcal{A}). \tag{3.4}$$

Combining (3.2) and (3.4), we have

$$\rho(\mathcal{A})\leq\min\{\tilde{r}_i^s(\mathcal{A})+r_i^s(\mathcal{A}),\hat{\Omega}_{i,s}(\mathcal{A})\}. \tag{3.5}$$

Hence, by (3.1) and (3.5), we have

$$\rho(\mathcal{A})\leq\max\Big\{\min\{\tilde{r}_i^s(\mathcal{A}),r_s^s(\mathcal{A})\},\ \min\{\tilde{r}_i^s(\mathcal{A})+r_i^s(\mathcal{A}),\hat{\Omega}_{i,s}(\mathcal{A})\}\Big\}=\Omega_{i,s}(\mathcal{A}).$$

Furthermore, by the arbitrariness of $s$, we have

$$\rho(\mathcal{A})\leq\min_{s\in[m],s\neq i}\Omega_{i,s}(\mathcal{A}).$$

Since we do not know which index $i$ corresponds to $\rho(\mathcal{A})$, we can only conclude that

$$\rho(\mathcal{A})\leq\max_{i\in[m]}\min_{s\in[m],s\neq i}\Omega_{i,s}(\mathcal{A}).$$

The proof is complete.

Remark 3.1. In Theorem 3.1, we obtain an upper bound $\Omega(\mathcal{A})$ for the largest M-eigenvalue of a fourth-order partially symmetric tensor $\mathcal{A}$. Now, we take $\Omega(\mathcal{A})$ as the parameter $\tau$ in the WQZ-algorithm to obtain a modified WQZ-algorithm. That is, the only difference between the WQZ-algorithm and the modified WQZ-algorithm is the selection of $\tau$: $\tau=\sum_{1\leq s\leq t\leq mn}|A_{st}|$ in the WQZ-algorithm and $\tau=\Omega(\mathcal{A})$ in the modified WQZ-algorithm.
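Theorem 3.1 translates directly into a small amount of code, which is one way the parameter $\tau=\Omega(\mathcal{A})$ of the modified WQZ-algorithm can be produced in practice. A sketch (helper names ours; the tensor is indexed 0-based with shape $(m,n,m,n)$ and must be indexable as `A[t][j][k][l]`):

```python
import math

def _r_tilde(A, t, s):
    """r~_t^s(A) from Theorem 2.1."""
    m, n = len(A), len(A[0])
    off = sum(abs(A[t][j][k][l]) for k in range(m) if k != s
              for j in range(n) for l in range(n) if j != l)
    diag = sum(math.sqrt(sum(A[t][l][k][l] ** 2 for l in range(n)))
               for k in range(m) if k != s)
    return 0.5 * off + diag

def _r(A, t, s):
    """r_t^s(A) from Theorem 2.1."""
    n = len(A[0])
    off = sum(abs(A[t][j][s][l]) for j in range(n) for l in range(n) if j != l)
    diag = math.sqrt(sum(A[t][l][s][l] ** 2 for l in range(n)))
    return 0.5 * off + diag

def omega_upper_bound(A):
    """Omega(A) = max_i min_{s != i} Omega_{i,s}(A): an upper bound
    on the M-spectral radius rho(A), per Theorem 3.1."""
    m = len(A)

    def omega_is(i, s):
        rt_i, r_i = _r_tilde(A, i, s), _r(A, i, s)
        rt_s, r_s = _r_tilde(A, s, s), _r(A, s, s)
        hat = 0.5 * (rt_i + r_s
                     + math.sqrt((r_s - rt_i) ** 2 + 4.0 * r_i * rt_s))
        return max(min(rt_i, r_s), min(rt_i + r_i, hat))

    return max(min(omega_is(i, s) for s in range(m) if s != i)
               for i in range(m))
```

For the all-ones $2\times2\times2\times2$ tensor, every $\tilde{r}$ and $r$ equals $1+\sqrt{2}$, so $\hat{\Omega}_{i,s}=2(1+\sqrt{2})$ and $\Omega(\mathcal{A})=2(1+\sqrt{2})\approx 4.8284$, comfortably above the true largest M-eigenvalue $4$ of that tensor.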

Next, we take $\Omega(\mathcal{A})$ and some existing upper bounds of the largest M-eigenvalue as the parameter $\tau$ in the WQZ-algorithm to calculate the largest M-eigenvalue of a fourth-order partially symmetric tensor $\mathcal{A}$.

    Example 2. Consider the tensor A in Example 4.1 of [24], where

$$A(:,:,1,1)=\begin{bmatrix}0.9727&0.3169&0.3437\\0.6332&0.7866&0.4257\\0.3350&0.9896&0.4323\end{bmatrix},\quad A(:,:,2,1)=\begin{bmatrix}0.6332&0.7866&0.4257\\0.7387&0.6873&0.3248\\0.7986&0.5988&0.9485\end{bmatrix},\quad A(:,:,3,1)=\begin{bmatrix}0.3350&0.9896&0.4323\\0.7986&0.5988&0.9485\\0.5853&0.5921&0.6301\end{bmatrix},$$
$$A(:,:,1,2)=\begin{bmatrix}0.3169&0.6158&0.0184\\0.7866&0.0160&0.0085\\0.9896&0.6663&0.2559\end{bmatrix},\quad A(:,:,2,2)=\begin{bmatrix}0.7866&0.0160&0.0085\\0.6873&0.5160&0.0216\\0.5988&0.0411&0.9857\end{bmatrix},\quad A(:,:,3,2)=\begin{bmatrix}0.9896&0.6663&0.2559\\0.5988&0.0411&0.9857\\0.5921&0.2907&0.3881\end{bmatrix},$$
$$A(:,:,1,3)=\begin{bmatrix}0.3437&0.0184&0.5649\\0.4257&0.0085&0.1439\\0.4323&0.2559&0.6162\end{bmatrix},\quad A(:,:,2,3)=\begin{bmatrix}0.4257&0.0085&0.1439\\0.3248&0.0216&0.0037\\0.9485&0.9857&0.7734\end{bmatrix},\quad A(:,:,3,3)=\begin{bmatrix}0.4323&0.2559&0.6162\\0.9485&0.9857&0.7734\\0.6301&0.3881&0.8526\end{bmatrix}.$$

By (1.3), we have $\tau=\sum_{1\leq s\leq t\leq 9}|A_{st}|=23.3503$. By Corollary 1 of [17], we have

$$\rho(\mathcal{A})\leq 16.6014.$$

By Theorem 3.5 of [23], we have

$$\rho(\mathcal{A})\leq 15.4102.$$

By Corollary 2 of [17], we have

$$\rho(\mathcal{A})\leq 14.5910.$$

By Corollary 1 of [15], where $S_m=S_n=1$, we have

$$\rho(\mathcal{A})\leq 13.8844.$$

By Corollary 2 of [15], where $S_m=S_n=1$, we have

$$\rho(\mathcal{A})\leq 11.7253.$$

By Theorem 3.1, we have

$$\rho(\mathcal{A})\leq 8.2342.$$

From [24], it can be seen that $\lambda_{\max}(\mathcal{A})=2.3227$.

Taking $\tau=23.3503$, $16.6014$, $15.4102$, $14.5910$, $13.8844$, $11.7253$ and $8.2342$, respectively, the numerical results obtained by the WQZ-algorithm are shown in Figure 1.

    Figure 1.  Numerical results for the WQZ-algorithm with different τ.

Numerical results in Figure 1 show that:

1) When we take $\tau=8.2342$, the sequence converges to the largest M-eigenvalue $\lambda_{\max}(\mathcal{A})$ more rapidly than when taking $\tau=23.3503$, $16.6014$, $15.4102$, $14.5910$, $13.8844$ or $11.7253$.

2) For each of $\tau=23.3503$, $16.6014$, $15.4102$, $14.5910$, $13.8844$, $11.7253$ and $8.2342$, the WQZ-algorithm obtains the largest M-eigenvalue $\lambda_{\max}(\mathcal{A})$ after finitely many iterations. However, under the same stopping criterion, the algorithm needs more iterations for $\tau=23.3503$, $16.6014$, $15.4102$, $14.5910$, $13.8844$ and $11.7253$, while for $\tau=8.2342$ it obtains $\lambda_{\max}(\mathcal{A})$ fastest.

3) The choice of the parameter $\tau$ has a significant impact on the convergence speed of the WQZ-algorithm: the larger $\tau$ is, the slower the convergence; the smaller $\tau$ is (as long as it remains larger than the largest M-eigenvalue), the faster the convergence, i.e., the faster the largest M-eigenvalue is obtained.

4) The upper bound of the M-spectral radius obtained in Theorem 3.1 is therefore of real help to the WQZ-algorithm, which shows the effectiveness of our results.

Now, we consider a real elasticity tensor, derived from the study of anisotropic materials [10], for illustration.

In anisotropic materials, the components of the tensor of elastic moduli $\mathcal{C}=(c_{ijkl})\in\mathbb{R}^{[3]\times[3]\times[3]\times[3]}$ satisfy the following symmetry:

$$c_{ijkl}=c_{jikl}=c_{ijlk}=c_{jilk},\qquad c_{ijkl}=c_{klij},\qquad 1\leq i,j,k,l\leq 3;$$

such a tensor is also called an elasticity tensor. It is known that there are many anisotropic materials, of which crystals are typical examples. Following the classification of crystal systems [22], the elasticity tensor $\mathcal{C}=(c_{ijkl})\in\mathbb{R}^{[3]\times[3]\times[3]\times[3]}$ of some crystals of the trigonal system, such as CaCO$_3$ and HgS, also satisfies

$$c_{1112}=c_{2212}=c_{3323}=c_{3331}=c_{3312}=c_{2331}=0,\qquad c_{2222}=c_{1111},\qquad c_{3131}=c_{2323},\qquad c_{2233}=c_{1133},$$
$$c_{2223}=c_{1123},\qquad c_{2231}=c_{1131},\qquad c_{3112}=\sqrt{2}\,c_{1123},\qquad c_{2312}=\sqrt{2}\,c_{1131},\qquad c_{1212}=c_{1111}-c_{1122}.$$

This shows that the trigonal system of anisotropic materials has only 7 independent elastic constants. In fact, CaMg(CO$_3$)$_2$-dolomite and CaCO$_3$-calcite have similar crystal structures, in which the atoms along any triad axis alternate between magnesium and calcium. From [22], the elasticity tensor of CaMg(CO$_3$)$_2$-dolomite is as follows:

$$c_{2222}=c_{1111}=196.6,\quad c_{3131}=c_{2323}=83.2,\quad c_{2233}=c_{1133}=54.7,\quad c_{2223}=c_{1123}=31.7,\quad c_{2231}=c_{1131}=25.3,$$
$$c_{3112}=44.8,\quad c_{2312}=35.84,\quad c_{1212}=132.2,\quad c_{3333}=110,\quad c_{1122}=64.4.$$

Next, we transform the elasticity tensor $\mathcal{C}$ into a partially symmetric tensor $\mathcal{A}$ through the following index permutation; the M-eigenvalues of the transformed tensor coincide with the M-eigenvalues of $\mathcal{C}$ [7,12]:

$$a_{ijkl}=c_{ikjl},\qquad 1\leq i,j,k,l\leq 3.$$

To illustrate the validity of the obtained results, we take the partially symmetric tensor resulting from this transformation of the CaMg(CO$_3$)$_2$-dolomite elasticity tensor as an example.

Example 3. Consider the tensor $\mathcal{A}_2=(a_{ijkl})\in\mathbb{R}^{[3]\times[3]\times[3]\times[3]}$ in Example 3 of [17], where

$$\begin{aligned}&a_{2222}=a_{1111}=196.6,\quad a_{3311}=a_{2233}=83.2,\quad a_{2323}=a_{3232}=a_{1313}=a_{3131}=54.7,\\&a_{2223}=a_{2232}=a_{1213}=a_{2131}=31.7,\quad a_{3333}=110,\quad a_{1212}=a_{2121}=64.4,\quad a_{1122}=132.2,\\&a_{2321}=a_{1232}=a_{1311}=a_{1131}=25.3,\quad a_{3112}=a_{1321}=44.8,\quad a_{2132}=a_{1223}=35.84,\end{aligned}$$

and the other $a_{ijkl}=0$.

The results of Example 2 show that the upper bound of the largest M-eigenvalue in Theorem 3.1 is sharper than the existing results. Here, we only calculate the upper bound of the largest M-eigenvalue of $\mathcal{A}_2$ by Theorem 3.1 and use it as the parameter $\tau$ in the WQZ-algorithm to compute the largest M-eigenvalue of $\mathcal{A}_2$. To distinguish the different values of $\tau$, we denote the value obtained from Theorem 3.1 by $\tau_2$; that is, we run the WQZ-algorithm with $\tau=\tau_2$.

By Theorem 3.1, we get $\tau_2=647.6100$.

By Eq (1.3), we get

$$\tau=\sum_{1\leq s\leq t\leq 9}|A_{st}|=1998.6000.$$

When we take $\tau=1998.6000$ and $\tau=647.6100$ in the WQZ-algorithm, respectively, the numerical results are shown in Figure 2.

    Figure 2.  Numerical results for the WQZ-algorithm with different τ.

As Figure 2 shows, taking $\tau=\tau_2$ makes the iteration of the WQZ-algorithm converge faster than taking $\tau=\sum_{1\leq s\leq t\leq 9}|A_{st}|$, so the largest M-eigenvalue is computed faster. That is, using the bound provided in this paper as the parameter $\tau$ in the WQZ-algorithm speeds up convergence, so that the largest M-eigenvalue can be computed quickly.

In this paper, we first provided, in Theorem 2.1, an M-eigenvalue localization set $\Upsilon(\mathcal{A})$ for a fourth-order partially symmetric tensor $\mathcal{A}$, and proved that $\Upsilon(\mathcal{A})$ is tighter than the set $H(\mathcal{A})$ of Theorem 2.2 in [23]. Secondly, based on $\Upsilon(\mathcal{A})$, we derived an upper bound for the M-spectral radius of $\mathcal{A}$. As an application, we took this upper bound as the parameter $\tau$ in the WQZ-algorithm to make the sequence generated by the algorithm converge to the largest M-eigenvalue of $\mathcal{A}$ faster. Finally, two numerical examples were given to show the effectiveness of the set $\Upsilon(\mathcal{A})$ and the upper bound $\Omega(\mathcal{A})$.

    The author sincerely thanks the editors and anonymous reviewers for their insightful comments and constructive suggestions, which greatly improved the quality of the paper. The author also thanks Professor Jianxing Zhao (Guizhou Minzu University) for guidance. This work is supported by Science and Technology Plan Project of Guizhou Province (Grant No. QKHJC-ZK[2021]YB013).

    The author declares no conflict of interest.



  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)