
Feature adaptive multi-view hash for image search

  • With the rapid development of network technology and small handheld devices, the amount of data has increased significantly, and many kinds of data can be supplied to us at the same time. Recently, hashing technology has become popular for large-scale similarity search and image matching tasks. However, most prior hashing methods focus mainly on the choice of a high-dimensional feature descriptor for learning effective hashing functions. In practice, real-world image data collected from multiple scenes cannot be described adequately by a single type of feature. Recently, several unsupervised multi-view hashing learning methods have been proposed based on matrix factorization, anchor graphs and metric learning. However, a large quantization error is introduced by the sign function, and the robustness of multi-view hashing is ignored. In this paper we present a novel feature adaptive multi-view hashing (FAMVH) method based on a robust multi-view quantization framework. The proposed method is evaluated on three large-scale benchmarks, CIFAR-10, CIFAR-20 and Caltech-256, for the approximate nearest neighbor search task. The experimental results show that our approach achieves the best accuracy and efficiency on the three large-scale datasets.

    Citation: Li Sun, Bing Song. Feature adaptive multi-view hash for image search[J]. Electronic Research Archive, 2023, 31(9): 5845-5865. doi: 10.3934/era.2023297




    Fractional calculus (FC), which studies integrals and derivatives of fractional order, can capture the memory and hereditary features of many complicated structures [1,2]. Many recent FC applications analyze the dynamics of large-scale physical phenomena by converting derivatives and integrals from classical to non-integer order. It is used in many branches of engineering and the physical sciences, including electric circuits, mathematical biology, control theory, robotics, viscoelasticity, flow models, relaxation, and signal processing [3,4]. Several classical models have been refined through the study of fractional calculus, for example, logistic growth, Malthusian growth, and blood alcohol concentration models, all of which have shown that fractional operators can outperform integer-order operators [5,6].

    Numerous fractional derivatives have recently been proposed, such as the Riemann-Liouville, Atangana-Baleanu, Caputo, Hilfer, Grunwald-Letnikov, and Caputo-Fabrizio derivatives [7,8]. Since most fractional derivatives can be reduced to Caputo's definition with minor parametric adjustments, the Caputo fractional derivative is a central notion of FC for investigating fractional differential equations (FDEs). Caputo's operator, which has been used to model various physical systems, possesses a power-law kernel. To address this limitation, alternative fractional differential operators [9] were developed, built on a Mittag-Leffler kernel and an exponentially decaying kernel. The Caputo-Fabrizio (CF) and Atangana-Baleanu operators are characterized by their non-singular kernels. These operators have been widely applied to diverse classes of problems, including but not limited to biology, economics, geophysics, and bioengineering [10].

    Korteweg and de Vries introduced the KdV equation in 1895 to model Russell's soliton phenomenon, which involves water waves of long wavelength and small amplitude. Solitons are stable solitary waves, signifying their particle-like nature [11]. KdV equations are used in various applied disciplines, including plasma physics, fluid dynamics, quantum mechanics, and optics [12]. Particle physics has employed fifth-order KdV equations to analyze many nonlinear phenomena [13], and their role in wave propagation is crucial [14]. The authors find third-order and fifth-order dispersive terms in a KdV-type equation pertinent to the magneto-acoustic wave problem; furthermore, these dispersive terms manifest themselves in the vicinity of critical-angle propagation [15]. Plasma is an electrically conducting, dynamic and quasi-neutral fluid composed of ions, electrons, and neutral particles. Because plasma is electrically conducting, it supports both electric and magnetic fields, and the variety of particles and fields supports diverse types of plasma waves. The magneto-acoustic wave is a largely longitudinal ion oscillation. In the low magnetic field range, the magneto-acoustic wave exhibits characteristics of an ion acoustic wave [16,17]; however, at low temperatures, it transforms into an Alfven wave.

    The fifth-order KdV equation is equivalent to the general model for investigating the magnetic characteristics of acoustic waves with surface tension. According to recent investigations [18,19], the traveling-wave solutions of the above equation persist at infinity. The following are two widely recognized types of fifth-order KdV equations [20,21]:

    $D^{p}_{\Omega}\eta(\epsilon,\Omega)-\frac{\partial^{5}\eta(\epsilon,\Omega)}{\partial\epsilon^{5}}+\eta(\epsilon,\Omega)\frac{\partial^{3}\eta(\epsilon,\Omega)}{\partial\epsilon^{3}}+\eta(\epsilon,\Omega)\frac{\partial\eta(\epsilon,\Omega)}{\partial\epsilon}=0,\quad 0<p\le 1.$ (1.1)
    $D^{p}_{\Omega}\eta(\epsilon,\Omega)+\frac{\partial^{5}\eta(\epsilon,\Omega)}{\partial\epsilon^{5}}-\eta(\epsilon,\Omega)\frac{\partial^{3}\eta(\epsilon,\Omega)}{\partial\epsilon^{3}}+\eta(\epsilon,\Omega)\frac{\partial\eta(\epsilon,\Omega)}{\partial\epsilon}=0,\quad 0<p\le 1.$ (1.2)

    Here, Eqs (1.1) and (1.2) are called the Kawahara and KdV equation of fifth-order, respectively. The extreme nonlinearity of these mathematical models makes it difficult to find suitable analytical methods. Researchers have developed and implemented several techniques for solving nonlinear and linear equations of KdV in the past ten years. These techniques include the variational iteration method [21], the multi-symplectic method [22], He's homotopy perturbation method [23], and the Exp-function method [24].

    Omar Abu Arqub established the residual power series method (RPSM) in 2013 [25]. It is created by merging the residual error function with the Taylor series. According to [26], the solution of differential equations (DEs) is obtained as a convergent infinite series. The development of novel RPSM algorithms has been prompted by several classes of DEs, including the KdV-Burgers equation, fuzzy DEs, Boussinesq DEs, and numerous others [27,28]. The goal of these algorithms is to provide efficient and accurate approximations.

    A novel strategy for solving FDEs was established by integrating two effective methods. Approaches in this category include those that use the natural transform [29], the Laplace transform with RPSM [30], and the homotopy perturbation method [31]. In this work, we use a combined method known as the Aboodh residual power series method (ARPSM) to find approximate and exact solutions of time-fractional nonlinear partial differential equations (PDEs). This method is significant because it combines the Aboodh transform with the RPSM [32,33].

    The computing effort and complexity required are significant issues with the previously mentioned approaches. The unique aspect of this work is the Aboodh transform iterative method (ATIM) [34], which we use to solve the Kawahara and KdV equations of fractional order. By integrating the Aboodh transform with the new iterative technique, this strategy significantly reduces the required computational effort and complexity. According to [35,36], the suggested approach yields a convergent series solution.

    The ARPSM and the ATIM are two straightforward approaches to solving fractional DEs. These methods directly provide the symbolic terms of the analytical solutions and also offer numerical solutions to PDEs. This paper assesses the efficacy of ATIM and ARPSM in solving the fifth-order KdV and Kawahara equations.

    The fifth-order KdV and Kawahara equations are solved using ARPSM and ATIM. These methods provide precise numerical answers when compared with other numerical techniques, and a comparative analysis of the numerical findings is performed. The results of the suggested approaches are consistent with one another, which is a strong indicator of their efficacy and reliability. Solutions for various values of the fractional-order derivative are also illustrated graphically. The methods are therefore accurate, easy to implement, robust to computational error, and fast. This study lays the groundwork for researchers to solve various PDEs quickly.

    Definition 2.1. [37] Assume that $\eta(\epsilon,\Omega)$ is a continuous function of exponential order defined for $\Omega\ge 0$. The Aboodh transform (AT) of $\eta(\epsilon,\Omega)$ is defined as follows:

    $A[\eta(\epsilon,\Omega)]=\Psi(\epsilon,\xi)=\frac{1}{\xi}\int_{0}^{\infty}\eta(\epsilon,\Omega)\,e^{-\Omega\xi}\,d\Omega,\quad r_{1}\le\xi\le r_{2}.$

    The Aboodh inverse transform (AIT) is given as:

    $A^{-1}[\Psi(\epsilon,\xi)]=\eta(\epsilon,\Omega)=\frac{1}{2\pi i}\int_{u-i\infty}^{u+i\infty}\Psi(\epsilon,\xi)\,\xi\,e^{\Omega\xi}\,d\xi,$

    where $\epsilon=(\epsilon_{1},\epsilon_{2},\ldots,\epsilon_{p})\in\mathbb{R}^{p}$ and $p\in\mathbb{N}$.
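    For orientation, the following is a minimal numerical sanity check of Definition 2.1, assuming SciPy is available; the test function $\eta(\Omega)=e^{-\Omega}$ and the evaluation points are illustrative choices, not taken from the paper. Its Aboodh transform should equal $1/(\xi(\xi+1))$.

```python
import numpy as np
from scipy.integrate import quad

def aboodh(f, xi):
    """Numerical Aboodh transform A[f](xi) = (1/xi) * integral_0^inf f(t) exp(-xi t) dt."""
    val, _ = quad(lambda t: f(t) * np.exp(-xi * t), 0.0, np.inf)
    return val / xi

f = lambda t: np.exp(-t)                      # sample function of exponential order
for xi in (1.5, 2.0, 5.0):
    print(xi, aboodh(f, xi), 1.0 / (xi * (xi + 1.0)))   # numeric vs closed form
```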

    Lemma 2.1. [38,39] Assume that $\eta_{1}(\epsilon,\Omega)$ and $\eta_{2}(\epsilon,\Omega)$ are piecewise continuous functions of exponential order on $[0,\infty)$. Let $A[\eta_{1}(\epsilon,\Omega)]=\Psi_{1}(\epsilon,\xi)$, $A[\eta_{2}(\epsilon,\Omega)]=\Psi_{2}(\epsilon,\xi)$, and let $\chi_{1},\chi_{2}$ be arbitrary constants. Then the following properties hold:

    (1) $A[\chi_{1}\eta_{1}(\epsilon,\Omega)+\chi_{2}\eta_{2}(\epsilon,\Omega)]=\chi_{1}\Psi_{1}(\epsilon,\xi)+\chi_{2}\Psi_{2}(\epsilon,\xi)$,

    (2) $A^{-1}[\chi_{1}\Psi_{1}(\epsilon,\xi)+\chi_{2}\Psi_{2}(\epsilon,\xi)]=\chi_{1}\eta_{1}(\epsilon,\Omega)+\chi_{2}\eta_{2}(\epsilon,\Omega)$,

    (3) $A[J^{p}_{\Omega}\eta(\epsilon,\Omega)]=\frac{\Psi(\epsilon,\xi)}{\xi^{p}}$,

    (4) $A[D^{p}_{\Omega}\eta(\epsilon,\Omega)]=\xi^{p}\Psi(\epsilon,\xi)-\sum_{K=0}^{r-1}\frac{\eta^{(K)}(\epsilon,0)}{\xi^{K-p+2}},\quad r-1<p\le r,\ r\in\mathbb{N}.$
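    As a quick illustration of property (4), here is a symbolic check for the integer case $p=r=1$, where the property reduces to $A[\eta']=\xi\Psi(\epsilon,\xi)-\eta(\epsilon,0)/\xi$; the test function $\eta(\Omega)=e^{-\Omega}$ is an assumption made only for this sketch.

```python
import sympy as sp

Omega, xi = sp.symbols('Omega xi', positive=True)
eta = sp.exp(-Omega)                                   # test function, eta(0) = 1

def aboodh(f):
    return sp.integrate(f * sp.exp(-Omega * xi), (Omega, 0, sp.oo)) / xi

lhs = aboodh(sp.diff(eta, Omega))                      # A[eta']
rhs = xi * aboodh(eta) - eta.subs(Omega, 0) / xi       # xi*Psi - eta(0)/xi, property (4) with p = 1
print(sp.simplify(lhs - rhs))                          # 0
```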

    Definition 2.2. [40] The Caputo fractional derivative of order $p$ of the function $\eta(\epsilon,\Omega)$ is defined as

    $D^{p}_{\Omega}\eta(\epsilon,\Omega)=J^{m-p}_{\Omega}\eta^{(m)}(\epsilon,\Omega),\quad m-1<p\le m,\ \Omega\ge 0,$

    where $\epsilon=(\epsilon_{1},\epsilon_{2},\ldots,\epsilon_{p})\in\mathbb{R}^{p}$, $p,m\in\mathbb{R}$, and $J^{m-p}_{\Omega}$ is the Riemann-Liouville integral of $\eta(\epsilon,\Omega)$.

    Definition 2.3. [41] The power series representation has the following structure:

    $\sum_{r=0}^{\infty}\hbar_{r}(\epsilon)(\Omega-\Omega_{0})^{rp}=\hbar_{0}+\hbar_{1}(\Omega-\Omega_{0})^{p}+\hbar_{2}(\Omega-\Omega_{0})^{2p}+\cdots,$

    where $\epsilon=(\epsilon_{1},\epsilon_{2},\ldots,\epsilon_{p})\in\mathbb{R}^{p}$ and $p\in\mathbb{N}$. This is known as the multiple fractional power series about $\Omega_{0}$, where $\Omega$ is the variable and the $\hbar_{r}(\epsilon)$ are the series coefficients.

    Lemma 2.2. Let $\eta(\epsilon,\Omega)$ be a function of exponential order and let $A[\eta(\epsilon,\Omega)]=\Psi(\epsilon,\xi)$ denote its AT. Then

    $A[D^{rp}_{\Omega}\eta(\epsilon,\Omega)]=\xi^{rp}\Psi(\epsilon,\xi)-\sum_{j=0}^{r-1}\xi^{p(r-j)-2}D^{jp}_{\Omega}\eta(\epsilon,0),\quad 0<p\le 1,$ (2.1)

    where $\epsilon=(\epsilon_{1},\epsilon_{2},\ldots,\epsilon_{p})\in\mathbb{R}^{p}$, $p\in\mathbb{N}$, and $D^{rp}_{\Omega}=D^{p}_{\Omega}\cdot D^{p}_{\Omega}\cdots D^{p}_{\Omega}$ ($r$ times).

    Proof. We prove Eq (2.1) by induction. Substituting $r=1$ in Eq (2.1) gives

    $A[D^{p}_{\Omega}\eta(\epsilon,\Omega)]=\xi^{p}\Psi(\epsilon,\xi)-\xi^{p-2}\eta(\epsilon,0).$

    On the basis of Lemma 2.1, Eq (2.1) therefore holds for $r=1$. Now put $r=2$ in Eq (2.1):

    $A[D^{2p}_{\Omega}\eta(\epsilon,\Omega)]=\xi^{2p}\Psi(\epsilon,\xi)-\xi^{2p-2}\eta(\epsilon,0)-\xi^{p-2}D^{p}_{\Omega}\eta(\epsilon,0).$ (2.2)

    From the left-hand side (LHS) of Eq (2.2), we obtain:

    $\mathrm{LHS}=A[D^{2p}_{\Omega}\eta(\epsilon,\Omega)].$ (2.3)

    Equation (2.3) can be written as:

    $\mathrm{LHS}=A[D^{p}_{\Omega}(D^{p}_{\Omega}\eta(\epsilon,\Omega))].$ (2.4)

    Assume

    $z(\epsilon,\Omega)=D^{p}_{\Omega}\eta(\epsilon,\Omega).$ (2.5)

    This turns Eq (2.4) into

    $\mathrm{LHS}=A[D^{p}_{\Omega}z(\epsilon,\Omega)].$ (2.6)

    Using the definition of the Caputo derivative, Eq (2.6) becomes

    $\mathrm{LHS}=A[J^{1-p}_{\Omega}z'(\epsilon,\Omega)].$ (2.7)

    Applying the Riemann-Liouville integral property (Lemma 2.1, property (3)) to Eq (2.7), we obtain:

    $\mathrm{LHS}=\frac{A[z'(\epsilon,\Omega)]}{\xi^{1-p}}.$ (2.8)

    Using the differentiability property of the AT, Eq (2.8) becomes:

    $\mathrm{LHS}=\xi^{p}Z(\epsilon,\xi)-\frac{z(\epsilon,0)}{\xi^{2-p}}.$ (2.9)

    From Eq (2.5), we derive:

    $Z(\epsilon,\xi)=\xi^{p}\Psi(\epsilon,\xi)-\frac{\eta(\epsilon,0)}{\xi^{2-p}},$

    where $A[z(\epsilon,\Omega)]=Z(\epsilon,\xi)$. Hence, Eq (2.9) becomes

    $\mathrm{LHS}=\xi^{2p}\Psi(\epsilon,\xi)-\frac{\eta(\epsilon,0)}{\xi^{2-2p}}-\frac{D^{p}_{\Omega}\eta(\epsilon,0)}{\xi^{2-p}}.$ (2.10)

    Let us suppose Eq (2.1) holds true for $r=K$. Substituting $r=K$ in Eq (2.1):

    $A[D^{Kp}_{\Omega}\eta(\epsilon,\Omega)]=\xi^{Kp}\Psi(\epsilon,\xi)-\sum_{j=0}^{K-1}\xi^{p(K-j)-2}D^{jp}_{\Omega}\eta(\epsilon,0),\quad 0<p\le 1.$ (2.11)

    Substituting $r=K+1$ in Eq (2.1):

    $A[D^{(K+1)p}_{\Omega}\eta(\epsilon,\Omega)]=\xi^{(K+1)p}\Psi(\epsilon,\xi)-\sum_{j=0}^{K}\xi^{p((K+1)-j)-2}D^{jp}_{\Omega}\eta(\epsilon,0).$ (2.12)

    Analyzing the LHS of Eq (2.12), we deduce

    $\mathrm{LHS}=A[D^{p}_{\Omega}(D^{Kp}_{\Omega}\eta(\epsilon,\Omega))].$ (2.13)

    Let

    $D^{Kp}_{\Omega}\eta(\epsilon,\Omega)=g(\epsilon,\Omega).$

    From Eq (2.13), we derive

    $\mathrm{LHS}=A[D^{p}_{\Omega}g(\epsilon,\Omega)].$ (2.14)

    Using the Riemann-Liouville integral and the Caputo derivative on Eq (2.14), as in the case $r=1$, the subsequent result is obtained:

    $\mathrm{LHS}=\xi^{p}A[D^{Kp}_{\Omega}\eta(\epsilon,\Omega)]-\frac{g(\epsilon,0)}{\xi^{2-p}}.$ (2.15)

    Using Eq (2.11) in Eq (2.15), we get

    $\mathrm{LHS}=\xi^{(K+1)p}\Psi(\epsilon,\xi)-\sum_{j=0}^{K}\xi^{p((K+1)-j)-2}D^{jp}_{\Omega}\eta(\epsilon,0).$ (2.16)

    In addition, Eq (2.16) produces

    $\mathrm{LHS}=A[D^{(K+1)p}_{\Omega}\eta(\epsilon,\Omega)].$

    Thus, Eq (2.1) holds for $r=K+1$. Hence, by mathematical induction, Eq (2.1) holds true for all positive integers $r$.

    A deeper understanding of the ARPSM and of the multiple fractional Taylor series (MFTS) is given as follows.

    Lemma 2.3. Let $\eta(\epsilon,\Omega)$ be a function of exponential order, and let $A[\eta(\epsilon,\Omega)]=\Psi(\epsilon,\xi)$ denote its AT. In MFTS notation, the AT is represented as follows:

    $\Psi(\epsilon,\xi)=\sum_{r=0}^{\infty}\frac{\hbar_{r}(\epsilon)}{\xi^{rp+2}},\quad\xi>0,$ (2.17)

    where $\epsilon=(\epsilon_{1},\epsilon_{2},\ldots,\epsilon_{p})\in\mathbb{R}^{p}$, $p\in\mathbb{N}$.

    Proof. Consider the Taylor series:

    $\eta(\epsilon,\Omega)=\hbar_{0}(\epsilon)+\hbar_{1}(\epsilon)\frac{\Omega^{p}}{\Gamma[p+1]}+\hbar_{2}(\epsilon)\frac{\Omega^{2p}}{\Gamma[2p+1]}+\cdots.$ (2.18)

    Applying the AT to Eq (2.18) produces the following equality:

    $A[\eta(\epsilon,\Omega)]=A[\hbar_{0}(\epsilon)]+A\!\left[\hbar_{1}(\epsilon)\frac{\Omega^{p}}{\Gamma[p+1]}\right]+A\!\left[\hbar_{2}(\epsilon)\frac{\Omega^{2p}}{\Gamma[2p+1]}\right]+\cdots.$

    Utilizing the properties of the AT, this becomes

    $A[\eta(\epsilon,\Omega)]=\hbar_{0}(\epsilon)\frac{1}{\xi^{2}}+\hbar_{1}(\epsilon)\frac{1}{\xi^{p+2}}+\hbar_{2}(\epsilon)\frac{1}{\xi^{2p+2}}+\cdots.$

    Hence, the new form of Taylor's series, Eq (2.17), is obtained.

    Lemma 2.4. Let the multiple fractional power series (MFPS) be expressed in the new form of Taylor's series, Eq (2.17), with $A[\eta(\epsilon,\Omega)]=\Psi(\epsilon,\xi)$. Then

    $\hbar_{0}(\epsilon)=\lim_{\xi\to\infty}\xi^{2}\Psi(\epsilon,\xi)=\eta(\epsilon,0).$ (2.19)

    Proof. From the new form of Taylor's series,

    $\hbar_{0}(\epsilon)=\xi^{2}\Psi(\epsilon,\xi)-\frac{\hbar_{1}(\epsilon)}{\xi^{p}}-\frac{\hbar_{2}(\epsilon)}{\xi^{2p}}-\cdots.$ (2.20)

    Taking the limit $\xi\to\infty$ in Eq (2.20) and performing a short calculation yields Eq (2.19).
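    A quick symbolic check of Lemma 2.4 with SymPy, under the assumption that $\eta(\Omega)=e^{-\Omega}$ is an acceptable test function (so $\eta(0)=1$):

```python
import sympy as sp

Omega, xi = sp.symbols('Omega xi', positive=True)
eta = sp.exp(-Omega)                                        # test function with eta(0) = 1

# Aboodh transform Psi(xi) = (1/xi) * Integral_0^oo eta * exp(-xi*Omega) dOmega
Psi = sp.integrate(eta * sp.exp(-xi * Omega), (Omega, 0, sp.oo)) / xi
print(sp.simplify(Psi))                                     # 1/(xi*(xi + 1))
print(sp.limit(xi**2 * Psi, xi, sp.oo))                     # 1, equal to eta(0) as in Eq (2.19)
```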

    Theorem 2.5. Suppose the function $A[\eta(\epsilon,\Omega)]=\Psi(\epsilon,\xi)$ has the MFPS representation

    $\Psi(\epsilon,\xi)=\sum_{r=0}^{\infty}\frac{\hbar_{r}(\epsilon)}{\xi^{rp+2}},\quad \xi>0,$

    where $\epsilon=(\epsilon_{1},\epsilon_{2},\ldots,\epsilon_{p})\in\mathbb{R}^{p}$ and $p\in\mathbb{N}$. Then we have

    $\hbar_{r}(\epsilon)=D^{rp}_{\Omega}\eta(\epsilon,0),$

    where $D^{rp}_{\Omega}=D^{p}_{\Omega}\cdot D^{p}_{\Omega}\cdots D^{p}_{\Omega}$ ($r$ times).

    Proof. From the new form of Taylor's series,

    $\hbar_{1}(\epsilon)=\xi^{p+2}\Psi(\epsilon,\xi)-\xi^{p}\hbar_{0}(\epsilon)-\frac{\hbar_{2}(\epsilon)}{\xi^{p}}-\frac{\hbar_{3}(\epsilon)}{\xi^{2p}}-\cdots.$ (2.21)

    Applying $\lim_{\xi\to\infty}$ to Eq (2.21), we get

    $\hbar_{1}(\epsilon)=\lim_{\xi\to\infty}\left(\xi^{p+2}\Psi(\epsilon,\xi)-\xi^{p}\hbar_{0}(\epsilon)\right)-\lim_{\xi\to\infty}\frac{\hbar_{2}(\epsilon)}{\xi^{p}}-\lim_{\xi\to\infty}\frac{\hbar_{3}(\epsilon)}{\xi^{2p}}-\cdots.$

    Taking the limit yields the equality

    $\hbar_{1}(\epsilon)=\lim_{\xi\to\infty}\left(\xi^{p+2}\Psi(\epsilon,\xi)-\xi^{p}\hbar_{0}(\epsilon)\right).$ (2.22)

    Using Lemma 2.2, we obtain:

    $\hbar_{1}(\epsilon)=\lim_{\xi\to\infty}\left(\xi^{2}A[D^{p}_{\Omega}\eta(\epsilon,\Omega)](\xi)\right).$ (2.23)

    Furthermore, Eq (2.23) is modified using Lemma 2.4:

    $\hbar_{1}(\epsilon)=D^{p}_{\Omega}\eta(\epsilon,0).$

    Using the new Taylor's series again and applying $\lim_{\xi\to\infty}$, we obtain:

    $\hbar_{2}(\epsilon)=\xi^{2p+2}\Psi(\epsilon,\xi)-\xi^{2p}\hbar_{0}(\epsilon)-\xi^{p}\hbar_{1}(\epsilon)-\frac{\hbar_{3}(\epsilon)}{\xi^{p}}-\cdots.$

    Lemma 2.3 gives us the result

    $\hbar_{2}(\epsilon)=\lim_{\xi\to\infty}\xi^{2}\left(\xi^{2p}\Psi(\epsilon,\xi)-\xi^{2p-2}\hbar_{0}(\epsilon)-\xi^{p-2}\hbar_{1}(\epsilon)\right).$ (2.24)

    Equation (2.24) is transformed using Lemma 2.2 and Lemma 2.4:

    $\hbar_{2}(\epsilon)=D^{2p}_{\Omega}\eta(\epsilon,0).$

    Applying the same procedure with the new Taylor's series, we obtain:

    $\hbar_{3}(\epsilon)=\lim_{\xi\to\infty}\xi^{2}\left(A[D^{3p}_{\Omega}\eta(\epsilon,\Omega)](\xi)\right).$

    Finally, we get:

    $\hbar_{3}(\epsilon)=D^{3p}_{\Omega}\eta(\epsilon,0).$

    In general,

    $\hbar_{r}(\epsilon)=D^{rp}_{\Omega}\eta(\epsilon,0),$

    is proved. The new Taylor series has the conditions for the convergence given in the subsequent theorem.

    Theorem 2.6. Suppose $A[\eta(\epsilon,\Omega)]=\Psi(\epsilon,\xi)$ has the MFTS representation given in Lemma 2.3. If $|\xi^{2}A[D^{(K+1)p}_{\Omega}\eta(\epsilon,\Omega)]|\le T$ for $0<p\le 1$ and $0<\xi\le s$, then the residual $R_{K}(\epsilon,\xi)$ of the new MFTS satisfies:

    $|R_{K}(\epsilon,\xi)|\le\frac{T}{\xi^{(K+1)p+2}},\quad 0<\xi\le s.$

    Proof. Suppose that $A[D^{rp}_{\Omega}\eta(\epsilon,\Omega)](\xi)$ is defined for $r=0,1,2,\ldots,K+1$ and $0<\xi\le s$. From the Taylor series, the residual satisfies the relation:

    $R_{K}(\epsilon,\xi)=\Psi(\epsilon,\xi)-\sum_{r=0}^{K}\frac{\hbar_{r}(\epsilon)}{\xi^{rp+2}}.$ (2.25)

    Applying Theorem 2.5 to Eq (2.25) gives:

    $R_{K}(\epsilon,\xi)=\Psi(\epsilon,\xi)-\sum_{r=0}^{K}\frac{D^{rp}_{\Omega}\eta(\epsilon,0)}{\xi^{rp+2}}.$ (2.26)

    Multiplying Eq (2.26) by $\xi^{(K+1)p+2}$ gives the following form:

    $\xi^{(K+1)p+2}R_{K}(\epsilon,\xi)=\xi^{2}\left(\xi^{(K+1)p}\Psi(\epsilon,\xi)-\sum_{r=0}^{K}\xi^{p(K+1-r)-2}D^{rp}_{\Omega}\eta(\epsilon,0)\right).$ (2.27)

    Equation (2.27) is modified with Lemma 2.2:

    $\xi^{(K+1)p+2}R_{K}(\epsilon,\xi)=\xi^{2}A[D^{(K+1)p}_{\Omega}\eta(\epsilon,\Omega)].$ (2.28)

    Taking the absolute value of Eq (2.28) gives

    $|\xi^{(K+1)p+2}R_{K}(\epsilon,\xi)|=|\xi^{2}A[D^{(K+1)p}_{\Omega}\eta(\epsilon,\Omega)]|.$ (2.29)

    Applying the condition stated above to Eq (2.29), the subsequent result is achieved:

    $-T\le\xi^{(K+1)p+2}R_{K}(\epsilon,\xi)\le T.$ (2.30)

    Equation (2.30) yields the desired outcome:

    $|R_{K}(\epsilon,\xi)|\le\frac{T}{\xi^{(K+1)p+2}}.$

    Therefore, new conditions for the series to converge are developed.
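    To make the bound concrete, here is a small SymPy check for the assumed test case $\eta(\Omega)=e^{-\Omega}$ with $p=1$, where $\Psi(\xi)=1/(\xi(\xi+1))$, the Taylor coefficients are $\hbar_{r}=(-1)^{r}$, and $|\xi^{2}A[\eta^{(K+1)}]|=\xi/(\xi+1)\le 1$, so $T=1$ works:

```python
import sympy as sp

xi = sp.Symbol('xi', positive=True)
Psi = 1 / (xi * (xi + 1))                  # Aboodh transform of eta = exp(-Omega), p = 1
K, T = 3, 1                                # |xi**2 * A[eta^(K+1)]| = xi/(xi+1) <= 1 = T
S_K = sum((-1)**r / xi**(r + 2) for r in range(K + 1))   # truncated MFTS, hbar_r = (-1)**r
R_K = sp.simplify(Psi - S_K)               # residual of the truncated series

xi0 = sp.Integer(2)
print(abs(R_K.subs(xi, xi0)), T / xi0**((K + 1) + 2))    # 1/96 <= 1/64, as Theorem 2.6 predicts
```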

    In this section, we explain the ARPSM rules that form the basis of our solution procedure.

    Step 1: Assume the general PDE:

    $D^{qp}_{\Omega}\eta(\epsilon,\Omega)+\vartheta(\epsilon)N(\eta)-\delta(\epsilon,\eta)=0.$ (3.1)

    Step 2: Apply the AT to Eq (3.1):

    $A[D^{qp}_{\Omega}\eta(\epsilon,\Omega)+\vartheta(\epsilon)N(\eta)-\delta(\epsilon,\eta)]=0.$ (3.2)

    Utilizing Lemma 2.2 to modify Eq (3.2),

    $\Psi(\epsilon,s)=\sum_{j=0}^{q-1}\frac{D^{jp}_{\Omega}\eta(\epsilon,0)}{s^{jp+2}}-\frac{\vartheta(\epsilon)Y(s)}{s^{qp}}+\frac{F(\epsilon,s)}{s^{qp}},$ (3.3)

    where $A[\delta(\epsilon,\eta)]=F(\epsilon,s)$ and $A[N(\eta)]=Y(s)$.

    Step 3: Equation (3.3) takes the following expanded form:

    $\Psi(\epsilon,s)=\sum_{r=0}^{\infty}\frac{\hbar_{r}(\epsilon)}{s^{rp+2}},\quad s>0.$

    Step 4: Take the steps listed below:

    $\hbar_{0}(\epsilon)=\lim_{s\to\infty}s^{2}\Psi(\epsilon,s)=\eta(\epsilon,0).$

    Using Theorem 2.5, the subsequent coefficients take the form:

    $\hbar_{1}(\epsilon)=D^{p}_{\Omega}\eta(\epsilon,0),\quad\hbar_{2}(\epsilon)=D^{2p}_{\Omega}\eta(\epsilon,0),\quad\ldots,\quad\hbar_{w}(\epsilon)=D^{wp}_{\Omega}\eta(\epsilon,0).$

    Step 5: Obtain the $K$th truncated series $\Psi_{K}(\epsilon,s)$ using the following expression:

    $\Psi_{K}(\epsilon,s)=\sum_{r=0}^{K}\frac{\hbar_{r}(\epsilon)}{s^{rp+2}},\quad s>0,$
    $\Psi_{K}(\epsilon,s)=\frac{\hbar_{0}(\epsilon)}{s^{2}}+\frac{\hbar_{1}(\epsilon)}{s^{p+2}}+\cdots+\frac{\hbar_{w}(\epsilon)}{s^{wp+2}}+\sum_{r=w+1}^{K}\frac{\hbar_{r}(\epsilon)}{s^{rp+2}}.$

    Step 6: Define the residual Aboodh function (RAF) from Eq (3.3), and the $K$th-truncated RAF, separately:

    $A\mathrm{Res}(\epsilon,s)=\Psi(\epsilon,s)-\sum_{j=0}^{q-1}\frac{D^{jp}_{\Omega}\eta(\epsilon,0)}{s^{jp+2}}+\frac{\vartheta(\epsilon)Y(s)}{s^{qp}}-\frac{F(\epsilon,s)}{s^{qp}},$

    and

    $A\mathrm{Res}_{K}(\epsilon,s)=\Psi_{K}(\epsilon,s)-\sum_{j=0}^{q-1}\frac{D^{jp}_{\Omega}\eta(\epsilon,0)}{s^{jp+2}}+\frac{\vartheta(\epsilon)Y(s)}{s^{qp}}-\frac{F(\epsilon,s)}{s^{qp}}.$ (3.4)

    Step 7: Substitute the expansion form of $\Psi_{K}(\epsilon,s)$ into Eq (3.4):

    $A\mathrm{Res}_{K}(\epsilon,s)=\left(\frac{\hbar_{0}(\epsilon)}{s^{2}}+\frac{\hbar_{1}(\epsilon)}{s^{p+2}}+\cdots+\frac{\hbar_{w}(\epsilon)}{s^{wp+2}}+\sum_{r=w+1}^{K}\frac{\hbar_{r}(\epsilon)}{s^{rp+2}}\right)-\sum_{j=0}^{q-1}\frac{D^{jp}_{\Omega}\eta(\epsilon,0)}{s^{jp+2}}+\frac{\vartheta(\epsilon)Y(s)}{s^{qp}}-\frac{F(\epsilon,s)}{s^{qp}}.$ (3.5)

    Step 8: Multiply both sides of Eq (3.5) by $s^{Kp+2}$:

    $s^{Kp+2}A\mathrm{Res}_{K}(\epsilon,s)=s^{Kp+2}\left(\frac{\hbar_{0}(\epsilon)}{s^{2}}+\frac{\hbar_{1}(\epsilon)}{s^{p+2}}+\cdots+\frac{\hbar_{w}(\epsilon)}{s^{wp+2}}+\sum_{r=w+1}^{K}\frac{\hbar_{r}(\epsilon)}{s^{rp+2}}-\sum_{j=0}^{q-1}\frac{D^{jp}_{\Omega}\eta(\epsilon,0)}{s^{jp+2}}+\frac{\vartheta(\epsilon)Y(s)}{s^{qp}}-\frac{F(\epsilon,s)}{s^{qp}}\right).$ (3.6)

    Step 9: Take $\lim_{s\to\infty}$ of Eq (3.6):

    $\lim_{s\to\infty}s^{Kp+2}A\mathrm{Res}_{K}(\epsilon,s)=\lim_{s\to\infty}s^{Kp+2}\left(\frac{\hbar_{0}(\epsilon)}{s^{2}}+\frac{\hbar_{1}(\epsilon)}{s^{p+2}}+\cdots+\frac{\hbar_{w}(\epsilon)}{s^{wp+2}}+\sum_{r=w+1}^{K}\frac{\hbar_{r}(\epsilon)}{s^{rp+2}}-\sum_{j=0}^{q-1}\frac{D^{jp}_{\Omega}\eta(\epsilon,0)}{s^{jp+2}}+\frac{\vartheta(\epsilon)Y(s)}{s^{qp}}-\frac{F(\epsilon,s)}{s^{qp}}\right).$

    Step 10: The values $\hbar_{K}(\epsilon)$ are obtained by solving

    $\lim_{s\to\infty}\left(s^{Kp+2}A\mathrm{Res}_{K}(\epsilon,s)\right)=0,$

    where $K=w+1,\,w+2,\ldots$.

    Step 11: The obtained values of $\hbar_{K}(\epsilon)$ are then substituted into Eq (3.3).

    Step 12: Taking the inverse AT we obtain the final solution ηK(ϵ,Ω).
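    A minimal SymPy sketch of Steps 5-10 follows, under simplifying assumptions: the target is a hypothetical linear test problem $D^{p}_{\Omega}y+y=0$, $y(0)=1$ (so $q=1$, $\vartheta=1$, $N(y)=y$, $\delta=0$, and the Aboodh-domain residual reduces to $\Psi-1/s^{2}+\Psi/s^{p}$). It only illustrates the coefficient-extraction loop, not the paper's Kawahara/KdV computations.

```python
import sympy as sp

s, p = sp.symbols('s p', positive=True)   # p is treated as a positive symbol

K = 4
coeffs = [sp.Integer(1)]                  # hbar_0 = y(0) = 1
Psi = 1 / s**2                            # truncated MFPS in the Aboodh domain
for r in range(1, K + 1):
    hr = sp.Symbol(f'h{r}')
    Psi += hr / s**(r*p + 2)
    # Aboodh-domain residual of D^p y + y = 0 (Step 6), truncated at order r
    res = Psi - 1/s**2 + Psi/s**p
    # Steps 8-10: multiply by s**(r*p + 2), let s -> oo, solve for the new coefficient
    val = sp.limit(sp.expand(res * s**(r*p + 2)), s, sp.oo)
    sol = sp.solve(val, hr)[0]
    coeffs.append(sol)
    Psi = Psi.subs(hr, sol)

print(coeffs[1:])                         # [-1, 1, -1, 1]
# Step 12: inverting 1/s**(r*p + 2) -> Omega**(r*p)/Gamma(r*p + 1) term by term gives
# y = sum_r (-1)**r * Omega**(r*p) / Gamma(r*p + 1), a Mittag-Leffler-type series.
```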

    Let us consider a PDE of the form:

    $D^{p}_{\Omega}\eta(\epsilon,\Omega)=\Phi\big(\eta(\epsilon,\Omega),\partial_{\epsilon}\eta(\epsilon,\Omega),\partial^{2}_{\epsilon}\eta(\epsilon,\Omega),\partial^{3}_{\epsilon}\eta(\epsilon,\Omega)\big),\quad 0<p,\ \Omega\le 1.$ (3.7)

    The initial conditions are

    $\eta^{(k)}(\epsilon,0)=h_{k},\quad k=0,1,2,\ldots,m-1.$ (3.8)

    Here $\eta(\epsilon,\Omega)$ is the function to be determined, while $\Phi\big(\eta,\partial_{\epsilon}\eta,\partial^{2}_{\epsilon}\eta,\partial^{3}_{\epsilon}\eta\big)$ is an operator acting on $\eta(\epsilon,\Omega)$ and its spatial derivatives $\partial_{\epsilon}\eta$, $\partial^{2}_{\epsilon}\eta$ and $\partial^{3}_{\epsilon}\eta$. Applying the AT to Eq (3.7) gives:

    $A[\eta(\epsilon,\Omega)]=\frac{1}{s^{p}}\left(\sum_{k=0}^{m-1}\frac{\eta^{(k)}(\epsilon,0)}{s^{2-p+k}}+A\big[\Phi\big(\eta,\partial_{\epsilon}\eta,\partial^{2}_{\epsilon}\eta,\partial^{3}_{\epsilon}\eta\big)\big]\right).$ (3.9)

    The AIT yields the solution to this problem:

    $\eta(\epsilon,\Omega)=A^{-1}\left[\frac{1}{s^{p}}\left(\sum_{k=0}^{m-1}\frac{\eta^{(k)}(\epsilon,0)}{s^{2-p+k}}+A\big[\Phi\big(\eta,\partial_{\epsilon}\eta,\partial^{2}_{\epsilon}\eta,\partial^{3}_{\epsilon}\eta\big)\big]\right)\right].$ (3.10)

    The ATIM solution is represented as an infinite series:

    $\eta(\epsilon,\Omega)=\sum_{i=0}^{\infty}\eta_{i}.$ (3.11)

    Writing $\Phi_{i}=\Phi\big(\sum_{j=0}^{i}\eta_{j},\sum_{j=0}^{i}\partial_{\epsilon}\eta_{j},\sum_{j=0}^{i}\partial^{2}_{\epsilon}\eta_{j},\sum_{j=0}^{i}\partial^{3}_{\epsilon}\eta_{j}\big)$, the operator $\Phi(\eta,\partial_{\epsilon}\eta,\partial^{2}_{\epsilon}\eta,\partial^{3}_{\epsilon}\eta)$ is decomposed as:

    $\Phi(\eta,\partial_{\epsilon}\eta,\partial^{2}_{\epsilon}\eta,\partial^{3}_{\epsilon}\eta)=\Phi_{0}+\sum_{i=1}^{\infty}\left(\Phi_{i}-\Phi_{i-1}\right).$ (3.12)

    Substituting Eqs (3.11) and (3.12) into Eq (3.10) gives:

    $\sum_{i=0}^{\infty}\eta_{i}(\epsilon,\Omega)=A^{-1}\left[\frac{1}{s^{p}}\left(\sum_{k=0}^{m-1}\frac{\eta^{(k)}(\epsilon,0)}{s^{2-p+k}}+A[\Phi_{0}]\right)\right]+A^{-1}\left[\frac{1}{s^{p}}A\!\left[\sum_{i=1}^{\infty}\left(\Phi_{i}-\Phi_{i-1}\right)\right]\right],$ (3.13)
    $\eta_{0}(\epsilon,\Omega)=A^{-1}\left[\frac{1}{s^{p}}\left(\sum_{k=0}^{m-1}\frac{\eta^{(k)}(\epsilon,0)}{s^{2-p+k}}\right)\right],\quad \eta_{1}(\epsilon,\Omega)=A^{-1}\left[\frac{1}{s^{p}}A[\Phi_{0}]\right],\quad \eta_{m+1}(\epsilon,\Omega)=A^{-1}\left[\frac{1}{s^{p}}A[\Phi_{m}-\Phi_{m-1}]\right],\quad m=1,2,\ldots.$ (3.14)

    For the $m$-term approximation of Eq (3.7), the analytically approximate solution is

    $\eta(\epsilon,\Omega)\approx\sum_{i=0}^{m-1}\eta_{i}.$ (3.15)
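    Below is a small SymPy sketch of the ATIM recursion (3.14) for the same hypothetical linear test problem $D^{p}_{\Omega}y=-y$, $y(0)=1$, where each increment reduces to a Riemann-Liouville integral $\eta_{i+1}=J^{p}_{\Omega}[-\eta_{i}]$; it only illustrates the iteration, not the paper's full Kawahara/KdV computation.

```python
import sympy as sp

Omega, p = sp.symbols('Omega p', positive=True)

def J_p(c, a):
    """J^p applied to c*Omega**a: returns the new coefficient and the new exponent."""
    return c * sp.gamma(a + 1) / sp.gamma(a + p + 1), a + p

# eta_0 = y(0) = 1; for Phi(y) = -y each ATIM increment is eta_{i+1} = J^p[-eta_i]
coeffs, exps = [sp.Integer(1)], [sp.Integer(0)]
for i in range(4):
    c, a = J_p(-coeffs[-1], exps[-1])
    coeffs.append(sp.simplify(c))
    exps.append(a)

approx = sum(c * Omega**a for c, a in zip(coeffs, exps))
print(approx)   # 1 - Omega**p/gamma(p + 1) + Omega**(2*p)/gamma(2*p + 1) - ...
```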

    Consider the Kawahara equation of fractional order:

    $D^{p}_{\Omega}\eta(\epsilon,\Omega)-\frac{\partial^{5}\eta(\epsilon,\Omega)}{\partial\epsilon^{5}}+\eta(\epsilon,\Omega)\frac{\partial^{3}\eta(\epsilon,\Omega)}{\partial\epsilon^{3}}+\eta(\epsilon,\Omega)\frac{\partial\eta(\epsilon,\Omega)}{\partial\epsilon}=0,\quad\text{where}\ 0<p\le 1,$ (4.1)

    with the initial condition:

    $\eta(\epsilon,0)=\frac{105}{169}\,\mathrm{sech}^{4}\!\left(\frac{\epsilon-2}{2\sqrt{13}}\right),$ (4.2)

    and exact solution

    $\eta(\epsilon,\Omega)=\frac{105}{169}\,\mathrm{sech}^{4}\!\left(-\frac{36\Omega}{169}+\frac{\epsilon-2}{2\sqrt{13}}\right).$

    Using Eq (4.2) and applying the AT to Eq (4.1), we get

    $\eta(\epsilon,s)-\frac{105}{169\,s^{2}}\,\mathrm{sech}^{4}\!\left(\frac{\epsilon-2}{2\sqrt{13}}\right)-\frac{1}{s^{p}}\left[\frac{\partial^{5}\eta(\epsilon,s)}{\partial\epsilon^{5}}\right]+\frac{1}{s^{p}}A_{\Omega}\!\left[A^{-1}_{\Omega}\eta(\epsilon,s)\times\frac{\partial^{3}A^{-1}_{\Omega}\eta(\epsilon,s)}{\partial\epsilon^{3}}\right]+\frac{1}{s^{p}}A_{\Omega}\!\left[A^{-1}_{\Omega}\eta(\epsilon,s)\times\frac{\partial A^{-1}_{\Omega}\eta(\epsilon,s)}{\partial\epsilon}\right]=0.$ (4.3)

    Therefore, the $k$th-truncated series is:

    $\eta(\epsilon,s)=\frac{105}{169\,s^{2}}\,\mathrm{sech}^{4}\!\left(\frac{\epsilon-2}{2\sqrt{13}}\right)+\sum_{r=1}^{k}\frac{f_{r}(\epsilon,s)}{s^{rp+2}},\quad r=1,2,3,4.$ (4.4)

    The RAF is:

    $A_{\Omega}\mathrm{Res}(\epsilon,s)=\eta(\epsilon,s)-\frac{105}{169\,s^{2}}\,\mathrm{sech}^{4}\!\left(\frac{\epsilon-2}{2\sqrt{13}}\right)-\frac{1}{s^{p}}\left[\frac{\partial^{5}\eta(\epsilon,s)}{\partial\epsilon^{5}}\right]+\frac{1}{s^{p}}A_{\Omega}\!\left[A^{-1}_{\Omega}\eta(\epsilon,s)\times\frac{\partial^{3}A^{-1}_{\Omega}\eta(\epsilon,s)}{\partial\epsilon^{3}}\right]+\frac{1}{s^{p}}A_{\Omega}\!\left[A^{-1}_{\Omega}\eta(\epsilon,s)\times\frac{\partial A^{-1}_{\Omega}\eta(\epsilon,s)}{\partial\epsilon}\right]=0,$ (4.5)

    and the $k$th-truncated RAF is:

    $A_{\Omega}\mathrm{Res}_{k}(\epsilon,s)=\eta_{k}(\epsilon,s)-\frac{105}{169\,s^{2}}\,\mathrm{sech}^{4}\!\left(\frac{\epsilon-2}{2\sqrt{13}}\right)-\frac{1}{s^{p}}\left[\frac{\partial^{5}\eta_{k}(\epsilon,s)}{\partial\epsilon^{5}}\right]+\frac{1}{s^{p}}A_{\Omega}\!\left[A^{-1}_{\Omega}\eta_{k}(\epsilon,s)\times\frac{\partial^{3}A^{-1}_{\Omega}\eta_{k}(\epsilon,s)}{\partial\epsilon^{3}}\right]+\frac{1}{s^{p}}A_{\Omega}\!\left[A^{-1}_{\Omega}\eta_{k}(\epsilon,s)\times\frac{\partial A^{-1}_{\Omega}\eta_{k}(\epsilon,s)}{\partial\epsilon}\right]=0.$ (4.6)

    It takes some calculation to find $f_{r}(\epsilon,s)$ for $r=1,2,3,\ldots$. Following the procedure of Section 3, we substitute the $r$th-truncated series, Eq (4.4), into the $r$th-truncated RAF, Eq (4.6), multiply by $s^{rp+2}$, apply $\lim_{s\to\infty}$, and solve $\lim_{s\to\infty}\left(s^{rp+2}A_{\Omega}\mathrm{Res}_{r}(\epsilon,s)\right)=0$ for $r=1,2,3,\ldots$. Some of the terms that we obtain are given below:

    f1(ϵ,s)=105594068813(17290sinh(ϵ2213)10029sinh(3(ϵ2)213)2015sinh(5(ϵ2)213)+104sinh(7(ϵ2)213))sech11(ϵ2213)), (4.7)
    f2(ϵ,s)=10521718014715904(50957301372cosh(ϵ213)+12586770193cosh(2(ϵ2)13)12962735946cosh(3(ϵ2)13)+2020967026cosh(4(ϵ2)13)+68039374cosh(5(ϵ2)13)9200529cosh(6(ϵ2)13)+43264cosh(7(ϵ2)13)54264784626)sech18(ϵ2213), (4.8)

    and so on.

    For $r=1,2,3,\ldots$, substitute $f_{r}(\epsilon,s)$ into Eq (4.4):

    η(ϵ,s)=105169sech4(ϵ2213)s2(105594068813(17290sinh(ϵ2213)10029sinh(3(ϵ2)213)2015sinh(5(ϵ2)213)+104sinh(7(ϵ2)213))sech11(ϵ2213)))/(sp+1)+(10521718014715904(50957301372cosh(ϵ213)+12586770193cosh(2(ϵ2)13)12962735946cosh(3(ϵ2)13)+2020967026cosh(4(ϵ2)13)+68039374cosh(5(ϵ2)13)9200529cosh(6(ϵ2)13)+43264cosh(7(ϵ2)13)54264784626)sech18(ϵ2213))/(s2p+1)+. (4.9)

    Apply AIT to obtain:

    η(ϵ,Ω)=105169sech4(ϵ2213)Ωp(105594068813(17290sinh(ϵ2213)10029sinh(3(ϵ2)213)2015sinh(5(ϵ2)213)+104sinh(7(ϵ2)213))sech11(ϵ2213)))/(Γ(p+1))+Ω2p(10521718014715904(50957301372cosh(ϵ213)+12586770193cosh(2(ϵ2)13)12962735946cosh(3(ϵ2)13)+2020967026cosh(4(ϵ2)13)+68039374cosh(5(ϵ2)13)9200529cosh(6(ϵ2)13)+43264cosh(7(ϵ2)13)54264784626)sech18(ϵ2213))/(Γ(2p+1))+. (4.10)

    Table 1 presents the ARPSM solution comparison for different values of the parameter p for Ω=0.1, illustrating how the choice of p impacts the accuracy and behavior of the solutions. Figure 1 shows a comparison between the approximate solution obtained using ARPSM (a) and the exact solution (b) for Example 1, confirming the high accuracy of the ARPSM approach. Figure 2 visualizes the impact of varying fractional orders on the ARPSM solution for different p values (p=0.32,0.52,0.72), showcasing how changes in the fractional order influence the solution structure. Figure 3 extends the comparison in two dimensions, offering a 2D view of the fractional order solutions using ARPSM for the same values of p, further confirming the method's ability to capture the dynamics of fractional systems.

    Table 1.  ARPSM solution comparison for the values of p of Example 1 for Ω=0.1.
    ϵ      ARPSM (p=0.52)   ARPSM (p=0.72)   ARPSM (p=1.00)   Exact      Error (p=1.00)
    1.0    0.597480         0.597823         0.597918         0.597923   4.746940×10^-6
    1.1    0.601882         0.602193         0.602280         0.602284   4.296239×10^-6
    1.2    0.605857         0.606136         0.606214         0.606217   3.837431×10^-6
    1.3    0.609395         0.609642         0.609710         0.609713   3.371748×10^-6
    1.4    0.612487         0.612700         0.612759         0.612762   2.900316×10^-6
    1.5    0.615125         0.615304         0.615354         0.615356   2.424166×10^-6
    1.6    0.617301         0.617446         0.617486         0.617488   1.944232×10^-6
    1.7    0.619010         0.619121         0.619151         0.619152   1.461368×10^-6
    1.8    0.620248         0.620324         0.620344         0.620345   9.763596×10^-7
    1.9    0.621010         0.621051         0.621061         0.621062   4.899361×10^-7
    2.0    0.621296         0.621301         0.621302         0.621302   2.792130×10^-8

    Figure 1.  (a) ARPSM approximate solution, (b) exact solution.
    Figure 2.  Fractional order comparison using ARPSM for p=0.32,0.52,0.72.
    Figure 3.  Fractional order 2D comparison using ARPSM for p=0.32,0.52,0.72.

    Consider the Kawahara equation of fractional order:

    $D^{p}_{\Omega}\eta(\epsilon,\Omega)=\frac{\partial^{5}\eta(\epsilon,\Omega)}{\partial\epsilon^{5}}-\eta(\epsilon,\Omega)\frac{\partial^{3}\eta(\epsilon,\Omega)}{\partial\epsilon^{3}}-\eta(\epsilon,\Omega)\frac{\partial\eta(\epsilon,\Omega)}{\partial\epsilon},\quad\text{where}\ 0<p\le 1,$ (4.11)

    with the initial condition:

    $\eta(\epsilon,0)=\frac{105}{169}\,\mathrm{sech}^{4}\!\left(\frac{\epsilon-2}{2\sqrt{13}}\right),$ (4.12)

    and exact solution

    $\eta(\epsilon,\Omega)=\frac{105}{169}\,\mathrm{sech}^{4}\!\left(-\frac{36\Omega}{169}+\frac{\epsilon-2}{2\sqrt{13}}\right).$

    Apply AT on both sides of Eq (4.11) to obtain:

    $A[\eta(\epsilon,\Omega)]=\frac{1}{s^{p}}\left(\sum_{k=0}^{m-1}\frac{\eta^{(k)}(\epsilon,0)}{s^{2-p+k}}+A\!\left[\frac{\partial^{5}\eta}{\partial\epsilon^{5}}-\eta\frac{\partial^{3}\eta}{\partial\epsilon^{3}}-\eta\frac{\partial\eta}{\partial\epsilon}\right]\right).$ (4.13)

    Apply the AIT to Eq (4.13) to obtain:

    $\eta(\epsilon,\Omega)=A^{-1}\left[\frac{1}{s^{p}}\left(\sum_{k=0}^{m-1}\frac{\eta^{(k)}(\epsilon,0)}{s^{2-p+k}}+A\!\left[\frac{\partial^{5}\eta}{\partial\epsilon^{5}}-\eta\frac{\partial^{3}\eta}{\partial\epsilon^{3}}-\eta\frac{\partial\eta}{\partial\epsilon}\right]\right)\right].$ (4.14)

    Applying the AT iteratively, we get:

    $\eta_{0}(\epsilon,\Omega)=A^{-1}\left[\frac{1}{s^{p}}\left(\sum_{k=0}^{m-1}\frac{\eta^{(k)}(\epsilon,0)}{s^{2-p+k}}\right)\right]=A^{-1}\left[\frac{\eta(\epsilon,0)}{s^{2}}\right]=\frac{105}{169}\,\mathrm{sech}^{4}\!\left(\frac{\epsilon-2}{2\sqrt{13}}\right).$

    Applying the Riemann-Liouville integral to Eq (4.11),

    $\eta(\epsilon,\Omega)=\frac{105}{169}\,\mathrm{sech}^{4}\!\left(\frac{\epsilon-2}{2\sqrt{13}}\right)+J^{p}_{\Omega}\!\left[\frac{\partial^{5}\eta}{\partial\epsilon^{5}}-\eta\frac{\partial^{3}\eta}{\partial\epsilon^{3}}-\eta\frac{\partial\eta}{\partial\epsilon}\right].$ (4.15)

    Using the ATIM technique, we provide the following terms:

    η0(ϵ,Ω)=105169sech4(ϵ2213),η1(ϵ,Ω)=105297034413Γ(p+1)Ωp(11940cosh(ϵ213)+1911cosh(2(ϵ2)13)104cosh(3(ϵ2)13)2675)tanh(ϵ2213)sech10(ϵ2213),η2(ϵ,Ω)=105Ω2psech18(ϵ2213)620288218300934144((3513π4pΩpΓ(p+12)(13(9385221sinh(1213(ϵ2))+120132725sinh(11(ϵ2)213)910000sinh(15(ϵ2)213)+14144sinh(17(ϵ2)213))+581521261600sinh(ϵ2213)374464577051sinh(3(ϵ2)213)+130226023125sinh(5(ϵ2)213)12004154204sinh(7(ϵ2)213)7059672300sinh(9(ϵ2)213))sech7(ϵ2213))/(p2Γ(p)Γ(3p))+28561Γ(2p+1)(50957301372cosh(ϵ213)+12586770193cosh(2(ϵ2)13)12962735946cosh(3(ϵ2)13)+13(155459002cosh(4(ϵ2)13)+5233798cosh(5(ϵ2)13)707733cosh(6(ϵ2)13)+3328cosh(7(ϵ2)13)4174214202))). (4.16)

    The final solution that is obtained via ATIM is given as:

    $\eta(\epsilon,\Omega)=\eta_{0}(\epsilon,\Omega)+\eta_{1}(\epsilon,\Omega)+\eta_{2}(\epsilon,\Omega)+\cdots.$ (4.17)
    η(ϵ,Ω)=105169sech4(ϵ2213)+105297034413Γ(p+1)Ωp(11940cosh(ϵ213)+1911cosh(2(ϵ2)13)104cosh(3(ϵ2)13)2675)tanh(ϵ2213)sech10(ϵ2213)+105Ω2psech18(ϵ2213)620288218300934144((3513π4pΩpΓ(p+12)(13(9385221sinh(1213(ϵ2))+120132725sinh(11(ϵ2)213)910000sinh(15(ϵ2)213)+14144sinh(17(ϵ2)213))+581521261600sinh(ϵ2213)374464577051sinh(3(ϵ2)213)+130226023125sinh(5(ϵ2)213)12004154204sinh(7(ϵ2)213)7059672300sinh(9(ϵ2)213))sech7(ϵ2213))/(p2Γ(p)Γ(3p))+28561Γ(2p+1)(50957301372cosh(ϵ213)+12586770193cosh(2(ϵ2)13)12962735946cosh(3(ϵ2)13)+13(155459002cosh(4(ϵ2)13)+5233798cosh(5(ϵ2)13)707733cosh(6(ϵ2)13)+3328cosh(7(ϵ2)13)4174214202)))+. (4.18)

    Table 2 compares ATIM solutions for the same set of parameters, with similar trends observed as in ARPSM, demonstrating the robustness of both methods. Figure 4 juxtaposes the ATIM approximate solution (a) with the exact solution (b), verifying the precision of the ATIM method. Figure 5 compares the fractional order solutions using ATIM for (p=0.32,0.52,0.72), and Figure 6 presents a 2D version of this comparison, highlighting the impact of the fractional order on the solution dynamics. Table 3 compares the absolute error for ARPSM and ATIM at Ω=0.1, demonstrating that both methods achieve highly accurate solutions with minimal error.

    Table 2.  ATIM solution comparison for the values of p of Example 1 for Ω=0.1.
    ϵ      ATIM (p=0.52)   ATIM (p=0.72)   ATIM (p=1.00)   Exact      Error (p=1.00)
    1.0    0.597546        0.597850        0.597917        0.597923   6.195481×10^-6
    1.1    0.601942        0.602218        0.602278        0.602284   5.609848×10^-6
    1.2    0.605911        0.606158        0.606212        0.606217   5.012997×10^-6
    1.3    0.609443        0.609661        0.609709        0.609713   4.406507×10^-6
    1.4    0.612528        0.612717        0.612758        0.612762   3.791852×10^-6
    1.5    0.615160        0.615318        0.615353        0.615356   3.170410×10^-6
    1.6    0.617329        0.617458        0.617485        0.617488   2.543461×10^-6
    1.7    0.619032        0.619130        0.619150        0.619152   1.912204×10^-6
    1.8    0.620263        0.620329        0.620343        0.620345   1.277767×10^-6
    1.9    0.621019        0.621054        0.621061        0.621062   6.412226×10^-7
    2.0    0.621298        0.621302        0.621302        0.621302   3.606186×10^-8

    Figure 4.  (a) ATIM approximate solution, (b) exact solution.
    Figure 5.  Fractional order comparison using ATIM for p=0.32,0.52,0.72.
    Figure 6.  Fractional order 2D comparison using ATIM for p=0.32,0.52,0.72.
    Table 3.  The comparison of absolute error of Example 1 for Ω=0.1.
    ϵ      ARPSM (p=1)   ATIM (p=1)   Exact      Error (ARPSM)    Error (ATIM)
    1.0    0.597918      0.597917     0.597923   4.746940×10^-6   6.195481×10^-6
    1.1    0.602280      0.602278     0.602284   4.296239×10^-6   5.609848×10^-6
    1.2    0.606214      0.606212     0.606217   3.837431×10^-6   5.012997×10^-6
    1.3    0.609710      0.609709     0.609713   3.371748×10^-6   4.406507×10^-6
    1.4    0.612759      0.612758     0.612762   2.900316×10^-6   3.791852×10^-6
    1.5    0.615354      0.615353     0.615356   2.424166×10^-6   3.170410×10^-6
    1.6    0.617486      0.617485     0.617488   1.944232×10^-6   2.543461×10^-6
    1.7    0.619151      0.619150     0.619152   1.461368×10^-6   1.912204×10^-6
    1.8    0.620344      0.620343     0.620345   9.763596×10^-7   1.277767×10^-6
    1.9    0.621061      0.621061     0.621062   4.899361×10^-7   6.412226×10^-7
    2.0    0.621302      0.621302     0.621302   2.792130×10^-8   3.606186×10^-8


    Consider the following well-known fifth-order KdV equation:

    $D^{p}_{\Omega}\eta(\epsilon,\Omega)+\frac{\partial^{5}\eta(\epsilon,\Omega)}{\partial\epsilon^{5}}-\eta(\epsilon,\Omega)\frac{\partial^{3}\eta(\epsilon,\Omega)}{\partial\epsilon^{3}}+\eta(\epsilon,\Omega)\frac{\partial\eta(\epsilon,\Omega)}{\partial\epsilon}=0,\quad\text{where}\ 0<p\le 1,$ (4.19)

    with the initial condition:

    $\eta(\epsilon,0)=e^{\epsilon},$ (4.20)

    and exact solution

    $\eta(\epsilon,\Omega)=e^{\epsilon-\Omega}.$

    After applying the AT to Eq (4.19) and using Eq (4.20), we obtain:

    $\eta(\epsilon,s)-\frac{e^{\epsilon}}{s^{2}}+\frac{1}{s^{p}}\left[\frac{\partial^{5}\eta(\epsilon,s)}{\partial\epsilon^{5}}\right]-\frac{1}{s^{p}}A_{\Omega}\!\left[A^{-1}_{\Omega}\eta(\epsilon,s)\times\frac{\partial^{3}A^{-1}_{\Omega}\eta(\epsilon,s)}{\partial\epsilon^{3}}\right]+\frac{1}{s^{p}}A_{\Omega}\!\left[A^{-1}_{\Omega}\eta(\epsilon,s)\times\frac{\partial A^{-1}_{\Omega}\eta(\epsilon,s)}{\partial\epsilon}\right]=0.$ (4.21)

    Therefore, the $k$th-truncated series is:

    $\eta(\epsilon,s)=\frac{e^{\epsilon}}{s^{2}}+\sum_{r=1}^{k}\frac{f_{r}(\epsilon,s)}{s^{rp+2}},\quad r=1,2,3,4.$ (4.22)

    The RAF is:

    $A_{\Omega}\mathrm{Res}(\epsilon,s)=\eta(\epsilon,s)-\frac{e^{\epsilon}}{s^{2}}+\frac{1}{s^{p}}\left[\frac{\partial^{5}\eta(\epsilon,s)}{\partial\epsilon^{5}}\right]-\frac{1}{s^{p}}A_{\Omega}\!\left[A^{-1}_{\Omega}\eta(\epsilon,s)\times\frac{\partial^{3}A^{-1}_{\Omega}\eta(\epsilon,s)}{\partial\epsilon^{3}}\right]+\frac{1}{s^{p}}A_{\Omega}\!\left[A^{-1}_{\Omega}\eta(\epsilon,s)\times\frac{\partial A^{-1}_{\Omega}\eta(\epsilon,s)}{\partial\epsilon}\right]=0,$ (4.23)

    and the $k$th-truncated RAF is:

    $A_{\Omega}\mathrm{Res}_{k}(\epsilon,s)=\eta_{k}(\epsilon,s)-\frac{e^{\epsilon}}{s^{2}}+\frac{1}{s^{p}}\left[\frac{\partial^{5}\eta_{k}(\epsilon,s)}{\partial\epsilon^{5}}\right]-\frac{1}{s^{p}}A_{\Omega}\!\left[A^{-1}_{\Omega}\eta_{k}(\epsilon,s)\times\frac{\partial^{3}A^{-1}_{\Omega}\eta_{k}(\epsilon,s)}{\partial\epsilon^{3}}\right]+\frac{1}{s^{p}}A_{\Omega}\!\left[A^{-1}_{\Omega}\eta_{k}(\epsilon,s)\times\frac{\partial A^{-1}_{\Omega}\eta_{k}(\epsilon,s)}{\partial\epsilon}\right]=0.$ (4.24)

    It takes some calculation to find $f_{r}(\epsilon,s)$ for $r=1,2,3,\ldots$. Following the procedure of Section 3, we substitute the $r$th-truncated series, Eq (4.22), into the $r$th-truncated RAF, Eq (4.24), multiply by $s^{rp+2}$, apply $\lim_{s\to\infty}$, and solve $\lim_{s\to\infty}\left(s^{rp+2}A_{\Omega}\mathrm{Res}_{r}(\epsilon,s)\right)=0$ for $r=1,2,3,\ldots$.

    $f_{1}(\epsilon,s)=-e^{\epsilon},$ (4.25)
    $f_{2}(\epsilon,s)=e^{\epsilon},$ (4.26)
    $f_{3}(\epsilon,s)=-e^{\epsilon},$ (4.27)

    and so on.

    For $r=1,2,3,\ldots$, substitute $f_{r}(\epsilon,s)$ into Eq (4.22):

    $\eta(\epsilon,s)=\frac{e^{\epsilon}}{s^{2}}-\frac{e^{\epsilon}}{s^{p+2}}+\frac{e^{\epsilon}}{s^{2p+2}}-\frac{e^{\epsilon}}{s^{3p+2}}+\cdots.$ (4.28)

    Apply the AIT to obtain:

    $\eta(\epsilon,\Omega)=e^{\epsilon}-\frac{e^{\epsilon}\Omega^{p}}{\Gamma(p+1)}+\frac{e^{\epsilon}\Omega^{2p}}{\Gamma(2p+1)}-\frac{e^{\epsilon}\Omega^{3p}}{\Gamma(3p+1)}+\cdots.$ (4.29)
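    As a quick sanity check of the closed-form series (4.29), the snippet below evaluates its truncation for $p=1$, where the series is simply the Taylor expansion of the exact solution $e^{\epsilon-\Omega}$; the evaluation point is an arbitrary illustrative choice and not a value reported in the paper.

```python
import math

def arpsm_kdv5(eps, t, p=1.0, terms=6):
    """Truncated series e^eps * sum_r (-1)^r t^(r p) / Gamma(r p + 1) from Eq (4.29)."""
    s = sum((-1)**r * t**(r * p) / math.gamma(r * p + 1) for r in range(terms))
    return math.exp(eps) * s

eps, t = 1.0, 0.1
exact = math.exp(eps - t)                      # exact solution for p = 1
for n in (2, 4, 6):
    approx = arpsm_kdv5(eps, t, p=1.0, terms=n)
    print(n, approx, abs(approx - exact))      # error shrinks as more terms are added
```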

    Figure 7 explores the fractional order comparison using ARPSM for an extended range of p values (p=0.33,0.55,0.77,1.00), providing a more comprehensive analysis of how different orders affect the solution. Figure 8 offers 2D and 3D graphs for ARPSM solutions, further highlighting the changes in solution behavior as the fractional order varies.

    Figure 7.  Fractional order comparison using ARPSM for p=0.33,0.55,0.77,1.00.
    Figure 8.  2D and 3D graphs for comparing ARPSM solution for p=0.33,0.55,0.77,1.00.

    Consider again the fifth-order KdV equation:

    $D^{p}_{\Omega}\eta(\epsilon,\Omega)=-\frac{\partial^{5}\eta(\epsilon,\Omega)}{\partial\epsilon^{5}}+\eta(\epsilon,\Omega)\frac{\partial^{3}\eta(\epsilon,\Omega)}{\partial\epsilon^{3}}-\eta(\epsilon,\Omega)\frac{\partial\eta(\epsilon,\Omega)}{\partial\epsilon},\quad\text{where}\ 0<p\le 1,$ (4.30)

    with the initial condition:

    $\eta(\epsilon,0)=e^{\epsilon},$ (4.31)

    and exact solution

    $\eta(\epsilon,\Omega)=e^{\epsilon-\Omega}.$

    Apply the AT to both sides of Eq (4.30) to obtain:

    $A[\eta(\epsilon,\Omega)]=\frac{1}{s^{p}}\left(\sum_{k=0}^{m-1}\frac{\eta^{(k)}(\epsilon,0)}{s^{2-p+k}}+A\!\left[-\frac{\partial^{5}\eta}{\partial\epsilon^{5}}+\eta\frac{\partial^{3}\eta}{\partial\epsilon^{3}}-\eta\frac{\partial\eta}{\partial\epsilon}\right]\right).$ (4.32)

    Apply the AIT to both sides of Eq (4.32) to obtain:

    $\eta(\epsilon,\Omega)=A^{-1}\left[\frac{1}{s^{p}}\left(\sum_{k=0}^{m-1}\frac{\eta^{(k)}(\epsilon,0)}{s^{2-p+k}}+A\!\left[-\frac{\partial^{5}\eta}{\partial\epsilon^{5}}+\eta\frac{\partial^{3}\eta}{\partial\epsilon^{3}}-\eta\frac{\partial\eta}{\partial\epsilon}\right]\right)\right].$ (4.33)

    Iteratively applying the AT, we obtain:

    $\eta_{0}(\epsilon,\Omega)=A^{-1}\left[\frac{1}{s^{p}}\left(\sum_{k=0}^{m-1}\frac{\eta^{(k)}(\epsilon,0)}{s^{2-p+k}}\right)\right]=A^{-1}\left[\frac{\eta(\epsilon,0)}{s^{2}}\right]=e^{\epsilon}.$

    Applying the Riemann-Liouville integral to Eq (4.30),

    $\eta(\epsilon,\Omega)=e^{\epsilon}+J^{p}_{\Omega}\!\left[-\frac{\partial^{5}\eta}{\partial\epsilon^{5}}+\eta\frac{\partial^{3}\eta}{\partial\epsilon^{3}}-\eta\frac{\partial\eta}{\partial\epsilon}\right].$ (4.34)

    The use of the ATIM technique provides the following terms:

    $\eta_{0}(\epsilon,\Omega)=e^{\epsilon},\quad\eta_{1}(\epsilon,\Omega)=-\frac{e^{\epsilon}\Omega^{p}}{\Gamma(p+1)},\quad\eta_{2}(\epsilon,\Omega)=\frac{e^{\epsilon}\Omega^{2p}}{\Gamma(2p+1)},\quad\eta_{3}(\epsilon,\Omega)=-\frac{e^{\epsilon}\Omega^{3p}}{\Gamma(3p+1)}.$ (4.35)

    The final solution obtained via ATIM is given as:

    $\eta(\epsilon,\Omega)=\eta_{0}(\epsilon,\Omega)+\eta_{1}(\epsilon,\Omega)+\eta_{2}(\epsilon,\Omega)+\eta_{3}(\epsilon,\Omega)+\cdots,$ (4.36)
    $\eta(\epsilon,\Omega)=e^{\epsilon}\left(1-\frac{\Omega^{p}}{\Gamma(p+1)}+\frac{\Omega^{2p}}{\Gamma(2p+1)}-\frac{\Omega^{3p}}{\Gamma(3p+1)}+\cdots\right).$ (4.37)

    Table 4 analyzes the effect of various fractional orders for ARPSM and ATIM, for Example 2, indicating the consistency and accuracy of both methods across different fractional orders. Figures 9 and 10 continue the analysis for ATIM, comparing fractional order solutions and offering 3D and 2D views further to elucidate the complex behavior of fractional wave systems as modeled by the Kawahara and KdV equations. These figures and tables collectively emphasize the efficacy of ARPSM and ATIM in providing accurate and insightful solutions for fractional nonlinear PDEs, especially in the context of nonlinear wave phenomena in applied mathematics and physics. The graphical representations and error comparisons showcase the reliability and precision of these methods in solving complex fractional models.

    Table 4.  Analysis of various fractional order of ARPSM and ATIM of Example 2 for Ω=0.1.
    ϵ      p=0.55    p=0.77    p=1.00    Exact     Error (p=1.00)
    1.0    2.49168   2.63507   2.69123   2.69123   4.473861×10^-7
    1.1    2.75373   2.91220   2.97427   2.97427   4.944381×10^-7
    1.2    3.04335   3.21848   3.28708   3.28708   5.464386×10^-7
    1.3    3.36342   3.55697   3.63279   3.63279   6.039081×10^-7
    1.4    3.71715   3.93106   4.01485   4.01485   6.674217×10^-7
    1.5    4.10809   4.34449   4.43710   4.43710   7.376150×10^-7
    1.6    4.54014   4.80141   4.90375   4.90375   8.151907×10^-7
    1.7    5.01763   5.30638   5.41948   5.41948   9.009250×10^-7
    1.8    5.54534   5.86445   5.98945   5.98945   9.956761×10^-7
    1.9    6.12855   6.48122   6.61937   6.61937   1.100392×10^-6
    2.0    6.77309   7.16286   7.31553   7.31553   1.216121×10^-6

    Figure 9.  Fractional order comparison using ATIM for p=0.33,0.55,0.77,1.00.
    Figure 10.  Fractional order 3D and 2D comparison using ATIM for p=0.33,0.55,0.77,1.00.

    The study utilizes advanced analytical methods, precisely the ARPSM and the ATIM, to investigate the fractional Kawahara and fifth-order KdV equations. The discussion of figures and tables highlights the effectiveness of these methods in providing accurate approximate solutions, comparing their results with exact solutions, and examining the effects of fractional orders on the solutions.

    In conclusion, our analytical investigation into the fractional Kawahara equation and fifth-order KdV equations employing the ARPSM and ATIM has yielded significant insights and advancements in understanding nonlinear wave phenomena. Through rigorous analysis and computational simulations, we have demonstrated the effectiveness of these advanced analytical techniques in providing accurate and insightful solutions to these complex equations governed by fractional calculus under the Caputo operator framework. Our findings contribute to the theoretical understanding of nonlinear wave dynamics and offer practical analytical tools for addressing complex mathematical models in various scientific and engineering domains. Further research in this direction holds promise for exploring additional applications of the Aboodh methods and advancing our understanding of nonlinear wave phenomena in diverse real-world contexts. Future research can extend the ARPSM and ATIM methods to more complex nonlinear fractional PDEs, including those with higher-order fractional operators. Exploring their application to multidimensional systems could provide deeper insights into wave propagation in fields like quantum field theory. Investigating computational efficiency and convergence across different fractional orders may optimize these techniques for broader use. Applying these methods to real-world engineering problems could further validate their utility in practical settings.

    Conceptualization, M.Y.A.; data curation, H.A.; formal analysis, M.Y.A.; resources, H.A.; investigation, M.Y.A.; project administration, M.Y.A. and H.A.; validation, M.Y.A. and H.A.; software, H.A.; visualization, M.Y.A.; writing-review & editing, H.A.; funding, M.Y.A. All authors have read and agreed to the published version of the manuscript.

    The authors gratefully acknowledge the funding of the Deanship of Graduate Studies and Scientific Research, Jazan University, Saudi Arabia, through project number: RG24-L02.

    The authors declare that they have no conflicts of interest.



    [1] K. D. Doan, P. Yang, P. Li, One loss for quantization: Deep hashing with discrete wasserstein distributional matching, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2022), 9447–9457.
    [2] J. M. Guo, A. W. H. Prayuda, H. Prasetyo, S. Seshathiri, Deep learning based image retrieval with unsupervised double bit hashing, IEEE Trans. Circ. Syst. Vid. Technol., (2023), 1–15. https://doi.org/10.1109/TCSVT.2023.3268091 doi: 10.1109/TCSVT.2023.3268091
    [3] S. Li, X. Li, J. Lu, J. Zhou, Self-supervised video hashing via bidirectional transformers, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2021), 13549–13558.
    [4] L. Wang, Y. Pan, C. Liu, H. Lai, J. Yin, Y. Liu, Deep hashing with minimal-distance-separated hash centers, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2023), 23455–23464.
    [5] V. Gaede, O. Günther, Multidimensional access methods, ACM Comput. Surv., 30 (1998), 170–231. https://doi.org/10.1145/280277.280279 doi: 10.1145/280277.280279
    [6] J. H. Friedman, J. L. Bentley, R. A. Finkel, An algorithm for finding best matches in logarithmic expected time, ACM Trans. Math. Softw., 1977. https://doi.org/10.1145/355744.355745 doi: 10.1145/355744.355745
    [7] A. Gionis, P. Indyk, R. Motwani, Similarity search in high dimensions via hashing, in International Conference on Very Large Data Bases, (1999), 518–529.
    [8] Y. Weiss, A. Torralba, R. Fergus, Spectral hashing, in Advances in Neural Information Processing Systems, (2008), 1753–1760.
    [9] A.Z. Broder, On the resemblance and containment of documents, in Compression and Complexity of Sequences, (1997), 21–29.
    [10] M. S. Charikar, Similarity estimation techniques from rounding algorithms, Appl. Comput. Harmon. Anal., (2002), 380–388.
    [11] M. Raginsky, S. Lazebnik, Locality-sensitive binary codes from shift-invariant kernels, in Advances in Neural Information Processing Systems, (2009), 1509–1517.
    [12] A. Torralba, R. Fergus, Y. Weiss, Small codes and large image databases for recognition, in IEEE Conference on Computer Vision and Pattern Recognition, (2008), 1–8. https://doi.org/10.1109/CVPR.2008.4587633
    [13] Y. Weiss, R. Fergus, A. Torralba, Multidimensional spectral hashing, in European Conference on Computer Vision, Springer, (2012), 340–353. https://doi.org/10.1007/978-3-642-33715-4_25
    [14] W. Liu, J. Wang, S. Kumar, S. F. Chang, Hashing with graphs, in Proceedings of the 28th International Conference on Machine Learning, (2011).
    [15] Q. Y. Jiang, W. J. Li, Scalable graph hashing with feature transformation, in International Joint Conference on Artificial Intelligence, (2015), 2248–2254.
    [16] L. Chen, D. Xu, I. W. H. Tsang, X. Li, Spectral embedded hashing for scalable image retrieval. IEEE Trans. Cybern., 44 (2014), 1180–1190. https://doi.org/10.1109/TCYB.2013.2281366 doi: 10.1109/TCYB.2013.2281366
    [17] Y. Gong, S. Lazebnik, A. Gordo, F. Perronnin, Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval, in IEEE Transactions on Pattern Analysis and Machine Intelligence, 35 (2013), 2916–2929. https://doi.org/10.1109/TPAMI.2012.193
    [18] H. Jegou, M. Douze, C. Schmid, Product quantization for nearest neighbor search, Trans. Pattern Anal. Mach. Intell., 33 (2011), 117–128. https://doi.org/10.1109/TPAMI.2010.57 doi: 10.1109/TPAMI.2010.57
    [19] M. Norouzi, D. J. Fleet, Cartesian k-means, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2013), 3017–3024.
    [20] T. Zhang, C. Du, J. Wang, Composite quantization for approximate nearest neighbor search, in Proceedings of the 31st International Conference on Machine Learning, 32 (2014), 838–846.
    [21] G. Shakhnarovich, Learning Task-Specific Similarity, PhD thesis, Massachusetts Institute of Technology, 2005.
    [22] G. E. Hinton, R. R. Salakhutdinov, Reducing the dimensionality of data with neural networks, Science, 313 (2006), 504–507. https://doi.org/10.1126/science.1127647 doi: 10.1126/science.1127647
    [23] B. Kulis, T. Darrell, Learning to hash with binary reconstructive embeddings, in Advances in Neural Information Processing Systems, (2009), 22.
    [24] J. Wang, S. Kumar, S. F. Chang, Semi-supervised hashing for large-scale search, IEEE Trans. Pattern Anal. Mach. Intell., 34 (2012), 2393–2406. https://doi.org/10.1109/TPAMI.2012.48 doi: 10.1109/TPAMI.2012.48
    [25] M. M. Bronstein, A. M. Bronstein, F. Michel, N. Paragios, Data fusion through cross-modality metric learning using similarity-sensitive hashing, in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (2010), 3594–3601. https://doi.org/10.1109/CVPR.2010.5539928
    [26] S. Kumar, R. Udupa, Learning hash functions for cross-view similarity search, in Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, (2011), 1360–1365.
    [27] Y. Zhen, D. Y. Yeung, Co-regularized hashing for multimodal data, in Advances in Neural Information Processing Systems, (2012), 25.
    [28] D. Zhang, F. Wang, L. Si, Composite hashing with multiple information sources, in Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, (2011), 225–234. https://doi.org/10.1145/2009916.2009950
    [29] S. Kim, S. Choi, Multi-view anchor graph hashing, in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, (2013), 3123–3127. https://doi.org/10.1109/ICASSP.2013.6638233
    [30] S. Kim, Y. Kang, S. Choi, Sequential spectral learning to hash with multiple representations, in European Conference on Computer Vision, Springer, (2012), 538–551. https://doi.org/10.1007/978-3-642-33715-4_39
    [31] Q. Wang, L. Si, B. Shen, Learning to hash on partial multi-modal data, in Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, (2015), 3904–3910.
    [32] L. Liu, M. Yu, L. Shao, Multiview alignment hashing for efficient image search, IEEE Trans. Image Process., 24 (2015), 956–966. https://doi.org/10.1109/TIP.2015.2390975 doi: 10.1109/TIP.2015.2390975
    [33] X. Liu, J. He, B. Lang, Multiple feature kernel hashing for large-scale visual search, Pattern Recogn., 47 (2014), 748–757. https://doi.org/10.1016/j.patcog.2013.08.022 doi: 10.1016/j.patcog.2013.08.022
    [34] X. Cai, F. Nie, H. Huang, Multi-view k-means clustering on big data, in Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, 2013.
    [35] J. Song, Y. Yang, X. Li, Z. Huang, Y. Yang, Robust hashing with local models for approximate similarity search, IEEE Trans. Cybern., 44 (2014), 1225–1236. https://doi.org/10.1109/TCYB.2013.2289351 doi: 10.1109/TCYB.2013.2289351
    [36] F. Nie, H. Huang, X. Cai, C. Ding, Efficient and robust feature selection via joint ℓ2,1-norms minimization, in Advances in Neural Information Processing Systems, (2010), 1813–1821.
    [37] Z. Ma, F. Nie, Y. Yang, J. R. R. Uijlings, N. Sebe, Web image annotation via subspace-sparsity collaborated feature selection, IEEE Trans. Multimedia, 14 (2012), 1021–1030. https://doi.org/10.1109/TMM.2012.2187179 doi: 10.1109/TMM.2012.2187179
    [38] X. Ren, L. Bo, D. Fox, RGB-(D) scene labeling: Features and algorithms, in 2012 IEEE Conference on Computer Vision and Pattern Recognition, (2012), 2759–2766. https://doi.org/10.1109/CVPR.2012.6247999
    [39] K. Lai, L. Bo, X. Ren, D. Fox, A large-scale hierarchical multi-view RGB-D object dataset, in 2011 IEEE International Conference on Robotics and Automation, (2011), 1817–1824. https://doi.org/10.1109/ICRA.2011.5980382
    [40] P. H. Schönemann, A generalized solution of the orthogonal procrustes problem, Psychometrika, 31 (1966), 1–10. https://doi.org/10.1007/BF02289451 doi: 10.1007/BF02289451
    [41] G. Ding, Y. Guo, J. Zhou, Collective matrix factorization hashing for multimodal data, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2014), 2083–2090.
    [42] A. Torralba, R. Fergus, W. T. Freeman, 80 million tiny images: A large data set for nonparametric object and scene recognition, IEEE Trans. Pattern Anal. Mach. Intell., 30 (2008), 1958–1970. https://doi.org/10.1109/TPAMI.2008.128 doi: 10.1109/TPAMI.2008.128
    [43] A. Fitzgibbon, S. Lazebnik, P. Perona, Y. Sato, C. Schmid, Computer Vision–ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, October 7–13, 2012, Proceedings, Part Ⅰ, Springer, Heidelberg, 2012.
    [44] A. Oliva, A. Torralba, Modeling the shape of the scene: A holistic representation of the spatial envelope, Int. J. Comput. Vision, 42 (2001), 145–175. https://doi.org/10.1023/A:1011139631724 doi: 10.1023/A:1011139631724
    [45] N. Dalal, B. Triggs, Histograms of oriented gradients for human detection, in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), (2005), 886–893. https://doi.org/10.1109/CVPR.2005.177
    [46] T. Ojala, M. Pietikainen, T. Maenpaa, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Trans. Pattern Anal. Mach. Intell., 24 (2002), 971–987. https://doi.org/10.1109/TPAMI.2002.1017623 doi: 10.1109/TPAMI.2002.1017623
    [47] A. Vedaldi, B. Fulkerson, VLFeat: An open and portable library of computer vision algorithms, in Proceedings of the 18th ACM international conference on Multimedia, (2010), 1469–1472. https://doi.org/10.1145/1873951.1874249
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)