Research article

Ternary compound ontology matching for cognitive green computing

  • Cognitive green computing (CGC) is dedicated to studying the design, manufacture, use and disposal of computers, servers and associated subsystems with minimal environmental damage. Such solutions should provide efficient mechanisms for maximizing the efficiency with which computing resources are used. The evolutionary algorithm (EA) is a well-known global search algorithm that has been successfully used to solve various complex optimization problems. However, a run of a population-based EA often requires a large amount of memory, which limits its application on memory-limited hardware. To overcome this drawback, in this work we propose a compact EA (CEA) for the sake of CGC, whose compact encoding and evolving mechanism significantly reduce memory consumption. We then use it to address the ternary compound ontology matching problem. Six testing cases consisting of nine ontologies are used to test CEA's performance, and the experimental results show its effectiveness.

    Citation: Wei-Min Zheng, Qing-Wei Chai, Jie Zhang, Xingsi Xue. Ternary compound ontology matching for cognitive green computing[J]. Mathematical Biosciences and Engineering, 2021, 18(4): 4860-4870. doi: 10.3934/mbe.2021247

    Related Papers:

    [1] Li-Bin Liu, Yige Liao, Guangqing Long . Error estimate of BDF2 scheme on a Bakhvalov-type mesh for a singularly perturbed Volterra integro-differential equation. Networks and Heterogeneous Media, 2023, 18(2): 547-561. doi: 10.3934/nhm.2023023
    [2] Xiongfa Mai, Ciwen Zhu, Libin Liu . An adaptive grid method for a singularly perturbed convection-diffusion equation with a discontinuous convection coefficient. Networks and Heterogeneous Media, 2023, 18(4): 1528-1538. doi: 10.3934/nhm.2023067
    [3] Li-Bin Liu, Limin Ye, Xiaobing Bao, Yong Zhang . A second order numerical method for a Volterra integro-differential equation with a weakly singular kernel. Networks and Heterogeneous Media, 2024, 19(2): 740-752. doi: 10.3934/nhm.2024033
    [4] Dilip Sarkar, Shridhar Kumar, Pratibhamoy Das, Higinio Ramos . Higher-order convergence analysis for interior and boundary layers in a semi-linear reaction-diffusion system networked by a k-star graph with non-smooth source terms. Networks and Heterogeneous Media, 2024, 19(3): 1085-1115. doi: 10.3934/nhm.2024048
    [5] Chaoqun Huang, Nung Kwan Yip . Singular perturbation and bifurcation of diffuse transition layers in inhomogeneous media, part II. Networks and Heterogeneous Media, 2015, 10(4): 897-948. doi: 10.3934/nhm.2015.10.897
    [6] Yongqiang Zhao, Yanbin Tang . Approximation of solutions to integro-differential time fractional wave equations in L^p space. Networks and Heterogeneous Media, 2023, 18(3): 1024-1058. doi: 10.3934/nhm.2023045
    [7] Chaoqun Huang, Nung Kwan Yip . Singular perturbation and bifurcation of diffuse transition layers in inhomogeneous media, part I. Networks and Heterogeneous Media, 2013, 8(4): 1009-1034. doi: 10.3934/nhm.2013.8.1009
    [8] Gianni Dal Maso, Francesco Solombrino . Quasistatic evolution for Cam-Clay plasticity: The spatially homogeneous case. Networks and Heterogeneous Media, 2010, 5(1): 97-132. doi: 10.3934/nhm.2010.5.97
    [9] Ciro D’Apice, Umberto De Maio, T. A. Mel'nyk . Asymptotic analysis of a perturbed parabolic problem in a thick junction of type 3:2:2. Networks and Heterogeneous Media, 2007, 2(2): 255-277. doi: 10.3934/nhm.2007.2.255
    [10] Rémi Goudey . A periodic homogenization problem with defects rare at infinity. Networks and Heterogeneous Media, 2022, 17(4): 547-592. doi: 10.3934/nhm.2022014



    Consider the following singularly perturbed Fredholm integro-differential equation (SPFIDE) on the interval \bar{I} = \left[0, T\right] :

    \begin{equation} \left\{ \begin{aligned} &Lu\left(t\right): = \varepsilon u'\left(t\right)+f\left(t, u\left(t\right)\right)+\lambda \int_{0}^{T}K\left(t, s, u\left(s\right)\right)ds = 0, \ t\in I = \left(0, T\right], \\ &u\left(0\right) = A, \end{aligned} \right. \end{equation} (1.1)

    where 0 < \varepsilon \ll 1 is a perturbation parameter, A is a given constant and \lambda is a real parameter. We assume that f\left(t, u\right)\in C^1\left(\bar{I}\times R\right) , K\left(t, s, u\right)\in C^1\left(\bar{I}\times\bar{I}\times R\right) and there exist constants \alpha , \beta such that 0 < \alpha \le \left | \partial f/\partial u \right | and \left | \partial K/\partial u \right |\le \beta . Under these assumptions, problem (1.1) has a unique solution (see [1]).

    It is well known that SPFIDEs arise widely in scientific fields such as mathematical biology [2], material mechanics [3], hydrodynamics [4] and so on. These problems depend on a small positive parameter \varepsilon in such a way that the solution varies rapidly as \varepsilon \to 0 . Due to the presence of this perturbation parameter \varepsilon , classical numerical methods on a uniform mesh fail to give accurate results. Therefore, it is necessary to develop suitable numerical methods that are \varepsilon -uniformly convergent for solving these problems.

    Over the past few years, there has been a growing interest in numerical methods for Volterra integro-differential equations (see, e.g., [5,6,7,8,9,10]) and Fredholm integro-differential equations (see, e.g., [11,12]). When the differential term of these integro-differential equations contains a small positive perturbation parameter \varepsilon , these problems are called singularly perturbed integro-differential equations. Recently, some robust numerical methods have been proposed to solve singularly perturbed Volterra integro-differential equations [13,14,15,16]. Meanwhile, the authors in [17,18] developed fitted finite difference schemes on a uniform mesh for second-order SPFIDEs and gave some convergence results based on a priori information about the exact solution. Durmaz et al. [19] proposed a second-order uniformly convergent finite difference scheme on a Shishkin mesh for a singularly perturbed Fredholm integro-differential equation with the reduced Fredholm equation of the second kind. In [20], the authors presented a fitted finite difference approach on a Shishkin mesh for a first-order singularly perturbed Fredholm integro-differential initial value problem with an integral condition. Kumar et al. [21] proposed a non-standard finite difference scheme with Haar wavelet basis functions for a singularly perturbed partial integro-differential equation. Recently, Cakir et al. [22] solved first-order nonlinear SPFIDEs on a Shishkin mesh with a first-order convergence rate.

    From the literature mentioned above, the existing numerical methods for SPFIDEs given in [19,20,21] are layer-adapted mesh approaches, which require a priori information about the location and width of the boundary layer. In contrast, adaptive grid methods based on equidistributing a monitor function are widely used to solve singularly perturbed problems; see [25,26,27,28,29] for example. The advantage of these adaptive grid methods is that the grid points cluster automatically within the boundary layer. To the best of our knowledge, there is no report on such an adaptive grid method for problem (1.1). Therefore, the aim of this paper is to solve problem (1.1) numerically by a finite difference scheme on an adaptive grid obtained by equidistributing a positive monitor function. The discrete scheme is constructed by using the backward Euler formula and the right rectangle formula to approximate the derivative term and the nonlinear integral term, respectively. It is proved, under some additional conditions, that the proposed adaptive grid method is first-order uniformly convergent with respect to \varepsilon .

    The rest of this paper is organized as follows. In Section 2, preliminary results on the exact solution are laid out. A discretization scheme is established in Section 3, in which an a priori error analysis and an a posteriori error estimate are carried out successively. Numerical results obtained by the adaptive grid algorithm are given in Section 4 to support the theoretical analysis. The paper ends with a summary of the main conclusions in Section 5.

    Notation. Throughout this paper, C , which is not necessarily the same at each occurrence, denotes a positive constant independent of the mesh parameter N and the singular perturbation parameter \varepsilon . To simplify the notation, we set v_k = v\left(t_k\right) for any function v\left(t\right) . In our estimates, the maximum norm of a continuous function v\left(t\right) on the domain \left[0, T\right] is defined as \left \| v \right \|_{\infty} = \underset{t\in\left[0, T\right]}{\mathrm{ess\:sup}}\left | v\left(t\right) \right | , and the maximum norm of a discrete vector \mathit{\boldsymbol{x}} = \left \{ x_i \right \}_{i = 0}^{N} with N+1 elements is defined as \left \| \mathit{\boldsymbol{x}} \right \|_{\infty} = \underset{i = 0, 1, 2, \cdots, N}{\max }\left | x_{i} \right | .

    In this section, we list the bounds for the exact solution u\left(t\right) and its first-order derivative.

    Lemma 2.1. [22, Lemma 1] Assume the constant \lambda satisfies

    \begin{equation} \left | \lambda \right | < \frac{\alpha }{\underset{0\leq t\leq T}{\max}\int_{0}^{T}\left | G\left ( t, s \right ) \right |ds}. \end{equation} (2.1)

    Then we have

    \begin{eqnarray} &&\left \| u \right \| _{\infty } \le C_{0}, \end{eqnarray} (2.2)
    \begin{eqnarray} && \left | u'\left(t\right) \right |\le C\left ( 1+\frac{1}{\varepsilon}e^{-\frac{\alpha t}{\varepsilon}} \right ) , \ 0\le t\le T, \end{eqnarray} (2.3)

    where

    \begin{eqnarray*} &&C_{0} = \frac{\left | A \right |+\frac{1}{\alpha}\left \|q\right \|_{\infty}}{1-\frac{1}{\alpha}\left | \lambda \right |\underset{0\leq t\leq T}{\max}\int_{0}^{T}\left | G\left ( t, s \right ) \right |ds}, \\ &&G\left(t, s\right) = \frac{\partial }{\partial u} K\left(t, s, \gamma u\right), \ 0 < \gamma < 1, \\ &&q\left(t\right) = -f\left(t, 0\right)-\gamma \int_{0}^{T} K\left(t, s, 0\right)ds. \end{eqnarray*}

    Corollary 2.1. For any two functions v\left(t\right) and w\left(t\right) satisfying

    \begin{equation} v\left(0\right) = w\left(0\right) = A, \ \end{equation} (2.4)

    and

    \begin{equation} Lv\left(t\right)-Lw\left(t\right) = \tilde{F}\left(t\right), \ t\in I, \ \end{equation} (2.5)

    where \tilde{F}(t) is a bounded piece-wise continuous function, we have

    \begin{equation} \left \| v\left(t\right)-w\left(t\right) \right \|_{\infty}\le C\left \| Lv\left(t\right)-Lw\left(t\right) \right \|_{\infty}.\ \end{equation} (2.6)

    Proof. The proof is similar to [15, Corollary 2.1].

    Let \bar{\Omega } _{N}: = \left\{0 = t_{0} < t_{1} < \dots < t_{N} = T \right\} be an arbitrary non-uniform mesh and h_{i} = t_{i}-t_{i-1} be the local mesh size for i = 1, 2, \cdots, N . For a given mesh function \left \{ v_{i} \right \} _{i = 0}^{N} , define the backward finite difference operator as follows:

    \begin{equation} D^-v_i = \frac{v_i-v_{i-1}}{h_i}, \ i = 1, 2, \cdots, N. \end{equation} (3.1)

    Then, to construct the discretization scheme for problem (1.1), we integrate Eq (1.1) over \left(t_{i-1}, t_i \right) and use the right rectangle rule to approximate the integral part, which yields

    \begin{equation} \left\{ \begin{aligned} &\varepsilon D^{-}u_{i}+f\left(t_i, u_i\right)+\lambda \sum\limits_{j = 1}^{N}h_jK\left(t_i, t_j, u_j\right)+R_i = 0, \ i = 1, 2, \cdots, N, \\ &u_0 = A, \end{aligned} \right. \end{equation} (3.2)

    where

    \begin{equation} R_i: = R_i^{\left(1\right)}+R_i^{\left(2\right)}+R_i^{\left(3\right)} \end{equation} (3.3)

    and

    \begin{eqnarray*} && R_i^{\left(1\right)} = -h_i^{-1}\int_{t_{i-1}}^{t_i}(t-t_{i-1})\frac{d }{d t} f\left(t, u\left(t\right)\right)dt, \\ &&R_i^{\left(2\right)} = -\lambda h_i^{-1}\int_{t_{i-1}}^{t_i}\left(t-t_{i-1}\right)\int_0^T\frac{\partial }{\partial t} K\left(t, s, u\left(s\right)\right)dsdt, \\ && R_i^{\left(3\right)} = -\lambda \sum\limits_{j = 1}^{N}\int_{t_{j-1}}^{t_j}\left(s-t_{j-1}\right)\frac{d }{d s} K\left(t_i, s, u\left(s\right)\right)ds.\ \end{eqnarray*}

    Neglecting the truncation error R_i in Eq (3.2), we obtain the discretization scheme of problem (1.1)

    \begin{equation} \left\{ \begin{aligned}&L^Nu_i^N: = \varepsilon D^-u_i^N+f\left(t_i, u_i^N\right)+\lambda\sum\limits_{j = 1}^{N}h_jK\left(t_i, t_j, u_j^N\right) = 0, \ i = 1, 2, \cdots, N, \\ &u_0^N = A, \end{aligned}\right. \end{equation} (3.4)

    where u_i^N is the approximation of u(t) at point t = t_i .
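
    To make the construction concrete, the following is a minimal Python/NumPy sketch of how the discrete operator L^N in Eq (3.4) can be evaluated on an arbitrary mesh (the experiments in Section 4 were carried out in MATLAB; the names f_fun and K_fun are placeholders for the data of problem (1.1), and K_fun is assumed to accept array arguments in its second and third slots):

        import numpy as np

        def discrete_operator(t, u, eps, lam, f_fun, K_fun):
            # t, u: arrays of length N+1 with t[0] = 0 and u[0] = A
            # f_fun(t, u), K_fun(t, s, u): user-supplied placeholders for the data of (1.1)
            h = np.diff(t)                                    # h_i = t_i - t_{i-1}
            r = np.empty(len(t) - 1)
            for i in range(1, len(t)):
                Du = (u[i] - u[i - 1]) / h[i - 1]             # backward difference D^- u_i, Eq (3.1)
                quad = np.sum(h * K_fun(t[i], t[1:], u[1:]))  # right rectangle rule over [0, T]
                r[i - 1] = eps * Du + f_fun(t[i], u[i]) + lam * quad
            return r

    A mesh function \left \{ u_i^N \right \}_{i = 0}^{N} satisfies the scheme (3.4) exactly when this vector vanishes, which gives a convenient consistency check for any nonlinear solver used in Section 4.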

    Let e_i^N: = u_i^N-u_i , i = 0, 1, \cdots, N , be the absolute error at t_i of the numerical solution. Then we can obtain the following error equations

    \begin{equation} \left\{ \begin{aligned} &L^Nu_i^N-L^Nu_i = R_i, \ i = 1, 2, \cdots, N, \\ &e_0^N = 0, \end{aligned}\right. \end{equation} (3.5)

    where R_i is the local truncation error defined in Eq (3.3) at t_i .

    Lemma 3.1. For i = 1, 2, \cdots, N , the truncation error R_i defined in Eq (3.3) satisfies

    \begin{equation} \left | R_i \right | \le C\max\limits_{1\le i\le N} \int_{t_{i-1}}^{t_i}\left(1+\left | {u}'\left(t\right) \right |\right)dt. \ \end{equation} (3.6)

    Proof. At first, based on the conditions f\left(t, u\right)\in C^1(\bar{I}\times R) and 0 < \alpha\le\left | \partial f/\partial u \right | , we obtain

    \begin{equation} \begin{aligned} \left | R_i^{\left(1\right)} \right |& \le h_{i}^{-1}\int_{t_{i-1}}^{t_i}h_i \left(\left | \frac{\partial f\left(t, u\right)}{\partial t} \right |+\left | \frac{\partial f\left(t, u\right)}{\partial u} \right | \left | u'\left(t\right) \right | \right)dt\\ &\le C\int_{ t_{i-1}}^{t_i}\left(1+\left | u'\left(t\right) \right |\right)dt.\ \end{aligned} \end{equation} (3.7)

    Then, since K\left(t, s, u\right)\in C^1\left(\bar{I}\times\bar{I}\times R\right) and \left | \partial K /\partial u \right | \le \beta , we have

    \begin{equation} \begin{aligned} \left | R_i^{\left(2\right)} \right | &\le\left | \lambda \right | \int_{t_{i-1}}^{t_i} \int_{0}^{T}\left | \frac{\partial }{\partial t}K\left(t, s, u\left(s\right)\right) \right |dsdt\\ & \le Ch_i.\ \end{aligned} \end{equation} (3.8)

    and

    \begin{equation} \begin{aligned} \left | R_i^{\left(3\right)} \right |&\le \left | \lambda \right | \sum\limits_{j = 1}^{N}T \int_{t_{j-1}}^{t_j}\left(\left | \frac{\partial K\left(t_i, s, u\right)}{\partial s} \right | +\left | \frac{\partial K\left(t_i, s, u\right)}{\partial u} \right |\left | u'\left(s\right) \right | \right)ds \\ &\le C\max\limits_{1\le j\le N}\int_{t_{j-1}}^{t_j}\left(1+\left | u'\left(t\right) \right |\right) dt.\ \end{aligned} \end{equation} (3.9)

    Finally, the desired result of this lemma follows from Eqs (3.7), (3.8) and (3.9).

    Lemma 3.2. Under the assumption

    \begin{equation} \left | \lambda \right | < \frac{\alpha}{\max\limits_{1\le i\le N}\sum\limits_{j = 1}^{N}h_j\left | G_{ij} \right |}, \ \end{equation} (3.10)

    we have

    \begin{equation} \left \| \mathit{\boldsymbol{e}}^N \right \| _{\infty}\le \frac{1}{\alpha}\left(1-\frac{1}{\alpha}\left | \lambda \right |\max\limits_{1\le i\le N}\sum\limits_{j = 1}^{N}h_j\left | G_{ij} \right | \right)^{-1}\left \|\mathit{\boldsymbol{R}} \right \|_{\infty}, \ \end{equation} (3.11)

    where \mathit{\boldsymbol{e}}^N = \left \{ e_i^N \right \} _{i = 0}^{N} , \mathit{\boldsymbol{R}} = \left \{ R_i \right \} _{i = 0}^{N} and G_{ij} = \frac{\partial }{\partial u}K\left(t_i, s_j, u_j+\zeta e_j^N\right), \ 0 < \zeta < 1 .

    Proof. Applying the mean value theorem to Eq (3.5), we get

    \begin{equation} \varepsilon D^-e_i^N+a_ie_i^N+\lambda \sum\limits_{j = 1}^{N}h_jG_{ij}e_j^N = R_i, \ i = 1, 2, \cdots, N, \end{equation} (3.12)

    where

    \begin{eqnarray} &&a_i = \frac{\partial }{\partial u}f\left(t_i, u_i+\xi e_i^N\right), \ 0 < \xi < 1, \end{eqnarray} (3.13)
    \begin{eqnarray} &&G_{ij} = \frac{\partial }{\partial u}K\left(t_i, s_j, u_j+\zeta e_j^N\right), \ 0 < \zeta < 1 . \end{eqnarray} (3.14)

    According to the maximum principle for the operator \varepsilon D^-e_i^N+a_ie_i^N , we have

    \begin{equation} \left \| \mathit{\boldsymbol{e}}^N \right \| _\infty \le \frac{1}{\alpha}\left \| \mathit{\boldsymbol{R}} \right \| _\infty +\frac{1}{\alpha}\left | \lambda \right | \left \| \mathit{\boldsymbol{e}}^N \right \| _\infty \max\limits_{1\le i\le N}\sum\limits_{j = 1}^{N}h_j\left | G_{ij} \right |, \end{equation} (3.15)

    which immediately leads to the desired result with the assumption (3.10).

    Based on the above Lemmas 3.1–3.2, we get the following convergence results.

    Theorem 3.1. Let u(t) be the solution of problem (1.1) and u_i^N be the solution of discrete scheme (3.4). Then

    \begin{equation} \max\limits_{1\le i\le N}\left | u_i^N-u_i \right | \le C \max\limits_{1\le i\le N}\int_{t_{i-1}}^{t_i}\left(1+\left | {u}'\left(t\right) \right |\right)dt. \end{equation} (3.16)

    Corollary 3.1. Under the conditions of Theorem 3.1, there exists an adaptive grid \left \{ t_i \right \} _{i = 0}^{N} such that

    \begin{equation} \max\limits_{1\le i\le N}\left | u_i^N-u_i \right |\le CN^{-1}. \ \end{equation} (3.17)

    Proof. Based on the mesh equidistribution principle presented in [29], the mesh \left \{ t_i \right \}_{i = 0}^{N} given by our adaptive grid algorithm satisfies

    \begin{equation} \int_{t_{i-1}}^{t_i}M\left(t\right)dt = \frac{1}{N}\int_{0}^{T}M\left(t\right)dt, \ i = 1, 2, \cdots, N, \end{equation} (3.18)

    where M(t) is called the monitor function, which can be chosen as

    \begin{equation} M\left(t\right) = 1+\left | {u}'\left(t\right) \right |.\ \end{equation} (3.19)

    Therefore, it follows from Theorem 3.1, Eq (3.18) and Lemma 2.1 that

    \begin{equation} \begin{aligned} \max\limits_{1\le i\le N}\left | u_i^N-u_i \right | &\le C \max\limits_{1\le i\le N}\int_{t_{i-1}}^{t_i}\left(1+\left | {u}'\left(t\right) \right |\right)dt\\ & = \frac{C}{N}\int_{0}^{T}\left(1+\left | {u}'\left(t\right) \right | \right)dt\\ &\le \frac{C}{N}\int_{0}^{T}\left(1+\frac{1}{\varepsilon }\exp\left(-\frac{\alpha t}{\varepsilon } \right)\right)dt\\ &\le \frac{C}{N}\left(T+\frac{1}{\alpha}\left(1-\exp\left(-\frac{\alpha T}{\varepsilon } \right)\right)\right)\\ &\le\frac{C}{N}. \end{aligned} \end{equation} (3.20)

    In this section, we shall derive an a posteriori error estimate for the numerical solution \left \{ u_i^N \right \} _{i = 0}^{N} . Let \tilde{u}^N\left(t\right) denote the piece-wise linear interpolation function through the knots \left(t_i, u_i^N\right), \ i = 0, 1, \cdots, N . Then, for any t\in J_i: = \left [t_{i-1}, t_i \right] , we have

    \begin{equation} \tilde{u}^N\left(t\right) = u_i^N+D^-u_i^N\left(t-t_i\right), \ i = 1, 2, \cdots, N.\ \end{equation} (3.21)

    Theorem 3.2. Let u(t) be the exact solution of problem (1.1), \left \{ u_i^N \right \} _{i = 0}^{N} be the discrete solution of problem (3.4) and \tilde{u}^N(t) be its piece-wise linear interpolation function defined in Eq (3.21). Then we have

    \begin{equation} \left \| \tilde{u}^N\left(t\right)-u\left(t\right) \right \|_{\infty}\le C \max\limits_{1\le i\le N}\left(h_i+h_i\left | D^-u_i^N \right | \right).\ \end{equation} (3.22)

    Proof. For any t\in\left (t_{i-1}, t_i \right] , it follows from Eq (1.1) and Eq (3.4) that

    \begin{equation} \begin{aligned} L\tilde{u}^N\left(t\right)-Lu\left(t\right) & = \varepsilon D^-u_i^N+f\left(t, \tilde{u}^N\left(t\right)\right)+\lambda \int_0^TK\left(t, s, \tilde{u}^N\left(s\right)\right)ds\\ & = -f\left(t_i, u_i^N\right)-\lambda \sum\limits_{j = 1}^{N}h_jK\left(t_i, t_j, u_j^N\right) \\ &\quad +f\left(t, \tilde{u}^N\left(t\right)\right)+\lambda \int_0^TK\left(t, s, \tilde{u}^N\left(s\right)\right)ds\\ & = P\left(t\right)+Q\left(t\right), \ \end{aligned} \end{equation} (3.23)

    where

    \begin{eqnarray} &&P\left(t\right) = f\left(t, \tilde{u}^N\left(t\right)\right)-f\left(t_i, u_i^N\right), \end{eqnarray} (3.24)
    \begin{eqnarray} && Q\left(t\right) = \lambda \int_{0}^{T}K\left(t, s, \tilde{u}^N\left(s\right)\right)ds-\lambda \sum\limits_{j = 1}^{N}h_jK\left(t_i, t_j, u_j^N\right) . \end{eqnarray} (3.25)

    With the assumptions of functions f\left(t, u\right) , K\left(t, s, u\right) and the definition of \tilde{u}^N\left(t\right) , we have

    \begin{equation} \begin{aligned} \left | P\left(t\right) \right | & = \left | f\left(t_i, u_i^N\right)+\int_{t_i}^{t}\frac{df\left(\tau, \tilde{u}^N\left(\tau\right)\right)}{d\tau}d\tau-f\left(t_i, u_i^N\right) \right | \\ &\le \int_{t}^{t_i}\left(\left | \frac{\partial f\left(\tau, \tilde{u}^N\right)}{\partial \tau} \right |+ \left | \frac{\partial f\left(\tau, \tilde{u}^N\right)}{\partial u} \right |\left | D^-u_i^N \right |\right) d\tau \\ &\le C h_i\left(1+\left | D^-u_i^N \right |\right) \end{aligned} \end{equation} (3.26)

    and

    \begin{equation} \begin{aligned} \left | Q\left(t\right) \right |& = \left | \lambda \sum\limits_{j = 1}^{N}\int_{t_{j-1}}^{t_j}\left(K\left(t, s, \tilde{u}^N\left(s\right)\right)-K\left(t_i, t_j, u_j^N\right)\right)ds \right | \\ &\le\left |\lambda\right | \sum\limits_{j = 1}^{N} \int_{t_{j-1}}^{t_j}\left(\left |\frac{\partial }{\partial t} K\left(\xi_1t+\left(1-\xi_1\right)t_i, s, \tilde{u}^N\left(s\right)\right)\left(t-t_{i}\right)\right |\right.\\ &\quad+\left |\frac{\partial }{\partial s} K\left(t_i, \xi_2s+\left(1-\xi_2\right)t_{j}, \tilde{u}^N\left(s\right)\right)\left(s-t_{j}\right)\right |\\ &\left. \quad+\left | \frac{\partial }{\partial u}K\left(t_i, t_{j}, \xi_3\tilde{u}^N\left(s\right)+\left(1-\xi_3\right)u_{j}^N\right)\left(\tilde{u}^N\left(s\right)-u_{j }^N\right)\right |\right) ds\\ &\le C\left(h_i+\max\limits_{1\le j\le N} h_j \left(1+\left | D^-u_j^N \right |\right)\right), \ \end{aligned} \end{equation} (3.27)

    where 0 < \xi_1 < 1 , 0 < \xi_2 < 1 , and 0 < \xi_3 < 1 . The result can be derived from Eqs (3.23), (3.26), (3.27) and Corollary 2.1.

    From Corollary 3.1, it is easy to conclude that there exist a mesh \left \{ t_i \right \}_{i = 0}^{N} and a monitor function M\left(t\right) given in Eq (3.19) such that the inequality (3.17) holds true. However, {u}'\left(t\right) is not available in practice. Therefore, based on the a posteriori error estimate (3.22), we choose the discrete analogue of M\left(t\right) as

    \begin{equation} {\tilde{M}_i = 1+\left | D^-u_i^N \right |, i = 1, 2, \cdots, N.\ } \end{equation} (3.28)

    Therefore, the idea is to design a mesh adaptively so that the quantities h_i\tilde{M}_i are equal on all mesh intervals. This is equivalent to finding \left \{ \left(t_i, u_i^N\right) \right \}_{i = 0}^{N} such that

    \begin{equation} h_i\tilde{M}_i = \frac{1}{N}\sum\limits_{j = 1}^{N}h_j\tilde{M}_j, \ i = 1, 2, \cdots, N.\ \end{equation} (3.29)

    Furthermore, to obtain this equidistributed mesh \left\{t_i\right\}_{i = 0}^N and the corresponding numerical solution u_i^N , we give the following iteration algorithm:

    Algorithm 1 Steps of adaptive grid algorithm
    Step 1: For a given N , let \{t_i^{(0)}\} _{i = 0}^{N} be an initial uniform mesh with mesh step \frac{T}{N} . Choose a constant \mu^{*} > 1 that controls when the algorithm terminates.
    Step 2: For a given mesh \{t_i^{(k)}\}_{i = 0}^{N} and numerical solution \{u_i^{N, (k)}\}_{i = 0}^{N} , compute \tilde{M}_i^{(k)}, i = 1, 2, \cdots, N , from Eq (3.28) and set \tilde{M}_0^{(k)} = 0 .
    Step 3: Set h_i^{(k)} = t_i^{(k)}-t_{i-1}^{(k)} for each i , set L_0^{(k)} = 0 and L_i^{(k)} = \sum_{j = 1}^{i}h_j^{(k)}\tilde{M}_j^{(k)} for i = 1, 2, \cdots, N , and define
    \begin{equation} \mu^{(k)}: = \frac{N}{L_N^{(k)}}\max\limits_{i = 0, 1, \cdots, N}h_i^{(k)}\tilde{M}_i^{(k)}. \end{equation} (3.30)
    Step 4: Set Y_i^{(k)} = iL_N^{(k)}/N for i = 0, 1, \cdots, N . Interpolate (see [30, Remark 5.1]) through the points (L_i^{(k)}, t_i^{(k)}) and generate the new mesh \{t_i^{(k+1)}\}_{i = 0}^{N} by evaluating this interpolant at Y_i^{(k)} for i = 0, 1, \cdots, N .
    Step 5: If \mu^{(k)}\leq\mu^* , take \{t_i^{(k+1)}\}_{i = 0}^{N} as the final mesh and compute \{u_i^{N, (k+1)}\}_{i = 0}^{N} ; otherwise return to Step 2.
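
    The loop of Algorithm 1 can be sketched in Python/NumPy as follows. This is only an illustration of Steps 1–5 under the assumption that a user-supplied routine solve_on_mesh(t) (for instance, repeated sweeps of the quasilinearized scheme (4.1) in Section 4.1) returns the discrete solution on a prescribed mesh; it is not the authors' implementation, and the iteration cap max_iter is an added safeguard that does not appear in Algorithm 1.

        import numpy as np

        def adaptive_mesh(solve_on_mesh, T, N, mu_star=1.1, max_iter=50):
            t = np.linspace(0.0, T, N + 1)                     # Step 1: uniform initial mesh
            for _ in range(max_iter):
                u = solve_on_mesh(t)                           # Step 2: solve on the current mesh
                h = np.diff(t)
                M = 1.0 + np.abs(np.diff(u) / h)               # discrete monitor (3.28)
                L = np.concatenate(([0.0], np.cumsum(h * M)))  # Step 3: L_i = sum_{j<=i} h_j*M_j
                mu = N * np.max(h * M) / L[-1]                 # equidistribution indicator (3.30)
                Y = L[-1] * np.arange(N + 1) / N               # Step 4: equidistributed targets
                t = np.interp(Y, L, t)                         # new mesh via piecewise linear interpolation
                if mu <= mu_star:                              # Step 5: accept the new mesh
                    break
            return t, solve_on_mesh(t)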

    In Section 4.1, we first present the iterative scheme. Then numerical experiments are given in Section 4.2 to validate the theoretical results of this paper. All experiments were performed on a Windows 10 (64 bit) PC with an Intel(R) Core(TM) i5-4200H CPU at 2.80 GHz and 8 GB of RAM, using MATLAB R2021a.

    In order to avoid solving the nonlinear equations (3.4) directly, we apply the quasilinearization technique, which performs a first-order Taylor expansion about the values of the previous iteration, and obtain

    \begin{equation} \left\{ \begin{aligned}&u_{i}^{N, \left(k\right)} = \frac{\varepsilon /h_i u_{i-1}^{N, \left(k\right)}+B_iu_i^{N, \left(k-1\right)}+C_i }{\varepsilon/h_i +B_i }, i = 1, 2, \cdots, N, \\ &u_0^{N, \left(k\right)} = A, \end{aligned} \right. \end{equation} (4.1)

    where

    \begin{eqnarray} &&B_i = \frac{\partial }{\partial u} f\left(t_i, u_i^{N, \left(k-1\right)}\right), \end{eqnarray} (4.2)
    \begin{eqnarray} &&C_i = -f\left(t_i, u_{i}^{N, \left(k-1\right)}\right)-\lambda \sum\limits_{j = 1}^{N}h_jK\left(t_i, t_j, u_j^{N, \left(k-1\right)}\right). \end{eqnarray} (4.3)
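
    One sweep of the iteration (4.1) can be sketched as follows; here df_du denotes \partial f/\partial u , and eps, lam, A, f_fun and K_fun are the same user-supplied problem data assumed in the earlier sketches.

        import numpy as np

        def quasilinear_sweep(t, u_prev, eps, lam, A, f_fun, df_du, K_fun):
            # one iteration of (4.1); u_prev is the previous iterate u^{N,(k-1)}
            h = np.diff(t)
            u = np.empty_like(u_prev)
            u[0] = A
            for i in range(1, len(t)):
                B = df_du(t[i], u_prev[i])                                   # B_i in (4.2)
                C = (-f_fun(t[i], u_prev[i])
                     - lam * np.sum(h * K_fun(t[i], t[1:], u_prev[1:])))     # C_i in (4.3)
                u[i] = (eps / h[i - 1] * u[i - 1] + B * u_prev[i] + C) / (eps / h[i - 1] + B)
            return u

    In practice such sweeps would be repeated, starting for example from the constant initial guess u\equiv A , until successive iterates agree to a prescribed tolerance; this stopping rule is our assumption and is not specified in the text.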

    For all the numerical experiments below, we choose \mu^* = 1.1 , the termination parameter introduced in Step 1 of Algorithm 1.

    Example 4.1. We consider a SPFIDE in the form [22]

    \begin{equation} \left\{ \begin{aligned} &\varepsilon {u}'\left(t\right)+2u\left(t\right)+\tanh \left(u\left(t\right)\right)-e^t+\frac{1}{4}\int_{0}^{1}t^2\sin \left(u\left(s\right)\right)ds = 0, \ t\in\left(0, 1\right], \\ &u(0) = 1. \end{aligned} \right. \end{equation} (4.4)
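
    In the notation of problem (1.1), Example 4.1 corresponds to f\left(t, u\right) = 2u+\tanh\left(u\right)-e^t , \lambda = 1/4 , K\left(t, s, u\right) = t^2\sin\left(u\right) , T = 1 and A = 1 . One possible encoding of this data for the sketches above is shown below; it is an illustration only, and the value of \varepsilon is merely an example.

        import numpy as np

        eps, lam, T, A = 2.0**-6, 0.25, 1.0, 1.0              # eps value chosen only as an example
        f_fun = lambda t, u: 2.0 * u + np.tanh(u) - np.exp(t)
        df_du = lambda t, u: 2.0 + 1.0 / np.cosh(u)**2        # d(tanh u)/du = sech^2(u)
        K_fun = lambda t, s, u: t**2 * np.sin(u)              # independent of s, but keeps the (t, s, u) signature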

    Since the analytic solution of this problem is not available, we use the following formulas to calculate the errors and the corresponding convergence rates:

    \begin{equation} e_{\varepsilon }^N = \left \| \hat{\mathit{\boldsymbol{u}}}^{2N}-\mathit{\boldsymbol{u}}^N \right \| _{\infty}, \ p_{\varepsilon }^N = \log_{2}{\left(\frac{e_{\varepsilon }^{N} }{e_{\varepsilon }^{2N} } \right)} , \ \end{equation} (4.5)

    where \hat{\mathit{\boldsymbol{u}}}^{2N} is the numerical solution obtained on the fine mesh \bar{\Omega } _{2N} = \bar{\Omega } _{N}\bigcup \left\{\frac{t_{i}+t_{i+1}}{2}\right\}_{i = 0}^{N-1} . The maximum errors and the \varepsilon -uniform convergence rates are defined respectively as

    \begin{equation} e^N = \max\limits_\varepsilon e^N_\varepsilon, \ p^N = \log_{2}{\left(\frac{e^{N}}{e^{2N}}\right)}. \end{equation} (4.6)
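
    The quantities in Eqs (4.5) and (4.6) can be computed as in the following sketch, where solve_on_mesh is the same user-supplied routine assumed above and the fine mesh \bar{\Omega } _{2N} is formed by inserting the midpoint of every interval of the final adaptive mesh.

        import numpy as np

        def double_mesh_error(t, uN, solve_on_mesh):
            # Eq (4.5): re-solve on the bisected mesh and compare at the original nodes
            t_fine = np.sort(np.concatenate((t, 0.5 * (t[:-1] + t[1:]))))
            u_fine = solve_on_mesh(t_fine)
            return np.max(np.abs(u_fine[::2] - uN))           # original nodes sit at the even indices

        def convergence_rate(eN, e2N):
            return np.log2(eN / e2N)                          # p in Eqs (4.5) and (4.6)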

    In the numerical experiments, we apply the presented adaptive grid algorithm to solve this problem. The resulting errors e_{\varepsilon }^N and the orders of the convergence p_{\varepsilon }^N , for particular values of \varepsilon and N are listed in Table 1. In addition, to compare the performance of the presented adaptive mesh with the Shishkin mesh [22] and the Bakhvalov mesh [23], some numerical results are given in Table 2.

    Table 1.  The errors and corresponding convergence rates for Example 4.1.
    \varepsilon N=64 N=128 N=256 N=512 N=1024
    2^{-2} 0.004663 0.002384 0.001222 0.000619 0.000312
    0.9680 0.9637 0.9811 0.9903 -
    2^{-4} 0.00595 0.003152 0.00163 0.000801 0.000412
    0.9167 0.9515 1.0244 0.9610 -
    2^{-6} 0.006539 0.003482 0.001818 0.000939 0.000477
    0.9089 0.9373 0.9532 0.9769 -
    2^{-8} 0.006711 0.003633 0.001893 0.000979 0.000501
    0.8853 0.9404 0.9514 0.9667 -
    2^{-10} 0.006799 0.003635 0.001913 0.000996 0.000509
    0.9035 0.9258 0.9424 0.9667 -
    2^{-12} 0.006881 0.003649 0.001926 0.000996 0.000512
    0.9151 0.9221 0.9505 0.9606 -
    2^{-14} 0.006854 0.003683 0.001924 0.001 0.000513
    0.8963 0.9365 0.9447 0.9633 -
    2^{-16} 0.00672 0.003752 0.001934 0.000999 0.000515
    0.8411 0.9562 0.9526 0.9560 -
    e^N 0.006881 0.003752 0.001934 0.001 0.000515
    p^N 0.8751 0.9562 0.9517 0.9569 -

    Table 2.  Comparisons of errors and corresponding convergence rates for Example 4.1.
    N \varepsilon=2^{-6} \varepsilon=2^{-8}
    Adaptive Bakhvalov Shishkin Adaptive Bakhvalov Shishkin
    3\times2^5 4.53E-03 2.81E-03 7.18E-03 4.66E-03 2.83E-03 7.16E-03
    0.93 0.98 0.75 0.91 0.98 0.75
    3\times2^6 2.39E-03 1.42E-03 4.26E-03 2.48E-03 1.44E-03 4.25E-03
    0.94 0.99 0.79 0.95 0.99 0.79
    3\times2^7 1.24E-03 7.16E-04 2.46E-03 1.29E-03 7.23E-04 2.45E-03
    0.97 0.99 0.83 0.96 0.99 0.83
    3\times2^8 6.33E-04 3.59E-04 1.39E-03 6.62E-04 3.63E-04 1.38E-03
    0.98 1.00 0.85 0.97 1.00 0.85
    3\times2^9 3.20E-04 1.80E-04 7.71E-04 3.37E-04 1.82E-04 7.69E-04
    1.05 1.00 0.86 0.98 1.00 0.86
    3\times2^{10} 1.54E-04 9.01E-05 4.23E-04 1.71E-04 9.09E-05 4.22E-04


    According to the results in Table 1, for fixed N the error increases (with a diminishing speed) and the convergence rate moves away from 1 as \varepsilon decreases, while for fixed \varepsilon the error roughly halves as N doubles and the convergence rate approaches 1. According to the results in Table 2, our adaptive mesh yields both smaller errors and a more accurate first-order convergence rate than the Shishkin mesh. In general, the performance of the adaptive grid algorithm improves as \mu^* , which controls how uniformly the monitor function is equidistributed, approaches 1. However, the improvement becomes very limited beyond a certain threshold, and the algorithm becomes impractical when \mu^* is too close to 1 because of the enormous amount of computation required. This is the limitation of the adaptive mesh.

    The behavior of the numerical solution is presented in Figures 1 and 2. It can be seen from these two figures that the solution of the test problem first decreases towards 0 and then increases towards 1, and that it has a boundary layer at t = 0 . Figure 3 shows the \varepsilon -uniform convergence of the method for different \varepsilon : the first-order uniform convergence holds in spite of the violent changes of the numerical solution in the boundary layer at t = 0 . From Figure 3, no matter how small \varepsilon becomes, the maximum point-wise errors are bounded by O\left(N^{-1}\right) , which validates our theoretical analysis. For \varepsilon = 2^{-8} , Figure 4 displays how a mesh with N = 64 evolves through successive iterations of the algorithm using the monitor function (3.28).

    Figure 1.  Numerical results of Example 4.1 for N = 64 and \varepsilon = 2^{-4} .
    Figure 2.  Numerical results of Example 4.1 for N = 256 and various \varepsilon .
    Figure 3.  Maximum point-wise errors of log-log plot for Example 4.1.
    Figure 4.  Evolution of the mesh for Example 4.1.

    Example 4.2. We consider a first-order nonlinear singularly perturbed mixed-type integro-differential equation from [24] of the form:

    \begin{equation} \left\{ \begin{aligned} &\varepsilon u'\left(t\right)+f\left(t, u\left(t\right)\right)+\frac{1}{2}\int_{0}^{t}u^2(s)ds+\frac{1}{2}\int_{0}^{1}u^3\left(s\right)ds = 0, \ t\in\left(0, 1\right], \\ &u(0) = 1. \end{aligned} \right. \end{equation} (4.7)

    where

    \begin{equation} f\left(t, u\left(t\right)\right) = \frac{\varepsilon}{4}u^2\left(t\right)+u\left(t\right)+\frac{\varepsilon }{6}e^{-\frac{3}{\varepsilon }}-\frac{5}{12}\varepsilon. \end{equation} (4.8)

    It can be verified that this problem satisfies assumption (3.10). The analytic solution of this problem is u\left(t\right) = e^{-t/\varepsilon} . We use the following formulas to calculate the errors and the corresponding convergence rates:

    \begin{equation} e_{\varepsilon }^N = \max\limits_{0\le i\le N} \left | u_i^{N}-u_i \right | , \ p_{\varepsilon }^N = \log_{2}{\left(\frac{e_{\varepsilon }^{N} }{e_{\varepsilon }^{2N} } \right)} .\ \end{equation} (4.9)

    The maximum errors and \varepsilon -uniform convergence rates are defined in Eq (4.6).

    The resulting errors e_{\varepsilon }^N and the orders of the convergence p_{\varepsilon }^N , for particular values of \varepsilon and N are listed in Table 3. In addition, to compare the performance of the presented adaptive mesh with the Shishkin mesh [22] and the Bakhvalov mesh [23], some numerical results are given in Table 4.

    Table 3.  The errors and corresponding convergence rates for Example 4.2.
    \varepsilon N=64 N=128 N=256 N=512 N=1024
    2^{-2} 0.008698 0.004386 0.002202 0.001103 0.000552
    0.9880 0.9939 0.9969 0.9985 -
    2^{-4} 0.011073 0.00562 0.002799 0.001413 0.00071
    0.9783 1.0057 0.9863 0.9929 -
    2^{-6} 0.012473 0.00639 0.003247 0.001638 0.0008
    0.9648 0.9767 0.9876 1.0335 -
    2^{-8} 0.013236 0.006774 0.003463 0.00176 0.000887
    0.9664 0.9681 0.9763 0.9881 -
    2^{-10} 0.013376 0.006909 0.003559 0.00181 0.000917
    0.9530 0.9570 0.9755 0.9814 -
    2^{-12} 0.013461 0.006954 0.003575 0.001835 0.000929
    0.9528 0.9601 0.9619 0.9822 -
    2^{-14} 0.013563 0.007002 0.003592 0.001836 0.000937
    0.9540 0.9631 0.9680 0.9704 -
    2^{-16} 0.015521 0.007146 0.003596 0.001842 0.000935
    1.1190 0.9908 0.9647 0.9784 -
    e^N 0.015521 0.007146 0.003596 0.001842 0.000937
    p^N 1.1190 0.9908 0.9647 0.9755 -

    Table 4.  Comparisons of errors and corresponding convergence rates for Example 4.2.
    N \varepsilon=2^{-6} \varepsilon=2^{-8}
    Adaptive Bakhvalov Shishkin Adaptive Bakhvalov Shishkin
    3\times2^5 8.43E-03 2.15E-02 1.91E-02 8.95E-03 2.69E-02 1.67E-02
    0.97 0.90 1.28 0.97 0.65 0.11
    3\times2^6 4.31E-03 1.15E-02 7.85E-03 4.58E-03 1.72E-02 1.54E-02
    0.98 0.94 1.29 0.97 0.67 0.80
    3\times2^7 2.18E-03 6.00E-03 3.21E-03 2.33E-03 1.08E-02 8.88E-03
    1.04 0.97 1.00 0.98 0.90 1.28
    3\times2^8 1.06E-03 3.06E-03 1.61E-03 1.18E-03 5.78E-03 3.65E-03
    0.97 0.98 0.85 0.99 0.93 1.32
    3\times2^9 5.39E-04 1.55E-03 8.88E-04 5.93E-04 3.02E-03 1.46E-03
    0.98 0.99 0.87 1.00 0.97 1.41
    3\times2^{10} 2.72E-04 7.78E-04 4.87E-04 2.98E-04 1.54E-03 5.51E-04


    The behavior of the numerical solution is presented in Figures 5 and 6. It can be seen from these two figures that the solution of the test problem decreases monotonically to zero, with a decreasing rate, and has a boundary layer at t = 0 . Figure 7 shows the \varepsilon -uniform convergence of the method for different \varepsilon . From Figure 7, no matter how small \varepsilon becomes, the maximum point-wise errors are bounded by O\left(N^{-1}\right) , which validates our theoretical analysis. For \varepsilon = 2^{-8} , Figure 8 displays how a mesh with N = 64 evolves through successive iterations of the algorithm using the monitor function (3.28).

    Figure 5.  Numerical results of Example 4.2 for N = 64 and \varepsilon = 2^{-4} .
    Figure 6.  Numerical results of Example 4.2 for N = 256 and various \varepsilon .
    Figure 7.  Maximum point-wise errors of log-log plot for Example 4.2.
    Figure 8.  Evolution of the mesh for Example 4.2.

    The main difference between these two examples can be found in Table 4. For \varepsilon = 2^{-8} , the adaptive mesh performs better than the Bakhvalov mesh. This is because the construction of the Bakhvalov mesh is based on a priori information about the exact solution, so its effectiveness depends strongly on the form of the equation: the Bakhvalov mesh may perform very well for suitable equations but poorly otherwise. In contrast, the adaptive mesh based on the a posteriori error estimate requires no a priori information about the exact solution and performs stably for various equations.

    This paper considers a first-order initial value problem for a nonlinear singularly perturbed Fredholm integro-differential equation. A first-order \varepsilon -uniformly convergent numerical method for solving this problem is presented, which combines an adaptive grid based on the a posteriori error estimate with the mesh equidistribution principle. The difference scheme, including the weights and the remainders, is established using the backward Euler formula for the derivative term together with the composite right rectangle quadrature rule for the integral term. A theoretical analysis based on an a priori error bound is conducted to prove the first-order convergence of the proposed method. Two examples show that our adaptive mesh performs better than the Shishkin mesh and as well as the Bakhvalov mesh. In the future, we will try to extend the proposed adaptive grid method to other related integro-differential equations, which can be found in [5,6,31].

    The authors declare there is no conflict of interest.


