Research article

A generalized Liu-type estimator for logistic partial linear regression model with multicollinearity

  • Received: 27 December 2022 Revised: 17 February 2023 Accepted: 27 February 2023 Published: 20 March 2023
  • MSC : 62J07, 62G05, 62F10

  • This paper is concerned with proposing a generalized Liu-type estimator (GLTE) to address the multicollinearity problem among the explanatory variables of the linear part in the logistic partially linear regression model. Using the profile likelihood method, we propose the GLTE as a general class of Liu-type estimators, which includes the profile likelihood estimator, the ridge estimator, the Liu estimator and the Liu-type estimator as special cases. The conditional superiority of the proposed GLTE over the other estimators is derived under the asymptotic mean square error matrix (MSEM) criterion. Moreover, the optimal choices of the biasing parameters and of the function of the biasing parameter are given. Numerical simulations demonstrate that the proposed GLTE performs better than the existing estimators. An application to a real data set from the Indian Liver Patient study is presented to illustrate our theoretical results.

    Citation: Dayang Dai, Dabuxilatu Wang. A generalized Liu-type estimator for logistic partial linear regression model with multicollinearity[J]. AIMS Mathematics, 2023, 8(5): 11851-11874. doi: 10.3934/math.2023600




    A generalized partial linear regression model (GPLRM) is a semiparametric extension of a generalized linear regression model. In the study of the GPLRM, topics such as parameter estimation and the efficiency of estimators have received much attention from researchers [1,2,3,4,5]; the well-known profile likelihood method (PLM) arose in this context. As a special case of the GPLRM, the logistic partial linear regression model (LPLRM) is widely used to study the relationship between a binomial response variable and explanatory variables with a linear part and a nonparametric part in medical, social and biological studies, and its parameter estimation as well as the asymptotic properties of the estimators can be obtained within the framework of the GPLRM. The LPLRM assumes that the explanatory variables in the linear part are mutually independent. In applications, however, this independence assumption seldom holds, and its violation causes the multicollinearity problem. It is well known that multicollinearity may cause large variances of the estimators, which in turn lead to wide confidence intervals and erroneous sign estimates.

    In the study of the multicollinearity problem, to the best of our knowledge, multicollinearity in linear regression models, generalized linear regression models (logistic, Poisson and gamma) and partial linear regression models has been well investigated. (a) Most of the powerful novel methods for dealing with multicollinearity have appeared in the study of linear regression models. Hoerl and Kennard [6] first proposed a ridge estimation program for unknown parameters to combat multicollinearity in a linear regression model. Liu [7] presented a class of biased Liu estimators of the unknown parameters that are superior to ridge estimators, with a more easily selected biasing parameter. Liu [8] introduced a two-parameter Liu-type estimator of the unknown parameters to deal with serious multicollinearity that the ridge estimator failed to combat. Kurnaz and Akay [9] proposed a general Liu-type estimator of the unknown parameters based on the Liu-type estimator. Moreover, Zeinal [10] contributed an extension of the two-parameter estimator presented by Özkale and Kaçiranlar [11]. (b) In the case of a generalized linear regression model, some modified estimators have been established to deal with multicollinearity. The estimators to combat multicollinearity in logistic linear regression models (LLRM) are listed below. Kibria, Månsson and Shukur [12] generalized and compared logistic ridge estimators with different ridge parameters. Inan and Erdogan [13] introduced a Liu-type estimator that has a smaller total mean squared error (MSE) than the ridge estimator under certain conditions. Asar and Genç [14] constructed a two-parameter ridge estimator for the LLRM. Varathan and Wijekoon [15] proposed an estimator called the modified almost unbiased logistic Liu estimator (MAULLE). Ertan and Akay [16] modified the general Liu-type estimator by reconstructing the form of the biasing parameter function. Jadhav [17] proposed a new estimator designated as the linearized ridge logistic estimator. There are also a new Poisson ridge estimator by Rashad et al. [18] and a modified ridge-type estimator for the gamma model by Lukman et al. [19]. (c) In order to eliminate multicollinearity in a partial linear regression model (PLRM), some biased estimators of the unknown parameters have been constructed by considering additive errors, correlated errors or linear constraints, etc. For example, Roozbeh and Arashi [20] proposed difference-based ridge-type estimators combining the restricted least squares method in seemingly unrelated semiparametric models. Wu [21] proposed a difference-based almost unbiased Liu estimator in the PLRM. Emami and Aghamohammadi [22] constructed difference-based ridge and Liu-type estimators in the PLRM when the covariates are measured with additive errors. Akdeniz and Roozbeh [23] introduced a generalized difference-based almost unbiased ridge estimator in the PLRM when the errors are correlated. Wu and Kibria [24] considered a generalized difference-based mixed two-parameter estimator in the PLRM.

    Theoretically, any statistical regression model (except a univariate regression model) with a linear part may face a multicollinearity problem. We now consider the case of the LPLRM; to the best of our knowledge, no literature has been reported on combating multicollinearity in the study of the LPLRM. Apparently, the existing methods mentioned above for combating multicollinearity in the LLRM and the PLRM could be recommended for constructing a suitable biased estimator for our purpose. However, it is difficult to properly coordinate the existing methods for the LLRM and the PLRM: the combination of the methods should generate an alternative (estimator) that optimally reduces the impact of multicollinearity on the LPLRM, while at the same time the linear part and the nonparametric part of the LPLRM are estimated in accordance with standard statistical criteria. We attempt to deal with these issues in section 3, where the PLM is employed to construct a more generalized Liu-type estimator, and the optimal choices of the biasing parameters and the function of the biasing parameter, as well as the superiority conditions of the proposed estimator over other estimators, are given.

    This paper is organized as follows. Section 2 states the LPLRM, its estimation and evaluation criteria, as well as the PLM. Several biased estimators and the proposed generalized Liu-type estimator (GLTE) are given in section 3. In section 4, theoretical conditions are derived to study the superiority of the GLTE over the other estimators under the MSEM criterion. In section 5, the optimal choices of the biasing parameters and the function of the biasing parameter are determined. In section 6, Monte Carlo simulations are given to evaluate the performance of the proposed GLTE. Section 7 presents a real data application. Finally, a brief summary and conclusions are given in section 8.

    The dependent variable $y_i \in \{0,1\}$ is the binary response variable. $x_i^T$ is the $i$th row of the $n\times p$ explanatory variable matrix $X$, which may be continuous, discrete, or a mixture of discrete and continuous. $t_i \in \mathbb{R}^q$ is the $i$th sample of a $q$-variate random vector of continuous explanatory variables. We consider the LPLRM as follows:

    $$\log\left(\frac{\pi_i}{1-\pi_i}\right) = x_i^T\beta + m(t_i), \quad i=1,2,\ldots,n, \tag{2.1}$$

    where $\pi_i = \mathrm{prob}(y_i = 1 \mid x_i, t_i)$, $\beta = (\beta_1,\beta_2,\ldots,\beta_p)^T$ is a $p\times 1$ parameter vector, and $m(\cdot)$ is a nonparametric function.

    The PLM is often used to obtain estimators of model (2.1). In the PLM, the smoothed or local log-likelihood for the nonparametric function $m_\beta(t)$ at a point $t$ is given by

    $$L_H(m_\beta(t)) = \sum_{j=1}^{n}\kappa_H(t-t_j)\left(y_j\log\frac{\pi_j(\beta,m_\beta(t))}{1-\pi_j(\beta,m_\beta(t))} + \log\bigl(1-\pi_j(\beta,m_\beta(t))\bigr)\right), \tag{2.2}$$

    with $\pi_j(\beta,m_\beta(t)) = \frac{\exp(x_j^T\beta+m_\beta(t))}{1+\exp(x_j^T\beta+m_\beta(t))}$. Here, $\kappa_H(t-t_j)$ denotes local kernel weights with a (multidimensional) kernel function $\kappa$ and a bandwidth matrix $H$, and $m_\beta(t)$ is a differentiable function with respect to $\beta$ for each $t$. The logarithmic profile likelihood for $\beta$ can be written as

    $$L(\beta) = \sum_{i=1}^{n}\left(y_i\log\frac{\pi_i(\beta,m_\beta(t_i))}{1-\pi_i(\beta,m_\beta(t_i))} + \log\bigl(1-\pi_i(\beta,m_\beta(t_i))\bigr)\right), \tag{2.3}$$

    with $\pi_i(\beta,m_\beta(t_i)) = \frac{\exp(x_i^T\beta+m_\beta(t_i))}{1+\exp(x_i^T\beta+m_\beta(t_i))}$. Abbreviating $m_i = m_\beta(t_i)$, the likelihood equations are obtained from Eqs (2.2) and (2.3) as follows:

    $$\frac{\partial L_H(m_\beta(t_i))}{\partial m_i} = \sum_{j=1}^{n}\bigl(y_j-\pi_j(\beta,m_i)\bigr)\kappa_H(t_i-t_j) = 0, \tag{2.4}$$

    and

    $$\frac{\partial L(\beta)}{\partial \beta} = \sum_{i=1}^{n}\bigl(y_i-\pi_i(\beta,m_i)\bigr)\bigl(x_i+m_i'\bigr) = 0, \tag{2.5}$$

    where $m_i'$ is the partial derivative vector of $m_\beta(t_i)$ with respect to $\beta$. Taking the partial derivative of Eq (2.4) with respect to $\beta$, we have

    $$m_i' = -\frac{\sum_{j=1}^{n}\pi_j(\beta,m_i)\bigl(1-\pi_j(\beta,m_i)\bigr)\kappa_H(t_i-t_j)\,x_j}{\sum_{j=1}^{n}\pi_j(\beta,m_i)\bigl(1-\pi_j(\beta,m_i)\bigr)\kappa_H(t_i-t_j)}.$$

    The second partial derivative of Eq (2.3) with respect to $\beta$ is given by

    $$\frac{\partial^2 L(\beta)}{\partial\beta\,\partial\beta^T} = -\sum_{i=1}^{n}\pi_i(\beta,m_i)\bigl(1-\pi_i(\beta,m_i)\bigr)\bigl(x_i+m_i'\bigr)\bigl(x_i+m_i'\bigr)^T. \tag{2.6}$$

    By the iterative Newton-Raphson algorithm [25] (see Eqs (2.5) and (2.6)), the iterative formula for $\beta$ is given by

    $$\beta_{\mathrm{new}} = \beta + (\tilde{X}^T W\tilde{X})^{-1}\tilde{X}^T(y-\pi) = (\tilde{X}^T W\tilde{X})^{-1}\tilde{X}^T W\bigl(\tilde{X}\beta + W^{-1}(y-\pi)\bigr),$$

    where $\tilde{X} = X - SX$ and $S$ is the smoothing matrix with elements

    $$S_{ij} = \frac{\pi_i(\beta,m_j)\bigl(1-\pi_i(\beta,m_j)\bigr)\kappa_H(t_i-t_j)}{\sum_{i=1}^{n}\pi_i(\beta,m_j)\bigl(1-\pi_i(\beta,m_j)\bigr)\kappa_H(t_i-t_j)}.$$

    $\tilde{X}$ is the matrix with rows $\tilde{x}_i^T$, $\tilde{x}_i = x_i + m_i'$, $W = \mathrm{diag}\bigl(\pi_i(\beta,m_i)(1-\pi_i(\beta,m_i))\bigr)_{i=1,\ldots,n}$ is a diagonal matrix, $y=(y_1,y_2,\ldots,y_n)^T$, and $\pi=(\pi_1,\pi_2,\ldots,\pi_n)^T$. We thus obtain the iteratively weighted least squares algorithm [25] for model (2.1). Define an adjusted dependent variable $Z = X\beta + m + W^{-1}(y-\pi)$ to obtain $\tilde{Z} = Z - SZ = \tilde{X}\beta + W^{-1}(y-\pi)$, where $m = (m_1,m_2,\ldots,m_n)^T$.

    Here, the estimator of the LPLRM (2.1) obtained in this way is called the LPLE. In summary, the estimation algorithm for model (2.1) is given as follows.

    Step 1: Give suitable initial values $\beta^{(0)}$, $m^{(0)}$ for $\beta$ and $m$.

    Step 2: Repeat steps (a), (b) and (c) until convergence.

    (a) Calculate $\tilde{X}$ and $\tilde{Z}$;

    (b) Updating step for $\beta$: $\beta_{\mathrm{new}} = (\tilde{X}^T W\tilde{X})^{-1}\tilde{X}^T W\tilde{Z}$;

    (c) Updating step for $m$: $m_{\mathrm{new}} = S(Z - X\beta_{\mathrm{new}})$.

    Step 3: Obtain the final estimates $\hat\beta$ and $\hat m_{\hat\beta}$ of $\beta$ and $m$.
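    To make the procedure concrete, the following sketch implements Steps 1–3 for a one-dimensional smoothing variable with a Gaussian kernel and a fixed bandwidth $h$; the helper names and these simplifications (scalar bandwidth, local-constant weights) are our own illustrative choices rather than part of the original specification.

```python
import numpy as np

def gaussian_kernel_weights(t0, t, h):
    # Gaussian kernel weights kappa_H(t0 - t_j) for a 1-D smoothing variable
    u = (t0 - t) / h
    return np.exp(-0.5 * u ** 2) / (h * np.sqrt(2.0 * np.pi))

def fit_lple(X, y, t, h=0.1, max_iter=100, tol=1e-6):
    # Steps 1-3: iterative profile-likelihood (LPLE) fit of the LPLRM
    n, p = X.shape
    beta = np.zeros(p)          # Step 1: initial values
    m = np.zeros(n)
    for _ in range(max_iter):
        eta = X @ beta + m
        pi = 1.0 / (1.0 + np.exp(-eta))
        w = pi * (1.0 - pi)     # diagonal of W
        # smoothing matrix S: row i holds normalized local weights at t_i
        S = np.array([gaussian_kernel_weights(t[i], t, h) * w for i in range(n)])
        S /= S.sum(axis=1, keepdims=True)
        X_tilde = X - S @ X                        # X~ = X - SX
        Z = X @ beta + m + (y - pi) / w            # adjusted dependent variable
        Z_tilde = Z - S @ Z                        # Z~ = Z - SZ
        A = X_tilde.T @ (w[:, None] * X_tilde)     # X~' W X~
        beta_new = np.linalg.solve(A, X_tilde.T @ (w * Z_tilde))  # step (b)
        m = S @ (Z - X @ beta_new)                 # step (c)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta, m, X_tilde, w, Z_tilde
```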

    There are two common evaluation criteria for estimators, namely, the asymptotic scalar mean squared error (SMSE) and the asymptotic mean square error matrix (MSEM). The SMSE is the trace of the MSEM of an estimator $\tilde\beta$; they are defined as

    $$\mathrm{MSEM}(\tilde\beta) = E(\tilde\beta-\beta)(\tilde\beta-\beta)^T = \mathrm{Cov}(\tilde\beta) + \mathrm{Bias}(\tilde\beta)\bigl(\mathrm{Bias}(\tilde\beta)\bigr)^T, \qquad \mathrm{SMSE}(\tilde\beta) = \mathrm{tr}\bigl(\mathrm{MSEM}(\tilde\beta)\bigr) = \mathrm{tr}\bigl(\mathrm{Cov}(\tilde\beta)\bigr) + \bigl(\mathrm{Bias}(\tilde\beta)\bigr)^T\bigl(\mathrm{Bias}(\tilde\beta)\bigr),$$

    where $\mathrm{Cov}(\cdot)$ is the covariance matrix, $\mathrm{Bias}(\cdot)$ is the bias vector, and $\mathrm{tr}(\cdot)$ denotes the trace operation. We denote $\alpha = Q^T\beta$ and $\Lambda = \mathrm{diag}(\lambda_1,\ldots,\lambda_p) = Q^T(\tilde{X}^T\hat{W}\tilde{X})Q$, where $\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_p>0$ are the ordered eigenvalues of $\tilde{X}^T\hat{W}\tilde{X}$, $\hat{W}$ is the final estimate of $W$ in the above algorithm, $Q$ is the orthogonal matrix whose columns are the eigenvectors of $\tilde{X}^T\hat{W}\tilde{X}$, and $\alpha_j$ denotes the $j$th element of $Q^T\beta$, $j=1,2,\ldots,p$. Since the LPLE is asymptotically unbiased [1], the MSEM and the SMSE of $\hat\beta$ are given by

    $$\mathrm{MSEM}(\hat\beta) = \mathrm{Cov}(\hat\beta) = (\tilde{X}^T\hat{W}\tilde{X})^{-1} = Q\Lambda^{-1}Q^T, \qquad \mathrm{SMSE}(\hat\beta) = \mathrm{tr}\bigl(\mathrm{Cov}(\hat\beta)\bigr) = \sum_{j=1}^{p}\frac{1}{\lambda_j}. \tag{2.7}$$

    In this section, using the PLM, we introduce a GLTE which includes the LPLE, the ridge estimator, the Liu estimator and the Liu-type estimator as special cases. We retain the notation of section 2, such as $Q$, $\Lambda$, $\hat{W}$, $Z$, $\tilde{Z}$, $\tilde{X}$ and $X$, without change of meaning. We begin by considering how to combat multicollinearity in the LPLRM: in the presence of multicollinearity, the matrix $\tilde{X}^T\hat{W}\tilde{X}$ becomes ill-conditioned, which leads to large variance and instability of the LPLE, even though the LPLE is asymptotically unbiased [1]. Inspired by the well-known ridge estimation method (a popular penalized estimation method for dealing with multicollinearity), we attempt to construct an appropriate biased estimator that eliminates the multicollinearity in the LPLRM while the LPLRM is still well estimated according to standard statistical criteria. Using the PLM, in the first step we simply extend the existing biased estimators, such as the ridge estimator, the Liu estimator and the Liu-type estimator, to the case of the LPLRM. In the second step, we take a more elaborate approach in which we modify the biasing parameters and the function of the biasing parameter, and introduce an objective function of $\beta$ to construct the GLTE.

    Now, using the LPLE, we adopt the ridge estimator [6] to construct the logistic partial linear ridge estimator (LPLRE) as follows:

    $$\hat\beta_R(k) = (\tilde{X}^T\hat{W}\tilde{X}+kI)^{-1}\tilde{X}^T\hat{W}\tilde{Z} = (\tilde{X}^T\hat{W}\tilde{X}+kI)^{-1}(\tilde{X}^T\hat{W}\tilde{X})\hat\beta,$$

    where $k\geq 0$ is a biasing parameter and $I$ is the $p\times p$ identity matrix. The estimate of the nonparametric function $m$ is given by

    $$\hat m_{\hat\beta_R(k)} = S\bigl(Z - X\hat\beta_R(k)\bigr).$$

    The Bias, Cov, MSEM and SMSE of $\hat\beta_R(k)$ are given by

    $$\begin{aligned}
    \mathrm{Bias}(\hat\beta_R(k)) &= E(\hat\beta_R(k)) - \beta = -k(\tilde{X}^T\hat{W}\tilde{X}+kI)^{-1}\beta,\\
    \mathrm{Cov}(\hat\beta_R(k)) &= (\tilde{X}^T\hat{W}\tilde{X}+kI)^{-1}(\tilde{X}^T\hat{W}\tilde{X})\bigl((\tilde{X}^T\hat{W}\tilde{X}+kI)^{-1}\bigr)^T,\\
    \mathrm{MSEM}(\hat\beta_R(k)) &= Q\bigl((\Lambda+kI)^{-1}\Lambda(\Lambda+kI)^{-1} + k^2(\Lambda+kI)^{-1}\alpha\alpha^T(\Lambda+kI)^{-1}\bigr)Q^T,\\
    \mathrm{SMSE}(\hat\beta_R(k)) &= \sum_{j=1}^{p}\frac{\lambda_j}{(\lambda_j+k)^2} + \sum_{j=1}^{p}\frac{k^2\alpha_j^2}{(\lambda_j+k)^2}.
    \end{aligned} \tag{3.1}$$

    The Liu estimator was first proposed by Liu [7] for dealing with multicollinearity in a linear regression model. Here, using the LPLE, we adopt the Liu estimator to construct the logistic partial linear Liu estimator (LPLLE) as follows:

    $$\hat\beta_L(d) = (\tilde{X}^T\hat{W}\tilde{X}+I)^{-1}(\tilde{X}^T\hat{W}\tilde{Z} + d\hat\beta) = (\tilde{X}^T\hat{W}\tilde{X}+I)^{-1}(\tilde{X}^T\hat{W}\tilde{X}+dI)\hat\beta,$$

    where $0<d<1$ is a biasing parameter. In addition, the estimate of the nonparametric function $m$ is given by

    $$\hat m_{\hat\beta_L(d)} = S\bigl(Z - X\hat\beta_L(d)\bigr).$$

    The Bias, Cov, MSEM and SMSE of $\hat\beta_L(d)$ are given by

    $$\begin{aligned}
    \mathrm{Bias}(\hat\beta_L(d)) &= E(\hat\beta_L(d)) - \beta = (d-1)(\tilde{X}^T\hat{W}\tilde{X}+I)^{-1}\beta,\\
    \mathrm{Cov}(\hat\beta_L(d)) &= (\tilde{X}^T\hat{W}\tilde{X}+I)^{-1}(\tilde{X}^T\hat{W}\tilde{X}+dI)(\tilde{X}^T\hat{W}\tilde{X})^{-1}(\tilde{X}^T\hat{W}\tilde{X}+dI)^T\bigl((\tilde{X}^T\hat{W}\tilde{X}+I)^{-1}\bigr)^T,\\
    \mathrm{MSEM}(\hat\beta_L(d)) &= Q\bigl((\Lambda+I)^{-1}(\Lambda+dI)\Lambda^{-1}(\Lambda+dI)(\Lambda+I)^{-1} + (d-1)^2(\Lambda+I)^{-1}\alpha\alpha^T(\Lambda+I)^{-1}\bigr)Q^T,
    \end{aligned} \tag{3.2}$$

    $$\mathrm{SMSE}(\hat\beta_L(d)) = \sum_{j=1}^{p}\frac{(\lambda_j+d)^2}{\lambda_j(\lambda_j+1)^2} + \sum_{j=1}^{p}\frac{(d-1)^2\alpha_j^2}{(\lambda_j+1)^2}. \tag{3.3}$$

    Liu [8] and Inan and Erdogan [13] introduced a Liu-type estimator modifying the Liu estimator to deal with multicollinearity in the LLRM. Using the LPLE, we adopt the Liu-type estimator to construct the logistic partial linear Liu-type estimator (LPLLTE) as follows:

    $$\hat\beta_{LT}(d,k) = (\tilde{X}^T\hat{W}\tilde{X}+kI)^{-1}(\tilde{X}^T\hat{W}\tilde{Z} - d\hat\beta) = (\tilde{X}^T\hat{W}\tilde{X}+kI)^{-1}(\tilde{X}^T\hat{W}\tilde{X} - dI)\hat\beta,$$

    where $k>0$ and $d\in\mathbb{R}$ are biasing parameters. The estimate of the nonparametric function $m$ is given by

    $$\hat m_{\hat\beta_{LT}(d,k)} = S\bigl(Z - X\hat\beta_{LT}(d,k)\bigr).$$

    The MSEM and SMSE of $\hat\beta_{LT}(d,k)$ are given as follows:

    $$\mathrm{MSEM}(\hat\beta_{LT}(d,k)) = Q\bigl((\Lambda+kI)^{-1}(\Lambda-dI)\Lambda^{-1}(\Lambda-dI)(\Lambda+kI)^{-1} + (d+k)^2(\Lambda+kI)^{-1}\alpha\alpha^T(\Lambda+kI)^{-1}\bigr)Q^T, \tag{3.4}$$

    $$\mathrm{SMSE}(\hat\beta_{LT}(d,k)) = \sum_{j=1}^{p}\frac{(\lambda_j-d)^2}{\lambda_j(\lambda_j+k)^2} + \sum_{j=1}^{p}\frac{(d+k)^2\alpha_j^2}{(\lambda_j+k)^2}. \tag{3.5}$$
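    Note that the SMSE expressions (3.1), (3.3) and (3.5) depend on the data only through the eigenvalues $\lambda_j$ and the components $\alpha_j$, so they are cheap to evaluate; a minimal sketch follows, with `lam` and `alpha` as length-$p$ NumPy arrays (the function names are ours).

```python
import numpy as np

def smse_ridge(lam, alpha, k):
    # SMSE of the LPLRE, Eq (3.1)
    return np.sum(lam / (lam + k) ** 2) + np.sum(k ** 2 * alpha ** 2 / (lam + k) ** 2)

def smse_liu(lam, alpha, d):
    # SMSE of the LPLLE, Eq (3.3)
    return (np.sum((lam + d) ** 2 / (lam * (lam + 1.0) ** 2))
            + np.sum((d - 1.0) ** 2 * alpha ** 2 / (lam + 1.0) ** 2))

def smse_liu_type(lam, alpha, d, k):
    # SMSE of the LPLLTE, Eq (3.5)
    return (np.sum((lam - d) ** 2 / (lam * (lam + k) ** 2))
            + np.sum((d + k) ** 2 * alpha ** 2 / (lam + k) ** 2))
```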

    Kurnaz and Akay [9] and Ertan and Akay [16] introduced new Liu-type estimators for dealing with multicollinearity in the linear regression model and the LLRM, respectively. The new Liu-type estimators are defined as

    $$\hat\beta_{E\&A} = (X^T\hat{W}X + kI)^{-1}(X^T\hat{W}X + f(k)I)\hat\beta, \quad k>0,$$

    and

    $$\hat\beta_{K\&A} = (X^TX + kI)^{-1}(X^TX + f(k)I)\hat\beta, \quad k>0,$$

    where $\hat\beta$ is any estimator of the regression coefficient vector $\beta$, $k$ is a biasing parameter and $f(k)$ is a continuous function of the biasing parameter $k$.

    Using the LPLE, we modify the constructions of the new Liu-type estimators $\hat\beta_{E\&A}$, $\hat\beta_{K\&A}$ to propose a new estimator, called the generalized Liu-type estimator (GLTE). Consider the following objective function:

    $$F(\beta) = (\tilde{Z}-\tilde{X}\beta)^T\hat{W}(\tilde{Z}-\tilde{X}\beta) + \bigl((QKQ^T)^{-1}(QFQ^T)\hat\beta - \beta\bigr)^T QKQ^T\bigl((QKQ^T)^{-1}(QFQ^T)\hat\beta - \beta\bigr), \tag{3.6}$$

    where $K = \mathrm{diag}(k_1,k_2,\ldots,k_p)$ is the matrix of biasing parameters, $F = \mathrm{diag}(f(k_1),f(k_2),\ldots,f(k_p))$ is the matrix of the functions $f(k_j)$ of the biasing parameters, $f(k_j) = ak_j + b$, $k_j>0$, $j=1,\ldots,p$, and $a$ and $b$ are constants (see section 5). Minimizing the function (3.6) with respect to $\beta$, we obtain the GLTE as follows:

    $$\hat\beta_G(K) = (\tilde{X}^T\hat{W}\tilde{X} + QKQ^T)^{-1}(\tilde{X}^T\hat{W}\tilde{Z} + QFQ^T\hat\beta) = (\tilde{X}^T\hat{W}\tilde{X} + QKQ^T)^{-1}(\tilde{X}^T\hat{W}\tilde{X} + QFQ^T)\hat\beta. \tag{3.7}$$

    In addition, the estimate of the nonparametric function $m$ is given by

    $$\hat m_{\hat\beta_G(K)} = S\bigl(Z - X\hat\beta_G(K)\bigr).$$
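    A minimal sketch of evaluating (3.7), assuming $\tilde{X}$, the diagonal of $\hat{W}$ and the LPLE $\hat\beta$ are available from the algorithm of section 2; the only subtlety is that each $k_j$ must be paired with the eigenvalue $\lambda_j$ of the same eigenvector, in whatever order the eigendecomposition returns them.

```python
import numpy as np

def glte(X_tilde, w, beta_hat, k, f):
    # Generalized Liu-type estimator, Eq (3.7).
    # k, f: length-p arrays of the biasing parameters k_j and f(k_j),
    # ordered to match the eigenvalues returned by eigh.
    A = X_tilde.T @ (w[:, None] * X_tilde)   # X~' W X~
    lam, Q = np.linalg.eigh(A)               # A = Q diag(lam) Q'
    K = Q @ np.diag(k) @ Q.T                 # Q K Q'
    F = Q @ np.diag(f) @ Q.T                 # Q F Q'
    return np.linalg.solve(A + K, (A + F) @ beta_hat)
```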

    Remark 1. In our proposed construction (3.7) of the GLTE, a group of biasing parameters is considered, which is the key difference from the constructions of the new Liu-type estimators of Kurnaz and Akay [9] and Ertan and Akay [16]. We introduce the matrix of biasing parameters and the matrix of functions of the biasing parameters in order to ensure that each component of the estimate of the vector $\beta$ can be adjusted differently.

    Remark 2. The GLTE is a more general estimator which includes the other estimators as special cases:

    (i) $\hat\beta_G(K) = \hat\beta$ for $f(k_j) = k_j$, i.e., $a=1$ and $b=0$;

    (ii) $\hat\beta_G(K) = \hat\beta_R(k)$ for $K=kI$ and $f(k_j)=0$, i.e., $a=0$ and $b=0$;

    (iii) $\hat\beta_G(K) = \hat\beta_L(d)$ for $K=I$ and $f(k_j)=d$, i.e., $a=0$ and $b=d$;

    (iv) $\hat\beta_G(K) = \hat\beta_{LT}(d,k)$ for $K=kI$ and $f(k_j)=-d$, i.e., $a=0$ and $b=-d$.

    The Bias, Cov, MSEM and SMSE of $\hat\beta_G(K)$ are given by

    $$\begin{aligned}
    \mathrm{Bias}(\hat\beta_G(K)) &= E\bigl((\tilde{X}^T\hat{W}\tilde{X}+QKQ^T)^{-1}(\tilde{X}^T\hat{W}\tilde{X}+QFQ^T)\hat\beta\bigr) - \beta\\
    &= (\tilde{X}^T\hat{W}\tilde{X}+QKQ^T)^{-1}(\tilde{X}^T\hat{W}\tilde{X}+QFQ^T)\beta - \beta\\
    &= (\tilde{X}^T\hat{W}\tilde{X}+QKQ^T)^{-1}Q(F-K)Q^T\beta,\\
    \mathrm{Cov}(\hat\beta_G(K)) &= (\tilde{X}^T\hat{W}\tilde{X}+QKQ^T)^{-1}(\tilde{X}^T\hat{W}\tilde{X}+QFQ^T)(\tilde{X}^T\hat{W}\tilde{X})^{-1}(\tilde{X}^T\hat{W}\tilde{X}+QFQ^T)^T\bigl((\tilde{X}^T\hat{W}\tilde{X}+QKQ^T)^{-1}\bigr)^T,\\
    \mathrm{MSEM}(\hat\beta_G(K)) &= Q\bigl((\Lambda+K)^{-1}(\Lambda+F)\Lambda^{-1}(\Lambda+F)(\Lambda+K)^{-1} + (\Lambda+K)^{-1}(F-K)\alpha\alpha^T(F-K)(\Lambda+K)^{-1}\bigr)Q^T,\\
    \mathrm{SMSE}(\hat\beta_G(K)) &= \sum_{j=1}^{p}\frac{(\lambda_j+f(k_j))^2}{\lambda_j(\lambda_j+k_j)^2} + \sum_{j=1}^{p}\frac{(f(k_j)-k_j)^2\alpha_j^2}{(\lambda_j+k_j)^2}.
    \end{aligned} \tag{3.8}$$

    In this section, we compare the performance of the proposed GLTE with that of the LPLE, the ridge estimator, the Liu estimator and the Liu-type estimator under the MSEM criterion.

    Let $\hat\beta_1$ and $\hat\beta_2$ be any two estimators of the vector $\beta$. From [26], we know that $\hat\beta_2$ is superior to $\hat\beta_1$ with respect to the MSEM criterion if and only if (iff) $\mathrm{MSEM}(\hat\beta_1)-\mathrm{MSEM}(\hat\beta_2)$ is a positive definite (p.d.) matrix. If $\mathrm{MSEM}(\hat\beta_1)-\mathrm{MSEM}(\hat\beta_2)$ is a non-negative definite matrix, then $\mathrm{SMSE}(\hat\beta_1)-\mathrm{SMSE}(\hat\beta_2)\geq 0$, but the converse is not true. To study the superiority of the GLTE $\hat\beta_G(K)$, we will use the following lemma.

    Lemma 1. (Farebrother [27]) Let $A$ be a p.d. matrix, namely $A>0$, and let $c$ be a nonzero vector. Then $A - cc^T$ is a p.d. matrix iff $c^TA^{-1}c < 1$.

    For clarity, we use the following abbreviations: $\Lambda_1=\Lambda+I$, $\Lambda_d=\Lambda+dI$, $\Lambda_{-d}=\Lambda-dI$, $\Lambda_k=\Lambda+kI$, $\Lambda_K=\Lambda+K$, $\Lambda_F=\Lambda+F$. We give the following theorems to show the superiority of $\hat\beta_G(K)$ over $\hat\beta$, $\hat\beta_R(k)$, $\hat\beta_L(d)$ and $\hat\beta_{LT}(d,k)$.

    Theorem 1. Let $k_j>0$ and $-2\lambda_j-k_j < f(k_j) < k_j$, $j=1,\ldots,p$. Then $\mathrm{MSEM}(\hat\beta)-\mathrm{MSEM}(\hat\beta_G(K))>0$ iff

    $$\bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)^T Q\bigl(\Lambda^{-1} - \Lambda_K^{-1}\Lambda_F\Lambda^{-1}\Lambda_F\Lambda_K^{-1}\bigr)^{-1}Q^T\bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr) < 1,$$

    where $\mathrm{Bias}(\hat\beta_G(K)) = Q\Lambda_K^{-1}(F-K)\alpha$.

    Proof. From Eqs (2.7) and (3.8), we immediately obtain the difference between the MSEMs of $\hat\beta$ and $\hat\beta_G(K)$ as follows:

    $$\begin{aligned}
    \Delta_1 &= \mathrm{MSEM}(\hat\beta)-\mathrm{MSEM}(\hat\beta_G(K))\\
    &= Q\bigl(\Lambda^{-1} - \Lambda_K^{-1}\Lambda_F\Lambda^{-1}\Lambda_F\Lambda_K^{-1}\bigr)Q^T - \bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)\bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)^T\\
    &= Q\,\mathrm{diag}\left\{\frac{1}{\lambda_j} - \frac{(\lambda_j+f(k_j))^2}{\lambda_j(\lambda_j+k_j)^2}\right\}_{j=1}^{p}Q^T - \bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)\bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)^T.
    \end{aligned}$$

    The matrix $\Lambda^{-1} - \Lambda_K^{-1}\Lambda_F\Lambda^{-1}\Lambda_F\Lambda_K^{-1}$ is p.d. if $(2\lambda_j+k_j+f(k_j))(k_j-f(k_j))>0$, $j=1,\ldots,p$. Since $k_j>0$, $j=1,\ldots,p$, this condition becomes $-2\lambda_j-k_j<f(k_j)<k_j$. By Lemma 1, Theorem 1 is proved.

    Theorem 2. Let $k>0$, $k_j>0$ and $-\lambda_j-\frac{\lambda_j(\lambda_j+k_j)}{\lambda_j+k} < f(k_j) < -\lambda_j+\frac{\lambda_j(\lambda_j+k_j)}{\lambda_j+k}$, $j=1,\ldots,p$. Then $\mathrm{MSEM}(\hat\beta_R(k))-\mathrm{MSEM}(\hat\beta_G(K))>0$ iff

    $$\bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)^T Q\bigl(\Lambda_k^{-1}\Lambda\Lambda_k^{-1} - \Lambda_K^{-1}\Lambda_F\Lambda^{-1}\Lambda_F\Lambda_K^{-1}\bigr)^{-1}Q^T\bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr) < 1,$$

    where $\mathrm{Bias}(\hat\beta_G(K)) = Q\Lambda_K^{-1}(F-K)\alpha$.

    Proof. From Eqs (3.1) and (3.8), we immediately obtain the difference between the MSEMs of $\hat\beta_R(k)$ and $\hat\beta_G(K)$ as follows:

    $$\begin{aligned}
    \Delta_2 &= \mathrm{MSEM}(\hat\beta_R(k))-\mathrm{MSEM}(\hat\beta_G(K))\\
    &= Q\bigl(\Lambda_k^{-1}\Lambda\Lambda_k^{-1} - \Lambda_K^{-1}\Lambda_F\Lambda^{-1}\Lambda_F\Lambda_K^{-1}\bigr)Q^T + \bigl(\mathrm{Bias}(\hat\beta_R(k))\bigr)\bigl(\mathrm{Bias}(\hat\beta_R(k))\bigr)^T - \bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)\bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)^T\\
    &= Q\,\mathrm{diag}\left\{\frac{\lambda_j}{(\lambda_j+k)^2} - \frac{(\lambda_j+f(k_j))^2}{\lambda_j(\lambda_j+k_j)^2}\right\}_{j=1}^{p}Q^T + \bigl(\mathrm{Bias}(\hat\beta_R(k))\bigr)\bigl(\mathrm{Bias}(\hat\beta_R(k))\bigr)^T - \bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)\bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)^T.
    \end{aligned}$$

    Since $\bigl(\mathrm{Bias}(\hat\beta_R(k))\bigr)\bigl(\mathrm{Bias}(\hat\beta_R(k))\bigr)^T$ is a non-negative definite matrix, it suffices to prove that $Q\bigl(\Lambda_k^{-1}\Lambda\Lambda_k^{-1} - \Lambda_K^{-1}\Lambda_F\Lambda^{-1}\Lambda_F\Lambda_K^{-1}\bigr)Q^T - \bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)\bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)^T$ is a p.d. matrix. The matrix $\Lambda_k^{-1}\Lambda\Lambda_k^{-1} - \Lambda_K^{-1}\Lambda_F\Lambda^{-1}\Lambda_F\Lambda_K^{-1}$ is p.d. if $\left(\frac{\lambda_j}{\lambda_j+k}+\frac{\lambda_j+f(k_j)}{\lambda_j+k_j}\right)\left(\frac{\lambda_j}{\lambda_j+k}-\frac{\lambda_j+f(k_j)}{\lambda_j+k_j}\right)>0$. Since $k>0$, $k_j>0$, $j=1,\ldots,p$, this condition becomes $-\lambda_j-\frac{\lambda_j(\lambda_j+k_j)}{\lambda_j+k}<f(k_j)<-\lambda_j+\frac{\lambda_j(\lambda_j+k_j)}{\lambda_j+k}$. By Lemma 1, Theorem 2 is proved.

    Theorem 3. Let $0<d<1$, $k_j>0$ and $-\lambda_j-\frac{(\lambda_j+d)(\lambda_j+k_j)}{\lambda_j+1} < f(k_j) < -\lambda_j+\frac{(\lambda_j+d)(\lambda_j+k_j)}{\lambda_j+1}$, $j=1,\ldots,p$. Then $\mathrm{MSEM}(\hat\beta_L(d))-\mathrm{MSEM}(\hat\beta_G(K))>0$ iff

    $$\bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)^T Q\bigl(\Lambda_1^{-1}\Lambda_d\Lambda^{-1}\Lambda_d\Lambda_1^{-1} - \Lambda_K^{-1}\Lambda_F\Lambda^{-1}\Lambda_F\Lambda_K^{-1}\bigr)^{-1}Q^T\bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr) < 1,$$

    where $\mathrm{Bias}(\hat\beta_G(K)) = Q\Lambda_K^{-1}(F-K)\alpha$.

    Proof. From Eqs (3.2) and (3.8), we immediately obtain the difference between the MSEMs of $\hat\beta_L(d)$ and $\hat\beta_G(K)$ as follows:

    $$\begin{aligned}
    \Delta_3 &= \mathrm{MSEM}(\hat\beta_L(d))-\mathrm{MSEM}(\hat\beta_G(K))\\
    &= Q\bigl(\Lambda_1^{-1}\Lambda_d\Lambda^{-1}\Lambda_d\Lambda_1^{-1} - \Lambda_K^{-1}\Lambda_F\Lambda^{-1}\Lambda_F\Lambda_K^{-1}\bigr)Q^T + \bigl(\mathrm{Bias}(\hat\beta_L(d))\bigr)\bigl(\mathrm{Bias}(\hat\beta_L(d))\bigr)^T - \bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)\bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)^T\\
    &= Q\,\mathrm{diag}\left\{\frac{(\lambda_j+d)^2}{\lambda_j(\lambda_j+1)^2} - \frac{(\lambda_j+f(k_j))^2}{\lambda_j(\lambda_j+k_j)^2}\right\}_{j=1}^{p}Q^T + \bigl(\mathrm{Bias}(\hat\beta_L(d))\bigr)\bigl(\mathrm{Bias}(\hat\beta_L(d))\bigr)^T - \bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)\bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)^T.
    \end{aligned}$$

    Since $\bigl(\mathrm{Bias}(\hat\beta_L(d))\bigr)\bigl(\mathrm{Bias}(\hat\beta_L(d))\bigr)^T$ is a non-negative definite matrix, it suffices to prove that $Q\bigl(\Lambda_1^{-1}\Lambda_d\Lambda^{-1}\Lambda_d\Lambda_1^{-1} - \Lambda_K^{-1}\Lambda_F\Lambda^{-1}\Lambda_F\Lambda_K^{-1}\bigr)Q^T - \bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)\bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)^T$ is a p.d. matrix. The matrix $\Lambda_1^{-1}\Lambda_d\Lambda^{-1}\Lambda_d\Lambda_1^{-1} - \Lambda_K^{-1}\Lambda_F\Lambda^{-1}\Lambda_F\Lambda_K^{-1}$ is p.d. if $\left(\frac{\lambda_j+d}{\lambda_j+1}+\frac{\lambda_j+f(k_j)}{\lambda_j+k_j}\right)\left(\frac{\lambda_j+d}{\lambda_j+1}-\frac{\lambda_j+f(k_j)}{\lambda_j+k_j}\right)>0$. Since $0<d<1$, $k_j>0$, $j=1,\ldots,p$, this condition becomes $-\lambda_j-\frac{(\lambda_j+d)(\lambda_j+k_j)}{\lambda_j+1}<f(k_j)<-\lambda_j+\frac{(\lambda_j+d)(\lambda_j+k_j)}{\lambda_j+1}$, $j=1,\ldots,p$. By Lemma 1, Theorem 3 is proved.

    Theorem 4. Let $k>0$, $d\in\mathbb{R}$, $k_j>0$ and $-\lambda_j-\frac{(\lambda_j-d)(\lambda_j+k_j)}{\lambda_j+k} < f(k_j) < -\lambda_j+\frac{(\lambda_j-d)(\lambda_j+k_j)}{\lambda_j+k}$, or $-\lambda_j+\frac{(\lambda_j-d)(\lambda_j+k_j)}{\lambda_j+k} < f(k_j) < -\lambda_j-\frac{(\lambda_j-d)(\lambda_j+k_j)}{\lambda_j+k}$, $j=1,\ldots,p$. Then $\mathrm{MSEM}(\hat\beta_{LT}(d,k))-\mathrm{MSEM}(\hat\beta_G(K))>0$ iff

    $$\bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)^T Q\bigl(\Lambda_k^{-1}\Lambda_{-d}\Lambda^{-1}\Lambda_{-d}\Lambda_k^{-1} - \Lambda_K^{-1}\Lambda_F\Lambda^{-1}\Lambda_F\Lambda_K^{-1}\bigr)^{-1}Q^T\bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr) < 1,$$

    where $\mathrm{Bias}(\hat\beta_G(K)) = Q\Lambda_K^{-1}(F-K)\alpha$.

    Proof. From Eqs (3.4) and (3.8), we immediately obtain the difference between the MSEMs of $\hat\beta_{LT}(d,k)$ and $\hat\beta_G(K)$ as follows:

    $$\begin{aligned}
    \Delta_4 &= \mathrm{MSEM}(\hat\beta_{LT}(d,k))-\mathrm{MSEM}(\hat\beta_G(K))\\
    &= Q\bigl(\Lambda_k^{-1}\Lambda_{-d}\Lambda^{-1}\Lambda_{-d}\Lambda_k^{-1} - \Lambda_K^{-1}\Lambda_F\Lambda^{-1}\Lambda_F\Lambda_K^{-1}\bigr)Q^T + \bigl(\mathrm{Bias}(\hat\beta_{LT}(d,k))\bigr)\bigl(\mathrm{Bias}(\hat\beta_{LT}(d,k))\bigr)^T - \bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)\bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)^T\\
    &= Q\,\mathrm{diag}\left\{\frac{(\lambda_j-d)^2}{\lambda_j(\lambda_j+k)^2} - \frac{(\lambda_j+f(k_j))^2}{\lambda_j(\lambda_j+k_j)^2}\right\}_{j=1}^{p}Q^T + \bigl(\mathrm{Bias}(\hat\beta_{LT}(d,k))\bigr)\bigl(\mathrm{Bias}(\hat\beta_{LT}(d,k))\bigr)^T - \bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)\bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)^T.
    \end{aligned}$$

    Since $\bigl(\mathrm{Bias}(\hat\beta_{LT}(d,k))\bigr)\bigl(\mathrm{Bias}(\hat\beta_{LT}(d,k))\bigr)^T$ is a non-negative definite matrix, it suffices to prove that $Q\bigl(\Lambda_k^{-1}\Lambda_{-d}\Lambda^{-1}\Lambda_{-d}\Lambda_k^{-1} - \Lambda_K^{-1}\Lambda_F\Lambda^{-1}\Lambda_F\Lambda_K^{-1}\bigr)Q^T - \bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)\bigl(\mathrm{Bias}(\hat\beta_G(K))\bigr)^T$ is a p.d. matrix. The matrix $\Lambda_k^{-1}\Lambda_{-d}\Lambda^{-1}\Lambda_{-d}\Lambda_k^{-1} - \Lambda_K^{-1}\Lambda_F\Lambda^{-1}\Lambda_F\Lambda_K^{-1}$ is p.d. if $\left(\frac{\lambda_j-d}{\lambda_j+k}+\frac{\lambda_j+f(k_j)}{\lambda_j+k_j}\right)\left(\frac{\lambda_j-d}{\lambda_j+k}-\frac{\lambda_j+f(k_j)}{\lambda_j+k_j}\right)>0$. Since $k>0$, $d\in\mathbb{R}$, $k_j>0$, $j=1,\ldots,p$, this condition becomes $-\lambda_j-\frac{(\lambda_j-d)(\lambda_j+k_j)}{\lambda_j+k}<f(k_j)<-\lambda_j+\frac{(\lambda_j-d)(\lambda_j+k_j)}{\lambda_j+k}$, or $-\lambda_j+\frac{(\lambda_j-d)(\lambda_j+k_j)}{\lambda_j+k}<f(k_j)<-\lambda_j-\frac{(\lambda_j-d)(\lambda_j+k_j)}{\lambda_j+k}$, $j=1,\ldots,p$ (the first case for $\lambda_j>d$ and the second for $\lambda_j<d$). By Lemma 1, Theorem 4 is proved.

    To make the GLTE more effective, it is important to choose the function $f(k_j)$ of the biasing parameter and the biasing parameters $k_j$ appropriately. Inspired by Kurnaz and Akay [9] and Ertan and Akay [16], we use the minimum SMSE criterion to select the best form of $f(k_j)$ and the optimal values of $k_j$, $j=1,\ldots,p$.

    Note that the SMSE of $\hat\beta_G(K)$ is a nonlinear function of $k_j$, $j=1,\ldots,p$, denoted by

    $$g(k_1,k_2,\ldots,k_p) = \sum_{j=1}^{p}\frac{(\lambda_j+f(k_j))^2}{\lambda_j(\lambda_j+k_j)^2} + \sum_{j=1}^{p}\frac{(f(k_j)-k_j)^2\alpha_j^2}{(\lambda_j+k_j)^2}.$$

    We obtain the partial derivative of $g(k_1,k_2,\ldots,k_p)$ with respect to $k_j$, $j=1,\ldots,p$, as follows:

    $$\frac{\partial g(k_1,k_2,\ldots,k_p)}{\partial k_j} = \frac{2\bigl((\lambda_j+f(k_j)) + \lambda_j\alpha_j^2(f(k_j)-k_j)\bigr)\bigl(f'(k_j)(\lambda_j+k_j) - (\lambda_j+f(k_j))\bigr)}{\lambda_j(\lambda_j+k_j)^3}. \tag{5.1}$$

    Equating Eq (5.1) to 0, we obtain

    $$f(k_j) = \frac{\lambda_j\alpha_j^2}{1+\lambda_j\alpha_j^2}k_j - \frac{\lambda_j}{1+\lambda_j\alpha_j^2}, \tag{5.2}$$

    or

    $$f(k_j) = f'(k_j)k_j + \lambda_j\bigl(f'(k_j)-1\bigr), \tag{5.3}$$

    where $f'(k_j)$ is the first derivative of $f(k_j)$ with respect to $k_j$, $j=1,\ldots,p$.

    Remark 3. Our construction (3.7) yields the general form of $f(k_j)$ by minimizing the SMSE of $\hat\beta_G(K)$, instead of the two special cases Fact 1 and Fact 2 in Kurnaz and Akay [9] and Ertan and Akay [16]. This is because no summation remains after taking the partial derivative; see Eq (5.1).

    In Eq (5.2), the first derivative is $f'(k_j) = \frac{\lambda_j\alpha_j^2}{1+\lambda_j\alpha_j^2}$, and since both $\frac{\lambda_j\alpha_j^2}{1+\lambda_j\alpha_j^2}$ and $-\frac{\lambda_j}{1+\lambda_j\alpha_j^2}$ are constants, $f(k_j)$ is a linear function. In Eq (5.3), when $f'(k_j)=\frac{\lambda_j\alpha_j^2}{1+\lambda_j\alpha_j^2}$, $f(k_j)$ is also a linear function. Thus, Eqs (5.2) and (5.3) can be unified as a linear function $f(k_j) = ak_j+b$, $j=1,\ldots,p$. Substituting $f(k_j)=ak_j+b$ into Eq (5.1) and setting it equal to 0, we obtain the optimal value of $k_j$ as follows:

    $$\hat k_j = \frac{-\lambda_j - b(1+\lambda_j\hat\alpha_j^2)}{a + (a-1)\lambda_j\hat\alpha_j^2}, \quad j=1,\ldots,p, \tag{5.4}$$

    where $a$ and $b$ are constants that need to be specified, and $\hat\alpha_j$ denotes the $j$th element of $Q^T\hat\beta$. From Eq (5.2), we know that $0<a<1$ and $b<0$. From Eq (5.3), we know that $b=\lambda_j(a-1)$. In practice, we may take $a=\tau$ and $b=\lambda_{\min}(\tau-1)$, $\tau\in(0,1)$. However, in Eq (5.4), $\hat k_j$ may be negative. Thus, it is better to use the following estimator:

    $$\hat k_j^G = \max\{0,\hat k_j\}, \quad j=1,\ldots,p, \tag{5.5}$$

    and the estimator of $f(k_j)$ is $\hat f(k_j) = \tau\hat k_j^G + \lambda_{\min}(\tau-1)$. A more detailed discussion of the constant $\tau$ is given in section 6.
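    These empirical choices are straightforward to implement; a sketch follows, taking the eigenvalues `lam` and the transformed coefficients `alpha_hat` ($\hat\alpha = Q^T\hat\beta$) as inputs (the function name is ours).

```python
import numpy as np

def glte_biasing_parameters(lam, alpha_hat, tau):
    # Empirical choices: a = tau, b = lam_min*(tau - 1), Eqs (5.4)-(5.5)
    a = tau
    b = lam.min() * (tau - 1.0)
    k_hat = (-lam - b * (1.0 + lam * alpha_hat ** 2)) / (a + (a - 1.0) * lam * alpha_hat ** 2)
    k_hat = np.maximum(k_hat, 0.0)        # truncate at zero, Eq (5.5)
    f_hat = a * k_hat + b                 # f(k_j) = a*k_j + b
    return k_hat, f_hat
```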

    In the LPLRE method, we use three biasing parameter estimators (namely, $\hat k_{R1}$, $\hat k_{R2}$, $\hat k_{R3}$) recommended by Kibria et al. [12], which have been verified by simulation studies to perform better than other biasing parameter estimators. These estimators are given as follows:

    $$\hat k_{R1} = \max\left(\frac{1}{m_j}\right), \quad \hat k_{R2} = \left(\prod_{j=1}^{p}\frac{1}{q_j}\right)^{1/p}, \quad \hat k_{R3} = \mathrm{median}\left(\frac{1}{q_j}\right),$$

    where $\hat\sigma^2 = \frac{1}{n-p}\sum_{i=1}^{n}(y_i-\hat\pi_i)^2$, $m_j = \sqrt{\hat\sigma^2/\hat\alpha_j^2}$, and $q_j = \frac{\lambda_{\max}}{(n-p)\hat\sigma^2 + \lambda_{\max}\hat\alpha_j^2}$.
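    A sketch of the three estimators, coded from the formulas as quoted above (reading $m_j$ with the square root, as in Kibria et al. [12]; the function name is ours):

```python
import numpy as np

def ridge_k_estimators(lam, alpha_hat, y, pi_hat, p):
    # Biasing parameter estimators k_R1, k_R2, k_R3 of Kibria et al. [12]
    n = len(y)
    sigma2 = np.sum((y - pi_hat) ** 2) / (n - p)
    m = np.sqrt(sigma2 / alpha_hat ** 2)
    q = lam.max() / ((n - p) * sigma2 + lam.max() * alpha_hat ** 2)
    k_R1 = np.max(1.0 / m)
    k_R2 = np.prod(1.0 / q) ** (1.0 / p)   # geometric mean of 1/q_j
    k_R3 = np.median(1.0 / q)
    return k_R1, k_R2, k_R3
```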

    Following Liu [7], the optimal estimator of $d$ is obtained by minimizing Eq (3.3) as follows:

    $$\hat d_{opt} = \sum_{j=1}^{p}\frac{\hat\alpha_j^2-1}{(\lambda_j+1)^2}\Bigg/\sum_{j=1}^{p}\frac{\hat\alpha_j^2+\frac{1}{\lambda_j}}{(\lambda_j+1)^2}.$$

    It is obvious that $\hat d_{opt}<1$, but $\hat d_{opt}$ may be negative. Referring to [17], we use the following estimator:

    $$\hat d_L = \max\{0,\hat d_{opt}\}.$$

    Based on the method in [13], for the biasing parameters $k$ and $d$ in $\hat\beta_{LT}$, we fix $k$ and find the optimal value of $d$ by minimizing Eq (3.5), which gives

    $$\hat d_{LT} = \sum_{j=1}^{p}\frac{1-\hat\alpha_j^2\hat k_{LT}}{(\lambda_j+\hat k_{LT})^2}\Bigg/\sum_{j=1}^{p}\frac{\hat\alpha_j^2+\frac{1}{\lambda_j}}{(\lambda_j+\hat k_{LT})^2},$$

    with $\hat k_{LT} = \frac{p}{\hat\alpha^T\hat\alpha}$.
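    Both $\hat d_L$ and the pair $(\hat k_{LT},\hat d_{LT})$ depend only on $\lambda_j$ and $\hat\alpha_j$; a minimal sketch under the formulas above (names are ours):

```python
import numpy as np

def liu_d(lam, alpha_hat):
    # d_L = max(0, d_opt), with d_opt minimizing the SMSE (3.3)
    num = np.sum((alpha_hat ** 2 - 1.0) / (lam + 1.0) ** 2)
    den = np.sum((alpha_hat ** 2 + 1.0 / lam) / (lam + 1.0) ** 2)
    return max(0.0, num / den)

def liu_type_kd(lam, alpha_hat):
    # k_LT = p / (alpha' alpha); d_LT minimizes the SMSE (3.5) for this fixed k
    p = len(lam)
    k = p / np.sum(alpha_hat ** 2)
    num = np.sum((1.0 - alpha_hat ** 2 * k) / (lam + k) ** 2)
    den = np.sum((alpha_hat ** 2 + 1.0 / lam) / (lam + k) ** 2)
    return k, num / den
```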

    Remark 4. The above-mentioned biasing parameters $k_j$ and the function $f(k_j)$ in the GLTE, as well as the biasing parameter $k$ in the ridge estimator, the biasing parameter $d$ in the Liu estimator and the biasing parameters $k$, $d$ in the Liu-type estimator, can also be determined by the generalized cross validation (GCV) criterion (see [28,29]). Based on GCV, we can simultaneously select the optimal biasing parameters of the biased estimators and the bandwidth of the kernel smoother, and obtain biased estimators with good performance.

    In this section, we present some simulations to compare the performance of the proposed GLTE with that of the other estimators in the LPLRM. In terms of the sample size ($n$), the degree of collinearity ($r$) and the number of explanatory variables in the linear part ($p$), we discuss the performance of the different estimators of the parameters of the linear part, as well as of the different estimators of the nonparametric function in the LPLRM. We generate the explanatory variables with the following formula given by Zeinal [10], Varathan and Wijekoon [15], and Ertan and Akay [16]:

    $$x_{ij} = (1-r^2)^{1/2}\xi_{ij} + r\xi_{ip}, \quad i=1,2,\ldots,n, \; j=1,2,\ldots,p,$$

    where $r^2$ denotes the correlation between any two design variables, specified by $r=0.70, 0.80, 0.85$ and $0.90$, and the $\xi_{ij}$ are independent standard normal pseudo-random numbers. The binary response variable $y_i$ is generated from the $\mathrm{Bernoulli}(\pi_i)$ distribution with

    $$\pi_i = \frac{\exp(x_i^T\beta+m(t_i))}{1+\exp(x_i^T\beta+m(t_i))} = \frac{\exp(\beta_1 x_{i1}+\beta_2 x_{i2}+\cdots+\beta_p x_{ip}+m(t_i))}{1+\exp(\beta_1 x_{i1}+\beta_2 x_{i2}+\cdots+\beta_p x_{ip}+m(t_i))}, \quad i=1,\ldots,n,$$

    where $\beta = (0.1, 0.6, 0.4, 0.7)^T$ or $\beta = (0.2, 0.3, 0.5, 0.4, 0.5, 0.2, 0.4)^T$ so that $\beta^T\beta \approx 1$. The nonparametric function is generated by using the design given in [23] as

    $$m(t_i) = \sin(t_i)\cos(8t_i), \quad t_i \sim U(0,1).$$

    The sample sizes are taken as $n = 200, 300, 400$ and $500$. The bandwidth vector is computed by Scott's rule of thumb [30] for the Gaussian kernel. The simulation is repeated $M=2{,}000$ times with the above setup, and the simulated average scalar mean square error (ASMSE) of an estimator $\hat\beta$ of the linear parametric part $x_i^T\beta$ is given by

    $$\mathrm{ASMSE}(\hat\beta) = \frac{1}{M}\sum_{l=1}^{M}(\hat\beta_l-\beta)^T(\hat\beta_l-\beta),$$

    where the subscript $l$ indicates the replication number and $\hat\beta$ represents the estimator of each method. The simulated mean squared error (mse) of the estimator $\hat m(t)$ of the nonparametric function $m(t)$ is obtained by

    $$\mathrm{mse}(\hat m(t), m(t)) = \frac{1}{nM}\sum_{l=1}^{M}\|\hat m_l(t)-m(t)\|_2^2,$$

    where $\|v\|_2^2 = \sum_{i=1}^{n}v_i^2$ for $v=(v_1,\ldots,v_n)$.
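    The following sketch generates one replication of this design and computes the ASMSE over a stack of estimates; it is coded literally from the displayed formulas (so the $p$th variable acts as the shared collinear component), with our own helper names and seed.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lplrm(n, p, r, beta):
    # One replication of the section 6 design
    xi = rng.standard_normal((n, p))
    X = np.sqrt(1.0 - r ** 2) * xi + r * xi[:, [p - 1]]  # x_ij = (1-r^2)^(1/2) xi_ij + r xi_ip
    t = rng.uniform(0.0, 1.0, n)
    m = np.sin(t) * np.cos(8.0 * t)                      # nonparametric part
    pi = 1.0 / (1.0 + np.exp(-(X @ beta + m)))
    y = rng.binomial(1, pi)
    return X, t, y, m

def asmse(beta_hats, beta):
    # ASMSE over M replications stacked as rows of beta_hats
    diff = np.asarray(beta_hats) - beta
    return np.mean(np.sum(diff ** 2, axis=1))
```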

    First, we evaluate the performance of the LPLE, LPLRE, LPLLE, LPLLTE and GLTE. For different combinations of $p$, $r$ and $n$, the ASMSE of the estimator $\hat\beta$ for each method is shown in Table 1, and the mse of the estimator $\hat m(t)$ for each method is shown in Table 2.

    Table 1.  The simulated ASMSE values of $\hat\beta$, $\hat\beta_{R1}$, $\hat\beta_{R2}$, $\hat\beta_{R3}$, $\hat\beta_L$, $\hat\beta_{LT}$ and $\hat\beta_G$.
                        p=4 & τ=0.03                  | p=7 & τ=0.01
    r                   0.70   0.80   0.85   0.90     | 0.70   0.80   0.85   0.90
    n=200
    $\hat\beta$         0.3174 0.4692 0.6252 0.9403 | 0.6911 1.0891 1.5213 2.3644
    $\hat\beta_{R1}$    0.2207 0.2750 0.3115 0.3464 | 0.4393 0.5442 0.6092 0.6359
    $\hat\beta_{R2}$    0.2074 0.2592 0.2963 0.3387 | 0.4258 0.5357 0.6081 0.6680
    $\hat\beta_{R3}$    0.2086 0.2613 0.3000 0.3458 | 0.4390 0.5598 0.6434 0.7203
    $\hat\beta_L$       0.2522 0.3424 0.4246 0.5711 | 0.5211 0.7344 0.9413 1.3091
    $\hat\beta_{LT}$    0.1901 0.2545 0.3240 0.4606 | 0.3572 0.5177 0.6965 1.0615
    $\hat\beta_G$       0.0978 0.1143 0.1409 0.1940 | 0.2009 0.2863 0.4055 0.6063
    n=300
    $\hat\beta$         0.1979 0.2949 0.3890 0.5911 | 0.4205 0.6654 0.9254 1.4517
    $\hat\beta_{R1}$    0.1578 0.2120 0.2519 0.3091 | 0.3185 0.4331 0.5119 0.5967
    $\hat\beta_{R2}$    0.1519 0.2032 0.2418 0.2988 | 0.3140 0.4257 0.5075 0.6050
    $\hat\beta_{R3}$    0.1526 0.2042 0.2433 0.3012 | 0.3204 0.4385 0.5283 0.6394
    $\hat\beta_L$       0.1707 0.2389 0.2968 0.4066 | 0.3506 0.5070 0.6527 0.9115
    $\hat\beta_{LT}$    0.1303 0.1736 0.2133 0.3040 | 0.2334 0.3278 0.4362 0.6611
    $\hat\beta_G$       0.0838 0.0908 0.1052 0.1420 | 0.1433 0.1910 0.2485 0.3731
    n=400
    $\hat\beta$         0.1370 0.2026 0.2665 0.4102 | 0.2990 0.4752 0.6649 1.0494
    $\hat\beta_{R1}$    0.1160 0.1590 0.1931 0.2533 | 0.2457 0.3487 0.4312 0.5344
    $\hat\beta_{R2}$    0.1134 0.1548 0.1877 0.2470 | 0.2428 0.3438 0.4264 0.5374
    $\hat\beta_{R3}$    0.1140 0.1557 0.1890 0.2489 | 0.2461 0.3515 0.4396 0.5624
    $\hat\beta_L$       0.1228 0.1730 0.2169 0.3055 | 0.2616 0.3875 0.5086 0.7219
    $\hat\beta_{LT}$    0.0963 0.1244 0.1521 0.2152 | 0.1743 0.2429 0.3254 0.4962
    $\hat\beta_G$       0.0750 0.0743 0.0841 0.1110 | 0.1196 0.1485 0.1896 0.2727
    n=500
    $\hat\beta$         0.1137 0.1668 0.2223 0.3326 | 0.2334 0.3684 0.5082 0.7911
    $\hat\beta_{R1}$    0.0997 0.1374 0.1721 0.2255 | 0.2006 0.2893 0.3635 0.4706
    $\hat\beta_{R2}$    0.0981 0.1348 0.1682 0.2199 | 0.1991 0.2863 0.3592 0.4685
    $\hat\beta_{R3}$    0.0986 0.1357 0.1694 0.2213 | 0.2013 0.2913 0.3676 0.4852
    $\hat\beta_L$       0.1043 0.1469 0.1880 0.2597 | 0.2102 0.3131 0.4077 0.5750
    $\hat\beta_{LT}$    0.0821 0.1062 0.1314 0.1827 | 0.1459 0.1975 0.2550 0.3716
    $\hat\beta_G$       0.0710 0.0678 0.0756 0.0952 | 0.1075 0.1261 0.1550 0.2207

    Table 2.  The simulated mse values of $\hat m_{\hat\beta}$, $\hat m_{\hat\beta_{R1}}$, $\hat m_{\hat\beta_{R2}}$, $\hat m_{\hat\beta_{R3}}$, $\hat m_{\hat\beta_L}$, $\hat m_{\hat\beta_{LT}}$ and $\hat m_{\hat\beta_G}$.
                              p=4 & τ=0.03                  | p=7 & τ=0.01
    r                         0.70   0.80   0.85   0.90     | 0.70   0.80   0.85   0.90
    n=200
    $\hat m_{\hat\beta}$        0.3140 0.3163 0.3185 0.3192 | 0.3481 0.3543 0.3587 0.3611
    $\hat m_{\hat\beta_{R1}}$   0.3015 0.3022 0.3032 0.3021 | 0.3298 0.3312 0.3311 0.3283
    $\hat m_{\hat\beta_{R2}}$   0.2991 0.3000 0.3011 0.3004 | 0.3275 0.3287 0.3290 0.3271
    $\hat m_{\hat\beta_{R3}}$   0.2995 0.3004 0.3015 0.3008 | 0.3287 0.3300 0.3305 0.3287
    $\hat m_{\hat\beta_L}$      0.3060 0.3076 0.3092 0.3093 | 0.3360 0.3398 0.3422 0.3428
    $\hat m_{\hat\beta_{LT}}$   0.2881 0.2919 0.2958 0.2992 | 0.3126 0.3196 0.3254 0.3311
    $\hat m_{\hat\beta_G}$      0.2723 0.2846 0.2923 0.2987 | 0.3051 0.3187 0.3231 0.3273
    n=300
    $\hat m_{\hat\beta}$        0.2886 0.2892 0.2890 0.2886 | 0.2950 0.3015 0.3038 0.3062
    $\hat m_{\hat\beta_{R1}}$   0.2827 0.2829 0.2823 0.2812 | 0.2871 0.2917 0.2924 0.2924
    $\hat m_{\hat\beta_{R2}}$   0.2819 0.2821 0.2815 0.2805 | 0.2864 0.2908 0.2915 0.2916
    $\hat m_{\hat\beta_{R3}}$   0.2822 0.2823 0.2817 0.2807 | 0.2869 0.2915 0.2922 0.2924
    $\hat m_{\hat\beta_L}$      0.2848 0.2852 0.2847 0.2841 | 0.2897 0.2951 0.2965 0.2979
    $\hat m_{\hat\beta_{LT}}$   0.2709 0.2726 0.2739 0.2759 | 0.2715 0.2790 0.2826 0.2871
    $\hat m_{\hat\beta_G}$      0.2574 0.2664 0.2709 0.2754 | 0.2648 0.2778 0.2785 0.2809
    n=400
    $\hat m_{\hat\beta}$        0.2667 0.2679 0.2686 0.2694 | 0.2775 0.2815 0.2848 0.2855
    $\hat m_{\hat\beta_{R1}}$   0.2632 0.2641 0.2646 0.2649 | 0.2730 0.2761 0.2786 0.2781
    $\hat m_{\hat\beta_{R2}}$   0.2629 0.2637 0.2642 0.2646 | 0.2727 0.2756 0.2780 0.2776
    $\hat m_{\hat\beta_{R3}}$   0.2631 0.2639 0.2644 0.2647 | 0.2730 0.2760 0.2785 0.2781
    $\hat m_{\hat\beta_L}$      0.2645 0.2655 0.2660 0.2666 | 0.2745 0.2779 0.2807 0.2809
    $\hat m_{\hat\beta_{LT}}$   0.2533 0.2553 0.2571 0.2598 | 0.2600 0.2648 0.2693 0.2721
    $\hat m_{\hat\beta_G}$      0.2409 0.2492 0.2538 0.2587 | 0.2539 0.2638 0.2680 0.2711
    n=500
    $\hat m_{\hat\beta}$        0.2530 0.2557 0.2561 0.2561 | 0.2703 0.2743 0.2762 0.2783
    $\hat m_{\hat\beta_{R1}}$   0.2505 0.2523 0.2533 0.2538 | 0.2674 0.2709 0.2723 0.2736
    $\hat m_{\hat\beta_{R2}}$   0.2502 0.2528 0.2529 0.2529 | 0.2673 0.2707 0.2720 0.2734
    $\hat m_{\hat\beta_{R3}}$   0.2504 0.2529 0.2531 0.2531 | 0.2675 0.2709 0.2723 0.2737
    $\hat m_{\hat\beta_L}$      0.2513 0.2540 0.2542 0.2550 | 0.2684 0.2720 0.2736 0.2753
    $\hat m_{\hat\beta_{LT}}$   0.2419 0.2451 0.2464 0.2489 | 0.2562 0.2608 0.2638 0.2675
    $\hat m_{\hat\beta_G}$      0.2292 0.2385 0.2426 0.2475 | 0.2503 0.2598 0.2622 0.2651


    Table 1 clearly reveals that the sample size ($n$), the degree of collinearity ($r$) and the number of explanatory variables in the linear part ($p$) influence the ASMSE of the estimator $\hat\beta$ for each method. When any two of $p$, $r$, $n$ are fixed, the ASMSE of each estimator decreases as the sample size $n$ increases, increases as the degree of collinearity $r$ increases, and increases as the number of explanatory variables $p$ in the linear part increases. It is observed that the ASMSE of the estimator $\hat\beta_G$ is smaller than that of the other estimators for all combinations of $p$, $r$, $n$, which implies that $\hat\beta_G$ performs better than the other estimators.

    Table 2 indicates that the sample size ($n$) and the number of explanatory variables in the linear part ($p$) influence the mse of the estimator $\hat m(t)$ of the nonparametric function for each method; the pattern of this influence is similar to that in Table 1, although its magnitude is much smaller. Nevertheless, the mse of the nonparametric function estimator of the proposed method is still smaller than that of the other methods, only less markedly so than for the parametric part. The degree of collinearity ($r$) has a slight effect on the mse of $\hat m(t)$ for each method, but the mse under the proposed method is generally at the lowest level. These results show that the proposed method also has superior performance in estimating the nonparametric function, although the advantage is less pronounced than in the parametric part.

    Note that in Tables 1 and 2 the $\tau$ values are close to 0; this choice is not arbitrary, as the following simulation results and a simple theoretical derivation explain.

    In order to study the influence of the value of $\tau$ on the ASMSE of $\hat\beta_G$, the curves of the ASMSE of $\hat\beta_G$ against $\tau$ ($\tau\in(0,1)$) at $p=4$, $n=200$, $r=0.70$ or $0.90$; $p=7$, $n=200$, $r=0.70$ or $0.90$; and $p=7$, $n=300$, $r=0.70$ or $0.90$ are plotted in Figure 1. Figure 1 clearly indicates that the ASMSE of $\hat\beta_G$ for the various combinations stays small as $\tau$ tends to 0, and reaches its maximum as $\tau$ tends to 1, where it is close to the ASMSE of $\hat\beta$ (refer to Table 1). The behavior at the two endpoints can also be derived theoretically. From $\hat f(k_j)=\tau k_j+\lambda_{\min}(\tau-1)$ and Eqs (5.4) and (5.5), it follows that $\hat f(k_j)\to-\lambda_{\min}$ and $k_j\to\frac{\lambda_j-\lambda_{\min}(1+\lambda_j\hat\alpha_j^2)}{\lambda_j\hat\alpha_j^2}$ as $\tau\to 0$, and $\hat f(k_j)\to 0$ and $k_j\to 0$ as $\tau\to 1$; thus the ASMSE of $\hat\beta_G$ approaches its minimum as $\tau\to 0$, and $\hat\beta_G\to\hat\beta$ as $\tau\to 1$. In addition, Figure 1 also shows that the ASMSE of $\hat\beta_G$ varies greatly for large $p$ and large $r$. Since large $p$ and large $r$ are more prone to multicollinearity, this indirectly reflects the significant ability of the proposed GLTE $\hat\beta_G$ to eliminate multicollinearity.

    Figure 1.  ASMSE of the estimator $\hat\beta_G$ versus $\tau$ at some combinations of $p$, $r$ and $n$.

    In this subsection, we directly apply the estimator in [16] to the LPLRM for simulation, obtaining the estimators of the unmodified method (UM) as follows:

    $$\hat\beta_{UM}(k) = (\tilde{X}^T\hat{W}\tilde{X}+kI)^{-1}(\tilde{X}^T\hat{W}\tilde{X}+f(k)I)\hat\beta, \quad k>0, \qquad \hat m_{\hat\beta_{UM}(k)} = S\bigl(Z-X\hat\beta_{UM}(k)\bigr).$$

    We chose $k=\hat k_{R1}$ (case Ⅱ) and the seven $f(k)$ functions (Ⅰ, Ⅱ, Ⅲ, Ⅳ, Ⅴ, Ⅵ, Ⅶ) in [16], and simulated with the settings given at the beginning of section 6. The simulation results are presented in Tables 3 and 4, together with those of the GLTE and LPLE from subsection 6.1.

    Table 3.  The simulated ASMSE values of parameter estimator of LPLE, GLTE and UM.
    n r LPLE UM(Ⅰ) UM(Ⅱ) UM(Ⅲ) UM(Ⅳ) UM(Ⅴ) UM(Ⅵ) UM(Ⅶ) GLTE
    p=4 200 0.70 0.32 0.10 0.20 0.10 0.10 0.11 0.15 0.18 0.10
    200 0.80 0.47 0.11 0.24 0.11 0.11 0.12 0.17 0.22 0.11
    200 0.85 0.63 0.14 0.30 0.14 0.15 0.15 0.21 0.26 0.14
    200 0.90 0.94 0.19 0.36 0.19 0.19 0.20 0.25 0.30 0.19
    300 0.70 0.20 0.09 0.15 0.09 0.08 0.09 0.12 0.14 0.08
    300 0.80 0.29 0.09 0.19 0.09 0.09 0.10 0.14 0.18 0.09
    300 0.85 0.39 0.11 0.23 0.11 0.11 0.12 0.17 0.21 0.11
    300 0.90 0.59 0.14 0.29 0.14 0.15 0.15 0.21 0.26 0.14
    400 0.70 0.14 0.08 0.11 0.08 0.07 0.08 0.09 0.10 0.07
    400 0.80 0.20 0.08 0.14 0.08 0.07 0.08 0.11 0.14 0.07
    400 0.85 0.27 0.08 0.17 0.08 0.08 0.09 0.13 0.16 0.08
    400 0.90 0.41 0.11 0.23 0.11 0.11 0.12 0.17 0.21 0.11
    500 0.70 0.11 0.07 0.09 0.08 0.07 0.07 0.08 0.09 0.07
    500 0.80 0.17 0.07 0.13 0.07 0.07 0.07 0.10 0.12 0.07
    500 0.85 0.22 0.08 0.17 0.09 0.08 0.09 0.13 0.16 0.08
    500 0.90 0.33 0.10 0.21 0.10 0.10 0.11 0.15 0.19 0.10
    p=7 200 0.70 0.69 0.20 0.38 0.20 0.21 0.20 0.25 0.37 0.20
    200 0.80 1.09 0.29 0.49 0.29 0.30 0.30 0.35 0.46 0.29
    200 0.85 1.52 0.41 0.61 0.41 0.42 0.42 0.47 0.60 0.41
    200 0.90 2.36 0.71 0.79 0.61 0.62 0.61 0.65 0.76 0.61
    300 0.70 0.42 0.14 0.27 0.14 0.14 0.14 0.17 0.27 0.14
    300 0.80 0.67 0.19 0.36 0.18 0.19 0.19 0.23 0.36 0.19
    300 0.85 0.93 0.26 0.44 0.25 0.26 0.26 0.31 0.44 0.25
    300 0.90 1.45 0.37 0.55 0.37 0.38 0.37 0.42 0.55 0.37
    400 0.70 0.30 0.12 0.22 0.12 0.12 0.12 0.14 0.21 0.12
    400 0.80 0.48 0.15 0.30 0.15 0.15 0.15 0.19 0.30 0.15
    400 0.85 0.66 0.19 0.36 0.18 0.19 0.19 0.23 0.36 0.19
    400 0.90 1.05 0.28 0.46 0.28 0.28 0.28 0.33 0.46 0.27
    500 0.70 0.23 0.11 0.18 0.11 0.11 0.11 0.12 0.18 0.11
    500 0.80 0.37 0.13 0.26 0.13 0.13 0.13 0.16 0.26 0.13
    500 0.85 0.51 0.15 0.30 0.15 0.15 0.15 0.19 0.30 0.16
    500 0.90 0.79 0.22 0.39 0.22 0.23 0.22 0.27 0.39 0.22
    Note: τ=0.03 when p=4; τ=0.01 when p=7.

    Table 4.  The simulated mse values of nonparametric function estimator of LPLE, GLTE and UM.
    n r LPLE UM(Ⅰ) UM(Ⅱ) UM(Ⅲ) UM(Ⅳ) UM(Ⅴ) UM(Ⅵ) UM(Ⅶ) GLTE
    p=4 200 0.70 0.31 0.27 0.30 0.27 0.27 0.27 0.29 0.30 0.27
    200 0.80 0.32 0.28 0.30 0.28 0.28 0.28 0.29 0.30 0.28
    200 0.85 0.32 0.29 0.31 0.29 0.30 0.30 0.30 0.31 0.29
    200 0.90 0.32 0.30 0.31 0.30 0.30 0.30 0.30 0.31 0.30
    300 0.70 0.29 0.26 0.29 0.26 0.27 0.27 0.28 0.29 0.26
    300 0.80 0.29 0.27 0.29 0.27 0.27 0.28 0.28 0.29 0.27
    300 0.85 0.29 0.27 0.28 0.27 0.27 0.27 0.27 0.28 0.27
    300 0.90 0.29 0.28 0.29 0.28 0.28 0.28 0.29 0.29 0.28
    400 0.70 0.27 0.24 0.26 0.24 0.24 0.24 0.25 0.26 0.24
    400 0.80 0.27 0.25 0.26 0.25 0.25 0.25 0.26 0.26 0.25
    400 0.85 0.27 0.25 0.26 0.25 0.25 0.25 0.26 0.26 0.25
    400 0.90 0.27 0.26 0.26 0.26 0.26 0.26 0.26 0.26 0.26
    500 0.70 0.25 0.23 0.25 0.23 0.23 0.23 0.24 0.25 0.23
    500 0.80 0.26 0.24 0.25 0.24 0.24 0.24 0.25 0.25 0.24
    500 0.85 0.26 0.24 0.25 0.24 0.24 0.24 0.25 0.25 0.24
    500 0.90 0.26 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25
    p=7 200 0.70 0.35 0.31 0.33 0.31 0.31 0.31 0.32 0.33 0.31
    200 0.80 0.35 0.32 0.33 0.32 0.32 0.32 0.32 0.33 0.32
    200 0.85 0.36 0.32 0.32 0.32 0.32 0.32 0.32 0.33 0.32
    200 0.90 0.36 0.33 0.33 0.33 0.33 0.33 0.33 0.33 0.33
    300 0.70 0.29 0.26 0.28 0.26 0.26 0.26 0.27 0.28 0.26
    300 0.80 0.30 0.28 0.30 0.28 0.28 0.28 0.29 0.30 0.28
    300 0.85 0.30 0.28 0.29 0.28 0.28 0.28 0.28 0.29 0.28
    300 0.90 0.31 0.28 0.29 0.28 0.28 0.28 0.29 0.29 0.28
    400 0.70 0.28 0.25 0.27 0.25 0.25 0.25 0.26 0.27 0.25
    400 0.80 0.28 0.26 0.27 0.26 0.26 0.26 0.27 0.27 0.26
    400 0.85 0.28 0.27 0.28 0.27 0.27 0.27 0.27 0.28 0.27
    400 0.90 0.29 0.27 0.28 0.27 0.27 0.27 0.27 0.28 0.27
    500 0.70 0.27 0.25 0.27 0.25 0.25 0.25 0.26 0.27 0.25
    500 0.80 0.27 0.26 0.27 0.26 0.26 0.26 0.26 0.27 0.26
    500 0.85 0.28 0.26 0.27 0.26 0.26 0.26 0.26 0.27 0.26
    500 0.90 0.28 0.27 0.27 0.27 0.27 0.27 0.27 0.27 0.27
    Note: τ=0.03 when p=4; τ=0.01 when p=7.


    From the results shown in Tables 3 and 4 for the estimation of the parameters and the nonparametric function, we can see that the GLTE method performs better than the UM method under all seven function types. Compared with the LPLE, both the GLTE and the UM yield a large improvement in the estimation of the parameters and a slightly smaller improvement in the estimation of the nonparametric function. Also, as the sample size $n$ increases while $r$ and $p$ are kept fixed, the ASMSE and mse values of the LPLE, GLTE and UM methods under the various function types generally decrease. Similarly, when $n$ and $p$ are fixed and $r$ is increased, the ASMSE and mse values commonly increase for all methods.

    To illustrate the multicollinearity problem in the LPLRM, we consider the Indian Liver Patient Dataset from the UCI Machine Learning Repository [31]. The dataset contains 416 records of liver patients and 167 records of non-liver patients from the northeast of Andhra Pradesh, India. We consider the following eight variables: the binomial response variable $y$, taking the value 1 if the patient has liver disease and 0 otherwise, and the explanatory variables age of the patient (Age), total bilirubin (TB), direct bilirubin (DB), serum glutamate-pyruvate transaminase (SGPT), serum glutamate-oxaloacetate transaminase (SGOT), total proteins (TP) and albumin (ALB). Based on the study of Hartatik et al. [32], there are strong correlations between TB and DB and between SGPT and SGOT, but a weak correlation between Age and the other variables. Since we are interested in the curve of the effect of Age on the logarithmic odds of liver disease, we set Age as the nonparametric variable and include the other six explanatory variables in the linear part, establishing the following LPLRM:

    $$\log\frac{p_i}{1-p_i} = f(\mathrm{Age}_i) + \beta_1\mathrm{TB}_i + \beta_2\mathrm{DB}_i + \beta_3\mathrm{SGPT}_i + \beta_4\mathrm{SGOT}_i + \beta_5\mathrm{TP}_i + \beta_6\mathrm{ALB}_i, \tag{7.1}$$

    where $p_i = P(y_i=1)$ is the probability that the $i$th individual has liver disease. After the final iteration of the procedure in section 2, the eigenvalues of the matrix $\tilde{X}^T\hat{W}\tilde{X}$ are $\lambda_1=189342.42$, $\lambda_2=32497.93$, $\lambda_3=276.57$, $\lambda_4=153.35$, $\lambda_5=16.40$ and $\lambda_6=11.18$. Thus the condition number is $\kappa=\sqrt{\lambda_{\max}/\lambda_{\min}}=130.1194$, showing that there is a multicollinearity problem. For this real data set, the parameter estimates and the SMSE values of the parametric estimators of model (7.1) under the various methods are given in Table 5. The SMSE is used instead of the ASMSE because the actual parameter values $\beta$ are unknown in real data modeling. The results in Table 5 reveal that the presence of multicollinearity affects the parameter estimates and that the SMSE of the proposed estimator $\hat\beta_G$ is smaller than that of the other estimators. Moreover, all the theoretical conditions in section 4 can be verified on this dataset when $\tau$ takes a very small value ($\tau=0.01$); see Table 6. The estimates of the nonparametric function under the various methods are plotted in Figure 2. Figure 2 shows that the nonparametric curve $\hat m_{\hat\beta_G}(\mathrm{Age})$ of the proposed method differs most from the nonparametric curve $\hat m_{\hat\beta}(\mathrm{Age})$ of the PLM, although their shapes are basically similar, while the nonparametric curves of the other methods are very close to that of the PLM. Combined with the results in Table 2, this implies that the nonparametric function estimator of the proposed GLTE method outperforms those of the other methods.
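    As a small computational check, the condition number above can be reproduced from the fitted quantities; a minimal sketch, assuming `X_tilde` and the weight diagonal `w` are taken from the final iteration of the algorithm in section 2 (names are ours):

```python
import numpy as np

def condition_number(X_tilde, w):
    # kappa = sqrt(lambda_max / lambda_min) of Xt' W Xt
    lam = np.linalg.eigvalsh(X_tilde.T @ (w[:, None] * X_tilde))
    return np.sqrt(lam.max() / lam.min())
```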

    Table 5.  The estimated parameter values and SMSE values.
                                          $\hat\beta_1$ $\hat\beta_2$ $\hat\beta_3$ $\hat\beta_4$ $\hat\beta_5$ $\hat\beta_6$ SMSE
    $\hat\beta$                                            0.0121 0.4752 0.0120 0.0039 0.4358 -0.6615 0.1606
    $\hat\beta_{R1}$ ($\hat k_{R1}=1.4202$)                0.0251 0.4421 0.0119 0.0040 0.3905 -0.5938 0.1599
    $\hat\beta_{R2}$ ($\hat k_{R2}=0.0118$)                0.0122 0.4749 0.0120 0.0039 0.4354 -0.6608 0.1604
    $\hat\beta_{R3}$ ($\hat k_{R3}=0.0148$)                0.0122 0.4749 0.0120 0.0039 0.4353 -0.6607 0.1604
    $\hat\beta_L$ ($\hat d_L=0$)                           0.0206 0.4528 0.0119 0.0040 0.4028 -0.6122 0.1575
    $\hat\beta_{LT}$ ($\hat k_{LT}=7.0296$, $\hat d_{LT}=3.7394$)  0.0354 0.4182 0.0118 0.0041 0.3610 -0.5497 0.1302
    $\hat\beta_G$ ($\tau=0.01$)                            0.1305 0.2079 0.0087 0.0055 0.0496 -0.1113 0.0284

    Table 6.  Verification of the superiority conditions for the proposed GLTE in Theorems 1–4.
    Theorem 1: $k_j=\hat k_j^G$, $j=1,\ldots,6$ | Theorem 2: $k_j=\hat k_j^G$ ($j=1,\ldots,6$), $k=\hat k_{R1}=1.4202$
    j | $-2\lambda_j-k_j$ | $f(k_j)$ | $k_j$ | $-\lambda_j-\frac{\lambda_j(\lambda_j+k_j)}{\lambda_j+k}$ | $f(k_j)$ | $-\lambda_j+\frac{\lambda_j(\lambda_j+k_j)}{\lambda_j+k}$
    1 | -387029.6000 | 72.3767 | 8344.8020 | -387028.2000 | 72.3767 | 8343.3190
    2 | -85304.9900 | 192.0201 | 20309.1400 | -85302.6800 | 192.0201 | 20306.8300
    3 | -576.5383 | -10.8374 | 23.3910 | -575.0059 | -10.8374 | 21.8586
    4 | -10945.8700 | 95.3205 | 10639.1800 | -10846.8400 | 95.3205 | 10540.1400
    5 | -32.7933 | -11.0713 | 0.0000 | -31.4863 | -11.0713 | -1.3070
    6 | -22.3663 | -11.0713 | 0.0000 | -21.1061 | -11.0713 | -1.2601
    Theorem 3: $k_j=\hat k_j^G$ ($j=1,\ldots,6$), $d=\hat d_L=0$ | Theorem 2: $k_j=\hat k_j^G$ ($j=1,\ldots,6$), $k=\hat k_{R2}=0.0118$
    j | $-\lambda_j-\frac{(\lambda_j+d)(\lambda_j+k_j)}{\lambda_j+1}$ | $f(k_j)$ | $-\lambda_j+\frac{(\lambda_j+d)(\lambda_j+k_j)}{\lambda_j+1}$ | $-\lambda_j-\frac{\lambda_j(\lambda_j+k_j)}{\lambda_j+k}$ | $f(k_j)$ | $-\lambda_j+\frac{\lambda_j(\lambda_j+k_j)}{\lambda_j+k}$
    1 | -387028.6000 | 72.3767 | 8343.7580 | -387029.6000 | 72.3767 | 8344.7890
    2 | -85303.3600 | 192.0201 | 20307.5100 | -85304.9700 | 192.0201 | 20309.1200
    3 | -575.4576 | -10.8374 | 22.3103 | -576.5255 | -10.8374 | 23.3782
    4 | -10875.9500 | 95.3205 | 10569.2500 | -10945.0400 | 95.3205 | 10638.3500
    5 | -31.8508 | -11.0713 | -0.9425 | -32.7815 | -11.0713 | -0.0118
    6 | -21.4484 | -11.0713 | -0.9179 | -22.3545 | -11.0713 | -0.0118
    Theorem 4: $k_j=\hat k_j^G$, $k=\hat k_{LT}=7.0296$, $d=\hat d_{LT}=3.7394$ | Theorem 2: $k_j=\hat k_j^G$ ($j=1,\ldots,6$), $k=\hat k_{R3}=0.0148$
    j | $-\lambda_j-\frac{(\lambda_j-d)(\lambda_j+k_j)}{\lambda_j+k}$ | $f(k_j)$ | $-\lambda_j+\frac{(\lambda_j-d)(\lambda_j+k_j)}{\lambda_j+k}$ | $-\lambda_j-\frac{\lambda_j(\lambda_j+k_j)}{\lambda_j+k}$ | $f(k_j)$ | $-\lambda_j+\frac{\lambda_j(\lambda_j+k_j)}{\lambda_j+k}$
    1 | -387026.2000 | 72.3767 | 192.0201 | -387029.6000 | 72.3767 | 8344.7860
    2 | -85299.6400 | 192.0201 | 20303.7900 | -85304.9600 | 192.0201 | 20309.1100
    3 | -573.0583 | -10.8374 | 19.9110 | -576.5222 | -10.8374 | 23.3749
    4 | -10724.4600 | 95.3205 | 10417.7700 | -10944.8300 | 95.3205 | 10638.1300
    5 | -30.4904 | -11.0713 | -2.3029 | -32.7785 | -11.0713 | -0.0148
    6 | -20.3460 | -11.0713 | -2.0203 | -22.3515 | -11.0713 | -0.0148

    Figure 2.  The estimates of the nonparametric part of model (7.1).

    In this paper, we propose a GLTE to combat multicollinearity in the linear part of a LPLRM. The GLTE is a more general estimator which includes the other estimators as special cases. Under certain theoretical conditions, the performance of the GLTE is superior to that of the LPLE, the ridge estimator, the Liu estimator and the Liu-type estimator. The optimal choices of the biasing parameters and of the function of the biasing parameter are derived, and empirical choices are suggested. Monte Carlo simulations show that the finite sample performance of the proposed GLTE is better than that of the other estimators. The superior performance of the GLTE is obtained when the empirical parameter $\tau$ takes a very small value ($\tau\to 0$ and $\tau\neq 0$). Finally, the estimators are applied to the Indian Liver Patient dataset, and the obtained results are consistent with the simulation study.

    This research is supported by the National Natural Science Foundation of China under grant No. 61973096. The financial aid is greatly appreciated.

    The authors declare there is no conflict of interest.



    [1] T. A. Severini, J. G. Staniswalis, Quasi-likelihood estimation in semiparametric models, J. Am. Stat. Assoc., 89 (1994), 501–511. https://doi.org/10.1080/01621459.1994.10476774 doi: 10.1080/01621459.1994.10476774
    [2] S. Hunsberger, Semiparametric regression in likelihood-based models, J. Am. Stat. Assoc., 89 (1994), 1354–1365. https://doi.org/10.1080/01621459.1994.10476874 doi: 10.1080/01621459.1994.10476874
    [3] H. Liang, Y. S. Qin, X. Y. Zhang, D. Ruppert, Empirical likelihood-based inferences for generalized partially linear models, Scand. J. Stat., 36 (2009), 433–443. https://doi.org/10.1111/j.1467-9469.2008.00632.x doi: 10.1111/j.1467-9469.2008.00632.x
    [4] G. Boente, D. Rodriguez, Robust inference in generalized partially linear models, Comput. Stat. Data An., 54 (2010), 2942–2966. https://doi.org/10.1016/j.csda.2010.05.025 doi: 10.1016/j.csda.2010.05.025
    [5] J. Rahman, S. H. Luo, Y. W. Fan, X. H. Liu, Semiparametric efficient inferences for generalised partially linear models, J. Nonparametr. Stat., 32 (2020), 704–724. https://doi.org/10.1080/10485252.2020.1790557 doi: 10.1080/10485252.2020.1790557
    [6] A. E. Hoerl, R. W. Kennard, Ridge regression: biased estimation for nonorthogonal problems, Technometrics, 12 (1970), 55–67. https://doi.org/10.1080/00401706.1970.10488634 doi: 10.1080/00401706.1970.10488634
    [7] K. J. Liu, A new class of blased estimate in linear regression, Commun. Stat. Theor. M., 22 (1993), 393–402. https://doi.org/10.1080/03610929308831027 doi: 10.1080/03610929308831027
    [8] K. J. Liu, Using Liu-type estimator to combat collinearity, Commun. Stat. Theor. M., 32 (2003), 1009–1020. https://doi.org/10.1081/STA-120019959 doi: 10.1081/STA-120019959
    [9] F. S. Kurnaz, K. U. Akay, A new Liu-type estimator, Stat. Pap., 56 (2015), 495–517. https://doi.org/10.1007/s00362-014-0594-6 doi: 10.1007/s00362-014-0594-6
    [10] A. Zeinal, The extended two-type parameter estimator in linear regression model, Commun. Stat. Theor. M., 1 (2021), 1. https://doi.org/10.1080/03610926.2021.1916528 doi: 10.1080/03610926.2021.1916528
    [11] M. R. Özkale, S. Kaçiranlar, The restricted and unrestricted two-parameter estimators, Commun. Stat. Theor. M., 36 (2007), 2707–2725. https://doi.org/10.1080/03610920701386877 doi: 10.1080/03610920701386877
    [12] B. M. G. Kibria, K. Månsson, G. Shukur, Performance of some logistic ridge regression estimators, Comput. Econ., 40 (2012), 401–414. https://doi.org/10.1007/s10614-011-9275-x doi: 10.1007/s10614-011-9275-x
    [13] D. Inan, B. E. Erdogan, Liu-type logistic estimator, Commun. Stat. Simul. C., 42 (2013), 1578–1586. https://doi.org/10.1080/03610918.2012.667480 doi: 10.1080/03610918.2012.667480
    [14] Y. Asar, A. Genç, Two-parameter ridge estimator in the binary logistic regression, Commun. Stat. Simul. C., 46 (2017), 7088–7099. https://doi.org/10.1080/03610918.2016.1224348 doi: 10.1080/03610918.2016.1224348
    [15] N. Varathan, P. Wijekoon, Modified almost unbiased Liu estimator in logistic regression, Commun. Stat. Simul. C., 50 (2021), 1009–1020. https://doi.org/10.1080/03610918.2019.1626888 doi: 10.1080/03610918.2019.1626888
    [16] E. Ertan, K. U. Akay, A new Liu-type estimator in binary logistic regression models, Commun. Stat. Theor. M., 51 (2022), 4370–4394. https://doi.org/10.1080/03610926.2020.1813777 doi: 10.1080/03610926.2020.1813777
    [17] N. H. Jadhav, On linearized ridge logistic estimator in the presence of multicollinearity, Comput. Stat., 35 (2020), 667–687. https://doi.org/10.1007/s00180-019-00935-6 doi: 10.1007/s00180-019-00935-6
    [18] N. K. Rashad, Z. Y. Algamal, A new ridge estimator for the poisson regression model, Iran. J. Sci. Technol. Tran. Sci., 43 (2019), 2921–2928. https://doi.org/10.1007/s40995-019-00769-3 doi: 10.1007/s40995-019-00769-3
    [19] A. F. Lukman, K. Ayinde, B. M. G. Kibria, E. T. Adewuyi, Modified ridge-type estimator for the gamma regression model, Commun. Stat. Simul. C., 51 (2022), 5009–5023. https://doi.org/10.1080/03610918.2020.1752720 doi: 10.1080/03610918.2020.1752720
    [20] M. Roozbeh, M. Arashi, Feasible ridge estimator in partially linear models, J. Multivariate Anal., 116 (2013), 35–44. https://doi.org/10.1016/j.jmva.2012.11.006 doi: 10.1016/j.jmva.2012.11.006
    [21] J. B. Wu, Performance of the difference-based almost unbiased Liu estimator in partial linear model, J. Stat. Comput. Sim., 86 (2016), 2874–2887. https://doi.org/10.1080/00949655.2015.1136628 doi: 10.1080/00949655.2015.1136628
    [22] H. Emami, A. Aghamohammadi, Elliptical difference based ridge and Liu type estimators in partial linear measurement error models, Commun. Stat. Theor. M., 50 (2021), 4913–4933. https://doi.org/10.1080/03610926.2018.1472793 doi: 10.1080/03610926.2018.1472793
    [23] F. Akdeniz, M. Roozbeh, Generalized difference-based weighted mixed almost unbiased ridge estimator in partially linear models, Stat. Pap., 60 (2020), 1717–1739. https://doi.org/10.1007/s00362-017-0893-9 doi: 10.1007/s00362-017-0893-9
    [24] J. Wu, B. M. G. Kibria, A generalized difference-based mixed two-parameter estimator in partially linear models, Commun. Stat. Theor. M., 2022. https://doi.org/10.1080/03610926.2021.2024234 doi: 10.1080/03610926.2021.2024234
    [25] P. McCullagh, J. A. Nelder, Generalized linear models, 2nd edition, New York: Chapman & Hall, 1989.
    [26] C. M. Theobald, Generalizations of mean square error applied to ridge regression, J. R. Stat. Soc. B., 36 (1974), 103–106. https://doi.org/10.1111/j.2517-6161.1974.tb00990.x doi: 10.1111/j.2517-6161.1974.tb00990.x
    [27] R. W. Farebrother, Further results on the mean square error of ridge regression, J. R. Stat. Soc. B., 38 (1976), 248–250. https://doi.org/10.1111/j.2517-6161.1976.tb01588.x doi: 10.1111/j.2517-6161.1976.tb01588.x
    [28] M. Amini, M. Roozbeh, Optimal partial ridge estimation in restricted semiparametric regression models, J. Multivariate Anal., 136 (2015), 26–40. https://doi.org/10.1016/j.jmva.2015.01.005 doi: 10.1016/j.jmva.2015.01.005
    [29] M. Roozbeh, Optimal QR-based estimation in partially linear regression models with correlated errors using GCV criterion, Comput. Stat. Data An., 117 (2018), 45–61. https://doi.org/10.1016/j.csda.2017.08.002 doi: 10.1016/j.csda.2017.08.002
    [30] D. W. Scott, Multivariate density estimation: theory, practice, and visualization, Hoboken: Wiley, 1992.
    [31] ILPD (Indian Liver Patient Dataset) Data Set, Machine learning repository, 2012. Available from: http://archive.ics.uci.edu/ml
    [32] H. Hartatik, M. B. Tamam, A. Setyanto, Prediction for diagnosing liver disease in patients using KNN and Naïve Bayes algorithms, ICORIS, 2020, 20288260. https://doi.org/10.1109/ICORIS50180.2020.9320797 doi: 10.1109/ICORIS50180.2020.9320797
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)