Research article

A study of the equivalence of inference results in the contexts of true and misspecified multivariate general linear models

  • Received: 03 May 2023 Revised: 11 June 2023 Accepted: 15 June 2023 Published: 30 June 2023
  • MSC : 62F12, 62F30, 62J10

  • In practical applications of regression models, a true model may be misspecified in some other form for unforeseeable reasons, so that the estimation and statistical inference results obtained under the true and misspecified regression models are not necessarily the same. It is therefore necessary to compare these results and to establish links between them in order to reasonably interpret and utilize misspecified regression models. In this paper, we propose and investigate several comparison and equivalence problems concerning estimation and prediction under true and misspecified multivariate general linear models. We first derive and present the best linear unbiased predictors (BLUPs) and the best linear unbiased estimators (BLUEs) of unknown parametric matrices under a true multivariate general linear model and its misspecified form. We then derive a variety of necessary and sufficient conditions for the BLUPs/BLUEs under the two competing models to be equal, using a series of highly selective formulas and facts associated with ranks, ranges and generalized inverses of matrices, together with block matrix operations.

    Citation: Ruixia Yuan, Bo Jiang, Yongge Tian. A study of the equivalence of inference results in the contexts of true and misspecified multivariate general linear models[J]. AIMS Mathematics, 2023, 8(9): 21001-21021. doi: 10.3934/math.20231069




    Throughout, the symbol Rm×n stands for the collection of all m×n real matrices; M′, r(M) and R(M) stand for the transpose, the rank and the range (column space) of a matrix M ∈ Rm×n, respectively; Im denotes the identity matrix of order m. Two symmetric matrices M and N of the same size are said to satisfy the inequality M ⪰ N in the Löwner partial ordering if M − N is nonnegative definite. The Kronecker product of any two matrices M and N is defined to be M ⊗ N = (mijN). The vectorization operator of a matrix M = [m1, …, mn] is defined to be vec(M) = [m1′, …, mn′]′. A well-known property of the vec operator applied to a triple matrix product is vec(MXN) = (N′ ⊗ M)vec(X). The Moore–Penrose generalized inverse of M ∈ Rm×n, denoted by M+, is defined as the unique solution G of the four matrix equations MGM = M, GMG = G, (MG)′ = MG and (GM)′ = GM. We also denote by PM = MM+, M⊥ = EM = Im − MM+ and FM = In − M+M the three orthogonal projectors induced from M, which help to denote briefly the calculations related to generalized inverses of matrices. Further information about the orthogonal projectors PM, EM and FM and their applications in linear statistical models can be found, e.g., in [11,21,22].
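The vec identity and the projector definitions recalled above can be illustrated numerically. The following sketch (ours, not part of the original paper) uses numpy on random matrices to check vec(MXN) = (N′ ⊗ M)vec(X) and the defining properties of PM, EM and FM:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 3))
X = rng.standard_normal((3, 2))
N = rng.standard_normal((2, 5))

# Column-major vectorization, matching vec(M) = [m_1', ..., m_n']'
vec = lambda A: A.reshape(-1, order="F")

# vec(MXN) = (N' kron M) vec(X)
lhs = vec(M @ X @ N)
rhs = np.kron(N.T, M) @ vec(X)
assert np.allclose(lhs, rhs)

# Moore-Penrose inverse and the induced orthogonal projectors
Mp = np.linalg.pinv(M)
P_M = M @ Mp                 # projector onto R(M)
E_M = np.eye(4) - M @ Mp     # M^perp: projector onto the orthogonal complement of R(M)
F_M = np.eye(3) - Mp @ M     # projector onto the null space of M
assert np.allclose(P_M, P_M.T) and np.allclose(P_M @ P_M, P_M)
assert np.allclose(E_M @ M, 0) and np.allclose(M @ F_M, 0)
```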

    In this paper, we reconsider a multivariate general linear model (for short, MGLM):

    M:  Y = XΘ + Ω,  E(Ω) = 0,  Cov(vec(Ω)) = Cov{vec(Ω), vec(Ω)} = Σ2 ⊗ Σ1, (1.1)

    where it is assumed that Y ∈ Rn×m is an observable random matrix (a longitudinal data set), X = (xij) ∈ Rn×p is a known model matrix of arbitrary rank (0 ≤ r(X) ≤ min{n, p}), Θ = (θij) ∈ Rp×m is a matrix of fixed but unknown parameters, Ω ∈ Rn×m is a matrix of randomly distributed error terms, Σ1 = (σ1ij) ∈ Rn×n and Σ2 = (σ2ij) ∈ Rm×m are two known nonnegative definite matrices of arbitrary ranks, and Σ2 ⊗ Σ1 means that Ω has a Kronecker-product-structured covariance matrix.

    We now give some general remarks on M in (1.1) and propose a research topic in the context of the model. An MGLM as in (1.1) is a rather direct extension of the widely used univariate general linear model, in which the regression of one response variable on a given set of regressors is extended to several response variables on the same regressors. This model is a typical representation of various multivariate regression frameworks and has been a core subject of study in the theory of multivariate analysis and its applications. In statistical applications of regression models, we may meet situations where a true regression model is misspecified in some other form for various unforeseeable reasons. In such a case, the estimation and statistical inference results under the true and misspecified models are not necessarily the same, so we face the task of clearly and reasonably explaining and comparing the results. To illustrate this problem, we assume that the model matrix X in (1.1) is misspecified as X0 ∈ Rn×q and that the covariance matrix Σ2 ⊗ Σ1 in (1.1) is misspecified as V2 ⊗ V1. In this case, (1.1) appears in the following misspecified form:

    N:  Y = X0Θ0 + Ω0,  E(Ω0) = 0,  Cov(vec(Ω0)) = V2 ⊗ V1, (1.2)

    where it is assumed that X0 ∈ Rn×q is a known model matrix of arbitrary rank, Θ0 ∈ Rq×m is a matrix of fixed but unknown parameters, Ω0 ∈ Rn×m is a matrix of randomly distributed error terms, and V1 ∈ Rn×n and V2 ∈ Rm×m are two known nonnegative definite matrices of arbitrary ranks. Because X0, Θ0 and V2 ⊗ V1 in (1.2) can take any pre-assigned forms, the general form in (1.2) includes almost all misspecified versions of (1.1) as special cases, such as X0Θ0 = XΘ + WΓ, V2 ⊗ V1 = σ²Imn, etc.

    Before proposing and discussing a number of concrete comparison problems regarding inference results and facts in the contexts of (1.1) and (1.2), we review some relevant methods and techniques that can be conveniently used in the analysis of multivariate general linear models. Recall that the Kronecker products and vec operations of matrices are popular and useful tools in dealing with matrix operations associated with MGLMs. Referring to these operations, we can alternatively represent the two models in (1.1) and (1.2) as the following ordinary linear models:

    ˆM: vec(Y) = (Im ⊗ X)vec(Θ) + vec(Ω),  E(vec(Ω)) = 0,  Cov(vec(Ω)) = Σ2 ⊗ Σ1, (1.3)
    ˆN: vec(Y) = (Im ⊗ X0)vec(Θ0) + vec(Ω0),  E(vec(Ω0)) = 0,  Cov(vec(Ω0)) = V2 ⊗ V1. (1.4)

    Given (1.1), a primary task is to estimate or predict certain functions of the unknown parameter matrices Θ and Ω in (1.1). To do so, we construct a set of parametric functions involving both Θ and Ω as follows:

    R = KΘ + JΩ,  vec(R) = (Im ⊗ K)vec(Θ) + (Im ⊗ J)vec(Ω), (1.5)

    where it is assumed that K and J are k×p and k×n matrices, respectively. In this case,

    E(R) = KΘ,  Cov(vec(R)) = (Im ⊗ J)(Σ2 ⊗ Σ1)(Im ⊗ J)′, (1.6)
    Cov{vec(R), vec(Y)} = Cov{(Im ⊗ J)vec(Ω), vec(Ω)} = (Im ⊗ J)(Σ2 ⊗ Σ1). (1.7)

    When K = X and J = In, (1.5) becomes R = XΘ + Ω = Y, the observed response matrix. Hence, (1.5) includes all the matrix operations in (1.1) as special cases, and the construction of R can be used to treat the estimation of Θ and the prediction of Ω simultaneously. Under the misspecified assumptions in (1.2), a general matrix of parametric functions is given by

    R0 = K0Θ0 + J0Ω0,  vec(R0) = (Im ⊗ K0)vec(Θ0) + (Im ⊗ J0)vec(Ω0), (1.8)

    where it is assumed that K0 and J0 are k×q and k×n matrices, respectively.

    Notice that the assumptions in (1.1) and (1.2) differ in appearance. Hence, the statistical inference results on R derived from (1.1) and those on R0 derived from (1.2) are not necessarily the same; indeed, findings under (1.2) are in general incorrect conclusions. Even so, there is a possibility that certain computational results under (1.1) and (1.2) coincide. This possibility prompts statisticians to consider comparison and relevance problems for inference results under the two models, and especially to establish relationships between the predictions/estimations of unknown parameters under them. A classic investigation of relationships between true and misspecified linear models was given in [17], and many investigations of comparison problems for predictions/estimations of unknown parameters under true models and their misspecified forms can be found in the literature; see, e.g., [2,3,4,5,8,9,12,14,15,16,17,18,23,26]. Recently, [30] discussed several kinds of relationships between true models and their misspecified forms under a general linear model, [7] considered the equivalence of predictions/estimations under an MGLM with augmentation, and [31] considered simultaneous prediction problems under an MGLM with future observations.

    In this paper, we focus on problems pertaining to comparisons of the best linear unbiased predictors (for short, BLUPs) of R derived from (1.1) with those of R0 derived from (1.2). BLUPs are among the most important parametric methods of predicting unknown parameters and are a core concept and conventional topic in the regression analysis of linear statistical models; many general and special contributions related to BLUPs under linear statistical models have been given in the literature. This paper is mainly concerned with the connection analysis of the BLUPs of R under (1.1) and of R0 under (1.2).

    The rest of this paper is organized as follows. In Section 2, we introduce notation and a collection of matrix analysis tools that we shall use to characterize matrix equalities involving generalized inverses, and we give the definitions of predictability and of the BLUPs of a parametric matrix under (1.1). In Section 3, we present some basic estimation and inference theory regarding an MGLM, including analytical expressions of the BLUPs and their mathematical and statistical properties in the contexts of (1.1) and (1.2). In Section 4, we derive several groups of identifying conditions for the BLUPs to be equal under the true and misspecified MGLMs using a series of precise and analytical tools in matrix theory. Some conclusions and remarks are given in Section 5. The proofs of the main results are given in the Appendix.

    For the purpose of establishing and describing equalities for different predictions/estimations in the context of linear statistical models, we need a selection of commonly used matrix rank formulas and equivalent facts about matrix equalities, collected in the following lemmas; these will underpin the establishment and simplification of the various complicated matrix expressions and matrix equalities that appear in the statistical inference of MGLMs.

    Lemma 2.1. Let S and T be two sets of matrices of the same size. Then,

    S ∩ T ≠ ∅ ⟺ min_{S∈S, T∈T} r(S − T) = 0, (2.1)
    S ⊆ T ⟺ max_{S∈S} min_{T∈T} r(S − T) = 0. (2.2)

    Lemma 2.2. [13] Let A ∈ Rm×n, B ∈ Rm×k and C ∈ Rl×n. Then, the following rank equalities hold (here [A; C] denotes the column-wise stacked matrix [A′, C′]′):

    r[A, B] = r(A) + r(EAB) = r(B) + r(EBA), (2.3)
    r[A; C] = r(A) + r(CFA) = r(C) + r(AFC), (2.4)
    r[AA′, B; B′, 0] = r[A, B] + r(B). (2.5)

    In particular, the following results hold.

    (a) r[A, B] = r(A) ⟺ R(B) ⊆ R(A) ⟺ AA+B = B ⟺ EAB = 0.

    (b) r[A; C] = r(A) ⟺ R(C′) ⊆ R(A′) ⟺ CA+A = C ⟺ CFA = 0.

    (c) r[A, B] = r(A) + r(B) ⟺ R(A) ∩ R(B) = {0} ⟺ R((EAB)′) = R(B′) ⟺ R((EBA)′) = R(A′).

    (d) r[A; C] = r(A) + r(C) ⟺ R(A′) ∩ R(C′) = {0} ⟺ R(CFA) = R(C) ⟺ R(AFC) = R(A).
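As an informal numerical illustration (ours, not from [13]), the rank equalities (2.3)–(2.5) can be checked with numpy on random matrices, with `matrix_rank` playing the role of r(·):

```python
import numpy as np

r = np.linalg.matrix_rank
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # 5x4 matrix of rank 3
B = rng.standard_normal((5, 2))
C = rng.standard_normal((3, 4))

E_A = np.eye(5) - A @ np.linalg.pinv(A)    # projector onto R(A)^perp
F_A = np.eye(4) - np.linalg.pinv(A) @ A    # projector onto the null space of A
E_B = np.eye(5) - B @ np.linalg.pinv(B)

# (2.3): r[A, B] = r(A) + r(E_A B) = r(B) + r(E_B A)
assert r(np.hstack([A, B])) == r(A) + r(E_A @ B) == r(B) + r(E_B @ A)
# (2.4): r[A; C] = r(A) + r(C F_A)
assert r(np.vstack([A, C])) == r(A) + r(C @ F_A)
# (2.5): r([AA', B; B', 0]) = r[A, B] + r(B)
blk = np.block([[A @ A.T, B], [B.T, np.zeros((2, 2))]])
assert r(blk) == r(np.hstack([A, B])) + r(B)
```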

    Lemma 2.3. [25] Let X ∈ Rn×p, X0 ∈ Rn×q, and let Σ1, V1 ∈ Rn×n be two nonnegative definite matrices. Then,

    r[X, X0, Σ1; 0, 0, X′] = r[X, X0, Σ1] + r(X),  r[X, X0, V1; 0, 0, X0′] = r[X, X0, V1] + r(X0), (2.6)
    r[X, X0, Σ1, V1; 0, 0, X′, 0; 0, 0, 0, X0′] = r[X, X0, Σ1, V1] + r(X) + r(X0). (2.7)

    Lemma 2.4. [24,29] Let A ∈ Rm×n, B ∈ Rm×k and C ∈ Rl×n. Then, the maximum and minimum ranks of A − BW and A − BWC with respect to a variable matrix W are given by the following analytical formulas:

    max_{W∈Rk×n} r(A − BW) = min{r[A, B], n}, (2.8)
    min_{W∈Rk×n} r(A − BW) = r[A, B] − r(B), (2.9)
    max_{W∈Rk×l} r(A − BWC) = min{r[A, B], r[A; C]}. (2.10)
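The extremal ranks in (2.8) and (2.9) can be probed numerically; the following small sketch (ours) uses the choice W* = B+A, which gives A − BB+A = EBA and therefore attains the minimum r[A, B] − r(B) in (2.9):

```python
import numpy as np

r = np.linalg.matrix_rank
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 5))
B = rng.standard_normal((4, 2))
rAB, rB = r(np.hstack([A, B])), r(B)

# (2.9): the minimum is attained at W* = B^+ A, since A - B B^+ A = E_B A
W_star = np.linalg.pinv(B) @ A
assert r(A - B @ W_star) == rAB - rB

# Any other W gives a rank between the lower bound (2.9) and the upper bound (2.8)
for _ in range(20):
    W = rng.standard_normal((2, 5))
    assert rAB - rB <= r(A - B @ W) <= min(rAB, 5)
```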

    Below, we give an existing result about the general solution of a basic linear matrix equation.

    Lemma 2.5. [20] The linear matrix equation AX = B is consistent if and only if r[A, B] = r(A), or equivalently, AA+B = B. In this case, the general solution of the equation can be written in the parametric form X = A+B + (I − A+A)U, where U is an arbitrary matrix.
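A hedged numerical sketch of Lemma 2.5 (ours, not from [20]): we fabricate a consistent equation AX = B with a rank-deficient A and verify the consistency test AA+B = B together with the parametric form of the general solution:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # rank-deficient A
B = A @ rng.standard_normal((4, 2))                             # forces consistency of AX = B

Ap = np.linalg.pinv(A)
# Consistency test AA^+B = B (equivalently r[A, B] = r(A))
assert np.allclose(A @ Ap @ B, B)

# Every member of the parametric family X = A^+B + (I - A^+A)U solves AX = B
for _ in range(5):
    U = rng.standard_normal((4, 2))
    Xsol = Ap @ B + (np.eye(4) - Ap @ A) @ U
    assert np.allclose(A @ Xsol, B)
```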

    Lemma 2.6. Let A ∈ Rm×n and B ∈ Rm×k, and assume that R(A) = R(B). Then,

    XA = 0 ⟺ XB = 0. (2.11)

    For the purpose of clearly and analytically solving the matrix minimization problem in (3.1), we need to use the following existing result on a constrained quadratic matrix-valued function minimization problem, which was proved in [27].

    Lemma 2.7. [27] Let

    f(X) = (XC + D)M(XC + D)′  s.t.  XA = B, (2.12)

    where A ∈ Rp×q, B ∈ Rn×q, C ∈ Rp×m, D ∈ Rn×m are given, M ∈ Rm×m is positive semi-definite, and the matrix equation XA = B is consistent. Then, there always exists a solution X0 of X0A = B such that

    f(X) ⪰ f(X0) (2.13)

    holds for all solutions of XA = B. In this case, the matrix X0 satisfying (2.13) is determined by the following consistent matrix equation:

    X0[A, CMC′A⊥] = [B, −DMC′A⊥], (2.14)

    while the general expression of X0 and the corresponding f(X0) are given by

    X0 = argmin_{XA=B} f(X) = [B, −DMC′A⊥][A, CMC′A⊥]+ + V[A, CMC′]⊥, (2.15)
    f(X0) = min_{XA=B} f(X) = KMK′ − KMC′(A⊥CMC′A⊥)+CMK′, (2.16)
    f(X) − f(X0) = (XCMC′A⊥ + DMC′A⊥)(A⊥CMC′A⊥)+(XCMC′A⊥ + DMC′A⊥)′, (2.17)

    where K = BA+C + D, and V ∈ Rn×p is arbitrary.

    In this section, we present a review of the most important theoretical concepts concerning an MGLM. As usual, the unbiasedness of given linear predictions/estimations with respect to certain unknown parameters in an MGLM is an important property, but there are often many possible unbiased predictions/estimations for the same parameters. The classic statistical concept of predictability originated in [6], while the predictability/estimability concepts for parameters in an MGLM were established in [1,19]. Under the assumptions in (1.1), the predictability/estimability of the unknown parameters is defined as follows.

    Definition 3.1. Let the parametric matrix R be as given in (1.5).

    (a) The matrix R is said to be predictable under (1.1) if there exists a k×n matrix L such that E(LYR)=0.

    (b) The vector vec(R) is said to be predictable under (1.3) if there exists an mk×mn matrix L such that E(Lvec(Y) − vec(R)) = 0.

    Definition 3.2. Let the parametric matrix R be as given in (1.5).

    (a) Given that R is predictable under (1.1), if there exists a matrix L0 such that

    Cov(vec(L0Y − R)) = min s.t. E(L0Y − R) = 0 (3.1)

    holds in the Löwner partial ordering, the linear statistic L0Y is defined to be the best linear unbiased predictor (BLUP) of R under (1.1), and is described by

    L0Y = BLUPM(R) = BLUPM(KΘ + JΩ). (3.2)

    If J = 0 or K = 0 in (1.5), the L0Y satisfying (3.1) is called the best linear unbiased estimator (BLUE) of KΘ and the BLUP of JΩ under (1.1), respectively, and is described by

    L0Y = BLUEM(KΘ),  L0Y = BLUPM(JΩ), (3.3)

    respectively.

    (b) Given that vec(R) is predictable under (1.3), if there exists a matrix L0 such that

    Cov(L0vec(Y) − vec(R)) = min s.t. E(L0vec(Y) − vec(R)) = 0 (3.4)

    holds in the Löwner partial ordering, the linear statistic L0vec(Y) is defined to be the BLUP of vec(R) under (1.3) and is described by

    L0vec(Y) = BLUPˆM(vec(R)) = BLUPˆM((Im ⊗ K)vec(Θ) + (Im ⊗ J)vec(Ω)). (3.5)

    If J = 0 or K = 0 in (1.5), the L0vec(Y) satisfying (3.4) is called the BLUE of (Im ⊗ K)vec(Θ) and the BLUP of (Im ⊗ J)vec(Ω) under (1.3), respectively, and is denoted by

    L0vec(Y) = BLUEˆM((Im ⊗ K)vec(Θ)),  L0vec(Y) = BLUPˆM((Im ⊗ J)vec(Ω)), (3.6)

    respectively.

    Admittedly, BLUPs/BLUEs of unknown parameters are common concepts in the statistical inference of parametric models; they are highly regarded in the domain of regression analysis for their simple and optimal performance and properties, and the study of BLUPs/BLUEs and related issues is a core part of the research field of statistics and its applications. As a fundamental tool in matrix theory, the analytical solution of the constrained quadratic matrix-valued function optimization problem in the Löwner partial ordering in Lemma 2.7 has been used to derive a number of exact and analytical formulas for calculating BLUPs/BLUEs and their properties under various linear regression frameworks; see, e.g., [5,7,10,28,30,31].

    In this section, we present a sequence of existing formulas, results, and facts about the predictability/estimability and the analytical formulas of the BLUPs of R and R0 in (1.5) and (1.8). Recall that the unbiasedness of predictions/estimations and the lowest dispersion matrices, as formulated in (3.1), are two of the most intrinsic requirements in statistical inference in the context of regression models, which can conveniently be interpreted as some special cases of mathematical optimization problems with regard to constrained quadratic matrix-valued functions in the Löwner partial ordering.

    Due to the linear nature of an MGLM, we see from (1.1) and (1.5) that LY − R and vec(LY − R) can be rewritten as

    LY − R = LXΘ + LΩ − KΘ − JΩ = (LX − K)Θ + (L − J)Ω, (3.7)
    vec(LY − R) = (Im ⊗ (LX − K))vec(Θ) + (Im ⊗ (L − J))vec(Ω). (3.8)

    Hence, the expectations of LY − R and vec(LY − R) can be expressed as

    E(LY − R) = (LX − K)Θ,  E(vec(LY − R)) = (Im ⊗ (LX − K))vec(Θ). (3.9)

    The covariance matrix of vec(LY − R) can be expressed as

    Cov(vec(LY − R)) = (Im ⊗ (L − J))Cov(vec(Ω))(Im ⊗ (L − J))′ = (Im ⊗ (L − J))(Σ2 ⊗ Σ1)(Im ⊗ (L − J))′ = Σ2 ⊗ (L − J)Σ1(L − J)′ = Σ2 ⊗ f(L), (3.10)

    where f(L) = (L − J)Σ1(L − J)′.

    Concerning the predictability and the BLUP of R in (1.5), we use the following existing results.

    Theorem 3.1. [7] Let R be as given in (1.5). Then, the following statements are equivalent:

    (a)R is predictable by Y in (1.1).

    (b) R((Im ⊗ K)′) ⊆ R((Im ⊗ X)′).

    (c) R(K′) ⊆ R(X′).

    Theorem 3.2. [7,31] Assume R in (1.5) is predictable. Then,

    Cov(vec(LY − R)) = min s.t. E(LY − R) = 0 ⟺ L[X, Σ1X⊥] = [K, JΣ1X⊥]. (3.11)

    The matrix equation in (3.11), called the BLUP equation associated with R, is consistent as well, i.e.,

    [K, JΣ1X⊥][X, Σ1X⊥]+[X, Σ1X⊥] = [K, JΣ1X⊥] (3.12)

    holds under Theorem 3.1(c), while the general expression of L and the corresponding BLUPM(R) can be written as

    BLUPM(R) = PK;J;X;Σ1Y = ([K, JΣ1X⊥][X, Σ1X⊥]+ + U[X, Σ1X⊥]⊥)Y, (3.13)

    where U ∈ Rk×n is arbitrary. In particular,

    BLUEM(KΘ) = ([K, 0][X, Σ1X⊥]+ + U[X, Σ1X⊥]⊥)Y, (3.14)
    BLUPM(JΩ) = ([0, JΣ1X⊥][X, Σ1X⊥]+ + U[X, Σ1X⊥]⊥)Y, (3.15)

    where U ∈ Rk×n is arbitrary. Further, the following results hold.

    (a) ([21, p. 123]) r[X, Σ1X⊥] = r[X, Σ1], R[X, Σ1X⊥] = R[X, Σ1], and R(X) ∩ R(Σ1X⊥) = {0}.

    (b) L = PK;J;X;Σ1 is unique if and only if r[X, Σ1] = n.

    (c) BLUPM(R) is unique if and only if R(Y) ⊆ R[X, Σ1].

    (d) The expectation and the dispersion matrices of BLUPM(R) and R − BLUPM(R), as well as the covariance matrix between BLUPM(R) and R, are unique, and they are given by

    E(BLUPM(R) − R) = 0, (3.16)
    Cov(vec(BLUPM(R))) = Σ2 ⊗ ([K, JΣ1X⊥][X, Σ1X⊥]+)Σ1([K, JΣ1X⊥][X, Σ1X⊥]+)′, (3.17)
    Cov{vec(BLUPM(R)), vec(R)} = Σ2 ⊗ [K, JΣ1X⊥][X, Σ1X⊥]+Σ1J′, (3.18)
    Cov(vec(R)) − Cov(vec(BLUPM(R))) = Σ2 ⊗ JΣ1J′ − Σ2 ⊗ ([K, JΣ1X⊥][X, Σ1X⊥]+)Σ1([K, JΣ1X⊥][X, Σ1X⊥]+)′, (3.19)
    Cov(vec(R − BLUPM(R))) = Σ2 ⊗ ([K, JΣ1X⊥][X, Σ1X⊥]+ − J)Σ1([K, JΣ1X⊥][X, Σ1X⊥]+ − J)′. (3.20)

    (e) BLUPM(R), BLUEM(KΘ) and BLUPM(JΩ) satisfy

    BLUPM(R) = BLUEM(KΘ) + BLUPM(JΩ), (3.21)
    Cov{vec(BLUEM(KΘ)), vec(BLUPM(JΩ))} = 0, (3.22)
    Cov(vec(BLUPM(R))) = Cov(vec(BLUEM(KΘ))) + Cov(vec(BLUPM(JΩ))). (3.23)

    (f) BLUPM(TR)=TBLUPM(R) holds for any matrix TRt×k.

    Concerning the BLUE of the mean matrix XΘ and the BLUP of the error matrix Ω in (1.1), we have the following result.

    Corollary 3.1. The mean matrix XΘ in (1.1) is always estimable, and the following results hold.

    (a) The general expression of BLUEM(XΘ) can be written as

    BLUEM(XΘ) = ([X, 0][X, Σ1X⊥]+ + U[X, Σ1X⊥]⊥)Y (3.24)

    with

    E(BLUEM(XΘ)) = XΘ, (3.25)
    Cov(vec(BLUEM(XΘ))) = Σ2 ⊗ ([X, 0][X, Σ1X⊥]+)Σ1([X, 0][X, Σ1X⊥]+)′, (3.26)

    where U ∈ Rn×n is arbitrary.

    (b) The general expression of BLUPM(Ω) can be written as

    BLUPM(Ω) = ([0, Σ1X⊥][X, Σ1X⊥]+ + U[X, Σ1X⊥]⊥)Y = (Σ1X⊥(X⊥Σ1X⊥)+ + U[X, Σ1X⊥]⊥)Y (3.27)

    with

    Cov{vec(BLUPM(Ω)), vec(Ω)} = Cov(vec(BLUPM(Ω))) = Σ2 ⊗ Σ1X⊥(X⊥Σ1X⊥)+X⊥Σ1, (3.28)
    Cov(vec(Ω − BLUPM(Ω))) = Cov(vec(Ω)) − Cov(vec(BLUPM(Ω))) = Σ2 ⊗ Σ1 − Σ2 ⊗ Σ1X⊥(X⊥Σ1X⊥)+X⊥Σ1, (3.29)

    where U ∈ Rn×n is arbitrary.

    (c) The three matrices Y, BLUEM(XΘ) and BLUPM(Ω) satisfy

    Y = BLUEM(XΘ) + BLUPM(Ω), (3.30)
    Cov{vec(BLUEM(XΘ)), vec(BLUPM(Ω))} = 0, (3.31)
    Cov(vec(Y)) = Cov(vec(BLUEM(XΘ))) + Cov(vec(BLUPM(Ω))). (3.32)
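The decomposition (3.30) and the orthogonality (3.31) can be illustrated numerically in the special case where Σ1 is positive definite, so that r[X, Σ1] = n. The following sketch (ours) takes U = 0 in (3.24) and (3.27) and leaves Σ2 implicit, since it only scales the covariances:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, m = 6, 3, 2
X = rng.standard_normal((n, p))
S = rng.standard_normal((n, n))
Sigma1 = S @ S.T + np.eye(n)          # positive definite, so r[X, Sigma_1] = n
Y = rng.standard_normal((n, m))       # an observed response matrix

E_X = np.eye(n) - X @ np.linalg.pinv(X)    # X^perp
T = np.hstack([X, Sigma1 @ E_X])           # [X, Sigma_1 X^perp]
Tp = np.linalg.pinv(T)

# Coefficient matrices of (3.24) and (3.27) with U = 0
L_blue = np.hstack([X, np.zeros((n, n))]) @ Tp
L_blup = np.hstack([np.zeros((n, p)), Sigma1 @ E_X]) @ Tp

# (3.30): Y = BLUE(X Theta) + BLUP(Omega)
assert np.allclose(L_blue @ Y + L_blup @ Y, Y)
# (3.31): zero covariance, i.e., L_blue Sigma_1 L_blup' = 0
assert np.allclose(L_blue @ Sigma1 @ L_blup.T, 0, atol=1e-8)
```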

    Referring to Theorem 3.2, we obtain the BLUP of R0 in (1.8) as follows.

    Theorem 3.3. Assume that R0 is as given in (1.8). Then, the matrix equation

    L0[X0, V1X0⊥] = [K0, J0V1X0⊥] (3.33)

    is solvable for L0 if and only if R(K0′) ⊆ R(X0′). In this case, the general solution of the equation, denoted by L0 = PK0;J0;X0;V1, and the corresponding BLUP of R0 under the misspecified model in (1.2) are given by

    BLUPN(R0) = PK0;J0;X0;V1Y = ([K0, J0V1X0⊥][X0, V1X0⊥]+ + U0[X0, V1X0⊥]⊥)Y, (3.34)

    where U0 ∈ Rk×n is arbitrary. In particular,

    BLUEN(K0Θ0) = ([K0, 0][X0, V1X0⊥]+ + U0[X0, V1X0⊥]⊥)Y, (3.35)
    BLUPN(J0Ω0) = ([0, J0V1X0⊥][X0, V1X0⊥]+ + U0[X0, V1X0⊥]⊥)Y, (3.36)

    where U0 ∈ Rk×n is arbitrary. Under the assumptions in (1.1),

    E(BLUPN(R0)) = PK0;J0;X0;V1XΘ, (3.37)
    Cov(vec(BLUPN(R0))) = Σ2 ⊗ (PK0;J0;X0;V1Σ1PK0;J0;X0;V1′), (3.38)

    where both PK0;J0;X0;V1X and PK0;J0;X0;V1Σ1 are not necessarily unique. Further, the following results hold:

    (a) PK0;J0;X0;V1X is unique if and only if R(X) ⊆ R[X0, V1].

    (b) PK0;J0;X0;V1Σ1 is unique if and only if R(Σ1) ⊆ R[X0, V1].

    (c) BLUPN(R0) is unique if and only if R(Y) ⊆ R[X0, V1].

    In this section, we address the following eight problems:

    (I) {PK;J;X;Σ1} ∩ {PK0;J0;X0;V1} ≠ ∅, so that {BLUPM(R)} ∩ {BLUPN(R0)} ≠ ∅ holds definitely,

    (II) {PK;J;X;Σ1} ⊆ {PK0;J0;X0;V1}, so that {BLUPM(R)} ⊆ {BLUPN(R0)} holds definitely,

    (III) {PK;J;X;Σ1} ⊇ {PK0;J0;X0;V1}, so that {BLUPM(R)} ⊇ {BLUPN(R0)} holds definitely,

    (IV) {PK;J;X;Σ1} = {PK0;J0;X0;V1}, so that {BLUPM(R)} = {BLUPN(R0)} holds definitely,

    (V) {BLUPM(R)} ∩ {BLUPN(R0)} ≠ ∅ holds with probability 1,

    (VI) {BLUPM(R)} ⊆ {BLUPN(R0)} holds with probability 1,

    (VII) {BLUPM(R)} ⊇ {BLUPN(R0)} holds with probability 1,

    (VIII) {BLUPM(R)} = {BLUPN(R0)} holds with probability 1,

    for the BLUPs defined and obtained in the preceding section.

    In order to obtain satisfactory conclusions about the above BLUP equality problems, we first present three manifest rules for delineating equalities of different linear statistics under multivariate linear models, and we then describe some enabling methods for establishing equalities between two linear statistics. Assume that two linear statistics G1Y and G2Y are given under (1.1). The following three cases should be addressed for the purpose of delineating the equalities formulated above.

    Definition 4.1. Let Y be as given in (1.1), and let G1, G2 ∈ Rk×n.

    (I) The equality G1Y = G2Y is said to hold definitely if G1 = G2.

    (II) The equality G1Y = G2Y is said to hold with probability 1 if both E(G1Y − G2Y) = 0 and Cov((Im ⊗ G1)vec(Y) − (Im ⊗ G2)vec(Y)) = 0.

    (III) G1Y and G2Y are said to have the same expectation and dispersion matrices if both E(G1Y) = E(G2Y) and Cov((Im ⊗ G1)vec(Y)) = Cov((Im ⊗ G2)vec(Y)) hold.

    These three types of equality are not necessarily equivalent, since they are defined by different criteria for comparison and contrast. They show that equalities of linear statistics under (1.1) can all be characterized by corresponding linear and quadratic matrix equations. In particular, under the assumptions in (1.1),

    E(G1Y − G2Y) = 0 and Cov((Im ⊗ G1)vec(Y) − (Im ⊗ G2)vec(Y)) = 0 ⟺ G1X − G2X = 0 and [(Im ⊗ G1) − (Im ⊗ G2)](Σ2 ⊗ Σ1)[(Im ⊗ G1) − (Im ⊗ G2)]′ = 0 ⟺ G1X = G2X and [(Im ⊗ G1) − (Im ⊗ G2)](Σ2 ⊗ Σ1) = 0 ⟺ G1X = G2X and Σ2 ⊗ ((G1 − G2)Σ1) = 0. (4.1)

    Because Σ2 is a nonzero matrix, the last equality in (4.1) is equivalent to (G1 − G2)Σ1 = 0. Combining this with G1X = G2X in (4.1), we obtain

    (G1 − G2)[X, Σ1] = 0. (4.2)

    Applying Lemma 2.6 to it yields the following result.

    Lemma 4.1. Let Y be as given in (1.1), and let G1, G2 ∈ Rk×n. Then,

    G1Y = G2Y holds with probability 1 ⟺ (G1 − G2)[X, Σ1] = 0 ⟺ (G1 − G2)[X, Σ1X⊥] = 0. (4.3)

    Further, let {G1} and {G2} be two sets of matrices of the same size. Then, the following results hold.

    (a) {G1Y} ∩ {G2Y} ≠ ∅ holds with probability 1 if and only if

    min_{G1∈{G1}, G2∈{G2}} r((G1 − G2)[X, Σ1X⊥]) = 0. (4.4)

    (b) {G1Y} ⊆ {G2Y} holds with probability 1 if and only if

    max_{G1∈{G1}} min_{G2∈{G2}} r((G1 − G2)[X, Σ1X⊥]) = 0. (4.5)
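Lemma 4.1 can be illustrated with a small numerical sketch (ours): we force (G1 − G2)[X, Σ1] = 0 and observe that G1Y = G2Y for every realization Y = XΘ + Ω whose error columns lie in R(Σ1), which is the case with probability 1 under (1.1):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, m, k = 6, 2, 3, 4
X = rng.standard_normal((n, p))
A = rng.standard_normal((n, 2))
Sigma1 = A @ A.T                      # singular Sigma_1, so r[X, Sigma_1] < n
T = np.hstack([X, Sigma1])

# Force (G1 - G2)[X, Sigma_1] = 0 by putting the difference in the left null space of T
G1 = rng.standard_normal((k, n))
D = rng.standard_normal((k, n)) @ (np.eye(n) - T @ np.linalg.pinv(T))
G2 = G1 + D
assert np.allclose(D @ T, 0)

# A realization of (1.1): error columns lie in R(Sigma_1) with probability 1
Theta = rng.standard_normal((p, m))
Y = X @ Theta + Sigma1 @ rng.standard_normal((n, m))
assert np.allclose(G1 @ Y, G2 @ Y)
```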

    Because the coefficient matrices PK;J;X;Σ1 and PK0;J0;X0;V1 in (3.13) and (3.34) are not necessarily unique, we use

    {PK;J;X;Σ1},  {PK0;J0;X0;V1}, (4.6)
    {BLUPM(R)} = {PK;J;X;Σ1Y},  {BLUPN(R0)} = {PK0;J0;X0;V1Y} (4.7)

    to denote the collections of all coefficient matrices and the corresponding BLUPs. Under the assumption that (1.1) is a true model, the predictor BLUPN(R0) in (3.34) is not necessarily unbiased for R. Concerning the expectation of BLUPN(R0), we have the following result.

    Theorem 4.1. Let BLUPM(R) and PK;J;X;Σ1 be as given in (3.13), let BLUPN(R0) and PK0;J0;X0;V1 be as given in (3.34), and define

    M1 = [X, X0, V1; 0, 0, X0′]  and  N1 = [K, K0, J0V1].

    Then, we have the following results:

    (a) The following two statements are equivalent:

    (i) There exists a PK0;J0;X0;V1 such that PK0;J0;X0;V1X=K.

    (ii) R(N1′) ⊆ R(M1′).

    In this case, the general expression of the PK0;J0;X0;V1 satisfying PK0;J0;X0;V1X = K is

    PK0;J0;X0;V1 = [K0, J0V1X0⊥][X0, V1X0⊥]+ + (K − [K0, J0V1X0⊥][X0, V1X0⊥]+X)([X0, V1X0⊥]⊥X)+[X0, V1X0⊥]⊥ + W(In − ([X0, V1X0⊥]⊥X)([X0, V1X0⊥]⊥X)+)[X0, V1X0⊥]⊥, (4.8)

    where the matrix W is arbitrary. Correspondingly, BLUPN(R0) = PK0;J0;X0;V1Y satisfies E(BLUPN(R0) − BLUPM(R)) = 0; namely, BLUPN(R0) and BLUPM(R) have the same expectation.

    (b) The following two statements are equivalent:

    (i) All PK0;J0;X0;V1 satisfy PK0;J0;X0;V1X=K.

    (ii) R(N1′) ⊆ R(M1′) and R(X) ⊆ R[X0, V1].

    Correspondingly, all BLUPN(R0) = PK0;J0;X0;V1Y satisfy E(BLUPN(R0) − BLUPM(R)) = 0.

    Theorem 4.2. Let BLUPM(R) and PK;J;X;Σ1 be as given in (3.13), let BLUPN(R0) and PK0;J0;X0;V1 be as given in (3.34), and define

    M2 = [X, X0, Σ1, V1; 0, 0, X′, 0; 0, 0, 0, X0′]  and  N2 = [K, K0, JΣ1, J0V1]. (4.9)

    Then, the following results hold.

    (a) There exist PK0;J0;X0;V1 and PK;J;X;Σ1 such that PK0;J0;X0;V1 = PK;J;X;Σ1 if and only if R(N2′) ⊆ R(M2′). In this case, {BLUPM(R)} ∩ {BLUPN(R0)} ≠ ∅ holds definitely.

    (b) {PK;J;X;Σ1} ⊆ {PK0;J0;X0;V1} if and only if R(N2′) ⊆ R(M2′) and R[X, Σ1] ⊇ R[X0, V1]. In this case, {BLUPM(R)} ⊆ {BLUPN(R0)} holds definitely.

    (c) {PK;J;X;Σ1} ⊇ {PK0;J0;X0;V1} if and only if R(N2′) ⊆ R(M2′) and R[X, Σ1] ⊆ R[X0, V1]. In this case, {BLUPM(R)} ⊇ {BLUPN(R0)} holds definitely.

    (d) {PK;J;X;Σ1} = {PK0;J0;X0;V1} if and only if R(N2′) ⊆ R(M2′) and R[X, Σ1] = R[X0, V1]. In this case, {BLUPM(R)} = {BLUPN(R0)} holds definitely.

    Theorem 4.3. Let BLUPM(R) and PK;J;X;Σ1 be as given in (3.13), let BLUPN(R0) and PK0;J0;X0;V1 be as given in (3.34), and let M2 and N2 be as given in (4.9). Then, the following three statements are equivalent:

    (a) {BLUPM(R)} ∩ {BLUPN(R0)} ≠ ∅ holds with probability 1.

    (b) {BLUPM(R)} ⊆ {BLUPN(R0)} holds with probability 1.

    (c) R(N2′) ⊆ R(M2′).

    Combining Theorems 4.2 and 4.3, we obtain the following result.

    Corollary 4.1. Let BLUPM(R) and PK;J;X;Σ1 be as given in (3.13), let BLUPN(R0) and PK0;J0;X0;V1 be as given in (3.34), and let M2 and N2 be as given in (4.9). Then, the following five statements are equivalent:

    (a) There exist PK0;J0;X0;V1 and PK;J;X;Σ1 such that PK0;J0;X0;V1 = PK;J;X;Σ1.

    (b) {BLUPM(R)} ∩ {BLUPN(R0)} ≠ ∅ holds definitely.

    (c) {BLUPM(R)} ∩ {BLUPN(R0)} ≠ ∅ holds with probability 1.

    (d) {BLUPM(R)} ⊆ {BLUPN(R0)} holds with probability 1.

    (e) R(N2′) ⊆ R(M2′).

    Theorem 4.4. Let BLUPM(R) and PK;J;X;Σ1 be as given in (3.13), let BLUPN(R0) and PK0;J0;X0;V1 be as given in (3.34), and let M2 and N2 be as given in (4.9). Then, the following three statements are equivalent:

    (a) {BLUPM(R)} ⊇ {BLUPN(R0)} holds with probability 1.

    (b) {BLUPM(R)} = {BLUPN(R0)} holds with probability 1.

    (c) R(N2′) ⊆ R(M2′) and R[X, Σ1] ⊆ R[X0, V1].

    Combining Theorems 4.2 and 4.4, we obtain the following result.

    Corollary 4.2. Let BLUPM(R) and PK;J;X;Σ1 be as given in (3.13), let BLUPN(R0) and PK0;J0;X0;V1 be as given in (3.34), and let M2 and N2 be as given in (4.9). Then, the following four statements are equivalent:

    (a) {BLUPM(R)} ⊇ {BLUPN(R0)} holds definitely.

    (b) {BLUPM(R)} ⊇ {BLUPN(R0)} holds with probability 1.

    (c) {BLUPM(R)} = {BLUPN(R0)} holds with probability 1.

    (d) R(N2′) ⊆ R(M2′) and R[X, Σ1] ⊆ R[X0, V1].

    Problems concerning the misspecification and comparison of linear statistical models form a specific subject in the estimation and inference theory of regression models and include a diversity of concrete issues. As one such problem, we have offered in the preceding sections an analysis of the equivalence of BLUPs/BLUEs under a pair of true and misspecified MGLMs, through the effective use of precise analytical methods and techniques from linear algebra and matrix theory. The resulting facts are easy to understand from both mathematical and statistical points of view, and we may therefore regard them as a group of theoretical contributions to statistical inference under general model assumptions. This study also shows that many deep problems in classic regression frameworks can be put forward from theoretical and applied points of view and solved by known and novel ideas, methods and techniques from different branches of mathematics. In particular, the results once again illustrate the crucial role of matrix algebra in dealing with statistical inference problems for parametric models.

    Finally, we point out that further intriguing and sophisticated formulas, equalities and inequalities associated with predictions/estimations, of the kind demonstrated in the preceding sections, can be derived under multivariate linear models with various specified assumptions; such results will help to build a more solid theoretical and methodological foundation for the framework of parametric regression.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The second author was supported in part by the Shandong Provincial Natural Science Foundation ZR2019MA065.

    The authors would like to express their sincere thanks to anonymous reviewers for their helpful comments and suggestions.

    The authors declare that they have no conflict of interest.

    Proof of Theorem 2.6. It follows from Lemma 2.5 that $R(A)=R(B)$ implies $AA^{+}B=B$ and $BB^{+}A=A$. Therefore,

    $$XA = 0 \;\Rightarrow\; XAA^{+}B = XB = 0, \qquad XB = 0 \;\Rightarrow\; XBB^{+}A = XA = 0,$$

    as required for (2.11).
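    The argument can also be checked numerically. The following sketch (my own illustration, not from the paper) builds two matrices with equal column spaces and verifies both the identity $AA^{+}B=B$ and the implication $XA=0 \Rightarrow XB=0$:

```python
import numpy as np

rng = np.random.default_rng(1)

A = rng.standard_normal((5, 3))
B = A @ rng.standard_normal((3, 3))   # invertible factor a.s., so R(A) = R(B)

Ap = np.linalg.pinv(A)
assert np.allclose(A @ Ap @ B, B)     # AA^+B = B since R(B) ⊆ R(A)

# Rows x with xA = 0 span the left null space of A (last columns of U in the SVD).
U, s, Vt = np.linalg.svd(A)
X = U[:, 3:].T                        # shape (2, 5), satisfies XA = 0
assert np.allclose(X @ A, 0)
assert np.allclose(X @ B, 0)          # XA = 0 forces XB = 0
```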

    Proof of Theorem 4.1. From (3.34), the equation $P_{K_0;J_0;X_0;V_1}X = K$ can be written as

    $$[K_0,\,J_0V_1X_0^{\perp}][X_0,\,V_1X_0^{\perp}]^{+}X + U_0[X_0,\,V_1X_0^{\perp}]^{\perp}X = K. \tag{A.1}$$

    By Lemma 2.5, the equation is solvable for $U_0$ if and only if

    $$r\!\begin{bmatrix} K - [K_0,\,J_0V_1X_0^{\perp}][X_0,\,V_1X_0^{\perp}]^{+}X \\ [X_0,\,V_1X_0^{\perp}]^{\perp}X \end{bmatrix} = r\big([X_0,\,V_1X_0^{\perp}]^{\perp}X\big). \tag{A.2}$$

    We next simplify the rank equality (A.2) by the formulas in Section 2; in particular, detailed calculation steps are required to remove the generalized inverses from both of its sides. Applying (2.3) and (2.4), and then simplifying by elementary block matrix operations, we obtain

    $$\begin{aligned}
    r\!\begin{bmatrix} K-[K_0,\,J_0V_1X_0^{\perp}][X_0,\,V_1X_0^{\perp}]^{+}X \\ [X_0,\,V_1X_0^{\perp}]^{\perp}X \end{bmatrix}
    &= r\!\begin{bmatrix} K-[K_0,\,J_0V_1X_0^{\perp}][X_0,\,V_1X_0^{\perp}]^{+}X & 0 \\ X & [X_0,\,V_1X_0^{\perp}] \end{bmatrix} - r[X_0,\,V_1X_0^{\perp}] \\
    &= r\!\begin{bmatrix} K-[K_0,\,J_0V_1X_0^{\perp}][X_0,\,V_1X_0^{\perp}]^{+}X & 0 \\ X & [X_0,\,V_1X_0^{\perp}] \end{bmatrix} - r[X_0,\,V_1] \\
    &= r\!\begin{bmatrix} K & [K_0,\,J_0V_1X_0^{\perp}] \\ X & [X_0,\,V_1X_0^{\perp}] \end{bmatrix} - r[X_0,\,V_1] \\
    &= r\!\begin{bmatrix} X & X_0 & V_1 \\ 0 & 0 & X_0 \\ K & K_0 & J_0V_1 \end{bmatrix} - r(X_0) - r[X_0,\,V_1] \\
    &= r\!\begin{bmatrix} M_1 \\ N_1 \end{bmatrix} - r(X_0) - r[X_0,\,V_1],
    \end{aligned}$$

    and

    $$r\big([X_0,\,V_1X_0^{\perp}]^{\perp}X\big) = r[X,\,X_0,\,V_1X_0^{\perp}] - r[X_0,\,V_1X_0^{\perp}] = r[X,\,X_0,\,V_1] - r[X_0,\,V_1].$$

    Substituting these rank equalities into (A.2) and then simplifying, we obtain $r\!\begin{bmatrix} M_1 \\ N_1 \end{bmatrix} = r(M_1)$, that is, $R(N_1) \subseteq R(M_1)$ holds by Lemma 2.2(b), thus establishing the equivalence of (i) and (ii) in (a). In this case, the general solution of (A.1) by Lemma 2.5 is

    $$\begin{aligned}
    U_0 ={}& \big(K-[K_0,\,J_0V_1X_0^{\perp}][X_0,\,V_1X_0^{\perp}]^{+}X\big)\big([X_0,\,V_1X_0^{\perp}]^{\perp}X\big)^{+} \\
    &+ W\Big[I_n-\big([X_0,\,V_1X_0^{\perp}]^{\perp}X\big)\big([X_0,\,V_1X_0^{\perp}]^{\perp}X\big)^{+}\Big],
    \end{aligned} \tag{A.3}$$

    where W is arbitrary. Substitution of (A.3) into PK0;J0;X0;V1 in (3.34) gives (4.8).
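    The passage from (A.1) to (A.3) is an instance of the standard general solution of a one-sided linear matrix equation. A minimal numerical sketch (with generic matrices B, C, W, not the specific blocks of (A.3)): the equation $UB = C$ is solvable iff $CB^{+}B = C$, and then $U = CB^{+} + W(I - BB^{+})$ runs over all solutions as $W$ varies:

```python
import numpy as np

rng = np.random.default_rng(2)

# A rank-deficient coefficient matrix B (rank 2), so the solution is non-unique.
B = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 6))
C = rng.standard_normal((3, 4)) @ B       # C = U_true B guarantees solvability

Bp = np.linalg.pinv(B)
assert np.allclose(C @ Bp @ B, C)         # solvability test: C B^+ B = C

W = rng.standard_normal((3, 4))           # arbitrary matrix
U = C @ Bp + W @ (np.eye(4) - B @ Bp)     # general solution, one member per W
assert np.allclose(U @ B, C)
```

    Because B is rank-deficient, the term $W(I - BB^{+})$ is nonzero, so different choices of W give genuinely different solutions, all satisfying the equation.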

    Equation (A.1) holds for all $U_0$ if and only if $\begin{bmatrix} K-[K_0,\,J_0V_1X_0^{\perp}][X_0,\,V_1X_0^{\perp}]^{+}X \\ [X_0,\,V_1X_0^{\perp}]^{\perp}X \end{bmatrix} = 0$, that is,

    $$r\!\begin{bmatrix} M_1 \\ N_1 \end{bmatrix} = r[X_0,\,V_1] + r(X_0). \tag{A.4}$$

    Also note from (2.3) and (2.6) that

    $$r(M_1) = r[X,\,X_0,\,V_1] + r(X_0) \ge r[X_0,\,V_1] + r(X_0), \tag{A.5}$$
    $$r\!\begin{bmatrix} M_1 \\ N_1 \end{bmatrix} \ge r(M_1) \ge r[X_0,\,V_1] + r(X_0). \tag{A.6}$$

    Combining (A.4) and (A.6) yields

    $$r\!\begin{bmatrix} M_1 \\ N_1 \end{bmatrix} = r(M_1) = r[X_0,\,V_1] + r(X_0),$$

    or equivalently, $R(N_1) \subseteq R(M_1)$ and $R(X) \subseteq R[X_0,\,V_1]$ hold, thus establishing the equivalence of (i) and (ii) in (b).

    Proof of Theorem 4.2. From (3.13) and (3.34), the general expression of the difference $P_{K;J;X;\Sigma\otimes\Sigma_1}-P_{K_0;J_0;X_0;V_1}$ can be written as

    $$P_{K;J;X;\Sigma\otimes\Sigma_1}-P_{K_0;J_0;X_0;V_1} = G + U[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]^{\perp} - U_0[X_0,\,V_1X_0^{\perp}]^{\perp}, \tag{A.7}$$

    where $G = [K,\,J(\Sigma\otimes\Sigma_1)X^{\perp}][X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]^{+} - [K_0,\,J_0V_1X_0^{\perp}][X_0,\,V_1X_0^{\perp}]^{+}$, and $U,\,U_0 \in \mathbb{R}^{k\times n}$ are arbitrary. Applying (2.9) to (A.7), we obtain

    $$\begin{aligned}
    \min_{P_{K;J;X;\Sigma\otimes\Sigma_1},\,P_{K_0;J_0;X_0;V_1}} r\big(P_{K;J;X;\Sigma\otimes\Sigma_1}-P_{K_0;J_0;X_0;V_1}\big)
    &= \min_{U,\,U_0} r\big(G + U[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]^{\perp} - U_0[X_0,\,V_1X_0^{\perp}]^{\perp}\big) \\
    &= r\!\begin{bmatrix} G \\ [X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]^{\perp} \\ [X_0,\,V_1X_0^{\perp}]^{\perp} \end{bmatrix} - r\!\begin{bmatrix} [X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]^{\perp} \\ [X_0,\,V_1X_0^{\perp}]^{\perp} \end{bmatrix}.
    \end{aligned} \tag{A.8}$$

    It is easy to show by (2.3), (2.4) and elementary block matrix operations that

    $$\begin{aligned}
    r\!\begin{bmatrix} G \\ [X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]^{\perp} \\ [X_0,\,V_1X_0^{\perp}]^{\perp} \end{bmatrix}
    &= r\!\begin{bmatrix} [K,\,J(\Sigma\otimes\Sigma_1)X^{\perp}][X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]^{+}-[K_0,\,J_0V_1X_0^{\perp}][X_0,\,V_1X_0^{\perp}]^{+} & 0 & 0 \\ I_n & [X,\,(\Sigma\otimes\Sigma_1)X^{\perp}] & 0 \\ I_n & 0 & [X_0,\,V_1X_0^{\perp}] \end{bmatrix} \\
    &\qquad - r[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}] - r[X_0,\,V_1X_0^{\perp}] \\
    &= r\!\begin{bmatrix} 0 & [K,\,J(\Sigma\otimes\Sigma_1)X^{\perp}] & [K_0,\,J_0V_1X_0^{\perp}] \\ I_n & [X,\,(\Sigma\otimes\Sigma_1)X^{\perp}] & 0 \\ I_n & 0 & [X_0,\,V_1X_0^{\perp}] \end{bmatrix} - r[X,\,\Sigma\otimes\Sigma_1] - r[X_0,\,V_1] \\
    &= r\!\begin{bmatrix} 0 & [K,\,J(\Sigma\otimes\Sigma_1)X^{\perp}] & [K_0,\,J_0V_1X_0^{\perp}] \\ 0 & [X,\,(\Sigma\otimes\Sigma_1)X^{\perp}] & [X_0,\,V_1X_0^{\perp}] \\ I_n & 0 & 0 \end{bmatrix} - r[X,\,\Sigma\otimes\Sigma_1] - r[X_0,\,V_1] \\
    &= r\!\begin{bmatrix} [K,\,J(\Sigma\otimes\Sigma_1)X^{\perp}] & [K_0,\,J_0V_1X_0^{\perp}] \\ [X,\,(\Sigma\otimes\Sigma_1)X^{\perp}] & [X_0,\,V_1X_0^{\perp}] \end{bmatrix} + n - r[X,\,\Sigma\otimes\Sigma_1] - r[X_0,\,V_1] \\
    &= r\!\begin{bmatrix} X & X_0 & \Sigma\otimes\Sigma_1 & V_1 \\ 0 & 0 & X & 0 \\ 0 & 0 & 0 & X_0 \\ K & K_0 & J(\Sigma\otimes\Sigma_1) & J_0V_1 \end{bmatrix} + n - r(X) - r(X_0) - r[X,\,\Sigma\otimes\Sigma_1] - r[X_0,\,V_1] \\
    &= r\!\begin{bmatrix} M_2 \\ N_2 \end{bmatrix} + n - r(X) - r(X_0) - r[X,\,\Sigma\otimes\Sigma_1] - r[X_0,\,V_1],
    \end{aligned} \tag{A.9}$$

    and

    $$\begin{aligned}
    r\!\begin{bmatrix} [X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]^{\perp} \\ [X_0,\,V_1X_0^{\perp}]^{\perp} \end{bmatrix}
    &= r\!\begin{bmatrix} I_n & [X,\,(\Sigma\otimes\Sigma_1)X^{\perp}] & 0 \\ I_n & 0 & [X_0,\,V_1X_0^{\perp}] \end{bmatrix} - r[X,\,\Sigma\otimes\Sigma_1] - r[X_0,\,V_1] \\
    &= r\!\begin{bmatrix} 0 & [X,\,(\Sigma\otimes\Sigma_1)X^{\perp}] & [X_0,\,V_1X_0^{\perp}] \\ I_n & 0 & 0 \end{bmatrix} - r[X,\,\Sigma\otimes\Sigma_1] - r[X_0,\,V_1] \\
    &= r\!\begin{bmatrix} X & X_0 & \Sigma\otimes\Sigma_1 & V_1 \\ 0 & 0 & X & 0 \\ 0 & 0 & 0 & X_0 \end{bmatrix} + n - r(X) - r(X_0) - r[X,\,\Sigma\otimes\Sigma_1] - r[X_0,\,V_1] \\
    &= r(M_2) + n - r(X) - r(X_0) - r[X,\,\Sigma\otimes\Sigma_1] - r[X_0,\,V_1].
    \end{aligned} \tag{A.10}$$

    Substitution of (A.9) and (A.10) into (A.8) gives

    $$\min_{P_{K;J;X;\Sigma\otimes\Sigma_1},\,P_{K_0;J_0;X_0;V_1}} r\big(P_{K;J;X;\Sigma\otimes\Sigma_1}-P_{K_0;J_0;X_0;V_1}\big) = r\!\begin{bmatrix} M_2 \\ N_2 \end{bmatrix} - r(M_2). \tag{A.11}$$

    Setting the right-hand side of (A.11) equal to zero and applying Lemma 2.2(b) yields the first statement in (a). Applying (2.9) to (A.7) gives rise to

    $$\begin{aligned}
    \min_{P_{K;J;X;\Sigma\otimes\Sigma_1}} r\big(P_{K;J;X;\Sigma\otimes\Sigma_1}-P_{K_0;J_0;X_0;V_1}\big)
    &= \min_{U} r\big(G + U[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]^{\perp} - U_0[X_0,\,V_1X_0^{\perp}]^{\perp}\big) \\
    &= r\!\begin{bmatrix} G - U_0[X_0,\,V_1X_0^{\perp}]^{\perp} \\ [X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]^{\perp} \end{bmatrix} - r\big([X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]^{\perp}\big) \\
    &= r\!\begin{bmatrix} G - U_0[X_0,\,V_1X_0^{\perp}]^{\perp} \\ [X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]^{\perp} \end{bmatrix} + r[X,\,\Sigma\otimes\Sigma_1] - n.
    \end{aligned} \tag{A.12}$$

    Further by (2.10),

    $$\begin{aligned}
    \max_{U_0} r\!\begin{bmatrix} G - U_0[X_0,\,V_1X_0^{\perp}]^{\perp} \\ [X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]^{\perp} \end{bmatrix}
    &= \max_{U_0} r\left(\begin{bmatrix} G \\ [X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]^{\perp} \end{bmatrix} - \begin{bmatrix} I_k \\ 0 \end{bmatrix} U_0[X_0,\,V_1X_0^{\perp}]^{\perp}\right) \\
    &= \min\left\{ r\!\begin{bmatrix} G \\ [X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]^{\perp} \\ [X_0,\,V_1X_0^{\perp}]^{\perp} \end{bmatrix},\;\; r\!\begin{bmatrix} G & I_k \\ [X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]^{\perp} & 0 \end{bmatrix} \right\} \\
    &= \min\left\{ r\!\begin{bmatrix} G \\ [X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]^{\perp} \\ [X_0,\,V_1X_0^{\perp}]^{\perp} \end{bmatrix},\;\; k + n - r[X,\,\Sigma\otimes\Sigma_1] \right\} \\
    &= \min\left\{ r\!\begin{bmatrix} M_2 \\ N_2 \end{bmatrix} + n - r(X) - r(X_0) - r[X,\,\Sigma\otimes\Sigma_1] - r[X_0,\,V_1],\;\; k + n - r[X,\,\Sigma\otimes\Sigma_1] \right\} \\
    &= \min\left\{ k,\;\; r\!\begin{bmatrix} M_2 \\ N_2 \end{bmatrix} - r(X) - r(X_0) - r[X_0,\,V_1] \right\} + n - r[X,\,\Sigma\otimes\Sigma_1].
    \end{aligned} \tag{A.13}$$

    Combining (A.12) and (A.13) yields

    $$\begin{aligned}
    \max_{P_{K_0;J_0;X_0;V_1}} \min_{P_{K;J;X;\Sigma\otimes\Sigma_1}} r\big(P_{K;J;X;\Sigma\otimes\Sigma_1}-P_{K_0;J_0;X_0;V_1}\big)
    &= \min\left\{ k,\;\; r\!\begin{bmatrix} M_2 \\ N_2 \end{bmatrix} - r(X) - r(X_0) - r[X_0,\,V_1] \right\} \\
    &= \min\left\{ k,\;\; r\!\begin{bmatrix} M_2 \\ N_2 \end{bmatrix} - r(M_2) + \big(r[X,\,X_0,\,\Sigma\otimes\Sigma_1,\,V_1] - r[X_0,\,V_1]\big) \right\} \quad (\text{by } (2.7)).
    \end{aligned} \tag{A.14}$$

    Setting the right-hand side of (A.14) equal to zero yields $r\!\begin{bmatrix} M_2 \\ N_2 \end{bmatrix} = r(M_2)$ and $r[X,\,X_0,\,\Sigma\otimes\Sigma_1,\,V_1] = r[X_0,\,V_1]$, or equivalently, $R(M_2) \supseteq R(N_2)$ and $R[X,\,\Sigma\otimes\Sigma_1] \subseteq R[X_0,\,V_1]$ hold by Lemma 2.2(a) and (b). Combining this fact with (2.2) yields the first statement in (b).

    By the structural symmetry of $P_{K;J;X;\Sigma\otimes\Sigma_1}$ and $P_{K_0;J_0;X_0;V_1}$, we obtain

    $$\max_{P_{K;J;X;\Sigma\otimes\Sigma_1}} \min_{P_{K_0;J_0;X_0;V_1}} r\big(P_{K;J;X;\Sigma\otimes\Sigma_1}-P_{K_0;J_0;X_0;V_1}\big) = \min\left\{ k,\;\; r\!\begin{bmatrix} M_2 \\ N_2 \end{bmatrix} - r(M_2) + \big(r[X,\,X_0,\,\Sigma\otimes\Sigma_1,\,V_1] - r[X,\,\Sigma\otimes\Sigma_1]\big) \right\}. \tag{A.15}$$

    Setting the right-hand side of (A.15) equal to zero and applying Lemma 2.2(a) and (b) yields the first statement in (c). Combining (b) and (c) yields (d).
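    The extremal-rank steps above rest on closed-form minima such as $\min_U r(G + UA) = r\!\begin{bmatrix} G \\ A \end{bmatrix} - r(A)$, attained at $U = -GA^{+}$. A small numerical sketch with generic $G$ and $A$ (my own illustration, not the specific blocks of the proof):

```python
import numpy as np

rng = np.random.default_rng(3)

G = rng.standard_normal((4, 6))
A = rng.standard_normal((2, 6))

# Closed-form minimum: min_U r(G + UA) = r([G; A]) - r(A).
target = np.linalg.matrix_rank(np.vstack([G, A])) - np.linalg.matrix_rank(A)

U = -G @ np.linalg.pinv(A)            # a minimizing choice of U
achieved = np.linalg.matrix_rank(G + U @ A)
assert achieved == target

# Any other U can only do as well or worse:
for _ in range(5):
    U2 = rng.standard_normal((4, 2))
    assert np.linalg.matrix_rank(G + U2 @ A) >= target
```

    The choice $U = -GA^{+}$ projects the rows of $G$ off the row space of $A$, which is exactly why the stacked-matrix rank appears in formulas such as (A.8) and (A.11).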

    Proof of Theorem 4.3. From Definition 4.1(II) and (4.4), the result in (a) is equivalent to

    $$\big(P_{K;J;X;\Sigma\otimes\Sigma_1}-P_{K_0;J_0;X_0;V_1}\big)[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}] = 0. \tag{A.16}$$

    Substituting (3.13) and (3.34) into (A.16) and then simplifying, we obtain

    $$U_0[X_0,\,V_1X_0^{\perp}]^{\perp}[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}] = G, \tag{A.17}$$

    where $G = [K,\,J(\Sigma\otimes\Sigma_1)X^{\perp}] - [K_0,\,J_0V_1X_0^{\perp}][X_0,\,V_1X_0^{\perp}]^{+}[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]$ and $U_0 \in \mathbb{R}^{k\times n}$ is arbitrary. From Lemma 2.5, the matrix equation is solvable for $U_0$ if and only if

    $$r\!\begin{bmatrix} G \\ [X_0,\,V_1X_0^{\perp}]^{\perp}[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}] \end{bmatrix} = r\big([X_0,\,V_1X_0^{\perp}]^{\perp}[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]\big). \tag{A.18}$$

    Applying (2.3) and (2.4) to both sides and then simplifying, we obtain

    $$\begin{aligned}
    r\!\begin{bmatrix} G \\ [X_0,\,V_1X_0^{\perp}]^{\perp}[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}] \end{bmatrix}
    &= r\!\begin{bmatrix} [K,\,J(\Sigma\otimes\Sigma_1)X^{\perp}] - [K_0,\,J_0V_1X_0^{\perp}][X_0,\,V_1X_0^{\perp}]^{+}[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}] & 0 \\ [X,\,(\Sigma\otimes\Sigma_1)X^{\perp}] & [X_0,\,V_1X_0^{\perp}] \end{bmatrix} - r[X_0,\,V_1X_0^{\perp}] \\
    &= r\!\begin{bmatrix} [K,\,J(\Sigma\otimes\Sigma_1)X^{\perp}] & [K_0,\,J_0V_1X_0^{\perp}] \\ [X,\,(\Sigma\otimes\Sigma_1)X^{\perp}] & [X_0,\,V_1X_0^{\perp}] \end{bmatrix} - r[X_0,\,V_1] \\
    &= r\!\begin{bmatrix} X & X_0 & \Sigma\otimes\Sigma_1 & V_1 \\ 0 & 0 & X & 0 \\ 0 & 0 & 0 & X_0 \\ K & K_0 & J(\Sigma\otimes\Sigma_1) & J_0V_1 \end{bmatrix} - r(X) - r(X_0) - r[X_0,\,V_1] \\
    &= r\!\begin{bmatrix} M_2 \\ N_2 \end{bmatrix} - r(X) - r(X_0) - r[X_0,\,V_1],
    \end{aligned} \tag{A.19}$$

    and

    $$r\big([X_0,\,V_1X_0^{\perp}]^{\perp}[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]\big) = r[X,\,\Sigma\otimes\Sigma_1,\,X_0,\,V_1] - r[X_0,\,V_1] = r(M_2) - r(X) - r(X_0) - r[X_0,\,V_1]. \tag{A.20}$$

    Substitution of (A.19) and (A.20) into (A.18) leads to $r\!\begin{bmatrix} M_2 \\ N_2 \end{bmatrix} = r(M_2)$, or equivalently, $R(M_2) \supseteq R(N_2)$ by Lemma 2.2(b), thus establishing the equivalence of (a) and (c).

    It follows from Lemma 4.1(b) that the statement in (b) holds if and only if

    $$\max_{P_{K;J;X;\Sigma\otimes\Sigma_1}} \min_{P_{K_0;J_0;X_0;V_1}} r\big(\big(P_{K;J;X;\Sigma\otimes\Sigma_1}-P_{K_0;J_0;X_0;V_1}\big)[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]\big) = 0. \tag{A.21}$$

    By (2.9), (3.11), (A.19) and (A.20),

    $$\begin{aligned}
    \max_{P_{K;J;X;\Sigma\otimes\Sigma_1}} \min_{P_{K_0;J_0;X_0;V_1}} r\big(\big(P_{K;J;X;\Sigma\otimes\Sigma_1}-P_{K_0;J_0;X_0;V_1}\big)[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]\big)
    &= \min_{U_0} r\big(G - U_0[X_0,\,V_1X_0^{\perp}]^{\perp}[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]\big) \\
    &= r\!\begin{bmatrix} G \\ [X_0,\,V_1X_0^{\perp}]^{\perp}[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}] \end{bmatrix} - r\big([X_0,\,V_1X_0^{\perp}]^{\perp}[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]\big) \\
    &= r\!\begin{bmatrix} M_2 \\ N_2 \end{bmatrix} - r(M_2).
    \end{aligned} \tag{A.22}$$

    Equation (A.21) is thereby equivalent to $r\!\begin{bmatrix} M_2 \\ N_2 \end{bmatrix} = r(M_2)$, that is, $R(M_2) \supseteq R(N_2)$ holds by Lemma 2.2(b). Combining this fact with (2.2) leads to the equivalence of (b) and (c).

    Proof of Theorem 4.4. It follows from Lemma 4.1(b) that the statement in (a) holds if and only if

    $$\max_{P_{K_0;J_0;X_0;V_1}} \min_{P_{K;J;X;\Sigma\otimes\Sigma_1}} r\big(\big(P_{K;J;X;\Sigma\otimes\Sigma_1}-P_{K_0;J_0;X_0;V_1}\big)[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]\big) = 0. \tag{A.23}$$

    From (2.7), (2.8), (3.11), (3.13), (3.34) and (A.19), we obtain

    $$\begin{aligned}
    \max_{P_{K_0;J_0;X_0;V_1}} \min_{P_{K;J;X;\Sigma\otimes\Sigma_1}} r\big(\big(P_{K;J;X;\Sigma\otimes\Sigma_1}-P_{K_0;J_0;X_0;V_1}\big)[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]\big)
    &= \max_{U_0} r\big(G - U_0[X_0,\,V_1X_0^{\perp}]^{\perp}[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]\big) \\
    &= \min\left\{ r\!\begin{bmatrix} G \\ [X_0,\,V_1X_0^{\perp}]^{\perp}[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}] \end{bmatrix},\;\; k \right\} \\
    &= \min\left\{ r\!\begin{bmatrix} M_2 \\ N_2 \end{bmatrix} - r(X) - r(X_0) - r[X_0,\,V_1],\;\; k \right\} \\
    &= \min\left\{ r\!\begin{bmatrix} M_2 \\ N_2 \end{bmatrix} - r(M_2) + \big(r[X,\,X_0,\,\Sigma\otimes\Sigma_1,\,V_1] - r[X_0,\,V_1]\big),\;\; k \right\},
    \end{aligned} \tag{A.24}$$

    where $G = [K,\,J(\Sigma\otimes\Sigma_1)X^{\perp}] - [K_0,\,J_0V_1X_0^{\perp}][X_0,\,V_1X_0^{\perp}]^{+}[X,\,(\Sigma\otimes\Sigma_1)X^{\perp}]$, and $U_0 \in \mathbb{R}^{k\times n}$ is arbitrary.

    Setting the right-hand side of (A.24) equal to zero yields $r\!\begin{bmatrix} M_2 \\ N_2 \end{bmatrix} = r(M_2)$ and $r[X,\,X_0,\,\Sigma\otimes\Sigma_1,\,V_1] = r[X_0,\,V_1]$, or equivalently, $R(M_2) \supseteq R(N_2)$ and $R[X,\,\Sigma\otimes\Sigma_1] \subseteq R[X_0,\,V_1]$ hold by Lemma 2.2(a) and (b). Combining this fact with (2.2) leads to the equivalence of (a) and (c); combining these facts with Theorem 4.3 yields the equivalence of (b) and (c).
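    To close, a minimal numerical illustration of the flavor of these equivalences (my own sketch, in the simplest special case: i.i.d. errors and a misspecified design $X_0$ whose range equals that of the true design $X$). The BLUE of the mean vector is then the orthogonal projection of the observation vector onto the design's column space, so the two competing models yield identical fitted values:

```python
import numpy as np

rng = np.random.default_rng(4)

n, p = 20, 3
X  = rng.standard_normal((n, p))
T  = rng.standard_normal((p, p))       # invertible almost surely
X0 = X @ T                             # "misspecified" design with R(X0) = R(X)
y  = rng.standard_normal(n)

# Under i.i.d. errors the BLUE of the mean is the orthogonal projection P_X y.
P  = X  @ np.linalg.pinv(X)
P0 = X0 @ np.linalg.pinv(X0)
assert np.allclose(P @ y, P0 @ y)      # identical fitted values under both models
```

    When the range condition fails (for example, if a column of $X$ is dropped from $X_0$), the two projections, and hence the two sets of inference results, generally differ, which is what the rank conditions of Theorems 4.1-4.4 quantify.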



© 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)