Citation: Lihong Hu, Junjian Yang. Inequalities on 2×2 block accretive partial transpose matrices[J]. AIMS Mathematics, 2024, 9(4): 8805-8813. doi: 10.3934/math.2024428
In this note, we first correct a result of Alakhrass [1] on 2×2 block PPT matrices and then extend the corrected inequalities to the class of 2×2 block accretive partial transpose (APT) matrices.
Let $M_n$ be the set of $n\times n$ complex matrices, and let $M_n(M_k)$ be the set of $n\times n$ block matrices with each block in $M_k$. For $A\in M_n$, the conjugate transpose of $A$ is denoted by $A^*$. When $A$ is Hermitian, we denote its eigenvalues in nonincreasing order by $\lambda_1(A)\ge\lambda_2(A)\ge\cdots\ge\lambda_n(A)$; see [2,7,8,9]. The singular values of $A$, denoted by $s_1(A),s_2(A),\ldots,s_n(A)$, are the eigenvalues of the positive semi-definite matrix $|A|=(A^*A)^{1/2}$, arranged in nonincreasing order and repeated according to multiplicity, so that $s_1(A)\ge s_2(A)\ge\cdots\ge s_n(A)$. If $A\in M_n$ is positive semi-definite (resp. positive definite), we write $A\ge0$ (resp. $A>0$). Every $A\in M_n$ admits the Cartesian decomposition $A=\mathrm{Re}\,A+i\,\mathrm{Im}\,A$, where $\mathrm{Re}\,A=\frac{A+A^*}{2}$ and $\mathrm{Im}\,A=\frac{A-A^*}{2i}$. A matrix $A\in M_n$ is called accretive if $\mathrm{Re}\,A$ is positive definite. Recall that a norm $\|\cdot\|$ on $M_n$ is unitarily invariant if $\|UAV\|=\|A\|$ for any $A\in M_n$ and unitary matrices $U,V\in M_n$. The Hilbert-Schmidt norm is defined by $\|A\|_2^2=\mathrm{tr}(A^*A)$.
For A,B>0 and t∈[0,1], the weighted geometric mean of A and B is defined as follows
$$A\sharp_t B=A^{1/2}\left(A^{-1/2}BA^{-1/2}\right)^t A^{1/2}.$$
When $t=\frac{1}{2}$, $A\sharp_{1/2}B$ is called the geometric mean of $A$ and $B$, which is often denoted by $A\sharp B$. It is known that the notion of the (weighted) geometric mean can be extended to all positive semi-definite matrices; see [3, Chapter 4].
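To make the definition concrete, the following NumPy sketch (our own illustration, not part of the original paper) computes $A\sharp_t B$ via eigendecompositions of positive definite matrices; the helper names `psd_power`, `geom_mean`, and `random_pd` are hypothetical:

```python
import numpy as np

def psd_power(P, t):
    """P**t for a Hermitian positive definite matrix P, via eigendecomposition."""
    w, V = np.linalg.eigh(P)
    return (V * w**t) @ V.conj().T

def geom_mean(A, B, t=0.5):
    """Weighted geometric mean A #_t B = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}."""
    Ah, Aih = psd_power(A, 0.5), psd_power(A, -0.5)
    return Ah @ psd_power(Aih @ B @ Aih, t) @ Ah

rng = np.random.default_rng(0)

def random_pd(n):
    """A random n-by-n complex positive definite matrix."""
    R = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return R @ R.conj().T + np.eye(n)

A, B = random_pd(4), random_pd(4)
# two standard identities: A #_t B = B #_{1-t} A, and A #_t A = A
err_sym = np.linalg.norm(geom_mean(A, B, 0.3) - geom_mean(B, A, 0.7))
err_idem = np.linalg.norm(geom_mean(A, A, 0.3) - A)
```

Both residuals are at the level of floating-point roundoff, reflecting the identities $A\sharp_t B=B\sharp_{1-t}A$ and $A\sharp_t A=A$.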
Let $A,B,X\in M_n$. For a 2×2 block matrix $M$ of the form
$$M=\begin{pmatrix}A&X\\X^*&B\end{pmatrix}\in M_{2n}$$
with each block in $M_n$, the partial transpose of $M$ is defined by
$$M^\tau=\begin{pmatrix}A&X^*\\X&B\end{pmatrix}.$$
If both $M\ge0$ and $M^\tau\ge0$, then $M$ is said to be positive partial transpose (PPT). We extend this notion to accretive matrices. If
$$M=\begin{pmatrix}A&X\\Y^*&B\end{pmatrix}\in M_{2n}$$
and
$$M^\tau=\begin{pmatrix}A&Y^*\\X&B\end{pmatrix}\in M_{2n}$$
are both accretive, then we say that $M$ is APT (i.e., accretive partial transpose). It is easy to see that the class of APT matrices includes the class of PPT matrices; see [6,10,13].
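These definitions are easy to test numerically. The sketch below (our own illustration, with hypothetical helpers `is_psd` and `is_accretive`) checks that $\begin{pmatrix}I&C\\C^*&I\end{pmatrix}$ is PPT, and hence APT, whenever $C$ is a strict contraction:

```python
import numpy as np

def is_psd(M, tol=1e-10):
    """Min eigenvalue of the Hermitian part is >= -tol."""
    return np.linalg.eigvalsh((M + M.conj().T) / 2).min() >= -tol

def is_accretive(M, tol=1e-10):
    """Re M = (M + M*)/2 is positive definite (up to tol)."""
    return np.linalg.eigvalsh((M + M.conj().T) / 2).min() > tol

rng = np.random.default_rng(1)
n = 3
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = 0.5 * C / np.linalg.norm(C, 2)      # strict contraction: ||C|| = 0.5 < 1
I = np.eye(n)

M    = np.block([[I, C], [C.conj().T, I]])   # M = [[I, C], [C*, I]]
Mtau = np.block([[I, C.conj().T], [C, I]])   # partial transpose of M
ppt = is_psd(M) and is_psd(Mtau)
apt = is_accretive(M) and is_accretive(Mtau)  # PPT with ||C|| < 1 gives APT here
```

Here $M$ and $M^\tau$ are Hermitian with smallest eigenvalue $1-\|C\|>0$, so the PPT example is also (strictly) APT.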
Recently, many results relating the off-diagonal block of a PPT matrix to its diagonal blocks have been presented; see [5,11,12]. In 2023, Alakhrass [1] presented the following two results on 2×2 block PPT matrices.
Theorem 1.1 ([1], Theorem 3.1). Let $\begin{pmatrix}A&X\\X^*&B\end{pmatrix}$ be PPT and let $X=U|X|$ be the polar decomposition of $X$. Then
$$|X|\le(A\sharp_t B)\,\sharp\,(U^*(A\sharp_{1-t}B)U),\quad t\in[0,1].$$
Theorem 1.2 ([1], Theorem 3.2). Let $\begin{pmatrix}A&X\\X^*&B\end{pmatrix}$ be PPT. Then for $t\in[0,1]$,
$$\mathrm{Re}\,X\le(A\sharp_t B)\,\sharp\,(A\sharp_{1-t}B)\le\frac{(A\sharp_t B)+(A\sharp_{1-t}B)}{2},$$
and
$$\mathrm{Im}\,X\le(A\sharp_t B)\,\sharp\,(A\sharp_{1-t}B)\le\frac{(A\sharp_t B)+(A\sharp_{1-t}B)}{2}.$$
By Theorem 1.1 and the fact that $s_{i+j-1}(XY)\le s_i(X)s_j(Y)$ for $i+j\le n+1$, the author obtained the following corollary.
Corollary 1.3 ([1], Corollary 3.5). Let $\begin{pmatrix}A&X\\X^*&B\end{pmatrix}$ be PPT. Then for $t\in[0,1]$,
$$s_{i+j-1}(X)\le s_i(A\sharp_t B)\,s_j(A\sharp_{1-t}B).$$
Consequently,
$$s_{2j-1}(X)\le s_j(A\sharp_t B)\,s_j(A\sharp_{1-t}B).$$
A careful examination of Alakhrass' proof of Corollary 1.3 reveals an error. The correct results are
$$s_{i+j-1}(X)\le s_i\big((A\sharp_t B)^{1/2}\big)\,s_j\big((A\sharp_{1-t}B)^{1/2}\big)\quad\text{and}\quad s_{2j-1}(X)\le s_j\big((A\sharp_t B)^{1/2}\big)\,s_j\big((A\sharp_{1-t}B)^{1/2}\big).$$
Thus, in this note we give a correct proof of Corollary 1.3 and extend the above inequalities to the class of 2×2 block APT matrices. Along the way, some related results are obtained.
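The corrected bound can be sanity-checked numerically. In the sketch below (our own, not from the paper; `psd_power` and `geom_mean` are hypothetical helpers), the off-diagonal block is chosen as $X=A\sharp B$, for which $\begin{pmatrix}A&X\\X&B\end{pmatrix}$ is PPT, and the inequality $s_{2j-1}(X)\le s_j((A\sharp_t B)^{1/2})\,s_j((A\sharp_{1-t}B)^{1/2})$ is tested:

```python
import numpy as np

def psd_power(P, t):
    w, V = np.linalg.eigh(P)
    return (V * w**t) @ V.conj().T

def geom_mean(A, B, t=0.5):
    Ah, Aih = psd_power(A, 0.5), psd_power(A, -0.5)
    return Ah @ psd_power(Aih @ B @ Aih, t) @ Ah

rng = np.random.default_rng(2)
n, t = 4, 0.3
R1, R2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A, B = R1 @ R1.T + np.eye(n), R2 @ R2.T + np.eye(n)
X = geom_mean(A, B)                     # then [[A, X], [X, B]] is PPT

M = np.block([[A, X], [X, B]])          # Hermitian, equal to its partial transpose
ppt_ok = np.linalg.eigvalsh(M).min() >= -1e-7

s = lambda Z: np.sort(np.linalg.svd(Z, compute_uv=False))[::-1]
sX = s(X)
s1 = s(psd_power(geom_mean(A, B, t), 0.5))       # s_i((A #_t B)^{1/2})
s2 = s(psd_power(geom_mean(A, B, 1 - t), 0.5))   # s_j((A #_{1-t} B)^{1/2})
# corrected bound with i = j (0-based: s_{2j-1} is index 2j)
bound_ok = all(sX[2 * j] <= s1[j] * s2[j] + 1e-9 for j in range((n + 1) // 2))
```

The matrix $M$ is PPT because $X=A\sharp B$ is Hermitian and $B-XA^{-1}X=0$ by the Riccati characterization of the geometric mean.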
Before presenting and proving our results, we need several lemmas on the weighted geometric mean of two positive definite matrices.
Lemma 2.1. [3, Chapter 4] Let $X,Y\in M_n$ be positive definite. Then
1) $X\sharp Y=\max\left\{Z:Z=Z^*,\ \begin{pmatrix}X&Z\\Z&Y\end{pmatrix}\ge0\right\}$;
2) $X\sharp Y=X^{1/2}UY^{1/2}$ for some unitary matrix $U$.
Lemma 2.2. [4, Theorem 3] Let $X,Y\in M_n$ be positive definite. Then for every unitarily invariant norm,
$$\|X\sharp_t Y\|\le\|X^{1-t}Y^t\|\le\|(1-t)X+tY\|.$$
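Lemma 2.2 can be illustrated with the Hilbert-Schmidt (Frobenius) norm, which is unitarily invariant. The sketch below is our own illustration, with hypothetical helpers `psd_power` and `geom_mean`:

```python
import numpy as np

def psd_power(P, t):
    w, V = np.linalg.eigh(P)
    return (V * w**t) @ V.conj().T

def geom_mean(X, Y, t):
    Xh, Xih = psd_power(X, 0.5), psd_power(X, -0.5)
    return Xh @ psd_power(Xih @ Y @ Xih, t) @ Xh

rng = np.random.default_rng(3)
n, t = 4, 0.4
R1, R2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
X, Y = R1 @ R1.T + np.eye(n), R2 @ R2.T + np.eye(n)

fro = np.linalg.norm                               # Frobenius norm by default
lhs = fro(geom_mean(X, Y, t))                      # ||X #_t Y||
mid = fro(psd_power(X, 1 - t) @ psd_power(Y, t))   # ||X^{1-t} Y^t||
rhs = fro((1 - t) * X + t * Y)                     # ||(1-t)X + tY||
```

On random positive definite inputs, `lhs <= mid <= rhs` holds up to rounding, as Lemma 2.2 predicts.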
Now, we give a lemma that will play an important role in the later proofs.
Lemma 2.3. Let $M=\begin{pmatrix}A&X\\Y^*&B\end{pmatrix}\in M_{2n}$ be APT. Then for $t\in[0,1]$,
$$\begin{pmatrix}\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B&\frac{X+Y}{2}\\\frac{X^*+Y^*}{2}&\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B\end{pmatrix}$$
is PPT.
Proof: Since $M$ is APT, we have that
$$\mathrm{Re}\,M=\begin{pmatrix}\mathrm{Re}\,A&\frac{X+Y}{2}\\\frac{X^*+Y^*}{2}&\mathrm{Re}\,B\end{pmatrix}$$
is PPT. Therefore, $\mathrm{Re}\,M\ge0$ and $(\mathrm{Re}\,M)^\tau\ge0$.
By the Schur complement theorem, we have
$$\mathrm{Re}\,B-\frac{X^*+Y^*}{2}(\mathrm{Re}\,A)^{-1}\frac{X+Y}{2}\ge0$$
and
$$\mathrm{Re}\,A-\frac{X^*+Y^*}{2}(\mathrm{Re}\,B)^{-1}\frac{X+Y}{2}\ge0.$$
Compute
$$\begin{aligned}\frac{X^*+Y^*}{2}(\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B)^{-1}\frac{X+Y}{2}&=\frac{X^*+Y^*}{2}\left((\mathrm{Re}\,A)^{-1}\sharp_t(\mathrm{Re}\,B)^{-1}\right)\frac{X+Y}{2}\\&=\left(\frac{X^*+Y^*}{2}(\mathrm{Re}\,A)^{-1}\frac{X+Y}{2}\right)\sharp_t\left(\frac{X^*+Y^*}{2}(\mathrm{Re}\,B)^{-1}\frac{X+Y}{2}\right)\\&\le\mathrm{Re}\,B\,\sharp_t\,\mathrm{Re}\,A,\end{aligned}$$
where the last inequality follows from the monotonicity of $\sharp_t$ together with the two Schur complement inequalities above.
Thus,
$$(\mathrm{Re}\,B\sharp_t\mathrm{Re}\,A)-\frac{X^*+Y^*}{2}(\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B)^{-1}\frac{X+Y}{2}\ge0.$$
By utilizing $\mathrm{Re}\,B\sharp_t\mathrm{Re}\,A=\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B$, we have
$$\begin{pmatrix}\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B&\frac{X+Y}{2}\\\frac{X^*+Y^*}{2}&\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B\end{pmatrix}\ge0.$$
Similarly, we have
$$\begin{pmatrix}\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B&\frac{X^*+Y^*}{2}\\\frac{X+Y}{2}&\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B\end{pmatrix}\ge0.$$
This completes the proof.
First, we give the correct proof of Corollary 1.3.
Proof: By Theorem 1.1, there exists a unitary matrix $U\in M_n$ such that $|X|\le(A\sharp_t B)\,\sharp\,(U^*(A\sharp_{1-t}B)U)$. Moreover, by Lemma 2.1, there exists a unitary matrix $V\in M_n$ such that
$$(A\sharp_t B)\,\sharp\,(U^*(A\sharp_{1-t}B)U)=(A\sharp_t B)^{1/2}V\big(U^*(A\sharp_{1-t}B)^{1/2}U\big).$$
Now, by $s_{i+j-1}(ST)\le s_i(S)\,s_j(T)$, we have
$$\begin{aligned}s_{i+j-1}(X)&\le s_{i+j-1}\big((A\sharp_t B)\,\sharp\,(U^*(A\sharp_{1-t}B)U)\big)\\&=s_{i+j-1}\big((A\sharp_t B)^{1/2}VU^*(A\sharp_{1-t}B)^{1/2}U\big)\\&\le s_i\big((A\sharp_t B)^{1/2}\big)\,s_j\big((A\sharp_{1-t}B)^{1/2}\big),\end{aligned}$$
which completes the proof.
Next, we generalize Theorem 1.1 to the class of APT matrices.
Theorem 2.4. Let $M=\begin{pmatrix}A&X\\Y^*&B\end{pmatrix}$ be APT. Then for $t\in[0,1]$,
$$\left|\frac{X+Y}{2}\right|\le(\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B)\,\sharp\,(U^*(\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B)U),$$
where $U\in M_n$ is any unitary matrix such that $\frac{X+Y}{2}=U\left|\frac{X+Y}{2}\right|$.
Proof: Since $M$ is an APT matrix, Lemma 2.3 shows that
$$\begin{pmatrix}\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B&\frac{X+Y}{2}\\\frac{X^*+Y^*}{2}&\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B\end{pmatrix}$$
is PPT.
Let $W$ be the unitary matrix $W=\begin{pmatrix}I&0\\0&U\end{pmatrix}$. Thus,
$$W^*\begin{pmatrix}\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B&\frac{X^*+Y^*}{2}\\\frac{X+Y}{2}&\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B\end{pmatrix}W=\begin{pmatrix}\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B&\left|\frac{X+Y}{2}\right|\\\left|\frac{X+Y}{2}\right|&U^*(\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B)U\end{pmatrix}\ge0.$$
By Lemma 2.1, we have
$$\left|\frac{X+Y}{2}\right|\le(\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B)\,\sharp\,(U^*(\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B)U).$$
Remark 1. When $M=\begin{pmatrix}A&X\\Y^*&B\end{pmatrix}$ in Theorem 2.4 is PPT, our result reduces to Theorem 1.1. Thus, our result is a generalization of Theorem 1.1.
Using Theorem 2.4 and Lemma 2.2, we have the following.
Corollary 2.5. Let $M=\begin{pmatrix}A&X\\Y^*&B\end{pmatrix}$ be APT and let $t\in[0,1]$. Then for every unitarily invariant norm $\|\cdot\|$ and some unitary matrix $U\in M_n$,
$$\begin{aligned}\left\|\frac{X+Y}{2}\right\|&\le\left\|(\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B)\,\sharp\,(U^*(\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B)U)\right\|\\&\le\left\|\frac{(\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B)+U^*(\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B)U}{2}\right\|\\&\le\frac{\|\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B\|+\|\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B\|}{2}\\&\le\frac{\|(\mathrm{Re}\,A)^{1-t}(\mathrm{Re}\,B)^t\|+\|(\mathrm{Re}\,A)^t(\mathrm{Re}\,B)^{1-t}\|}{2}\\&\le\frac{\|(1-t)\mathrm{Re}\,A+t\,\mathrm{Re}\,B\|+\|t\,\mathrm{Re}\,A+(1-t)\mathrm{Re}\,B\|}{2}.\end{aligned}$$
Proof: The first inequality follows from Theorem 2.4. The third one is by the triangle inequality together with the unitary invariance of the norm. The other conclusions hold by Lemma 2.2.
In particular, when $t=\frac{1}{2}$, we have the following result.
Corollary 2.6. Let $M=\begin{pmatrix}A&X\\Y^*&B\end{pmatrix}$ be APT. Then for every unitarily invariant norm $\|\cdot\|$ and some unitary matrix $U\in M_n$,
$$\begin{aligned}\left\|\frac{X+Y}{2}\right\|&\le\left\|(\mathrm{Re}\,A\sharp\mathrm{Re}\,B)\,\sharp\,(U^*(\mathrm{Re}\,A\sharp\mathrm{Re}\,B)U)\right\|\\&\le\left\|\frac{(\mathrm{Re}\,A\sharp\mathrm{Re}\,B)+U^*(\mathrm{Re}\,A\sharp\mathrm{Re}\,B)U}{2}\right\|\\&\le\|\mathrm{Re}\,A\sharp\mathrm{Re}\,B\|\le\|(\mathrm{Re}\,A)^{1/2}(\mathrm{Re}\,B)^{1/2}\|\le\left\|\frac{\mathrm{Re}\,A+\mathrm{Re}\,B}{2}\right\|.\end{aligned}$$
Squaring the inequalities in Corollary 2.6 for the Hilbert-Schmidt norm, we get a quick consequence.
Corollary 2.7. If $M=\begin{pmatrix}A&X\\Y^*&B\end{pmatrix}$ is APT, then
$$\mathrm{tr}\left(\left(\frac{X^*+Y^*}{2}\right)\left(\frac{X+Y}{2}\right)\right)\le\mathrm{tr}\big((\mathrm{Re}\,A\sharp\mathrm{Re}\,B)^2\big)\le\mathrm{tr}(\mathrm{Re}\,A\,\mathrm{Re}\,B)\le\mathrm{tr}\left(\left(\frac{\mathrm{Re}\,A+\mathrm{Re}\,B}{2}\right)^2\right).$$
Proof: Compute
$$\begin{aligned}\mathrm{tr}\left(\left(\frac{X^*+Y^*}{2}\right)\left(\frac{X+Y}{2}\right)\right)&\le\mathrm{tr}\big((\mathrm{Re}\,A\sharp\mathrm{Re}\,B)^*(\mathrm{Re}\,A\sharp\mathrm{Re}\,B)\big)=\mathrm{tr}\big((\mathrm{Re}\,A\sharp\mathrm{Re}\,B)^2\big)\\&\le\mathrm{tr}(\mathrm{Re}\,A\,\mathrm{Re}\,B)\le\mathrm{tr}\left(\left(\frac{\mathrm{Re}\,A+\mathrm{Re}\,B}{2}\right)^2\right).\end{aligned}$$
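As a numerical sanity check of the trace chain (our own illustration, not from the paper; `psd_power` and `geom_mean` are hypothetical helpers), take the Hermitian PPT (hence APT) example with $X=Y=0.9\,(A\sharp B)$, for which $\mathrm{Re}\,A=A$ and $\mathrm{Re}\,B=B$:

```python
import numpy as np

def psd_power(P, t):
    w, V = np.linalg.eigh(P)
    return (V * w**t) @ V.conj().T

def geom_mean(A, B, t=0.5):
    Ah, Aih = psd_power(A, 0.5), psd_power(A, -0.5)
    return Ah @ psd_power(Aih @ B @ Aih, t) @ Ah

rng = np.random.default_rng(4)
n = 4
R1, R2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A, B = R1 @ R1.T + np.eye(n), R2 @ R2.T + np.eye(n)
G = geom_mean(A, B)
Z = 0.9 * G                    # with X = Y = Z, [[A, Z], [Z, B]] is PPT, hence APT

t1 = np.trace(Z.T @ Z)                        # tr(((X*+Y*)/2)((X+Y)/2))
t2 = np.trace(G @ G)                          # tr((Re A # Re B)^2)
t3 = np.trace(A @ B)                          # tr(Re A Re B)
t4 = np.trace(((A + B) / 2) @ ((A + B) / 2))  # tr(((Re A + Re B)/2)^2)
```

The four traces come out in nondecreasing order, matching Corollary 2.7.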
It is known that for any $X,Y\in M_n$ and any indices $i,j$ such that $i+j\le n+1$, we have $s_{i+j-1}(XY)\le s_i(X)s_j(Y)$ (see [2, p. 75]). By utilizing this fact and Theorem 2.4, we can obtain the following result.
Corollary 2.8. Let $M=\begin{pmatrix}A&X\\Y^*&B\end{pmatrix}$ be APT. Then for any $t\in[0,1]$,
$$s_{i+j-1}\left(\frac{X+Y}{2}\right)\le s_i\big((\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B)^{1/2}\big)\,s_j\big((\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B)^{1/2}\big).$$
Consequently,
$$s_{2j-1}\left(\frac{X+Y}{2}\right)\le s_j\big((\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B)^{1/2}\big)\,s_j\big((\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B)^{1/2}\big).$$
Proof: By Lemma 2.1 and Theorem 2.4, observe that
$$\begin{aligned}s_{i+j-1}\left(\frac{X+Y}{2}\right)&=s_{i+j-1}\left(\left|\frac{X+Y}{2}\right|\right)\\&\le s_{i+j-1}\big((\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B)\,\sharp\,(U^*(\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B)U)\big)\\&=s_{i+j-1}\big((\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B)^{1/2}V(U^*(\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B)U)^{1/2}\big)\\&\le s_i\big((\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B)^{1/2}V\big)\,s_j\big((U^*(\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B)U)^{1/2}\big)\\&=s_i\big((\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B)^{1/2}\big)\,s_j\big((\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B)^{1/2}\big).\end{aligned}$$
Finally, we study the relationship between the diagonal blocks and the real part of the off-diagonal blocks of the APT matrix M.
Theorem 2.9. Let $M=\begin{pmatrix}A&X\\Y^*&B\end{pmatrix}$ be APT. Then for all $t\in[0,1]$,
$$\mathrm{Re}\left(\frac{X+Y}{2}\right)\le(\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B)\,\sharp\,(\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B)\le\frac{(\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B)+(\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B)}{2},$$
and
$$\mathrm{Im}\left(\frac{X+Y}{2}\right)\le(\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B)\,\sharp\,(\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B)\le\frac{(\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B)+(\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B)}{2}.$$
Proof: Since $M$ is APT, we have that
$$\mathrm{Re}\,M=\begin{pmatrix}\mathrm{Re}\,A&\frac{X+Y}{2}\\\frac{X^*+Y^*}{2}&\mathrm{Re}\,B\end{pmatrix}$$
is PPT.
Therefore,
$$\begin{pmatrix}\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B&\mathrm{Re}\left(\frac{X+Y}{2}\right)\\\mathrm{Re}\left(\frac{X+Y}{2}\right)&\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B\end{pmatrix}=\frac12\begin{pmatrix}\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B&\frac{X+Y}{2}\\\frac{X^*+Y^*}{2}&\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B\end{pmatrix}+\frac12\begin{pmatrix}\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B&\frac{X^*+Y^*}{2}\\\frac{X+Y}{2}&\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B\end{pmatrix}\ge0,$$
since both summands are positive semi-definite by Lemma 2.3.
So, by Lemma 2.1, we have
$$\mathrm{Re}\left(\frac{X+Y}{2}\right)\le(\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B)\,\sharp\,(\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B).$$
This implies the first inequality.
Since $\mathrm{Re}\,M$ is PPT, we have
$$\begin{pmatrix}\mathrm{Re}\,A&-i\frac{X+Y}{2}\\ i\frac{X^*+Y^*}{2}&\mathrm{Re}\,B\end{pmatrix}=\begin{pmatrix}I&0\\0&iI\end{pmatrix}(\mathrm{Re}\,M)\begin{pmatrix}I&0\\0&-iI\end{pmatrix}\ge0,$$
$$\begin{pmatrix}\mathrm{Re}\,A&i\frac{X^*+Y^*}{2}\\-i\frac{X+Y}{2}&\mathrm{Re}\,B\end{pmatrix}=\begin{pmatrix}I&0\\0&-iI\end{pmatrix}(\mathrm{Re}\,M)^\tau\begin{pmatrix}I&0\\0&iI\end{pmatrix}\ge0.$$
Thus,
$$\begin{pmatrix}\mathrm{Re}\,A&-i\frac{X+Y}{2}\\ i\frac{X^*+Y^*}{2}&\mathrm{Re}\,B\end{pmatrix}$$
is PPT.
By Lemma 2.3,
$$\begin{pmatrix}\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B&-i\frac{X+Y}{2}\\ i\frac{X^*+Y^*}{2}&\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B\end{pmatrix}$$
is also PPT.
So,
$$\frac12\begin{pmatrix}\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B&-i\frac{X+Y}{2}\\ i\frac{X^*+Y^*}{2}&\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B\end{pmatrix}+\frac12\begin{pmatrix}\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B&i\frac{X^*+Y^*}{2}\\-i\frac{X+Y}{2}&\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B\end{pmatrix}\ge0,$$
which means that
$$\begin{pmatrix}\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B&\mathrm{Im}\left(\frac{X+Y}{2}\right)\\\mathrm{Im}\left(\frac{X+Y}{2}\right)&\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B\end{pmatrix}\ge0.$$
By Lemma 2.1, we have
$$\mathrm{Im}\left(\frac{X+Y}{2}\right)\le(\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B)\,\sharp\,(\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B).$$
This completes the proof.
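Theorem 2.9 can be exercised on a genuinely non-Hermitian APT matrix. In the sketch below (our own construction, not from the paper; `psd_power`, `geom_mean`, and `rand_herm` are hypothetical helpers), the off-diagonal blocks are chosen so that $\frac{X+Y}{2}=Z_0$ is Hermitian, which makes $\mathrm{Re}\,M$ strictly PPT:

```python
import numpy as np

def psd_power(P, t):
    w, V = np.linalg.eigh(P)
    return (V * w**t) @ V.conj().T

def geom_mean(A, B, t=0.5):
    Ah, Aih = psd_power(A, 0.5), psd_power(A, -0.5)
    return Ah @ psd_power(Aih @ B @ Aih, t) @ Ah

rng = np.random.default_rng(5)
n, t = 3, 0.3

def rand_herm(n):
    H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (H + H.conj().T) / 2

R1, R2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A, B = R1 @ R1.T + np.eye(n), R2 @ R2.T + np.eye(n)
Z0 = 0.9 * geom_mean(A, B)       # Hermitian; Re M below is then strictly PPT
K = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X, Y = Z0 + 1j * K, Z0 - 1j * K  # so (X + Y)/2 = Z0
M = np.block([[A + 1j * rand_herm(n), X],
              [Y.conj().T,            B + 1j * rand_herm(n)]])
Mtau = np.block([[M[:n, :n], M[n:, :n]], [M[:n, n:], M[n:, n:]]])
apt = (np.linalg.eigvalsh((M + M.conj().T) / 2).min() > 0 and
       np.linalg.eigvalsh((Mtau + Mtau.conj().T) / 2).min() > 0)

# Theorem 2.9: Re((X+Y)/2) <= (Re A #_t Re B) # (Re A #_{1-t} Re B)
bound = geom_mean(geom_mean(A, B, t), geom_mean(A, B, 1 - t))
gap = np.linalg.eigvalsh(bound - Z0).min()   # >= 0 up to rounding
```

Here $\mathrm{Re}\,M=\mathrm{Re}\,M^\tau=\begin{pmatrix}A&Z_0\\Z_0&B\end{pmatrix}$ is positive definite, so $M$ is APT even though its blocks are non-Hermitian.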
Corollary 2.10. Let $\begin{pmatrix}\mathrm{Re}\,A&\frac{X+Y}{2}\\\frac{X+Y}{2}&\mathrm{Re}\,B\end{pmatrix}\ge0$. If $\frac{X+Y}{2}$ is Hermitian and $t\in[0,1]$, then
$$\frac{X+Y}{2}\le(\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B)\,\sharp\,(\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B)\le\frac{(\mathrm{Re}\,A\sharp_t\mathrm{Re}\,B)+(\mathrm{Re}\,A\sharp_{1-t}\mathrm{Re}\,B)}{2}.$$
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This work was supported by the National Natural Science Foundation of China (grant No. 12261030), the Hainan Provincial Natural Science Foundation for High-level Talents (grant No. 123RC474), the Hainan Provincial Natural Science Foundation of China (grant No. 124RC503), the Hainan Provincial Graduate Innovation Research Program (grant Nos. Qhys2023-383 and Qhys2023-385), and the Key Laboratory of Computational Science and Application of Hainan Province.
The authors declare that they have no conflict of interest.
[1] M. Alakhrass, A note on positive partial transpose blocks, AIMS Mathematics, 8 (2023), 23747–23755. https://doi.org/10.3934/math.20231208
[2] R. Bhatia, Matrix analysis, New York: Springer, 1997. https://doi.org/10.1007/978-1-4612-0653-8
[3] R. Bhatia, Positive definite matrices, Princeton: Princeton University Press, 2007.
[4] R. Bhatia, P. Grover, Norm inequalities related to the matrix geometric mean, Linear Algebra Appl., 437 (2012), 726–733. https://doi.org/10.1016/j.laa.2012.03.001
[5] X. Fu, P. S. Lau, T. Y. Tam, Inequalities on 2×2 block positive semidefinite matrices, Linear Multilinear A., 70 (2022), 6820–6829. https://doi.org/10.1080/03081087.2021.1969327
[6] X. Fu, L. Hu, S. A. Haseeb, Inequalities for partial determinants of accretive block matrices, J. Inequal. Appl., 2023 (2023), 101. https://doi.org/10.1186/s13660-023-03008-x
[7] S. Hayat, J. H. Koolen, F. Liu, Z. Qiao, A note on graphs with exactly two main eigenvalues, Linear Algebra Appl., 511 (2016), 318–327. https://doi.org/10.1016/j.laa.2016.09.019
[8] S. Hayat, M. Javaid, J. H. Koolen, Graphs with two main and two plain eigenvalues, Appl. Anal. Discr. Math., 11 (2017), 244–257. https://doi.org/10.2298/AADM1702244H
[9] J. H. Koolen, S. Hayat, Q. Iqbal, Hypercubes are determined by their distance spectra, Linear Algebra Appl., 505 (2016), 97–108. https://doi.org/10.1016/j.laa.2016.04.036
[10] L. Kuai, An extension of the Fiedler-Markham determinant inequality, Linear Multilinear A., 66 (2018), 547–553. https://doi.org/10.1080/03081087.2017.1304521
[11] E. Y. Lee, The off-diagonal block of a PPT matrix, Linear Algebra Appl., 486 (2015), 449–453. https://doi.org/10.1016/j.laa.2015.08.018
[12] M. Lin, Inequalities related to 2×2 block PPT matrices, Oper. Matrices, 9 (2015), 917–924. http://doi.org/10.7153/oam-09-54
[13] H. Xu, X. Fu, S. A. Haseeb, Trace inequalities related to 2×2 block sector matrices, Oper. Matrices, 17 (2023), 367–374. http://doi.org/10.7153/oam-2023-17-26