In this paper, we study a kind of dual generalized inverse of dual matrices, called the dual group inverse. Some necessary and sufficient conditions for a dual matrix to have the dual group inverse are given. When one of these conditions is satisfied, compact formulas and efficient methods for the computation of the dual group inverse are given. Moreover, the results on the dual group inverse are applied to solve systems of linear dual equations. The dual group-inverse solution of systems of linear dual equations is introduced, and the dual analogs of the real least-squares solution and minimal P-norm least-squares solution are obtained. Some numerical examples are provided to illustrate the results obtained.
Citation: Jin Zhong, Yilin Zhang. Dual group inverses of dual matrices and their applications in solving systems of linear dual equations[J]. AIMS Mathematics, 2022, 7(5): 7606-7624. doi: 10.3934/math.2022427
A dual number ˆa [1] is usually denoted in the form
ˆa = a + εa_0,
where the real numbers a and a_0 are, respectively, the prime part and the dual part of ˆa, and ε is the dual unit satisfying ε ≠ 0, 0ε = ε0 = 0, 1ε = ε1 = ε and ε^2 = 0. Dual numbers can in fact be defined over both the real and the complex fields [2], and in many cases real numbers suffice. Since a pure dual number εa_0 has no inverse, the set of dual numbers is not a field but a ring [3]. For more details on dual numbers, such as their representations, functions of dual numbers and the expressions of dual operations, the reader is referred to [4,5] and the references therein. The dual number, originally introduced by Clifford in 1873 [6], was further developed by Study in 1901 [7] to represent the dual angle in spatial geometry. Because of their concise notation, dual numbers and their algebra have become powerful and convenient tools for the analysis of mechanical systems, and they have attracted considerable attention over the last three decades because of their applicability to various areas of engineering such as kinematic analysis [1,4], robotics [8,9], screw motion [10] and rigid-body motion analysis [11].
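The dual arithmetic described above can be sketched in a few lines of code; the class below is an illustrative aid (its name and interface are ours, not part of the paper), with the multiplication and inversion rules following directly from ε^2 = 0.

```python
class Dual:
    """Dual number a + eps*a0, with prime part a, dual part a0 and eps^2 = 0."""

    def __init__(self, a, a0=0.0):
        self.a, self.a0 = a, a0

    def __add__(self, other):
        return Dual(self.a + other.a, self.a0 + other.a0)

    def __mul__(self, other):
        # (a + eps a0)(b + eps b0) = ab + eps(a*b0 + a0*b), since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.a0 + self.a0 * other.a)

    def inverse(self):
        # only dual numbers with nonzero prime part are invertible:
        # (a + eps a0)^(-1) = 1/a - eps a0/a^2
        if self.a == 0:
            raise ZeroDivisionError("a pure dual number has no inverse")
        return Dual(1.0 / self.a, -self.a0 / self.a ** 2)


eps = Dual(0.0, 1.0)
assert (eps * eps).a == 0.0 and (eps * eps).a0 == 0.0   # eps^2 = 0
x = Dual(2.0, 3.0)
p = x * x.inverse()
assert abs(p.a - 1.0) < 1e-12 and abs(p.a0) < 1e-12     # x * x^(-1) = 1
```

Note how inversion fails exactly for pure dual numbers, mirroring the ring (rather than field) structure mentioned above.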
A matrix with dual number entries is called a dual matrix. If A and B are two m×n real matrices, then the m×n dual matrix ˆA is defined as ˆA = A + εB, where A and B are respectively called the prime part and the dual part of ˆA. In particular, if A and B are n×n real matrices, then we say that ˆA is a square dual matrix. Many notions for dual matrices, such as matrix multiplication, the inverse of a dual matrix, and the QR and SVD decompositions, can be defined by analogy with those for real matrices. For example, for two dual matrices ˆA_1 = A_1 + εB_1 and ˆA_2 = A_2 + εB_2 of appropriate dimensions, ˆA_1ˆA_2 = A_1A_2 + ε(A_1B_2 + B_1A_2). If a real n×n matrix A is nonsingular, then the n×n dual matrix ˆA = A + εB is also nonsingular, and the inverse of ˆA is given by ˆA^{-1} = A^{-1} − εA^{-1}BA^{-1} [4]. For rectangular dual matrices and singular square dual matrices, it is natural to study their dual generalized inverses. Angeles [1] investigated the usefulness of dual generalized inverses in kinematic analyses based on dual numbers. However, it should be noted that many important properties of dual generalized inverses differ substantially from those of real matrices. For example, it is well known that the Moore-Penrose generalized inverse of a real matrix always exists, while it was shown in [12,13] that the dual Moore-Penrose generalized inverse (DMPGI, for short) of a rank-deficient dual matrix may not exist, and there are uncountably many dual matrices that do not have one. Hence, it is interesting to investigate necessary and sufficient conditions for the existence of dual generalized inverses and to find efficient methods to compute them when they exist. The existence, computations and applications of dual generalized inverses have been a topic of recent interest. de Falco et al. [14] discussed the mathematical conditions of existence for different types of dual generalized inverses.
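As a quick numerical check of the product rule and of the inverse formula ˆA^{-1} = A^{-1} − εA^{-1}BA^{-1}, a dual matrix can be represented by the pair (A, B) of its prime and dual parts; this pair representation is an implementation convenience of ours, not notation from the paper.

```python
import numpy as np

def dual_matmul(A1, B1, A2, B2):
    # (A1 + eps B1)(A2 + eps B2) = A1 A2 + eps (A1 B2 + B1 A2)
    return A1 @ A2, A1 @ B2 + B1 @ A2

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)   # nonsingular prime part
B = rng.standard_normal((3, 3))
Ainv = np.linalg.inv(A)
X, Y = Ainv, -Ainv @ B @ Ainv                       # candidate inverse of A + eps B
P, Q = dual_matmul(A, B, X, Y)                      # should equal I + eps * 0
assert np.allclose(P, np.eye(3)) and np.allclose(Q, np.zeros((3, 3)))
```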
Moreover, solutions of some meaningful kinematic problems were discussed to demonstrate the usefulness and versatility of dual generalized inverses. Pennestrì et al. [15] proposed novel and computationally efficient algorithms and formulas for the computation of the DMPGI. Udwadia et al. [13] investigated the question of whether all dual matrices have dual Moore-Penrose generalized inverses and showed that there are uncountably many dual matrices that do not. Udwadia [12] studied properties of the DMPGI and used them to solve systems of linear dual equations. Wang [16] gave some necessary and sufficient conditions for a dual matrix to have the DMPGI, together with a compact formula for its computation.
As shown in [12], dual generalized inverses are powerful tools for studying the solutions and least-squares solutions of systems of linear dual equations. Many applications of dual algebra in kinematics require numerical solutions of systems of linear dual equations. A common approach is to split the system of linear dual equations into its real part and dual part and then form a system of real equations; the solutions of the real part and the dual part are usually computed separately. However, the availability of a dual generalized inverse of the coefficient matrix allows the dual equation to be solved simultaneously in a single step, which improves the overall computational efficiency. For this reason, the existence of dual generalized inverses and the availability of efficient methods to compute them are of interest to researchers.
Motivated by the above results, in this paper we consider another kind of dual generalized inverse of dual matrices, called the dual group inverse. The dual group inverse, although it exists only for square dual matrices, has some properties of the inverse matrix that the DMPGI does not possess; for instance, a square dual matrix commutes with its dual group inverse. Furthermore, the dual group inverse also provides solutions and least-squares solutions of linear dual equations. Hence, it more closely resembles the true inverse of a dual matrix. In the real case, the group inverse has applications in various fields such as Markov chains [17], stationary iterations [18] and fuzzy linear systems [19]. It is known that the group inverse of a real square matrix A exists if and only if the index of A is 1. However, for a square dual matrix ˆA whose prime part has index 1, the dual group inverse of ˆA may not exist. Hence, it is also interesting to investigate the existence and computations of the dual group inverse.
In this paper, we study the existence and computations of the dual group inverse, and discuss its usefulness in solving systems of linear dual equations. The outline of the rest of this paper is as follows. In Section 2, we present some notations which will be used later and briefly review some preliminaries. In Section 3, we give some necessary and sufficient conditions for a dual matrix to have the dual group inverse; compact formulas for the computation of the dual group inverse are also given. In Section 4, we discuss the applications of the dual group inverse in solving systems of linear dual equations. The concept of the dual group-inverse solution is introduced, and the dual analog of the real minimal P-norm least-squares solution is obtained through the dual group inverse. Some numerical examples are provided to illustrate the results obtained.
Throughout this paper, the following notations and definitions are used. C^{m×n}, R^{m×n} and D^{m×n} denote the sets of all m×n complex, real and dual matrices, respectively. R^n and D^n denote the sets of all n-dimensional real and dual column vectors, respectively. For a real matrix A, R(A) is the range of A and N(A) is the null space of A. The index of a matrix A ∈ R^{n×n}, denoted by Ind(A), is the smallest positive integer k such that rank(A^k) = rank(A^{k+1}). For a square dual matrix ˆA = A + εB, ˆA^T = A^T + εB^T, where A^T is the transpose of A. For a nonsingular real matrix P, P^{-1}ˆAP = P^{-1}AP + εP^{-1}BP.
We first give the definition of the dual Moore-Penrose generalized inverse of a dual matrix in the following, which is analogous to real matrices.
Definition 2.1. The dual Moore-Penrose generalized inverse of a dual matrix ˆA ∈ D^{m×n}, denoted by ˆA†, is the unique matrix ˆX ∈ D^{n×m} satisfying the following dual Penrose equations:
ˆAˆXˆA = ˆA,  ˆXˆAˆX = ˆX,  (ˆAˆX)^T = ˆAˆX,  (ˆXˆA)^T = ˆXˆA.
The Drazin inverse is a generalized inverse which is defined only for square matrices, and it has many applications in the theory of finite Markov chains, singular linear difference equations, cryptography, etc. (see [20,21]).
Definition 2.2. [22] Let A ∈ R^{n×n} and Ind(A) = k. Then the matrix X ∈ R^{n×n} satisfying
A^kXA = A^k,  XAX = X,  AX = XA
is called the Drazin inverse of A, and is denoted by X = A^D.
If Ind(A) = 1, then this special case of the Drazin inverse is known as the group inverse.
Definition 2.3. [22] Let A ∈ R^{n×n}. If X ∈ R^{n×n} satisfies
AXA = A,  XAX = X,  AX = XA,
then X is called the group inverse of A, and is denoted by A#. It is known that A has a group inverse if and only if Ind(A) = 1. For example, diagonalizable matrices have index 1; thus, a singular diagonalizable matrix has a group inverse.
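For concrete computations, the group inverse of an index-1 matrix can be obtained from the known formula A# = A(A^3)^†A (the standard Drazin-inverse formula A^D = A^k(A^{2k+1})^†A^k specialized to k = 1); this formula is used here only as a computational sketch and is not taken from this paper. The prime part of Example 3.2 below serves as a test matrix.

```python
import numpy as np

def group_inverse(A):
    # A# = A (A^3)^+ A, valid when Ind(A) = 1; this is the standard
    # Drazin-inverse formula A^D = A^k (A^(2k+1))^+ A^k with k = 1
    # (an assumed computational device, not a formula from this paper)
    if np.linalg.matrix_rank(A) != np.linalg.matrix_rank(A @ A):
        raise ValueError("Ind(A) > 1: the group inverse does not exist")
    return A @ np.linalg.pinv(A @ A @ A) @ A

# the prime part of Example 3.2 (singular, index 1)
A = np.array([[1., -1., 0.], [0., 0., 0.], [1., -3., 3.]])
X = group_inverse(A)
assert np.allclose(A @ X @ A, A)   # AXA = A
assert np.allclose(X @ A @ X, X)   # XAX = X
assert np.allclose(A @ X, X @ A)   # AX = XA
```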
The dual group inverse of a square dual matrix can be defined similarly as the group inverse of a square real matrix.
Definition 2.4. Let ˆA ∈ D^{n×n}. If a dual matrix ˆX ∈ D^{n×n} satisfies
ˆAˆXˆA = ˆA,  ˆXˆAˆX = ˆX,  ˆAˆX = ˆXˆA,
then ˆX is called the dual group inverse of ˆA, and is denoted by ˆA#.
The following lemma gives a block representation of a real square matrix that has the group inverse.
Lemma 2.1. [22] Let A ∈ R^{n×n}. Then A has a group inverse if and only if there exist nonsingular matrices P and C such that
A = P [C 0; 0 0] P^{-1}.  (2.1)
In this case, it is easy to verify that
A# = P [C^{-1} 0; 0 0] P^{-1}.  (2.2)
The block representation (2.1) is the Jordan canonical form of A.
The following three lemmas play important roles in Section 3.
Lemma 2.2. [22] Let A, P ∈ C^{n×n} satisfy P^2 = P. Then
(i) PA=A if and only if R(A)⊂R(P).
(ii) AP=A if and only if N(P)⊂N(A).
Lemma 2.3. [23] If
M = [A C; 0 D],
then M# exists if and only if A# and D# exist and (I − AA#)C(I − DD#) = 0. If so,
M# = [A# X; 0 D#],
where X = −A#CD# + (A#)^2C(I − DD#) + (I − AA#)C(D#)^2.
Lemma 2.4. [24] Let A ∈ C^{m×n}, B ∈ C^{m×m} and C ∈ C^{n×n} with Ind(B) = k and Ind(C) = l. Then
(i) rank[A B^k; C^l 0] = rank(B^k) + rank(C^l) + rank[(I_m − BB^D)A(I_n − C^DC)].
(ii) rank[A B^k] = rank(B^k) + rank(A − BB^DA).
(iii) rank[A; C^l] = rank(C^l) + rank(A − AC^DC).
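The rank identity (i) of Lemma 2.4 can be verified numerically in the case k = l = 1 (the case used later in the paper); the construction of an index-1 test matrix and the pseudoinverse-based group-inverse formula below are our own assumptions for this sketch.

```python
import numpy as np

def group_inverse(A):
    # A# = A (A^3)^+ A when Ind(A) = 1 (assumed formula, not from the paper)
    return A @ np.linalg.pinv(A @ A @ A) @ A

rng = np.random.default_rng(1)
P = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)       # nonsingular (generically)
A = P @ np.diag([2.0, -1.0, 0.0]) @ np.linalg.inv(P)    # singular with Ind(A) = 1
B = rng.standard_normal((3, 3))
Ag = group_inverse(A)
E = np.eye(3) - A @ Ag                                   # the projector I - AA#
# identity (i) with the roles A -> B, B = C -> A and k = l = 1
M = np.block([[B, A], [A, np.zeros((3, 3))]])
lhs = np.linalg.matrix_rank(M)
rhs = 2 * np.linalg.matrix_rank(A) + np.linalg.matrix_rank(E @ B @ E)
assert lhs == rhs
```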
In this section, we study the dual group inverses of dual matrices. Some necessary and sufficient conditions for the existence of the dual group inverse are given. Some efficient methods for the computation of the dual group inverse are also presented. We firstly give a necessary and sufficient condition for a dual matrix to be the dual group inverse of a given dual matrix ˆA=A+εB.
Lemma 3.1. Let ˆA = A + εB be a dual matrix with A, B ∈ R^{n×n} and Ind(A) = 1. Then an n×n dual matrix ˆG = G + εR is a dual group inverse of ˆA if and only if G = A# and
B = AA#B + ARA + BA#A,  (3.1)
R = A#AR + A#BA# + RAA#,  (3.2)
AR + BA# = RA + A#B.  (3.3)
Proof. By Definition 2.4, ˆG=G+εR is a dual group inverse of ˆA=A+εB if and only if
(A + εB)(G + εR)(A + εB) = AGA + ε(AGB + ARA + BGA) = A + εB,
(G + εR)(A + εB)(G + εR) = GAG + ε(GAR + GBG + RAG) = G + εR,
(A + εB)(G + εR) = AG + ε(AR + BG) = GA + ε(GB + RA) = (G + εR)(A + εB).
Then, we can see from the prime parts of the above equalities that AGA=A, GAG=G and AG=GA, i.e., G=A#. On the other hand, we can see from the dual parts of the above equalities that B=AGB+ARA+BGA, R=GAR+GBG+RAG and AR+BG=GB+RA. Hence, the equalities (3.1)–(3.3) follow.
It is well-known that for a real square matrix A, if A# exists, then it is unique. We now show that it is also true for dual matrices.
Theorem 3.1. Let ˆA = A + εB be a dual matrix with A, B ∈ R^{n×n} and Ind(A) = 1. If the dual group inverse of ˆA exists, then it is unique.
Proof. According to Lemma 3.1, if the dual group inverse of ˆA = A + εB exists, then it must be of the form A# + εR. Let ˆG_1 = A# + εR_1 and ˆG_2 = A# + εR_2 be two dual group inverses of ˆA. To show the uniqueness of ˆA#, we only need to show that R_1 = R_2.
It follows from (3.1) that
B = AA#B + AR_1A + BA#A  (3.4)
and
B = AA#B + AR_2A + BA#A.  (3.5)
Subtracting (3.4) from (3.5) gives
A(R_1 − R_2)A = 0.  (3.6)
Similarly, we can observe from (3.2) that
R_1 = A#AR_1 + A#BA# + R_1AA#  (3.7)
and
R_2 = A#AR_2 + A#BA# + R_2AA#.  (3.8)
Comparing (3.7) and (3.8), we have
R_1 − R_2 = A#A(R_1 − R_2) + (R_1 − R_2)AA#.  (3.9)
Furthermore, it can be seen from (3.3) that
AR_1 + BA# = R_1A + A#B  (3.10)
and
AR_2 + BA# = R_2A + A#B.  (3.11)
Subtracting (3.10) from (3.11) gives
A(R_1 − R_2) = (R_1 − R_2)A.  (3.12)
Now, postmultiplying (3.6) by A# yields A(R_1 − R_2)AA# = 0. Thus, by (3.12),
0 = A(R_1 − R_2)AA# = (R_1 − R_2)AAA# = (R_1 − R_2)A = A(R_1 − R_2).  (3.13)
Substituting (3.13) into (3.9), we get
R_1 − R_2 = A#A(R_1 − R_2) + (R_1 − R_2)AA# = 0,
i.e., R_1 = R_2, which completes the proof.
Although Lemma 3.1 provides a necessary and sufficient condition for a dual matrix to be the dual group inverse of a given dual matrix, it is still not easy to see whether the dual group inverse of a given dual matrix exists. The following theorem gives some necessary and sufficient conditions for the existence of ˆA# under the assumption that the prime part of ˆA has index 1. If one of these conditions is satisfied, we give a compact formula for the computation of ˆA#.
Theorem 3.2. Let ˆA = A + εB be a dual matrix with A, B ∈ R^{n×n} and Ind(A) = 1. Then the following conditions are equivalent:
(i) The dual group inverse of ˆA exists;
(ii) ˆA = P [C 0; 0 0] P^{-1} + εP [B_1 B_2; B_3 0] P^{-1}, where C and P are nonsingular matrices;
(iii) (I − AA#)B(I − AA#) = 0;
(iv) [A B; 0 A]# exists;
(v) rank[B A; A 0] = 2 rank(A);
(vi) ˆA† exists.
Furthermore, if the dual group inverse of ˆA exists, then
ˆA# = A# + εR,  (3.14)
where
R = −A#BA# + (A#)^2B(I − AA#) + (I − AA#)B(A#)^2.  (3.15)
Proof. (i) ⟹ (ii): If Ind(A) = 1, then A and A# have the block matrix forms (2.1) and (2.2), respectively. Let B = P [B_1 B_2; B_3 B_4] P^{-1} and R = P [R_1 R_2; R_3 R_4] P^{-1}. If the dual group inverse of ˆA exists, then by Lemma 3.1, condition (3.1) is satisfied, i.e.,
[B_1 B_2; B_3 B_4] = [I 0; 0 0][B_1 B_2; B_3 B_4] + [C 0; 0 0][R_1 R_2; R_3 R_4][C 0; 0 0] + [B_1 B_2; B_3 B_4][I 0; 0 0] = [2B_1 + CR_1C  B_2; B_3  0].
It can be seen from the above equality that B_4 = 0.
(ii) ⟹ (iii): If
A = P [C 0; 0 0] P^{-1},  B = P [B_1 B_2; B_3 0] P^{-1},
then a direct calculation shows that
(I − AA#)B(I − AA#) = P [0 0; 0 I][B_1 B_2; B_3 0][0 0; 0 I] P^{-1} = 0.
(iii) ⟹ (i): By Lemma 2.1, there exist nonsingular matrices P and C such that A and A# are of the forms (2.1) and (2.2), respectively. Let B = P [B_1 B_2; B_3 B_4] P^{-1}. Then it follows from (I − AA#)B(I − AA#) = 0 that
[0 0; 0 I][B_1 B_2; B_3 B_4][0 0; 0 I] = [0 0; 0 B_4] = 0,
which implies that B_4 = 0.
Now, denote
ˆG = P [C^{-1} 0; 0 0] P^{-1} + εP [−C^{-1}B_1C^{-1}  C^{-2}B_2; B_3C^{-2}  0] P^{-1}.
Then
ˆAˆG = (P [C 0; 0 0] P^{-1} + εP [B_1 B_2; B_3 0] P^{-1}) × (P [C^{-1} 0; 0 0] P^{-1} + εP [−C^{-1}B_1C^{-1}  C^{-2}B_2; B_3C^{-2}  0] P^{-1}) = P [I 0; 0 0] P^{-1} + εP [0  C^{-1}B_2; B_3C^{-1}  0] P^{-1},
ˆGˆA = (P [C^{-1} 0; 0 0] P^{-1} + εP [−C^{-1}B_1C^{-1}  C^{-2}B_2; B_3C^{-2}  0] P^{-1}) × (P [C 0; 0 0] P^{-1} + εP [B_1 B_2; B_3 0] P^{-1}) = P [I 0; 0 0] P^{-1} + εP [0  C^{-1}B_2; B_3C^{-1}  0] P^{-1}.
Hence, ˆAˆG = ˆGˆA.
Moreover,
ˆAˆGˆA = (P [I 0; 0 0] P^{-1} + εP [0  C^{-1}B_2; B_3C^{-1}  0] P^{-1}) × (P [C 0; 0 0] P^{-1} + εP [B_1 B_2; B_3 0] P^{-1}) = P [C 0; 0 0] P^{-1} + εP [B_1 B_2; B_3 0] P^{-1} = ˆA
and
ˆGˆAˆG = (P [I 0; 0 0] P^{-1} + εP [0  C^{-1}B_2; B_3C^{-1}  0] P^{-1}) × (P [C^{-1} 0; 0 0] P^{-1} + εP [−C^{-1}B_1C^{-1}  C^{-2}B_2; B_3C^{-2}  0] P^{-1}) = P [C^{-1} 0; 0 0] P^{-1} + εP [−C^{-1}B_1C^{-1}  C^{-2}B_2; B_3C^{-2}  0] P^{-1} = ˆG.
Therefore, ˆA# exists and ˆA# = ˆG.
Since
A#BA# = P [C^{-1}B_1C^{-1} 0; 0 0] P^{-1},
(A#)^2B(I − AA#) = P [C^{-2} 0; 0 0][B_1 B_2; B_3 0][0 0; 0 I] P^{-1} = P [0 C^{-2}B_2; 0 0] P^{-1},
(I − AA#)B(A#)^2 = P [0 0; 0 I][B_1 B_2; B_3 0][C^{-2} 0; 0 0] P^{-1} = P [0 0; B_3C^{-2} 0] P^{-1},
we obtain ˆG = ˆA# = A# + ε[−A#BA# + (A#)^2B(I − AA#) + (I − AA#)B(A#)^2], which completes the proofs of (3.14) and (3.15).
Since Ind(A) = 1, by (i) of Lemma 2.4,
rank[B A; A 0] = 2 rank(A) + rank[(I − AA#)B(I − AA#)].  (3.16)
It can be seen from (3.16) that (I − AA#)B(I − AA#) = 0 if and only if rank[B A; A 0] = 2 rank(A). Thus (iii) ⟺ (v) follows.
The equivalence of (iii) and (iv) follows directly from Lemma 2.3, and the equivalence of (v) and (vi) follows from the equivalence of (a) and (c) in [16, Theorem 2.2], in which the author showed that ˆA† exists if and only if rank[B A; A 0] = 2 rank(A).
It should be noticed from Theorem 3.2 that for a square dual matrix ˆA whose prime part has index 1, ˆA# exists if and only if ˆA† exists. However, if the index of the prime part of a square dual matrix ˆA is not 1, then ˆA# does not exist, but ˆA† may exist. Moreover, it is easy to see that if ˆA# exists, then for any nonsingular real matrix P, (P^{-1}ˆAP)# also exists and (P^{-1}ˆAP)# = P^{-1}ˆA#P, whereas the DMPGI does not possess this property.
If the prime part of a dual matrix ˆA = A + εB has index 1, then it is not difficult to check whether ˆA# exists. For example, we can compute the rank of A and the rank of the 2×2 block matrix [B A; A 0] to see whether condition (v) of Theorem 3.2 holds. On the other hand, it is a somewhat unexpected result that if the dual group inverse of ˆA exists, then by Lemma 2.3 the prime part and the dual part of ˆA# are respectively the (1,1)-block and the (1,2)-block of the group inverse of the 2×2 upper triangular block matrix [A B; 0 A], where A and B are respectively the prime part and the dual part of ˆA. In other words, ˆA# is completely determined by the group inverse of the block matrix [A B; 0 A]. Hence, in order to obtain ˆA#, we only need to compute [A B; 0 A]#, which gives an efficient method to compute ˆA#.
Corollary 3.1. Let ˆA = A + εB be a dual matrix. If ˆA# exists, then
ˆA# = [I 0] [A B; 0 A]# [I; εI].
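Corollary 3.1 suggests a simple computational procedure: form the block matrix [A B; 0 A], compute its group inverse, and read off the (1,1)- and (1,2)-blocks. A sketch follows (the existence test implements condition (v) of Theorem 3.2; the pseudoinverse-based group-inverse formula is an assumption of ours, not from the paper), cross-checked against formula (3.15) on the data of Example 3.2.

```python
import numpy as np

def group_inverse(M):
    # M# = M (M^3)^+ M when Ind(M) = 1 (assumed formula, not from the paper)
    return M @ np.linalg.pinv(M @ M @ M) @ M

def dual_group_inverse(A, B):
    """Return (A#, R) with ˆA# = A# + eps R, via Corollary 3.1."""
    n = A.shape[0]
    # existence test: condition (v) of Theorem 3.2
    test = np.block([[B, A], [A, np.zeros_like(A)]])
    if np.linalg.matrix_rank(test) != 2 * np.linalg.matrix_rank(A):
        raise ValueError("the dual group inverse does not exist")
    M = np.block([[A, B], [np.zeros_like(A), A]])
    Mg = group_inverse(M)
    return Mg[:n, :n], Mg[:n, n:]   # (1,1)-block and (1,2)-block

# data of Example 3.2
A = np.array([[1., -1., 0.], [0., 0., 0.], [1., -3., 3.]])
B = np.array([[2., -4., 3.], [0., 0., 0.], [1., -5., 6.]])
G, R = dual_group_inverse(A, B)
# cross-check the dual part against formula (3.15)
E = np.eye(3) - A @ G
R315 = -G @ B @ G + G @ G @ B @ E + E @ B @ G @ G
assert np.allclose(R, R315)
```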
Corollary 3.2. Let ˆA = A + εB be a dual matrix with A, B ∈ R^{n×n} and Ind(A) = 1. Then the following statements are equivalent:
(i) The dual group inverse of ˆA exists and ˆA# = A# − εA#BA#;
(ii) AA#B = BAA# = B;
(iii) R(B) ⊂ R(A) and N(A) ⊂ N(B);
(iv) rank[B A] = rank[B; A] = rank(A).
Proof. (i) ⟹ (ii): If ˆA# exists, then by (ii) of Theorem 3.2,
ˆA = P [C 0; 0 0] P^{-1} + εP [B_1 B_2; B_3 0] P^{-1}.
Moreover, if ˆA# = A# − εA#BA#, then by (3.15) we have (A#)^2B(I − AA#) + (I − AA#)B(A#)^2 = 0, i.e.,
[C^{-2} 0; 0 0][B_1 B_2; B_3 0][0 0; 0 I] + [0 0; 0 I][B_1 B_2; B_3 0][C^{-2} 0; 0 0] = [0  C^{-2}B_2; B_3C^{-2}  0] = 0.
Thus, C^{-2}B_2 = 0 and B_3C^{-2} = 0, i.e., B_2 = 0 and B_3 = 0. Now, B = P [B_1 0; 0 0] P^{-1}, and it is easy to see that AA#B = BAA# = B.
(ii) ⟹ (i): If AA#B = BAA# = B, then (I − AA#)B(I − AA#) = 0. Thus, by Theorem 3.2, the dual group inverse of ˆA exists. It is also clear that (A#)^2B(I − AA#) + (I − AA#)B(A#)^2 = 0. Therefore, it follows from (3.15) that ˆA# = A# − εA#BA#.
The equivalence of (ii) and (iii) follows immediately from Lemma 2.2.
(ii) ⟺ (iv): We can observe from (ii) and (iii) of Lemma 2.4 that
rank[B A] = rank(A) + rank(B − AA#B)
and
rank[B; A] = rank(A) + rank(B − BAA#).
Hence, AA#B = BAA# = B if and only if rank[B A] = rank[B; A] = rank(A).
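The equivalences of Corollary 3.2 can be checked numerically by constructing B = AMA for an arbitrary M, which forces R(B) ⊂ R(A) and N(A) ⊂ N(B); both this construction and the group-inverse formula below are our own choices for this sketch.

```python
import numpy as np

def group_inverse(A):
    # A# = A (A^3)^+ A when Ind(A) = 1 (assumed formula, not from the paper)
    return A @ np.linalg.pinv(A @ A @ A) @ A

rng = np.random.default_rng(2)
P = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)
A = P @ np.diag([1.0, 2.0, 0.0]) @ np.linalg.inv(P)   # singular, Ind(A) = 1
B = A @ rng.standard_normal((3, 3)) @ A               # forces R(B) in R(A), N(A) in N(B)
Ag = group_inverse(A)
# condition (ii): AA#B = BAA# = B
assert np.allclose(A @ Ag @ B, B) and np.allclose(B @ A @ Ag, B)
# condition (iv): rank[B A] = rank[B; A] = rank(A)
r = np.linalg.matrix_rank(A)
assert np.linalg.matrix_rank(np.hstack([B, A])) == r
assert np.linalg.matrix_rank(np.vstack([B, A])) == r
# in this case the dual part of ˆA# in (3.15) collapses to -A# B A#
E = np.eye(3) - A @ Ag
R = -Ag @ B @ Ag + Ag @ Ag @ B @ E + E @ B @ Ag @ Ag
assert np.allclose(R, -Ag @ B @ Ag)
```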
Example 3.1. We provide an example from kinematics in which three line-vectors from points p_i to q_i, 1 ≤ i ≤ 3, are drawn on a flat plane that lies in the plane y = 2 in an inertial coordinate frame. The points have coordinates
p_1 = (1,2,2),  p_2 = (3,2,2),  p_3 = (5,2,4)
and
q_1 = (3,2,3),  q_2 = (4,2,3),  q_3 = (8,2,6).
Then the dual matrix of line-vectors is
ˆA = [2 1 3; 0 0 0; 1 1 2] + ε[2 2 4; 3 −1 2; −4 −2 −6] := A + εB,
where the i-th columns of the prime part and of the dual part of ˆA are respectively q_i − p_i and p_i × q_i, 1 ≤ i ≤ 3.
Since rank(A) = rank(A^2) = 2, A# exists. Moreover, rank[B A; A 0] = 4 = 2 rank(A). Hence, by Theorem 3.2, ˆA† and ˆA# exist. It follows from [16] that
ˆA† = A† − ε[A†BA† − (A^TA)†B^T(I − AA†) − (I − A†A)B^T(AA^T)†] = [1.0000 0.0000 −1.3333; −1.0000 0.0000 1.6667; 0.0000 0.0000 0.3333] + ε[−2.6667 10.6667 −2.0000; 3.3333 −12.3333 2.0000; 0.6667 −1.6667 0.0000].
On the other hand,
[A B; 0 A]# = [2 −5 −3 27 −78 −51; 0 0 0 13 −35 −22; −1 3 2 −21 60 39; 0 0 0 2 −5 −3; 0 0 0 0 0 0; 0 0 0 −1 3 2].
Therefore,
ˆA# = [2 −5 −3; 0 0 0; −1 3 2] + ε[27 −78 −51; 13 −35 −22; −21 60 39].
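Example 3.1 can be reproduced numerically via Corollary 3.1 (the pseudoinverse-based group-inverse formula is an assumed computational device of ours, not from the paper):

```python
import numpy as np

def group_inverse(M):
    # M# = M (M^3)^+ M when Ind(M) = 1 (assumed formula, not from the paper)
    return M @ np.linalg.pinv(M @ M @ M) @ M

A = np.array([[2., 1., 3.], [0., 0., 0.], [1., 1., 2.]])
B = np.array([[2., 2., 4.], [3., -1., 2.], [-4., -2., -6.]])
Mg = group_inverse(np.block([[A, B], [np.zeros((3, 3)), A]]))
G, R = Mg[:3, :3], Mg[:3, 3:]   # prime and dual parts of ˆA# (Corollary 3.1)
assert np.allclose(G, [[2., -5., -3.], [0., 0., 0.], [-1., 3., 2.]])
assert np.allclose(R, [[27., -78., -51.], [13., -35., -22.], [-21., 60., 39.]])
```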
Example 3.2. Let ˆA = A + εB be a dual matrix, where
A = [1 −1 0; 0 0 0; 1 −3 3],  B = [2 −4 3; 0 0 0; 1 −5 6].
Then rank(A) = rank(A^2) = 2 implies that A# exists and
A# = [1.0000 −1.0000 0.0000; 0.0000 0.0000 0.0000; −0.3333 0.1111 0.3333].
Moreover, it is not difficult to see that rank[B A] = rank[B; A] = 2. Hence, by Corollary 3.2, ˆA# exists and
ˆA# = A# − εA#BA# = [1.0000 −1.0000 0.0000; 0.0000 0.0000 0.0000; −0.3333 0.1111 0.3333] − ε[1.0000 −1.6667 1.0000; 0.0000 0.0000 0.0000; −0.6667 0.4444 0.3333].
In this section, we consider the linear dual equation
ˆAˆx = ˆb,  (4.1)
where ˆA ∈ D^{n×n} and ˆx, ˆb ∈ D^n.
It was shown in [12] that the DMPGI plays an important role in the solutions and least-squares solutions of systems of linear dual equations. Besides the DMPGI, the dual group inverse also turns out to have certain least-squares and minimal properties.
Define the range and the null space of a dual matrix ˆA = A + εB ∈ D^{n×n} as follows:
R(ˆA) = {ˆw ∈ D^n : ˆw = ˆAˆz, ˆz ∈ D^n} = {Ax + ε(Ay + Bx) : x, y ∈ R^n},  (4.2)
N(ˆA) = {ˆz ∈ D^n : ˆAˆz = 0} = {x + εy : Ax = 0, Ay + Bx = 0, x, y ∈ R^n}.  (4.3)
It is not difficult to see that if ˆA# exists, then R(ˆA#)=R(ˆA) and N(ˆA#)=N(ˆA).
We first give a necessary and sufficient condition for Eq (4.1) to be consistent under the assumption that ˆA# exists, which is analogous to the case when A and b are real. We omit the proof.
Theorem 4.1. If the dual group inverse of ˆA ∈ D^{n×n} exists, then the linear dual equation (4.1) is consistent if and only if ˆAˆA#ˆb = ˆb. In this case, the general solution to (4.1) is
ˆx = ˆA#ˆb + (I − ˆAˆA#)ˆz,  (4.4)
where ˆz ∈ D^n is an arbitrary dual vector.
Notice that the condition ˆAˆA#ˆb = ˆb in Theorem 4.1 is equivalent to ˆb ∈ R(ˆA). If ˆA#ˆb is a solution to (4.1), then we call it the dual group-inverse solution to the linear dual equation (4.1). Although ˆA#ˆb is not a solution to (4.1) when ˆb ∉ R(ˆA), for convenience we still call it the dual group-inverse solution.
Next, we present some characterizations of the dual group-inverse solution ˆA#ˆb.
Theorem 4.2. If the dual group inverse of a dual matrix ˆA = A + εB ∈ D^{n×n} exists, then ˆA#ˆb is the unique solution in R(ˆA) of
ˆA^2ˆx = ˆAˆb.  (4.5)
Proof. First, if ˆA# exists, then it is obvious that Eq (4.5) is consistent and that ˆA#ˆb is a solution.
It is clear that ˆA#ˆb ∈ R(ˆA#) = R(ˆA). Suppose that ˆu is another solution in R(ˆA) of (4.5). Then ˆu − ˆA#ˆb ∈ R(ˆA). On the other hand, since ˆu and ˆA#ˆb are solutions of (4.5), then ˆA^2(ˆu − ˆA#ˆb) = 0, which implies that ˆA#ˆA^2(ˆu − ˆA#ˆb) = ˆA(ˆu − ˆA#ˆb) = 0, i.e., ˆu − ˆA#ˆb ∈ N(ˆA). Hence, ˆu − ˆA#ˆb ∈ R(ˆA)⋂N(ˆA).
Recall that for a real square matrix A, if A# exists, then R(A)⋂N(A) = {0}. We now show that R(ˆA)⋂N(ˆA) = {0} as well; then ˆu = ˆA#ˆb, and the uniqueness of ˆA#ˆb in R(ˆA) follows.
Indeed, for any ˆz∈R(ˆA)⋂N(ˆA), we can see from (4.2) that there exist x,y∈Rn such that ˆz=Ax+ε(Ay+Bx). Moreover, since ˆAˆz=0, then
(A + εB)[Ax + ε(Ay + Bx)] = A^2x + ε[A^2y + (AB + BA)x] = 0.
Thus A^2x = 0 and A^2y + (AB + BA)x = 0.
It can be seen from A^2x = A(Ax) = 0 that Ax ∈ R(A)⋂N(A) = {0}. Therefore, Ax = 0. In this case, 0 = A^2y + (AB + BA)x = A^2y + ABx = A(Ay + Bx), i.e., Ay + Bx ∈ N(A). On the other hand, since ˆA# exists, it follows from Theorem 3.2 that (I − AA#)B(I − AA#) = 0, i.e.,
B = AA#B + BAA# − AA#BAA#.  (4.6)
Substituting (4.6) into Ay + Bx, we get
Ay + Bx = Ay + (AA#B + BAA# − AA#BAA#)x = A(y + A#Bx) ∈ R(A).
Now, Ay+Bx∈R(A)⋂N(A)={0}, i.e., Ay+Bx=0. Therefore, ˆz=Ax+ε(Ay+Bx)=0, which implies that R(ˆA)⋂N(ˆA)={0}.
Since (4.5) is analogous to the normal equation ˆA^TˆAˆx = ˆA^Tˆb of (4.1), we shall call the linear dual equation (4.5) the group normal equation of (4.1). It is obvious that each solution of (4.1) is also a solution of (4.5).
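Using the data of Example 4.1 below, one can check numerically that ˆA#ˆb satisfies the group normal equation (4.5); the helper functions are our own illustrative sketches (the pseudoinverse-based group-inverse formula is not from the paper).

```python
import numpy as np

def dual_matmul(A1, B1, A2, B2):
    # (A1 + eps B1)(A2 + eps B2) = A1 A2 + eps (A1 B2 + B1 A2)
    return A1 @ A2, A1 @ B2 + B1 @ A2

def group_inverse(M):
    # M# = M (M^3)^+ M when Ind(M) = 1 (assumed formula, not from the paper)
    return M @ np.linalg.pinv(M @ M @ M) @ M

# data of Example 4.1
A = np.array([[1., 2., 1.], [2., 1., 1.], [3., 3., 2.]])
B = np.array([[1., 4., 7.], [2., 5., 8.], [3., 6., 14.]])
b = np.array([8.2, 7.3, 15.1])
c = np.array([30.2, 32.8, 53.6])
Mg = group_inverse(np.block([[A, B], [np.zeros((3, 3)), A]]))
G, R = Mg[:3, :3], Mg[:3, 3:]            # ˆA# = G + eps R (Corollary 3.1)
x, y = G @ b, G @ c + R @ b              # ˆA#ˆb = x + eps y
A2p, A2d = dual_matmul(A, B, A, B)       # prime and dual parts of ˆA^2
# ˆA^2 (ˆA#ˆb) should equal ˆAˆb, i.e. ˆA#ˆb solves the group normal equation
assert np.allclose(A2p @ x, A @ b)
assert np.allclose(A2p @ y + A2d @ x, A @ c + B @ b)
```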
The P-norm of a real vector x is defined as ∥x∥_P = ∥P^{-1}x∥_2, where ∥⋅∥_2 is the Euclidean norm and P is a nonsingular matrix that transforms A into its Jordan canonical form (2.1) (see [25]). For ˆx = u + εv ∈ D^n, Udwadia [12] defined a norm of ˆx as
⟨ˆx⟩ = ∥u∥_2 + ∥v∥_2.  (4.7)
In this paper, we define a norm of ˆx = u + εv as
∥ˆx∥ = √(∥u∥_2^2 + ∥v∥_2^2).  (4.8)
We now show that the expression given in (4.8) indeed defines a norm.
(i) ∥ˆx∥≥0, and ∥ˆx∥=0 if and only if ˆx=0.
(ii) For a real scalar k, ∥kˆx∥ = √(∥ku∥_2^2 + ∥kv∥_2^2) = |k|√(∥u∥_2^2 + ∥v∥_2^2) = |k|∥ˆx∥.
(iii) For two dual vectors ˆx = u_1 + εv_1, ˆy = u_2 + εv_2 ∈ D^n,
∥ˆx + ˆy∥^2 = ∥u_1 + u_2∥_2^2 + ∥v_1 + v_2∥_2^2 = ∥u_1∥_2^2 + ∥u_2∥_2^2 + ∥v_1∥_2^2 + ∥v_2∥_2^2 + 2u_1^Tu_2 + 2v_1^Tv_2 ≤ ∥u_1∥_2^2 + ∥u_2∥_2^2 + ∥v_1∥_2^2 + ∥v_2∥_2^2 + 2∥u_1∥_2∥u_2∥_2 + 2∥v_1∥_2∥v_2∥_2 ≤ ∥u_1∥_2^2 + ∥u_2∥_2^2 + ∥v_1∥_2^2 + ∥v_2∥_2^2 + 2√((∥u_1∥_2^2 + ∥v_1∥_2^2)(∥u_2∥_2^2 + ∥v_2∥_2^2)) = (√(∥u_1∥_2^2 + ∥v_1∥_2^2) + √(∥u_2∥_2^2 + ∥v_2∥_2^2))^2 = (∥ˆx∥ + ∥ˆy∥)^2,
i.e., ∥ˆx+ˆy∥≤∥ˆx∥+∥ˆy∥.
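Properties (ii) and (iii) of the norm (4.8) are easy to check numerically (a small sketch; the helper name is ours):

```python
import numpy as np

def dual_norm(u, v):
    # ||u + eps v|| = sqrt(||u||_2^2 + ||v||_2^2), definition (4.8)
    return np.sqrt(np.linalg.norm(u) ** 2 + np.linalg.norm(v) ** 2)

rng = np.random.default_rng(3)
u1, v1, u2, v2 = (rng.standard_normal(4) for _ in range(4))
# (ii): absolute homogeneity for real scalars
assert np.isclose(dual_norm(2.5 * u1, 2.5 * v1), 2.5 * dual_norm(u1, v1))
# (iii): triangle inequality
assert dual_norm(u1 + u2, v1 + v2) <= dual_norm(u1, v1) + dual_norm(u2, v2) + 1e-12
```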
Furthermore, in order to study the minimal properties of the dual group-inverse solution, we define the P-norm of a dual vector ˆx = u + εv as
∥ˆx∥_P = ∥P^{-1}ˆx∥ = √(∥P^{-1}u∥_2^2 + ∥P^{-1}v∥_2^2).  (4.9)
For a dual vector ˆx, comparing the squares of the norms defined in (4.7) and (4.8) shows that the norm of ˆx defined in (4.8) is not greater than the norm of ˆx defined in (4.7). Moreover, the norm defined in (4.9) generalizes the norm defined in (4.8), since (4.9) reduces to (4.8) when the nonsingular matrix P is replaced by the identity matrix I.
When the matrix A (with Ind(A) = 1) and the vector b are real, it was shown in [24,26] that A#b is the unique minimal P-norm least-squares solution of the inconsistent equation Ax = b. Now we consider the problem of finding a dual vector ˆx analogous to the x that makes ∥Ax − b∥_P as small as possible in the real case. The following theorem shows that, although the dual group-inverse solution may not be a least-squares solution to the linear dual equation (4.1), since we cannot guarantee that ∥ˆAˆA#ˆb − ˆb∥_P ≤ ∥ˆAˆx − ˆb∥_P for every ˆx ∈ D^n, it nevertheless provides a small P-norm of the error ˆe = ˆAˆx − ˆb.
Theorem 4.3. Let ˆA = A + εB ∈ D^{n×n} and ˆb ∈ D^n be such that ˆA# exists. Then
(i) The choices ˆx = ˆA#ˆb + (I − ˆAˆA#)ˆz, where ˆz ∈ D^n, give a small P-norm of the error of the inconsistent equation ˆAˆx = ˆb. The norm of the error is
∥ˆe∥_P = ∥ˆAˆx − ˆb∥_P = ∥ˆAˆA#ˆb − ˆb∥_P,  (4.10)
where ˆx satisfies the group normal equation (4.5).
(ii) The dual group-inverse solution ˆA#ˆb has a small P-norm among ˆx = ˆA#ˆb + (I − ˆAˆA#)ˆz, ˆz ∈ D^n. In addition, among the solutions of (4.5), ˆA#ˆb is the unique dual vector which is orthogonal to the null space of ˆA^T.
Proof. (i) Denote ˆw_1 = ˆAˆA#ˆb − ˆb = u_1 + εv_1 and ˆw_2 = ˆAˆx − ˆAˆA#ˆb = u_2 + εv_2. Then
∥ˆe∥_P = ∥ˆAˆx − ˆb∥_P = ∥ˆw_1 + ˆw_2∥_P = √(∥u_1 + u_2∥_P^2 + ∥v_1 + v_2∥_P^2).  (4.11)
It can be seen from the block representations of ˆA and ˆA# in Theorem 3.2 that
[P^{-1}(ˆAˆA# − I)P]^T(P^{-1}ˆAP) = ([0 0; 0 −I] + ε[0 (B_3C^{-1})^T; (C^{-1}B_2)^T 0]) × ([C 0; 0 0] + ε[B_1 B_2; B_3 0]) = ε[0 0; (C^{-1}B_2)^TC − B_3 0] := εM.
If we denote P^{-1}ˆb = x + εy and P^{-1}(ˆx − ˆA#ˆb) = w + εz, then
(P^{-1}ˆw_1)^T(P^{-1}ˆw_2) = [P^{-1}(ˆAˆA#ˆb − ˆb)]^T[P^{-1}(ˆAˆx − ˆAˆA#ˆb)] = [P^{-1}(ˆAˆA# − I)PP^{-1}ˆb]^T[P^{-1}ˆAPP^{-1}(ˆx − ˆA#ˆb)] = (P^{-1}ˆb)^T[P^{-1}(ˆAˆA# − I)P]^T(P^{-1}ˆAP)[P^{-1}(ˆx − ˆA#ˆb)] = εx^TMw,
i.e., the prime part of (P^{-1}ˆw_1)^T(P^{-1}ˆw_2) is zero. Thus (P^{-1}u_1)^T(P^{-1}u_2) = 0.
Hence, ∥u_1 + u_2∥_P^2 = ∥P^{-1}(u_1 + u_2)∥_2^2 = (P^{-1}u_1 + P^{-1}u_2)^T(P^{-1}u_1 + P^{-1}u_2) = ∥u_1∥_P^2 + ∥u_2∥_P^2 + 2(P^{-1}u_1)^T(P^{-1}u_2) = ∥u_1∥_P^2 + ∥u_2∥_P^2.
Substituting the above equality into (4.11), we get an upper bound for ∥ˆe∥_P, i.e.,
∥ˆe∥_P = ∥ˆAˆx − ˆb∥_P ≤ √(∥u_1∥_P^2 + ∥u_2∥_P^2 + (∥v_1∥_P + ∥v_2∥_P)^2) := δ_1.  (4.12)
Notice that the dual vector ˆw_1 depends only on ˆA and ˆb; in order to obtain the smallest value of the upper bound δ_1 given in (4.12), we can choose ˆx to make ˆw_2 = 0 so that ∥u_2∥_P = ∥v_2∥_P = 0.
We remark that if ˆA# exists, then the group normal equation (4.5) is equivalent to ˆAˆx = ˆAˆA#ˆb, and it is not difficult to see that the general solution of the group normal equation (4.5) is
ˆx = ˆA#ˆb + (I − ˆAˆA#)ˆz,
where ˆz is an arbitrary dual vector.
Hence, the choices of ˆx that satisfy (4.5) will cause ˆw_2 to vanish. The P-norm of the error is given by
∥ˆe∥_P = ∥ˆw_1∥_P = √(∥u_1∥_P^2 + ∥v_1∥_P^2) = ∥ˆAˆA#ˆb − ˆb∥_P.
(ii) Denote the dual vectors ˆA#ˆb = ˆμ_1 = α_1 + εβ_1 and (I − ˆAˆA#)ˆz = ˆμ_2 = α_2 + εβ_2. Then
(P^{-1}ˆμ_1)^T(P^{-1}ˆμ_2) = (P^{-1}ˆA#ˆb)^T[P^{-1}(I − ˆAˆA#)ˆz] = (P^{-1}ˆA#PP^{-1}ˆb)^T[P^{-1}(I − ˆAˆA#)PP^{-1}ˆz] = (P^{-1}ˆb)^T(P^{-1}ˆA#P)^T[P^{-1}(I − ˆAˆA#)P](P^{-1}ˆz),
and it can be seen from the block representations of ˆA and ˆA# that the prime part of (P^{-1}ˆA#P)^T[P^{-1}(I − ˆAˆA#)P] is zero. Thus the prime part of (P^{-1}ˆμ_1)^T(P^{-1}ˆμ_2) is also zero, i.e., (P^{-1}α_1)^T(P^{-1}α_2) = 0. Therefore, ∥α_1 + α_2∥_P^2 = ∥α_1∥_P^2 + ∥α_2∥_P^2.
Thus, as before, we obtain an upper bound for the P-norm of the dual vector ˆx given by
∥ˆx∥_P ≤ √(∥α_1∥_P^2 + ∥α_2∥_P^2 + (∥β_1∥_P + ∥β_2∥_P)^2) := δ_2.  (4.13)
To make δ_2 in (4.13) as small as possible, we can choose ˆz = 0 so that ˆμ_2 = (I − ˆAˆA#)ˆz = 0. In this case,
∥ˆx∥_P = ∥ˆμ_1∥_P = √(∥α_1∥_P^2 + ∥β_1∥_P^2) = ∥ˆA#ˆb∥_P.
By Theorem 4.1, N(ˆA^T) = {[I − ˆA^T(ˆA^T)#]ˆz : ˆz ∈ D^n}. For any ˆy ∈ R(ˆA), there exists a dual vector ˆx ∈ D^n such that ˆy = ˆAˆx. It follows that ˆy^T[I − ˆA^T(ˆA^T)#]ˆz = ˆx^TˆA^T[I − ˆA^T(ˆA^T)#]ˆz = 0, i.e., R(ˆA) is orthogonal to N(ˆA^T). Moreover, by Theorem 4.2, ˆA#ˆb is the unique solution in R(ˆA) of ˆA^2ˆx = ˆAˆb; therefore, ˆA#ˆb is orthogonal to N(ˆA^T).
Example 4.1. Consider the inconsistent equation ˆAˆx = ˆb given in [12], where
ˆA = [1 2 1; 2 1 1; 3 3 2] + ε[1 4 7; 2 5 8; 3 6 14] := A + εB,  ˆb = [8.2; 7.3; 15.1] + ε[30.2; 32.8; 53.6].
Then A is diagonalizable, i.e.,
A = [0.4082 0.7071 −0.3015; 0.4082 −0.7071 −0.3015; 0.8165 0.0000 0.9045] [5 0 0; 0 −1 0; 0 0 0] [0.4082 0.7071 −0.3015; 0.4082 −0.7071 −0.3015; 0.8165 0.0000 0.9045]^{-1} := PDP^{-1}.
It is obvious that rank(A) = rank(A^2) = 2. Hence, A# exists. Moreover, since rank[B A; A 0] = 4 = 2 rank(A), by Theorem 3.2, ˆA# exists.
Therefore, by Corollary 3.1,
ˆA# = [I 0][A B; 0 A]#[I; εI] = [−0.4400 0.5600 0.0400; 0.5600 −0.4400 0.0400; 0.1200 0.1200 0.0800] + ε[−0.9200 0.1600 −0.2000; −0.8800 0.2000 0.1600; −0.3600 −1.2000 −0.0800].
The P-norm of the error is
∥ˆe∥_P = ∥ˆAˆA#ˆb − ˆb∥_P = √(∥P^{-1}u∥_2^2 + ∥P^{-1}v∥_2^2) = 0.7452,
where u = [−0.0800, −0.0800, 0.2400]^T and v = [−0.3800, −0.3000, −0.3000]^T.
On the other hand,
ˆA#ˆb = [1.0840; 1.9840; 3.0680] + ε[−2.1720; 1.2840; −1.0720] := x + εy.
Then
∥ˆA#ˆb∥_P = √(∥P^{-1}x∥_2^2 + ∥P^{-1}y∥_2^2) = 4.6795.
We will show in the following that if ˆA# exists and ˆA#=A#−εA#BA#, then the dual group-inverse solution ˆA#ˆb is the minimal P-norm least-squares solution to the inconsistent equation ˆAˆx=ˆb.
Theorem 4.4. Let ˆA = A + εB ∈ D^{n×n} and ˆb ∈ D^n be such that ˆA# exists and ˆA# = A# − εA#BA#. Then ˆx∗ satisfies
∥ˆb − ˆAˆx∗∥_P = min_{ˆx∈D^n} ∥ˆb − ˆAˆx∥_P
if and only if ˆx∗ satisfies the group normal equation (4.5). Moreover, the dual group-inverse solution ˆA#ˆb is the unique minimal P-norm solution of (4.5).
Proof. Write $\hat{b}=\hat{A}\hat{A}^{\#}\hat{b}+(I-\hat{A}\hat{A}^{\#})\hat{b}$. Then
$$\|\hat{b}-\hat{A}\hat{x}\|_P^2=\|\hat{A}\hat{A}^{\#}\hat{b}-\hat{A}\hat{x}\|_P^2+\|(I-\hat{A}\hat{A}^{\#})\hat{b}\|_P^2+2\left[P^{-1}(\hat{A}^{\#}\hat{b}-\hat{x})\right]^T(P^{-1}\hat{A}P)^T\left[P^{-1}(I-\hat{A}\hat{A}^{\#})P\right]P^{-1}\hat{b}. \tag{4.14}$$
If $\hat{A}^{\#}=A^{\#}-\varepsilon A^{\#}BA^{\#}$, then $\hat{A}^{\#}$ has the block representation
$$\hat{A}^{\#}=P\begin{bmatrix}C^{-1}&0\\0&0\end{bmatrix}P^{-1}+\varepsilon P\begin{bmatrix}-C^{-1}B_1C^{-1}&0\\0&0\end{bmatrix}P^{-1}. \tag{4.15}$$
It can be deduced from (2.1) and (4.15) that the third term on the right-hand side of (4.14) vanishes. Hence,
$$\|\hat{b}-\hat{A}\hat{x}\|_P^2=\|\hat{A}\hat{A}^{\#}\hat{b}-\hat{A}\hat{x}\|_P^2+\|(I-\hat{A}\hat{A}^{\#})\hat{b}\|_P^2\ge\|(I-\hat{A}\hat{A}^{\#})\hat{b}\|_P^2,$$
where equality holds if and only if $\hat{A}\hat{x}=\hat{A}\hat{A}^{\#}\hat{b}$.
On the other hand, $\hat{A}\hat{x}=\hat{A}\hat{A}^{\#}\hat{b}$ is equivalent to (4.5), and the general solution of (4.5) is
$$\hat{x}=\hat{A}^{\#}\hat{b}+(I-\hat{A}\hat{A}^{\#})\hat{z}.$$
Then
$$\|\hat{A}^{\#}\hat{b}+(I-\hat{A}\hat{A}^{\#})\hat{z}\|_P^2=\|\hat{A}^{\#}\hat{b}\|_P^2+\|(I-\hat{A}\hat{A}^{\#})\hat{z}\|_P^2\ge\|\hat{A}^{\#}\hat{b}\|_P^2.$$
Equality in the above relation holds if and only if $(I-\hat{A}\hat{A}^{\#})\hat{z}=0$, i.e., $\hat{x}=\hat{A}^{\#}\hat{b}$.
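The Pythagorean decomposition in the last step can be observed numerically. The sketch below is illustrative: it reuses the matrix $A$ and the eigenvector matrix $P$ of Example 4.1 but makes the hypothetical choice $B=A$, for which the hypothesis $\hat{A}^{\#}=A^{\#}-\varepsilon A^{\#}BA^{\#}$ of Theorem 4.4 can be shown to hold; for a random dual vector $\hat{z}$, the squared P-norm of $\hat{A}^{\#}\hat{b}+(I-\hat{A}\hat{A}^{\#})\hat{z}$ then splits into the two orthogonal pieces.

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[1., 2., 1.], [2., 1., 1.], [3., 3., 2.]])
B = A.copy()   # hypothetical choice: with B = A, A-hat^# = A^# - eps*A^#BA^# holds

A_sharp = A @ np.linalg.pinv(A @ A @ A) @ A   # group inverse of A (Ind(A) = 1)

S = np.array([[1., 1., -1.], [1., -1., -1.], [2., 0., 3.]])
P = S / np.linalg.norm(S, axis=0)             # normalized eigenvector matrix of A
Pinv = np.linalg.inv(P)

def pnorm2(real, dual):
    """Squared P-norm of the dual vector real + eps*dual."""
    return np.linalg.norm(Pinv @ real)**2 + np.linalg.norm(Pinv @ dual)**2

b, c = np.array([8.2, 7.3, 15.1]), np.array([30.2, 32.8, 53.6])

# Dual group-inverse solution x* = A-hat^# b-hat (dual multiplication).
xs_r = A_sharp @ b
xs_d = A_sharp @ c - A_sharp @ B @ A_sharp @ b

# mu = (I - A-hat A-hat^#) z-hat; with B = A the projector A-hat A-hat^# = A A^# is real.
Q = np.eye(3) - A @ A_sharp
z, w = rng.standard_normal(3), rng.standard_normal(3)
mu_r, mu_d = Q @ z, Q @ w

# Norms split, so x* = A-hat^# b-hat is the minimal P-norm solution.
lhs = pnorm2(xs_r + mu_r, xs_d + mu_d)
rhs = pnorm2(xs_r, xs_d) + pnorm2(mu_r, mu_d)
print(abs(lhs - rhs) < 1e-8)
```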
Corollary 4.1. Let $\hat{A}=A+\varepsilon B\in\mathbb{D}^{n\times n}$ and $\hat{b}\in\mathbb{D}^n$ be such that $\hat{A}^{\#}$ exists and $\hat{A}^{\#}=A^{\#}-\varepsilon A^{\#}BA^{\#}$. If $\hat{A}\hat{x}=\hat{b}$ is consistent, then $\hat{A}^{\#}\hat{b}$ is the unique minimal P-norm solution of $\hat{A}\hat{x}=\hat{b}$; if $\hat{A}\hat{x}=\hat{b}$ is inconsistent, then $\hat{A}^{\#}\hat{b}$ is the unique minimal P-norm least-squares solution of $\hat{A}\hat{x}=\hat{b}$.
This paper has mainly studied the existence, computation and applications of the dual group inverse. We have shown some differences between the dual group inverse of square dual matrices and the group inverse of square real matrices, especially with regard to existence and computation. An interesting phenomenon is that for a dual matrix $\hat{A}$ whose prime part has index 1, $\hat{A}^{\#}$ exists if and only if $\hat{A}^{\dagger}$ exists. If the dual group inverse of a dual matrix $\hat{A}=A+\varepsilon B$ exists, then $\hat{A}^{\#}$ can be easily obtained by computing the group inverse of the $2\times 2$ upper triangular block matrix $\begin{bmatrix}A&B\\0&A\end{bmatrix}$.
We have also discussed the applications of the dual group inverse to solving systems of linear dual equations, and obtained several results analogous to those for real matrices. On the one hand, if the dual group inverse of the coefficient matrix of the linear dual equation $\hat{A}\hat{x}=\hat{b}$ exists and $\hat{A}^{\#}=A^{\#}+\varepsilon\left[-A^{\#}BA^{\#}+(A^{\#})^2B(I-AA^{\#})+(I-AA^{\#})B(A^{\#})^2\right]$, then the least-squares and minimal-norm properties of $\hat{A}\hat{x}=\hat{b}$ differ somewhat from those of the real case. On the other hand, if the dual group inverse of the coefficient matrix exists and $\hat{A}^{\#}=A^{\#}-\varepsilon A^{\#}BA^{\#}$, then the least-squares and minimal-norm properties of $\hat{A}\hat{x}=\hat{b}$ are almost the same as those of the real case.
We can see from Theorem 3.2 that the condition Ind(A) = 1 is necessary for the existence of the dual group inverse of the dual matrix $\hat{A}=A+\varepsilon B$. However, the prime parts of many dual matrices arising in kinematics and mechanisms may have index larger than one. To deal with such problems, some new dual generalized inverses need to be introduced; that will be our future work.
The authors would like to thank the referees for their valuable comments and suggestions. This work is supported by the Program of Qingjiang Excellent Young Talents, Jiangxi University of Science and Technology (JXUSTQJYX2017007).
The authors declare that there are no conflicts of interest regarding the publication of this paper.
[1] | J. Angeles, The dual generalized inverses and their applications in kinematic synthesis, In: Latest advances in robot kinematics, Springer, 2012, 1–10. https://doi.org/10.1007/978-94-007-4620-6_1 |
[2] | H. H. Cheng, S. Thompson, Dual polynomials and complex dual numbers for analysis of spatial mechanisms, In: Proceedings of the ASME 1996 Design Engineering Technical Conference and Computers in Engineering Conference, Irvine, California, USA, 1996. https://doi.org/10.1115/96-DETC/MECH-1221 |
[3] | G. F. Simmons, Introduction to topology and modern analysis, Krieger Publishing Company, 1963. |
[4] | J. Angeles, The application of dual algebra to kinematic analysis, In: J. Angeles, E. Zakhariev (Eds.), Computational methods in mechanical systems, Springer-Verlag, Heidelberg, 1998, 3–31. https://doi.org/10.1007/978-3-662-03729-4_1 |
[5] | E. Pennestrì, R. Stefanelli, Linear algebra and numerical algorithms using dual numbers, Multibody Syst. Dyn., 18 (2007), 323–344. https://doi.org/10.1007/s11044-007-9088-9 |
[6] | W. K. Clifford, Preliminary sketch of biquaternions, Proc. Lond. Math. Soc., s1-4 (1871), 381–395. https://doi.org/10.1112/plms/s1-4.1.381 |
[7] | E. Study, Geometrie der Dynamen, Teubner, Leipzig, 1903. |
[8] | Y. L. Gu, J. Luh, Dual-number transformation and its applications to robotics, IEEE J. Robot. Autom., 3 (1987), 615–623. https://doi.org/10.1109/JRA.1987.1087138 |
[9] | H. Heiß, Homogeneous and dual matrices for treating the kinematic problem of robots, IFAC Proc. Vol., 19 (1986), 51–55. https://doi.org/10.1016/S1474-6670(17)59452-5 |
[10] | Y. Jin, X. Wang, The application of the dual number methods to Scara kinematics, In: International Conference on Mechanic Automation and Control Engineering, IEEE, 2010, 3871–3874. https://doi.org/10.1109/MACE.2010.5535409 |
[11] | E. Pennestrì, P. P. Valentini, Linear dual algebra algorithms and their application to kinematics, In: Multibody dynamics. Computational methods in applied sciences, Springer, 2009. https://doi.org/10.1007/978-1-4020-8829-2_11 |
[12] | F. E. Udwadia, Dual generalized inverses and their use in solving systems of linear dual equations, Mech. Mach. Theory, 156 (2021), 104158. https://doi.org/10.1016/j.mechmachtheory.2020.104158 |
[13] | F. E. Udwadia, E. Pennestrì, D. de Falco, Do all dual matrices have dual Moore-Penrose generalized inverses?, Mech. Mach. Theory, 151 (2020), 103878. https://doi.org/10.1016/j.mechmachtheory.2020.103878 |
[14] | D. de Falco, E. Pennestrì, F. E. Udwadia, On generalized inverses of dual matrices, Mech. Mach. Theory, 123 (2018), 89–106. https://doi.org/10.1016/j.mechmachtheory.2017.11.020 |
[15] | E. Pennestrì, P. P. Valentini, D. de Falco, The Moore-Penrose dual generalized inverse matrix with application to kinematic synthesis of spatial linkages, J. Mech. Des., 140 (2018), 1–7. https://doi.org/10.1115/1.4040882 |
[16] | H. Wang, Characterizations and properties of the MPDGI and DMPGI, Mech. Mach. Theory, 158 (2021), 104212. https://doi.org/10.1016/j.mechmachtheory.2020.104212 |
[17] | C. D. Meyer, The role of the group generalized inverse in the theory of finite Markov chains, SIAM Rev., 17 (1975), 443–464. https://doi.org/10.1137/1017044 |
[18] | N. J. Higham, P. A. Knight, Finite precision behavior of stationary iteration for solving singular systems, Linear Algebra Appl., 192 (1993), 165–186. https://doi.org/10.1016/0024-3795(93)90242-G |
[19] | B. Mihailović, V. M. Jerković, B. Malešević, Solving fuzzy linear systems using a block representation of generalized inverses: The group inverse, Fuzzy Sets Syst., 353 (2018), 44–65. https://doi.org/10.1016/j.fss.2017.11.007 |
[20] | S. L. Campbell, C. D. Meyer, Generalized inverses of linear transformations, SIAM, 2009. https://doi.org/10.1137/1.9780898719048 |
[21] | J. Levine, R. E. Hartwig, Applications of the Drazin inverse to the Hill cryptographic system, Cryptologia, 4 (1980), 71–85. https://doi.org/10.1080/0161-118091854906 |
[22] | G. Wang, Y. Wei, S. Qiao, Generalized inverses: Theory and computations, Springer, Singapore, 2018. https://doi.org/10.1007/978-981-13-0146-9 |
[23] | X. Chen, R. E. Hartwig, The group inverse of a triangular matrix, Linear Algebra Appl., 237/238 (1996), 97–108. https://doi.org/10.1016/0024-3795(95)00561-7 |
[24] | Y. Tian, Rank equalities related to generalized inverses of matrices and their applications, Master's Thesis, Montreal, Quebec, Canada, 2000. |
[25] | Y. Wei, Index splitting for the Drazin inverse and the singular linear system, Appl. Math. Comput., 95 (1998), 115–124. https://doi.org/10.1016/S0096-3003(97)10098-4 |
[26] | Y. Wei, H. Wu, Additional results on index splitting for Drazin inverse solutions of singular linear systems, Electron. J. Linear Algebra, 8 (2001), 83–93. https://doi.org/10.13001/1081-3810.1062 |