Research article

Uncertainty principle for vector-valued functions

  • Received: 04 January 2024 Revised: 08 March 2024 Accepted: 15 March 2024 Published: 01 April 2024
  • MSC : 42B10, 94A12

  • The uncertainty principle for vector-valued functions in $L^2(\mathbb{R}^n,\mathbb{R}^m)$ with $n\ge 2$ is studied. We provide a stronger uncertainty principle than the existing one in the literature when $m\ge 2$. The phase and amplitude derivatives in the sense of the Fourier transform are considered when $m=1$. Based on these definitions, a generalized uncertainty principle is given.

    Citation: Feifei Qu, Xin Wei, Juan Chen. Uncertainty principle for vector-valued functions[J]. AIMS Mathematics, 2024, 9(5): 12494-12510. doi: 10.3934/math.2024611




    In 1843, the Irish mathematician Hamilton proposed the concept of the quaternion, one of his greatest contributions to mathematical science. This discovery extended the complex number field to a higher dimensional space. Quaternions have been widely used in many fields, such as color image processing, modern physics, geostatics and so on [1,2,3,4]. However, processing certain complex discrete-time signals requires number systems of higher order, and the quaternion, as a generalization of complex numbers, naturally comes to mind. Because of its non-commutative structure, however, the quaternion is not well suited to digital signal processing. To address this problem, Schütte and Wenzel introduced the reduced biquaternion and proposed its application to the implementation of a digital filter in 1990 [5]. The reduced biquaternion is a kind of commutative quaternion. Thanks to this commutativity, reduced biquaternions and reduced biquaternion matrices have been applied successfully to many practical problems. For example, [6] applied reduced biquaternions to digital signal and image processing; [7] investigated two types of multistate Hopfield neural networks based on reduced biquaternions; [8] defined the reduced biquaternion canonical transform, which can be used in color image processing; [9] proposed an algorithm for computing eigenvalues, eigenvectors, and the singular value decomposition of reduced biquaternion matrices, and applied it to color image processing.

    Matrix equations are an important branch of matrix theory, and many engineering problems are modeled as matrix equation problems [10]. Linear matrix equations play an important role in the stability analysis of linear dynamic systems and in the theoretical development of nonlinear systems. For example, the Sylvester matrix equation is widely used in control theory [11,12], model reduction [13], image processing [14] and so on. The Lyapunov matrix equation is closely related to the $H_2$ norm of discrete-time linear systems [15], and plays an important role in studying the stability and exact observability of such systems [16]. As the applications of reduced biquaternions and reduced biquaternion matrices become more and more extensive, many scholars are increasingly interested in solving reduced biquaternion matrix equations. [17] studied the minimal norm least squares solution of the reduced biquaternion matrix equation $AX=B$ using the $e_1e_2$ representation, and applied it to color image restoration; [18] studied the Hermitian solution of the reduced biquaternion matrix equation $(AXB,CXD)=(E,G)$ by complex representation; [19] proposed the real vector representation of reduced biquaternions using the semi-tensor product of real matrices to solve the least squares (anti-)Hermitian solution of the reduced biquaternion matrix equation $\sum_{i=1}^{k}A_iXB_i=C$. In this paper, we also use the semi-tensor product as a basic tool to study matrix equation problems.

    The semi-tensor product of real matrices was proposed by Cheng [20]; it is a generalization of ordinary matrix multiplication and has quasi-commutativity under certain conditions. In this paper, we extend the semi-tensor product of real matrices to reduced biquaternion matrices, and then propose some new conclusions for reduced biquaternion matrices under the vector operator by using the semi-tensor product of reduced biquaternion matrices. Using these new conclusions, we study the reduced biquaternion matrix equation

    $$\sum_{p=1}^{l}A_pXB_p=C. \tag{1.1}$$

    Some contributions are summarized as follows:

    1. The semi-tensor product of real matrices is generalized to reduced biquaternion matrices, and some new results for reduced biquaternion matrices under the vector operator are proposed, so that the reduced biquaternion matrix equation is directly transformed into reduced biquaternion linear equations.

    2. Inspired by the H-representation method, we define the GH-representation method to eliminate redundant elements in reduced biquaternion matrices with special structure, so as to improve computational efficiency. We give the GH-representation of anti-Hermitian, Skew-Persymmetric and Skew-Bisymmetric matrices, respectively.

    3. Using the semi-tensor product of matrices and the structure matrix of reduced biquaternion multiplication, a more general complex representation matrix of a reduced biquaternion matrix is defined, which we call the LC-representation.

    4. Compared with the real vector representation method in [19], the method proposed in this paper is faster. The proposed method is applied to color image restoration.

    The remainder of this paper is organized as follows: Section 2 introduces basic knowledge of reduced biquaternions, reduced biquaternion matrices and the semi-tensor product of reduced biquaternion matrices. Some new results are stated and proved in Section 3, including the vector operator of reduced biquaternion matrices, the LC-representation and the GH-representation. Section 4 gives the expression of the least squares solutions of Problems 1, 2 and 3; the necessary and sufficient conditions for compatibility and the expression of the general solutions are obtained in corollaries. In Section 5, the corresponding algorithms are given, their effectiveness is verified by numerical examples, and a comparison between the method of this paper and the existing one is made. Section 6 applies the proposed method to color image restoration. Section 7 summarizes the content of this paper.

    Notations: $\mathbb{R}$ / $\mathbb{C}$ / $\mathbb{Q}_{RB}$ represent the set of real numbers / complex numbers / reduced biquaternions, respectively. $\mathbb{R}^n$ / $\mathbb{C}^n$ represent the set of all real / complex column vectors of order $n$, respectively. $\mathbb{R}^{m\times n}$ / $\mathbb{C}^{m\times n}$ / $\mathbb{Q}_{RB}^{m\times n}$ represent the set of all $m\times n$ real / complex / reduced biquaternion matrices, respectively. $\bar{A}$ / $A^T$ / $A^H$ / $A^{\dagger}$ represent the conjugate / transpose / conjugate transpose / Moore-Penrose inverse of a matrix $A$, respectively. $\mathrm{Re}(A)$ and $\mathrm{Im}(A)$ represent the real and imaginary parts of a matrix $A$, respectively. $\bar{a}_{ij}$ represents the conjugate of $a_{ij}$. $\delta_n^i$ is the $i$th column of the identity matrix $I_n$. $\otimes$ represents the Kronecker product of matrices. $\ltimes$ / $\rtimes$ represent the left and right semi-tensor products of matrices, respectively. $\|\cdot\|_F$ represents the Frobenius norm of a matrix or the Euclidean norm of a vector.

    In this section, we give some necessary preliminaries, which will be used throughout this paper.

    Definition 2.1. [6] The set of reduced biquaternions is expressed as

    $$\mathbb{Q}_{RB}=\{q=q_{11}+q_{12}i+q_{13}j+q_{14}k,\ q_{11},q_{12},q_{13},q_{14}\in\mathbb{R}\},$$

    where $i,j,k$ satisfy

    $$i^2=k^2=-1,\quad j^2=1,\quad ij=ji=k,\quad ik=ki=-j,\quad jk=kj=i.$$

    A reduced biquaternion $q$ can be uniquely represented as $q=q_1+q_2j$, where $q_1=q_{11}+q_{12}i$, $q_2=q_{13}+q_{14}i\in\mathbb{C}$. The modulus of $q$ is defined as $|q|=\sqrt{q_{11}^2+q_{12}^2+q_{13}^2+q_{14}^2}$. Similarly, a reduced biquaternion matrix $A=A_{11}+A_{12}i+A_{13}j+A_{14}k$ can also be uniquely represented as $A=A_1+A_2j$, where $A_1=A_{11}+A_{12}i$, $A_2=A_{13}+A_{14}i\in\mathbb{C}^{m\times n}$. The norm of $A$ is defined as $\|A\|_F=\sqrt{\|A_{11}\|_F^2+\|A_{12}\|_F^2+\|A_{13}\|_F^2+\|A_{14}\|_F^2}$.

    For semi-tensor product of real matrices, please refer to [21,22] for details. Now, we generalize semi-tensor product of real matrices to reduced biquaternion matrices.

    Definition 2.2. Suppose $A\in\mathbb{Q}_{RB}^{m\times n}$, $B\in\mathbb{Q}_{RB}^{p\times q}$. The left semi-tensor product of $A$ and $B$ is defined as

    $$A\ltimes B=(A\otimes I_{t/n})(B\otimes I_{t/p}),$$

    and the right semi-tensor product of $A$ and $B$ is defined as

    $$A\rtimes B=(I_{t/n}\otimes A)(I_{t/p}\otimes B),$$

    where $t=\mathrm{lcm}(n,p)$ is the least common multiple of $n$ and $p$.

    Remark 2.1. The left semi-tensor product of reduced biquaternion matrices and the right semi-tensor product of reduced biquaternion matrices are collectively called the semi-tensor product of reduced biquaternion matrices. When $n=p$, the semi-tensor product of reduced biquaternion matrices reduces to ordinary reduced biquaternion matrix multiplication.
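    For readers who want to experiment with Definition 2.2, the following minimal NumPy sketch implements the left and right semi-tensor products for ordinary real or complex matrices (a full reduced biquaternion implementation would additionally need the commutative $j$-arithmetic of Definition 2.1). The function names `stp_left` and `stp_right` are illustrative, not from the paper.

```python
import numpy as np
from math import lcm

def stp_left(A, B):
    """Left semi-tensor product: (A kron I_{t/n})(B kron I_{t/p}), t = lcm(n, p)."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

def stp_right(A, B):
    """Right semi-tensor product: (I_{t/n} kron A)(I_{t/p} kron B), t = lcm(n, p)."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(np.eye(t // n), A) @ np.kron(np.eye(t // p), B)

# When n = p both products reduce to the ordinary matrix product.
A = np.random.randn(2, 4) + 1j * np.random.randn(2, 4)
B = np.random.randn(2, 3) + 1j * np.random.randn(2, 3)
print(stp_left(A, B).shape, stp_right(A, B).shape)   # both (2, 6)
```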

    Example 2.1. Let $A=(1+i,\ 2j,\ 3k,\ i+j)$, $B=(i,\ k)^T$. Then

    $$A\ltimes B=A(B\otimes I_2)=(1+i,\ 2j,\ 3k,\ i+j)\begin{pmatrix}i&0\\0&i\\k&0\\0&k\end{pmatrix}=(i-4,\ i-j+2k)=(1+i,\ 2j)\,i+(3k,\ i+j)\,k,$$
    $$A\rtimes B=A(I_2\otimes B)=(1+i,\ 2j,\ 3k,\ i+j)\begin{pmatrix}i&0\\k&0\\0&i\\0&k\end{pmatrix}=(-1+3i,\ i-4j)\neq(1+i,\ 2j)\,i+(3k,\ i+j)\,k.$$

    It can be seen from Example 2.1 that left semi-tensor product of reduced biquaternion matrices satisfies the multiplication of block matrices while right semi-tensor product of reduced biquaternion matrices does not. This is also the biggest difference between these two matrix multiplications, which makes the application range of left semi-tensor product of reduced biquaternion matrices wider than that of right semi-tensor product of reduced biquaternion matrices.

    Since the left semi-tensor product of reduced biquaternion matrices is used more widely, the semi-tensor product of reduced biquaternion matrices mentioned below refers to left semi-tensor product of reduced biquaternion matrices.

    Definition 2.3. Suppose $A=(a_{ij})\in\mathbb{Q}_{RB}^{m\times n}$, denote

    $$V_c(A)=(a_{11},\ a_{21},\ldots,a_{m1},\ldots,a_{1n},\ a_{2n},\ldots,a_{mn})^T\in\mathbb{Q}_{RB}^{mn\times 1},$$
    $$V_r(A)=(a_{11},\ a_{12},\ldots,a_{1n},\ldots,a_{m1},\ a_{m2},\ldots,a_{mn})^T\in\mathbb{Q}_{RB}^{mn\times 1}.$$

    Theorem 2.1. Suppose $A=(a_{ij})\in\mathbb{Q}_{RB}^{m\times n}$, $B=(b_{ij})\in\mathbb{Q}_{RB}^{s\times t}$, and let $p$ be any positive integer. Then

    (1) $W_{[m,n]}V_r(A)=V_c(A)$, $W_{[n,m]}V_c(A)=V_r(A)$;

    (2) $W_{[s,p]}\ltimes B\ltimes W_{[p,t]}\ltimes A=(I_p\otimes B)\ltimes A$,

    where $W_{[m,n]}=(I_n\otimes\delta_m^1,\ I_n\otimes\delta_m^2,\ldots,I_n\otimes\delta_m^m)$ is called the swap matrix.

    The above equations are easily obtained by direct calculation.
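    As a quick sanity check of Theorem 2.1(1), the sketch below builds the swap matrix from its column-block definition and verifies the two identities for an ordinary real matrix; the helper names are illustrative.

```python
import numpy as np

def swap_matrix(m, n):
    """W_[m,n] = (I_n kron d_m^1, ..., I_n kron d_m^m), assembled block by block."""
    blocks = [np.kron(np.eye(n), np.eye(m)[:, [k]]) for k in range(m)]
    return np.hstack(blocks)

vec_c = lambda A: A.flatten(order='F').reshape(-1, 1)   # stack columns
vec_r = lambda A: A.flatten(order='C').reshape(-1, 1)   # stack rows

m, n = 3, 4
A = np.random.randn(m, n)
W = swap_matrix(m, n)
print(np.allclose(W @ vec_r(A), vec_c(A)))                   # W_[m,n] Vr(A) = Vc(A)
print(np.allclose(swap_matrix(n, m) @ vec_c(A), vec_r(A)))   # W_[n,m] Vc(A) = Vr(A)
```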

    First, several kinds of reduced biquaternion matrices with symmetric structure are introduced.

    Definition 2.4. Let $A=(a_{ij})\in\mathbb{Q}_{RB}^{n\times n}$, denote $A^H=(\bar{a}_{ji})\in\mathbb{Q}_{RB}^{n\times n}$ and $A^{(H)}=(\bar{a}_{n-j+1,\,n-i+1})\in\mathbb{Q}_{RB}^{n\times n}$, so that $A^{(H)}=V_nA^HV_n$, where $V_n$ is the matrix with ones on the anti-diagonal and all other elements zero.

    (1) $A\in\mathbb{Q}_{RB}^{n\times n}$ is called an anti-Hermitian matrix if $A=-A^H$, denoted by $A\in AH_{RB}^{n\times n}$.

    (2) $A\in\mathbb{Q}_{RB}^{n\times n}$ is called a Skew-Persymmetric matrix if $A=-A^{(H)}$, denoted by $A\in AP_{RB}^{n\times n}$.

    (3) $A\in\mathbb{Q}_{RB}^{n\times n}$ is called a Skew-Bisymmetric matrix if $a_{ij}=a_{n-i+1,\,n-j+1}=-\bar{a}_{ji}$, denoted by $A\in AB_{RB}^{n\times n}$.

    For the above-mentioned special symmetric matrices, this paper studies the following problems.

    Problem 1. Suppose $A_p\in\mathbb{Q}_{RB}^{m\times n}$, $B_p\in\mathbb{Q}_{RB}^{n\times q}$ $(p=1,\ldots,l)$, $C\in\mathbb{Q}_{RB}^{m\times q}$, and

    $$S_{AH}=\Big\{X\ \Big|\ X\in AH_{RB}^{n\times n},\ \Big\|\sum_{p=1}^{l}A_pXB_p-C\Big\|_F=\min\Big\};$$

    find $X_{AH}\in S_{AH}$ such that

    $$\|X_{AH}\|_F=\min_{X\in S_{AH}}\|X\|_F.$$

    Problem 2. Suppose $A_p\in\mathbb{Q}_{RB}^{m\times n}$, $B_p\in\mathbb{Q}_{RB}^{n\times q}$ $(p=1,\ldots,l)$, $C\in\mathbb{Q}_{RB}^{m\times q}$, and

    $$S_{AP}=\Big\{X\ \Big|\ X\in AP_{RB}^{n\times n},\ \Big\|\sum_{p=1}^{l}A_pXB_p-C\Big\|_F=\min\Big\};$$

    find $X_{AP}\in S_{AP}$ such that

    $$\|X_{AP}\|_F=\min_{X\in S_{AP}}\|X\|_F.$$

    Problem 3. Suppose $A_p\in\mathbb{Q}_{RB}^{m\times n}$, $B_p\in\mathbb{Q}_{RB}^{n\times q}$ $(p=1,\ldots,l)$, $C\in\mathbb{Q}_{RB}^{m\times q}$, and

    $$S_{AB}=\Big\{X\ \Big|\ X\in AB_{RB}^{n\times n},\ \Big\|\sum_{p=1}^{l}A_pXB_p-C\Big\|_F=\min\Big\};$$

    find $X_{AB}\in S_{AB}$ such that

    $$\|X_{AB}\|_F=\min_{X\in S_{AB}}\|X\|_F.$$

    Using the semi-tensor product of reduced biquaternion matrices, we can obtain some new properties of vector operators.

    Theorem 3.1. Suppose $A\in\mathbb{Q}_{RB}^{m\times n}$, $X\in\mathbb{Q}_{RB}^{n\times q}$, $Y\in\mathbb{Q}_{RB}^{p\times m}$, then

    (1) $V_c(AX)=A\rtimes V_c(X)$, $V_r(AX)=A\ltimes V_r(X)$;

    (2) $V_c(YA)=A^T\ltimes V_c(Y)$, $V_r(YA)=A^T\rtimes V_r(Y)$.

    Proof. (1) For the equation $V_r(AX)=A\ltimes V_r(X)$, let $C=AX$, let $a^i$ $(i=1,2,\ldots,m)$ denote the $i$th row of $A$, $x^k$ $(k=1,2,\ldots,n)$ the $k$th row of $X$, and $c^i$ $(i=1,2,\ldots,m)$ the $i$th row of $C$. Then the $i$th block of $A\ltimes V_r(X)$ is

    $$a^i\ltimes V_r(X)=a^i\ltimes\begin{pmatrix}(x^1)^T\\ \vdots\\ (x^n)^T\end{pmatrix}=\begin{pmatrix}\sum_{k=1}^{n}a_{ik}x_{k1}\\ \vdots\\ \sum_{k=1}^{n}a_{ik}x_{kq}\end{pmatrix}=(c^i)^T,$$

    therefore $V_r(AX)=A\ltimes V_r(X)$.

    Applying Theorem 2.1, we have

    $$V_c(AX)=W_{[m,q]}V_r(AX)=W_{[m,q]}\ltimes A\ltimes V_r(X)=W_{[m,q]}\ltimes A\ltimes W_{[q,n]}\ltimes V_c(X)=(I_q\otimes A)\ltimes V_c(X)=A\rtimes V_c(X).$$

    (2) By $V_r(AX)=A\ltimes V_r(X)$, we get

    $$V_c(YA)=V_r(A^TY^T)=A^T\ltimes V_r(Y^T)=A^T\ltimes V_c(Y).$$

    Applying Theorem 2.1, we have

    $$V_r(YA)=W_{[n,p]}V_c(YA)=W_{[n,p]}\ltimes A^T\ltimes V_c(Y)=W_{[n,p]}\ltimes A^T\ltimes W_{[p,m]}\ltimes V_r(Y)=(I_p\otimes A^T)\ltimes V_r(Y)=A^T\rtimes V_r(Y).$$

    Yuan et al. [18] pointed out that $V_c(ABC)=(C^T\otimes A)V_c(B)$ cannot hold in the reduced biquaternion algebra. However, the new conclusions on the vector operator obtained with the semi-tensor product of reduced biquaternion matrices show that this claim in [18] is incorrect.

    Proposition 3.1. Let $A\in\mathbb{Q}_{RB}^{m\times n}$, $B\in\mathbb{Q}_{RB}^{n\times n}$, $C\in\mathbb{Q}_{RB}^{n\times p}$, then

    $$V_c(ABC)=(C^T\otimes A)V_c(B).$$

    Proof. Using Theorem 3.1, we have

    $$V_c(ABC)=C^T\ltimes V_c(AB)=C^T\ltimes\big(A\rtimes V_c(B)\big)=(C^T\otimes I_m)(I_n\otimes A)V_c(B)=(C^T\otimes A)V_c(B).$$
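    For ordinary complex matrices the identity of Proposition 3.1 reduces to the classical vectorization rule, which can be checked numerically as follows; this is only an illustration of the pattern, not the reduced biquaternion computation itself.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 3, 4, 2
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))

vec_c = lambda X: X.flatten(order='F')      # column-stacking vectorization Vc
lhs = vec_c(A @ B @ C)
rhs = np.kron(C.T, A) @ vec_c(B)            # (C^T kron A) Vc(B)
print(np.allclose(lhs, rhs))                # True
```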

    Using semi-tensor product of matrices, we can find the isomorphism between the set of m×n reduced biquaternion matrices and the corresponding set of 2m×2n complex matrices, and give the computable algebraic expression of this isomorphism.

    Definition 3.1. [22] Let $W_i$ $(i=0,1,\ldots,n)$ be vector spaces. The mapping $F:\prod_{i=1}^{n}W_i\to W_0$ is called a multilinear mapping if, for any $1\le i\le n$ and $\alpha,\beta\in\mathbb{R}$,

    $$F(x_1,\ldots,\alpha x_i+\beta y_i,\ldots,x_n)=\alpha F(x_1,\ldots,x_i,\ldots,x_n)+\beta F(x_1,\ldots,y_i,\ldots,x_n),$$

    in which $x_i,y_i\in W_i$ $(1\le i\le n)$. Suppose $\dim(W_i)=k_i$ $(i=0,1,\ldots,n)$ and $(\delta_{k_i}^1,\delta_{k_i}^2,\ldots,\delta_{k_i}^{k_i})$ is a basis of $W_i$. Denote

    $$F(\delta_{k_1}^{j_1},\delta_{k_2}^{j_2},\ldots,\delta_{k_n}^{j_n})=\sum_{s=1}^{k_0}c_s^{j_1j_2\cdots j_n}\delta_{k_0}^{s};$$

    then

    $$\{c_s^{j_1j_2\cdots j_n}\ |\ j_t=1,\ldots,k_t,\ t=1,\ldots,n;\ s=1,\ldots,k_0\}$$

    is called the structure constant set of $F$. Arranging these structure constants in the following form,

    $$M_F=\begin{pmatrix}c_1^{11\cdots1}&\cdots&c_1^{11\cdots k_n}&\cdots&c_1^{k_1k_2\cdots k_n}\\ c_2^{11\cdots1}&\cdots&c_2^{11\cdots k_n}&\cdots&c_2^{k_1k_2\cdots k_n}\\ \vdots&&\vdots&&\vdots\\ c_{k_0}^{11\cdots1}&\cdots&c_{k_0}^{11\cdots k_n}&\cdots&c_{k_0}^{k_1k_2\cdots k_n}\end{pmatrix},$$

    $M_F$ is called the structure matrix of $F$.

    Identify $1\sim\delta_2^1$ and $j\sim\delta_2^2$, and let the symbol $\times$ denote reduced biquaternion multiplication; the multiplication rule of the basis elements is given in Definition 2.1. According to Definition 3.1, we can obtain the structure matrix of reduced biquaternion multiplication, denoted by $M$:

    $$M=\begin{pmatrix}1&0&0&1\\0&1&1&0\end{pmatrix}.$$

    Example 3.1. Suppose $a,b\in\mathbb{Q}_{RB}$; they can be represented as $a=a_1+a_2j\sim\binom{a_1}{a_2}$, $b=b_1+b_2j\sim\binom{b_1}{b_2}$, where $a_1=a_{11}+a_{12}i$, $a_2=a_{21}+a_{22}i$, $b_1=b_{11}+b_{12}i$, $b_2=b_{21}+b_{22}i\in\mathbb{C}$. Considering the multiplication $a\times b$ on $\mathbb{Q}_{RB}$, we obtain

    $$a\times b=(a_1+a_2j)(b_1+b_2j)=(a_1b_1+a_2b_2)+(a_1b_2+a_2b_1)j\sim\begin{pmatrix}a_1b_1+a_2b_2\\a_1b_2+a_2b_1\end{pmatrix}=M\ltimes\begin{pmatrix}a_1\\a_2\end{pmatrix}\ltimes\begin{pmatrix}b_1\\b_2\end{pmatrix}.$$
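    A small sketch of this complex-pair arithmetic, assuming the convention $q=q_1+q_2j$ used above; it checks that the structure matrix $M$ reproduces the product of Example 3.1 (names are illustrative).

```python
import numpy as np

def rb_mul(a, b):
    """(a1 + a2 j)(b1 + b2 j) = (a1 b1 + a2 b2) + (a1 b2 + a2 b1) j, since j^2 = 1."""
    a1, a2 = a
    b1, b2 = b
    return (a1 * b1 + a2 * b2, a1 * b2 + a2 * b1)

# The same product via the structure matrix M acting on the Kronecker product of the pairs.
M = np.array([[1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=complex)
a = np.array([1 + 2j, 3 - 1j])        # (a1, a2)
b = np.array([-0.5 + 1j, 2 + 0.5j])   # (b1, b2)
print(rb_mul(tuple(a), tuple(b)))
print(M @ np.kron(a, b))              # same two components: M (a1;a2) |x (b1;b2)
```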

    Suppose $A=A_1+A_2j$; we denote

    $$\vec{A}=\begin{pmatrix}A_1\\A_2\end{pmatrix},\qquad \dot{E}_2=\begin{pmatrix}\pm1&0\\0&\pm1\end{pmatrix}.$$

    Definition 3.2. Let $A=A_1+A_2j\in\mathbb{Q}_{RB}^{m\times n}$, where $A_1,A_2\in\mathbb{C}^{m\times n}$. The mapping from $\mathbb{Q}_{RB}^{m\times n}$ to a subspace of $\mathbb{C}^{2m\times 2n}$ given by

    $$\chi(A)=M\ltimes\big(I_2\otimes(\dot{E}_2\ltimes\vec{A})\big)$$

    is called a complex matrix representation of the reduced biquaternion matrix. If, for $A\in\mathbb{Q}_{RB}^{m\times n}$, $B\in\mathbb{Q}_{RB}^{n\times p}$, $\chi$ satisfies

    (1) $\chi(AB)=\chi(A)\chi(B)$,

    (2) $\chi_c(AB)=\chi(A)\chi_c(B)$,

    where $\chi_c(A)=\chi(A)\ltimes\delta_2^1$, then $\chi$ is called an LC-representation of the reduced biquaternion matrix.

    Next, using the semi-tensor product of reduced biquaternion matrices, we give the algebraic form of LC-representation of reduced biquaternion matrix.

    Proposition 3.2. Let $A\in\mathbb{Q}_{RB}^{m\times n}$, $B\in\mathbb{Q}_{RB}^{n\times p}$. Then $\chi$ is an LC-representation of the reduced biquaternion matrix if and only if

    (1) $(M\otimes I_m)\big(I_2\otimes(\dot{E}_2\ltimes\vec{AB})\big)=(M\otimes I_m)\big(M\ltimes(\dot{E}_2\ltimes\vec{A})\big)\big(I_2\otimes(\dot{E}_2\ltimes\vec{B})\big)$,

    (2) $(M\otimes I_m)\big(\delta_2^1\otimes(\dot{E}_2\ltimes\vec{AB})\big)=(M\otimes I_m)\big(M\ltimes(\dot{E}_2\ltimes\vec{A})\big)\big(\delta_2^1\otimes(\dot{E}_2\ltimes\vec{B})\big)$.

    Proof. The proof is straightforward: we show that each equation in Proposition 3.2 is equivalent to the corresponding equation in Definition 3.2. Consider the first one. By the definition of the LC-representation of the reduced biquaternion matrix, $\chi(AB)=\chi(A)\chi(B)$ holds if and only if

    $$M\ltimes\big(I_2\otimes(\dot{E}_2\ltimes\vec{AB})\big)=M\ltimes\big(I_2\otimes(\dot{E}_2\ltimes\vec{A})\big)\,M\ltimes\big(I_2\otimes(\dot{E}_2\ltimes\vec{B})\big),$$

    which is equivalent to

    $$(M\otimes I_m)\big(I_2\otimes(\dot{E}_2\ltimes\vec{AB})\big)=(M\otimes I_m)\big(M\ltimes(\dot{E}_2\ltimes\vec{A})\big)\big(I_2\otimes(\dot{E}_2\ltimes\vec{B})\big).$$

    Remark 3.1. The LC-representation of a reduced biquaternion matrix is not unique, in the sense that the structure matrix may differ depending on the identification of $1$ and $j$ with $\delta_2^1$ and $\delta_2^2$, or on the choice of $\dot{E}_2$.

    Let us take a simple example to illustrate Remark 3.1.

    Example 3.2. Fix $M=\begin{pmatrix}1&0&0&1\\0&1&1&0\end{pmatrix}$. If we select $\dot{E}_2=\begin{pmatrix}1&0\\0&1\end{pmatrix}$, we obtain

    $$\chi_1(A)=M\ltimes\big(I_2\otimes(\dot{E}_2\ltimes\vec{A})\big)=\begin{pmatrix}A_1&A_2\\A_2&A_1\end{pmatrix};$$

    if we select $\dot{E}_2=\begin{pmatrix}1&0\\0&-1\end{pmatrix}$, we obtain

    $$\chi_2(A)=M\ltimes\big(I_2\otimes(\dot{E}_2\ltimes\vec{A})\big)=\begin{pmatrix}A_1&-A_2\\-A_2&A_1\end{pmatrix}.$$

    Testing the equations in Proposition 3.2 for $\chi_1(A)$ and $\chi_2(A)$, respectively, one finds that both $\chi_1$ and $\chi_2$ are LC-representations.

    Remark 3.2. For convenience, χ used below is χ1.
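    The following NumPy sketch checks properties (1) and (2) of Definition 3.2 for $\chi_1$, with reduced biquaternion matrices stored as complex pairs $(A_1,A_2)$; it is an illustration under that storage convention, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_rb(m, n):
    """A reduced biquaternion matrix A = A1 + A2 j stored as the complex pair (A1, A2)."""
    c = lambda: rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
    return c(), c()

def rb_matmul(A, B):
    """(A1 + A2 j)(B1 + B2 j) = (A1 B1 + A2 B2) + (A1 B2 + A2 B1) j."""
    A1, A2 = A
    B1, B2 = B
    return (A1 @ B1 + A2 @ B2, A1 @ B2 + A2 @ B1)

def chi(A):
    """LC-representation chi_1(A) = [[A1, A2], [A2, A1]]."""
    A1, A2 = A
    return np.block([[A1, A2], [A2, A1]])

A, B = rand_rb(3, 4), rand_rb(4, 2)
C = rb_matmul(A, B)
print(np.allclose(chi(C), chi(A) @ chi(B)))              # (1) chi(AB) = chi(A) chi(B)
print(np.allclose(np.vstack(C), chi(A) @ np.vstack(B)))  # (2) chi_c(AB) = chi(A) chi_c(B)
```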

    The GH-representation method can represent a matrix with a special structure by its independent elements. This method is a generalization of the H-representation method proposed by Zhang [23].

    Definition 3.3. [23] Let $L\subseteq\mathbb{R}^{n\times n}$ be a $p$-dimensional matrix subspace $(p\le n^2)$ with basis $e_1,e_2,\ldots,e_p$, and define $H=[V_c(e_1),\ V_c(e_2),\ldots,V_c(e_p)]$. For any $X\in L$ there exist unique $l_1,l_2,\ldots,l_p\in\mathbb{R}$ such that $X=\sum_{i=1}^{p}l_ie_i$. There is a mapping $\varphi:X\in L\mapsto V_c(X)$ with

    $$\varphi(X)=V_c(X)=H\tilde{X},$$

    where $\tilde{X}=[l_1,\ l_2,\ldots,l_p]^T\in\mathbb{R}^p$. $H\tilde{X}$ is called the H-representation of $\varphi(X)$, and $H$ is called the H-representation matrix of $\varphi(X)$.

    The H-representation method can transform a matrix-valued equation into a standard vector-valued equation with independent coordinates. [23] used the H-representation method to study the properties of a class of generalized Lyapunov equations, the observability of linear stochastic time-varying systems, and stochastic stability and stabilization. A reduced biquaternion matrix has one real part and three imaginary parts, and the real matrices of the different parts may not share the same structural characteristics, so the H-representation method cannot be applied directly. We extend it to a GH-representation method suitable for reduced biquaternion matrices.

    Definition 3.4. Consider a reduced biquaternion matrix subspace $L\subseteq\mathbb{Q}_{RB}^{n\times n}$. For each $X=X_{11}+X_{12}i+X_{13}j+X_{14}k\in L$, let $\vec{X}=[X_{11}\ X_{12}\ X_{13}\ X_{14}]$. If we express

    $$\phi(X)=V_c(\vec{X})=G_H\bar{\bar{X}},$$

    where $\bar{\bar{X}}=\begin{pmatrix}\tilde{X}_{11}\\ \tilde{X}_{12}\\ \tilde{X}_{13}\\ \tilde{X}_{14}\end{pmatrix}$, then $G_H\bar{\bar{X}}$ is called the GH-representation of $\phi(X)$ and $G_H$ is called the GH-representation matrix of $\phi(X)$, where $G_H=\begin{pmatrix}H_{X_1}&0&0&0\\0&H_{X_2}&0&0\\0&0&H_{X_3}&0\\0&0&0&H_{X_4}\end{pmatrix}$ and $H_{X_i}$ denotes the H-representation matrix of the real matrix forming the $i$th part of $X$, $i=1,2,3,4$.

    It is easy to see that the key to constructing the GH-representation matrix is to find the H-representation matrices of the real matrices corresponding to the four parts of the reduced biquaternion matrix. Next, we give the GH-representation matrices of the anti-Hermitian matrix, the Skew-Persymmetric matrix and the Skew-Bisymmetric matrix, respectively.

    First we consider anti-Hermitian matrix.

    When $X=X_{11}+X_{12}i+X_{13}j+X_{14}k\in AH_{RB}^{n\times n}$, $X_{11}$ is an anti-symmetric matrix and $X_{12},X_{13},X_{14}$ are symmetric matrices. Let $S_{\mathbb{R}}^{n\times n}$ denote the set of symmetric matrices and $AS_{\mathbb{R}}^{n\times n}$ the set of anti-symmetric matrices. For $L=S_{\mathbb{R}}^{n\times n}$, we select the basis

    $$\{E_{11},\ldots,E_{n1},\ E_{22},\ldots,E_{n2},\ldots,E_{nn}\},$$

    where $E_{ij}=(e_{ij})_{n\times n}$ with $e_{ij}=e_{ji}=1$ and all other elements zero.

    Similarly, for $L=AS_{\mathbb{R}}^{n\times n}$, we select the basis

    $$\{F_{21},\ldots,F_{n1},\ F_{32},\ldots,F_{n2},\ldots,F_{n,n-1}\},$$

    where $F_{ij}=(f_{ij})_{n\times n}$ with $f_{ij}=-f_{ji}=1$ and all other elements zero.

    With the bases fixed as above, for $L=S_{\mathbb{R}}^{n\times n}$ and $L=AS_{\mathbb{R}}^{n\times n}$ we have

    $$\tilde{X}_S=(x_{11},\ldots,x_{n1},\ x_{22},\ldots,x_{n2},\ldots,x_{nn})^T,$$
    $$\tilde{X}_{AS}=(x_{21},\ldots,x_{n1},\ x_{32},\ldots,x_{n2},\ldots,x_{n,n-1})^T.$$

    We write $H_S$ and $H_{AS}$ for the H-representation matrices of $L=S_{\mathbb{R}}^{n\times n}$ and $L=AS_{\mathbb{R}}^{n\times n}$, respectively.
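    A hedged sketch of how $H_S$ and $H_{AS}$ can be assembled from the bases listed above, together with a check that $V_c(X)=H\tilde{X}$ for a symmetric test matrix; the column ordering follows the bases given here and the function names are illustrative.

```python
import numpy as np

def H_sym(n):
    """Columns are Vc(E_ij), i >= j, with e_ij = e_ji = 1 (basis of symmetric matrices)."""
    cols = []
    for j in range(n):
        for i in range(j, n):
            E = np.zeros((n, n)); E[i, j] = 1; E[j, i] = 1
            cols.append(E.flatten(order='F'))
    return np.column_stack(cols)

def H_antisym(n):
    """Columns are Vc(F_ij), i > j, with f_ij = -f_ji = 1 (basis of anti-symmetric matrices)."""
    cols = []
    for j in range(n):
        for i in range(j + 1, n):
            F = np.zeros((n, n)); F[i, j] = 1; F[j, i] = -1
            cols.append(F.flatten(order='F'))
    return np.column_stack(cols)

n = 4
S = np.random.randn(n, n); S = S + S.T                 # a symmetric test matrix
HS = H_sym(n)
x_tilde, *_ = np.linalg.lstsq(HS, S.flatten(order='F'), rcond=None)
print(np.allclose(HS @ x_tilde, S.flatten(order='F')))  # Vc(S) = H_S * S~
print(HS.shape, H_antisym(n).shape)                     # (16, 10) and (16, 6)
```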

    Theorem 3.2. For $X=X_{11}+X_{12}i+X_{13}j+X_{14}k\in AH_{RB}^{n\times n}$, the GH-representation of $X$ is expressed as

    $$\phi(X)=V_c(\vec{X})=\begin{pmatrix}H_{AS}&0&0&0\\0&H_S&0&0\\0&0&H_S&0\\0&0&0&H_S\end{pmatrix}\bar{\bar{X}}\triangleq V_{AH}\bar{\bar{X}}.$$

    Similarly, we use the above idea to consider the other two classes of special matrices.

    $P_{\mathbb{R}}^{n\times n}$ denotes the set of real matrices whose elements satisfy $a_{ij}=a_{n-j+1,\,n-i+1}$, and $AP_{\mathbb{R}}^{n\times n}$ denotes the set of real matrices whose elements satisfy $a_{ij}=-a_{n-j+1,\,n-i+1}$. When $X=X_{11}+X_{12}i+X_{13}j+X_{14}k\in AP_{RB}^{n\times n}$, $X_{11}\in AP_{\mathbb{R}}^{n\times n}$ and $X_{12},X_{13},X_{14}\in P_{\mathbb{R}}^{n\times n}$. For $L=P_{\mathbb{R}}^{n\times n}$, we can select the basis

    $$\{M_{11},\ldots,M_{n1},\ M_{12},\ldots,M_{n-1,2},\ldots,M_{1n}\},$$

    where $M_{ij}=(m_{ij})_{n\times n}$ with $m_{ij}=m_{n+1-j,\,n+1-i}=1$ and all other elements zero.

    For $L=AP_{\mathbb{R}}^{n\times n}$, we take the basis

    $$\{Z_{11},\ldots,Z_{n-1,1},\ Z_{12},\ldots,Z_{n-2,2},\ldots,Z_{1,n-1}\},$$

    where $Z_{ij}=(z_{ij})_{n\times n}$ with $z_{ij}=-z_{n+1-j,\,n+1-i}=1$ and all other elements zero.

    With the bases fixed as above, for $L=P_{\mathbb{R}}^{n\times n}$ and $L=AP_{\mathbb{R}}^{n\times n}$ we have

    $$\tilde{X}_P=(x_{11},\ldots,x_{n1},\ x_{12},\ldots,x_{n-1,2},\ldots,x_{1n})^T,$$
    $$\tilde{X}_{AP}=(x_{11},\ldots,x_{n-1,1},\ x_{12},\ldots,x_{n-2,2},\ldots,x_{1,n-1})^T.$$

    In the same way, we denote the H-representation matrix corresponding to $L=P_{\mathbb{R}}^{n\times n}$ by $H_P$, and $H_{AP}$ refers to the H-representation matrix corresponding to $L=AP_{\mathbb{R}}^{n\times n}$.

    Theorem 3.3. For $X=X_{11}+X_{12}i+X_{13}j+X_{14}k\in AP_{RB}^{n\times n}$, the GH-representation of $X$ is expressed as

    $$\phi(X)=V_c(\vec{X})=\begin{pmatrix}H_{AP}&0&0&0\\0&H_P&0&0\\0&0&H_P&0\\0&0&0&H_P\end{pmatrix}\bar{\bar{X}}\triangleq V_{AP}\bar{\bar{X}}.$$

    $B_{\mathbb{R}}^{n\times n}$ denotes the set of real matrices whose elements satisfy $a_{ij}=a_{n-i+1,\,n-j+1}=a_{ji}$, and $AB_{\mathbb{R}}^{n\times n}$ denotes the set of real matrices whose elements satisfy $a_{ij}=a_{n-i+1,\,n-j+1}=-a_{ji}$. When $X=X_{11}+X_{12}i+X_{13}j+X_{14}k\in AB_{RB}^{n\times n}$, $X_{11}\in AB_{\mathbb{R}}^{n\times n}$ and $X_{12},X_{13},X_{14}\in B_{\mathbb{R}}^{n\times n}$. For $L=B_{\mathbb{R}}^{n\times n}$, when $n$ is even we can select the basis

    $$\{S_{11},\ldots,S_{n1},\ S_{22},\ldots,S_{n-1,2},\ldots,S_{\frac{n}{2},\frac{n}{2}},\ S_{\frac{n}{2}+1,\frac{n}{2}}\},$$

    and when $n$ is odd we can select the basis

    $$\{S_{11},\ldots,S_{n1},\ S_{22},\ldots,S_{n-1,2},\ldots,S_{\frac{n+1}{2},\frac{n+1}{2}}\},$$

    where $S_{ij}=(s_{ij})_{n\times n}$ with $s_{ij}=s_{n-i+1,\,n-j+1}=s_{ji}=1$ and all other elements zero. With the basis fixed as above, when $n$ is even we have

    $$\tilde{X}_B=(x_{11},\ldots,x_{n1},\ x_{22},\ldots,x_{n-1,2},\ldots,x_{\frac{n}{2},\frac{n}{2}},\ x_{\frac{n}{2}+1,\frac{n}{2}})^T,$$

    and when $n$ is odd,

    $$\tilde{X}_B=(x_{11},\ldots,x_{n1},\ x_{22},\ldots,x_{n-1,2},\ldots,x_{\frac{n+1}{2},\frac{n+1}{2}})^T.$$

    For $L=AB_{\mathbb{R}}^{n\times n}$, when $n$ is even we can select the basis

    $$\{T_{21},\ldots,T_{n-1,1},\ldots,T_{\frac{n}{2},\frac{n}{2}-1},\ T_{\frac{n}{2}+1,\frac{n}{2}-1}\},$$

    and when $n$ is odd we can select the basis

    $$\{T_{21},\ldots,T_{n-1,1},\ T_{32},\ldots,T_{n-2,2},\ldots,T_{\frac{n+1}{2},\frac{n-1}{2}}\},$$

    where $T_{ij}=(t_{ij})_{n\times n}$ with $t_{ij}=t_{n-i+1,\,n-j+1}=-t_{ji}=1$ and all other elements zero. With the basis fixed as above, when $n$ is even we have

    $$\tilde{X}_{AB}=(x_{21},\ldots,x_{n-1,1},\ldots,x_{\frac{n}{2},\frac{n}{2}-1},\ x_{\frac{n}{2}+1,\frac{n}{2}-1})^T,$$

    and when $n$ is odd,

    $$\tilde{X}_{AB}=(x_{21},\ldots,x_{n-1,1},\ x_{32},\ldots,x_{n-2,2},\ldots,x_{\frac{n+1}{2},\frac{n-1}{2}})^T.$$

    When n is even, we denote the H-representation matrix corresponding to L=Bn×nR by HB1, and denote the H-representation matrix corresponding to L=ABn×nR by HAB1.

    When n is odd, we denote the H-representation matrix corresponding to L=Bn×nR by HB2, and denote the H-representation matrix corresponding to L=ABn×nR by HAB2.

    Theorem 3.4. For $X=X_{11}+X_{12}i+X_{13}j+X_{14}k\in AB_{RB}^{n\times n}$, when $n$ is even the GH-representation of $X$ is expressed as

    $$\phi(X)=V_c(\vec{X})=\begin{pmatrix}H_{AB_1}&0&0&0\\0&H_{B_1}&0&0\\0&0&H_{B_1}&0\\0&0&0&H_{B_1}\end{pmatrix}\bar{\bar{X}}\triangleq V_{AB_e}\bar{\bar{X}},$$

    and when $n$ is odd the GH-representation of $X$ is expressed as

    $$\phi(X)=V_c(\vec{X})=\begin{pmatrix}H_{AB_2}&0&0&0\\0&H_{B_2}&0&0\\0&0&H_{B_2}&0\\0&0&0&H_{B_2}\end{pmatrix}\bar{\bar{X}}\triangleq V_{AB_o}\bar{\bar{X}}.$$

    Using the semi-tensor product of reduced biquaternion matrices and the LC-representation method, we can transform the reduced biquaternion matrix equation into complex linear equations; then, according to the special structure of the solution, the redundant elements are eliminated with the GH-representation method, which simplifies the computation. Finally, the following classical results on matrix equations are used to solve the resulting system.

    Lemma 4.1. [24] The least squares solutions of the matrix equation $Ax=b$ with $A\in\mathbb{R}^{m\times n}$ and $b\in\mathbb{R}^m$ can be represented as

    $$x=A^{\dagger}b+(I-A^{\dagger}A)y,$$

    where $y\in\mathbb{R}^n$ is an arbitrary vector. The minimal norm least squares solution of the matrix equation $Ax=b$ is $A^{\dagger}b$.

    Lemma 4.2. [24] The matrix equation $Ax=b$ with $A\in\mathbb{R}^{m\times n}$ and $b\in\mathbb{R}^m$ has a solution $x\in\mathbb{R}^n$ if and only if

    $$AA^{\dagger}b=b.$$

    In that case it has the general solution

    $$x=A^{\dagger}b+(I-A^{\dagger}A)y,$$

    where $y\in\mathbb{R}^n$ is an arbitrary vector. The minimal norm solution of the matrix equation $Ax=b$ is $A^{\dagger}b$.
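    Both lemmas can be exercised in a few lines with numpy.linalg.pinv for the Moore-Penrose inverse; the example below uses a deliberately rank-deficient matrix so that the solution set is non-trivial.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 5))   # rank-deficient 6x5 matrix
b = rng.standard_normal(6)

A_pinv = np.linalg.pinv(A)                           # Moore-Penrose inverse
x_min = A_pinv @ b                                   # minimal norm least squares solution
y = rng.standard_normal(5)
x_gen = x_min + (np.eye(5) - A_pinv @ A) @ y         # another least squares solution

print(np.isclose(np.linalg.norm(A @ x_min - b), np.linalg.norm(A @ x_gen - b)))  # same residual
print(np.linalg.norm(x_min) <= np.linalg.norm(x_gen))                            # minimal norm
print(np.allclose(A @ A_pinv @ b, b))   # solvability test of Lemma 4.2 (generally False here)
```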

    For convenience of presentation, we introduce the following notation. Let

    $$X=X_{11}+X_{12}i+X_{13}j+X_{14}k=X_1+X_2j,\qquad \vec{X}=[X_{11}\ X_{12}\ X_{13}\ X_{14}],\qquad \gamma_p=\chi(B_p^T\otimes A_p),$$
    $$\breve{H}=UV_{AH},\quad \breve{P}=UV_{AP},\quad \breve{B}_e=UV_{AB_e},\quad \breve{B}_o=UV_{AB_o},$$
    $$\vartheta=\begin{pmatrix}I_{n^2}&iI_{n^2}&0&0\\0&0&I_{n^2}&iI_{n^2}\end{pmatrix},\qquad U=\begin{pmatrix}\sum_{p=1}^{l}\mathrm{Re}(\gamma_p\vartheta)\\ \sum_{p=1}^{l}\mathrm{Im}(\gamma_p\vartheta)\end{pmatrix},\qquad W=\begin{pmatrix}\mathrm{Re}\big(\chi_c(V_c(C))\big)\\ \mathrm{Im}\big(\chi_c(V_c(C))\big)\end{pmatrix}.$$

    Theorem 4.1. Suppose $A_p\in\mathbb{Q}_{RB}^{m\times n}$, $B_p\in\mathbb{Q}_{RB}^{n\times q}$ $(p=1,\ldots,l)$, $C\in\mathbb{Q}_{RB}^{m\times q}$. Then the set $S_{AH}$ of Problem 1 can be represented as

    $$S_{AH}=\big\{X\in AH_{RB}^{n\times n}\ \big|\ V_c(\vec{X})=V_{AH}\breve{H}^{\dagger}W+V_{AH}(I_{2n^2+n}-\breve{H}^{\dagger}\breve{H})y\big\}, \tag{4.1}$$

    where $y\in\mathbb{R}^{2n^2+n}$ is arbitrary, and the minimal norm least squares anti-Hermitian solution $X_{AH}$ satisfies

    $$V_c(\vec{X}_{AH})=V_{AH}\breve{H}^{\dagger}W. \tag{4.2}$$

    Proof. We compute

    $$\begin{aligned}
    \Big\|\sum_{p=1}^{l}A_pXB_p-C\Big\|_F
    &=\Big\|\sum_{p=1}^{l}V_c(A_pXB_p)-V_c(C)\Big\|_F
    =\Big\|\sum_{p=1}^{l}(B_p^T\otimes A_p)V_c(X)-V_c(C)\Big\|_F\\
    &=\Big\|\chi_c\Big(\sum_{p=1}^{l}(B_p^T\otimes A_p)V_c(X)\Big)-\chi_c\big(V_c(C)\big)\Big\|_F
    =\Big\|\sum_{p=1}^{l}\chi(B_p^T\otimes A_p)\chi_c\big(V_c(X)\big)-\chi_c\big(V_c(C)\big)\Big\|_F\\
    &=\Big\|\sum_{p=1}^{l}\chi(B_p^T\otimes A_p)\begin{pmatrix}I_{n^2}&iI_{n^2}&0&0\\0&0&I_{n^2}&iI_{n^2}\end{pmatrix}\begin{pmatrix}V_c(X_{11})\\V_c(X_{12})\\V_c(X_{13})\\V_c(X_{14})\end{pmatrix}-\chi_c\big(V_c(C)\big)\Big\|_F
    =\Big\|\sum_{p=1}^{l}\gamma_p\vartheta\,V_c(\vec{X})-\chi_c\big(V_c(C)\big)\Big\|_F\\
    &=\Big\|\sum_{p=1}^{l}\big(\mathrm{Re}(\gamma_p\vartheta)+\mathrm{Im}(\gamma_p\vartheta)i\big)V_c(\vec{X})-\big(\mathrm{Re}(\chi_c(V_c(C)))+\mathrm{Im}(\chi_c(V_c(C)))i\big)\Big\|_F\\
    &=\left\|\begin{pmatrix}\sum_{p=1}^{l}\mathrm{Re}(\gamma_p\vartheta)V_c(\vec{X})-\mathrm{Re}(\chi_c(V_c(C)))\\ \sum_{p=1}^{l}\mathrm{Im}(\gamma_p\vartheta)V_c(\vec{X})-\mathrm{Im}(\chi_c(V_c(C)))\end{pmatrix}\right\|_F
    =\left\|\begin{pmatrix}\sum_{p=1}^{l}\mathrm{Re}(\gamma_p\vartheta)\\ \sum_{p=1}^{l}\mathrm{Im}(\gamma_p\vartheta)\end{pmatrix}V_c(\vec{X})-\begin{pmatrix}\mathrm{Re}(\chi_c(V_c(C)))\\ \mathrm{Im}(\chi_c(V_c(C)))\end{pmatrix}\right\|_F.
    \end{aligned}$$

    From the GH-representation matrix of the anti-Hermitian matrix, we obtain

    $$V_c(\vec{X})=\begin{pmatrix}V_c(X_{11})\\V_c(X_{12})\\V_c(X_{13})\\V_c(X_{14})\end{pmatrix}=\begin{pmatrix}H_{AS}&0&0&0\\0&H_S&0&0\\0&0&H_S&0\\0&0&0&H_S\end{pmatrix}\bar{\bar{X}}=V_{AH}\bar{\bar{X}}.$$

    Then

    $$\left\|\begin{pmatrix}\sum_{p=1}^{l}\mathrm{Re}(\gamma_p\vartheta)\\ \sum_{p=1}^{l}\mathrm{Im}(\gamma_p\vartheta)\end{pmatrix}V_c(\vec{X})-\begin{pmatrix}\mathrm{Re}(\chi_c(V_c(C)))\\ \mathrm{Im}(\chi_c(V_c(C)))\end{pmatrix}\right\|_F=\|UV_{AH}\bar{\bar{X}}-W\|_F=\|\breve{H}\bar{\bar{X}}-W\|_F,$$

    and thus

    $$\Big\|\sum_{p=1}^{l}A_pXB_p-C\Big\|_F=\min$$

    if and only if

    $$\|\breve{H}\bar{\bar{X}}-W\|_F=\min.$$

    For the real linear equations

    $$\breve{H}\bar{\bar{X}}=W,$$

    Lemma 4.1 gives the least squares solutions

    $$\bar{\bar{X}}=\breve{H}^{\dagger}W+(I_{2n^2+n}-\breve{H}^{\dagger}\breve{H})y, \tag{4.3}$$

    where $y\in\mathbb{R}^{2n^2+n}$ is arbitrary; (4.1) is obtained by multiplying both sides of (4.3) by $V_{AH}$. Noticing that

    $$\min_{X\in AH_{RB}^{n\times n}}\|X\|_F=\min_{V_c(\vec{X})\in\mathbb{R}^{4n^2}}\|V_c(\vec{X})\|_F,$$

    we obtain that the minimal norm least squares anti-Hermitian solution $X_{AH}$ of the reduced biquaternion matrix equation (1.1) satisfies

    $$V_c(\vec{X}_{AH})=V_{AH}\breve{H}^{\dagger}W. \tag{4.4}$$

    From the above proof process, we can obtain the compatible condition for the anti-Hermitian solution of reduced biquaternion matrix equation (1.1).

    Corollary 4.1. Suppose $A_p\in\mathbb{Q}_{RB}^{m\times n}$, $B_p\in\mathbb{Q}_{RB}^{n\times q}$ $(p=1,\ldots,l)$, $C\in\mathbb{Q}_{RB}^{m\times q}$, and let $\breve{H}$ be as in Theorem 4.1. Then equation (1.1) has a solution $X\in AH_{RB}^{n\times n}$ if and only if

    $$(\breve{H}\breve{H}^{\dagger}-I_{4mq})W=0. \tag{4.5}$$

    In this case, the general solution of equation (1.1) can be expressed as

    $$V_c(\vec{X})=V_{AH}\breve{H}^{\dagger}W+V_{AH}(I_{2n^2+n}-\breve{H}^{\dagger}\breve{H})y,\quad y\in\mathbb{R}^{2n^2+n},$$

    and the minimal norm anti-Hermitian solution $\ddot{X}_{AH}$ satisfies

    $$V_c(\vec{\ddot{X}}_{AH})=V_{AH}\breve{H}^{\dagger}W. \tag{4.6}$$

    Proof. Substituting the least squares solution (4.3) into the residual, we have

    $$\Big\|\sum_{p=1}^{l}A_pXB_p-C\Big\|_F=\|\breve{H}\bar{\bar{X}}-W\|_F=\|\breve{H}\breve{H}^{\dagger}\breve{H}\bar{\bar{X}}-W\|_F=\|\breve{H}\breve{H}^{\dagger}W-W\|_F=\|(\breve{H}\breve{H}^{\dagger}-I_{4mq})W\|_F,$$

    and thus

    $$\Big\|\sum_{p=1}^{l}A_pXB_p-C\Big\|_F=0\ \Longleftrightarrow\ \|(\breve{H}\breve{H}^{\dagger}-I_{4mq})W\|_F=0\ \Longleftrightarrow\ (\breve{H}\breve{H}^{\dagger}-I_{4mq})W=0,$$

    which is (4.5). Moreover, using Lemma 4.2, we obtain the expression of the general solution and the minimal norm solution.

    From the proof of Theorem 4.1, we can see that the main difference between Problems 1, 2 and 3 is the GH-representation matrix of the solution. Therefore, for Problems 2 and 3, we readily obtain the following conclusions.

    Theorem 4.2. Suppose $A_p\in\mathbb{Q}_{RB}^{m\times n}$, $B_p\in\mathbb{Q}_{RB}^{n\times q}$ $(p=1,\ldots,l)$, $C\in\mathbb{Q}_{RB}^{m\times q}$. Then the set $S_{AP}$ of Problem 2 can be represented as

    $$S_{AP}=\big\{X\in AP_{RB}^{n\times n}\ \big|\ V_c(\vec{X})=V_{AP}\breve{P}^{\dagger}W+V_{AP}(I_{2n^2+n}-\breve{P}^{\dagger}\breve{P})y\big\}, \tag{4.7}$$

    where $y\in\mathbb{R}^{2n^2+n}$ is arbitrary, and the minimal norm least squares Skew-Persymmetric solution $X_{AP}$ satisfies

    $$V_c(\vec{X}_{AP})=V_{AP}\breve{P}^{\dagger}W. \tag{4.8}$$

    Corollary 4.2. Suppose $A_p\in\mathbb{Q}_{RB}^{m\times n}$, $B_p\in\mathbb{Q}_{RB}^{n\times q}$ $(p=1,\ldots,l)$, $C\in\mathbb{Q}_{RB}^{m\times q}$, and let $\breve{P}$ be as in Theorem 4.2. Then equation (1.1) has a solution $X\in AP_{RB}^{n\times n}$ if and only if

    $$(\breve{P}\breve{P}^{\dagger}-I_{4mq})W=0. \tag{4.9}$$

    In this case, the general solution of equation (1.1) can be expressed as

    $$V_c(\vec{X})=V_{AP}\breve{P}^{\dagger}W+V_{AP}(I_{2n^2+n}-\breve{P}^{\dagger}\breve{P})y,\quad y\in\mathbb{R}^{2n^2+n},$$

    and the minimal norm Skew-Persymmetric solution $\ddot{X}_{AP}$ satisfies

    $$V_c(\vec{\ddot{X}}_{AP})=V_{AP}\breve{P}^{\dagger}W. \tag{4.10}$$

    Theorem 4.3. Suppose $A_p\in\mathbb{Q}_{RB}^{m\times n}$, $B_p\in\mathbb{Q}_{RB}^{n\times q}$ $(p=1,\ldots,l)$, $C\in\mathbb{Q}_{RB}^{m\times q}$. When $n$ is even, the set $S_{AB}$ of Problem 3 can be represented as

    $$S_{AB}=\big\{X\in AB_{RB}^{n\times n}\ \big|\ V_c(\vec{X})=V_{AB_e}\breve{B}_e^{\dagger}W+V_{AB_e}(I_{n^2+n}-\breve{B}_e^{\dagger}\breve{B}_e)y\big\}, \tag{4.11}$$

    where $y\in\mathbb{R}^{n^2+n}$ is arbitrary, and the minimal norm least squares Skew-Bisymmetric solution $X_{AB}$ satisfies

    $$V_c(\vec{X}_{AB})=V_{AB_e}\breve{B}_e^{\dagger}W. \tag{4.12}$$

    When $n$ is odd, the set $S_{AB}$ of Problem 3 can be represented as

    $$S_{AB}=\big\{X\in AB_{RB}^{n\times n}\ \big|\ V_c(\vec{X})=V_{AB_o}\breve{B}_o^{\dagger}W+V_{AB_o}(I_{n^2+n+1}-\breve{B}_o^{\dagger}\breve{B}_o)y\big\}, \tag{4.13}$$

    where $y\in\mathbb{R}^{n^2+n+1}$ is arbitrary, and the minimal norm least squares Skew-Bisymmetric solution $X_{AB}$ satisfies

    $$V_c(\vec{X}_{AB})=V_{AB_o}\breve{B}_o^{\dagger}W. \tag{4.14}$$

    Corollary 4.3. Suppose $A_p\in\mathbb{Q}_{RB}^{m\times n}$, $B_p\in\mathbb{Q}_{RB}^{n\times q}$ $(p=1,\ldots,l)$, $C\in\mathbb{Q}_{RB}^{m\times q}$. When $n$ is even and $\breve{B}_e$ is as in Theorem 4.3, equation (1.1) has a solution $X\in AB_{RB}^{n\times n}$ if and only if

    $$(\breve{B}_e\breve{B}_e^{\dagger}-I_{4mq})W=0. \tag{4.15}$$

    In this case, the general solution of equation (1.1) can be expressed as

    $$V_c(\vec{X})=V_{AB_e}\breve{B}_e^{\dagger}W+V_{AB_e}(I_{n^2+n}-\breve{B}_e^{\dagger}\breve{B}_e)y,\quad y\in\mathbb{R}^{n^2+n},$$

    and the minimal norm Skew-Bisymmetric solution $\ddot{X}_{AB}$ satisfies

    $$V_c(\vec{\ddot{X}}_{AB})=V_{AB_e}\breve{B}_e^{\dagger}W. \tag{4.16}$$

    When $n$ is odd and $\breve{B}_o$ is as in Theorem 4.3, equation (1.1) has a solution $X\in AB_{RB}^{n\times n}$ if and only if

    $$(\breve{B}_o\breve{B}_o^{\dagger}-I_{4mq})W=0. \tag{4.17}$$

    In this case, the general solution of equation (1.1) can be expressed as

    $$V_c(\vec{X})=V_{AB_o}\breve{B}_o^{\dagger}W+V_{AB_o}(I_{n^2+n+1}-\breve{B}_o^{\dagger}\breve{B}_o)y,\quad y\in\mathbb{R}^{n^2+n+1},$$

    and the minimal norm Skew-Bisymmetric solution $\ddot{X}_{AB}$ satisfies

    $$V_c(\vec{\ddot{X}}_{AB})=V_{AB_o}\breve{B}_o^{\dagger}W. \tag{4.18}$$

    In this section, we give an algorithm for calculating the minimal norm least squares anti-Hermitian / Skew-Persymmetric / Skew-Bisymmetric solution of the reduced biquaternion matrix equation (1.1), and verify the effectiveness of the proposed method through numerical examples. We then compare the proposed method with the real vector representation method in [19] to illustrate the improvement of our algorithm.

    Algorithm 1 Calculate the minimal norm least squares anti-Hermitian / Skew-Persymmetric / Skew-Bisymmetric solution of the reduced biquaternion matrix equation (1.1).
    Require: $A_p\in\mathbb{Q}_{RB}^{m\times n}$, $B_p\in\mathbb{Q}_{RB}^{n\times q}$ $(p=1,\ldots,l)$, $C\in\mathbb{Q}_{RB}^{m\times q}$; $H_S/H_{AS}$; $H_P/H_{AP}$; $H_{B_1}/H_{AB_1}$, $H_{B_2}/H_{AB_2}$; $\vartheta$;
    Ensure: $V_c(\vec{X}_{AH})$ / $V_c(\vec{X}_{AP})$ / $V_c(\vec{X}_{AB})$;
    1: Fix the form of $\chi$ satisfying Definition 3.2 and calculate the matrix $U$;
    2: if $X\in AH_{RB}^{n\times n}$, then
    3: Calculate $V_{AH}$, the GH-representation matrix of the anti-Hermitian matrix, then calculate $\breve{H}$;
    4: Calculate the minimal norm least squares anti-Hermitian solution according to (4.2);
    5: else if $X\in AP_{RB}^{n\times n}$, then
    6: Calculate $V_{AP}$, the GH-representation matrix of the Skew-Persymmetric matrix, then calculate $\breve{P}$;
    7: Calculate the minimal norm least squares Skew-Persymmetric solution according to (4.8);
    8: else if $X\in AB_{RB}^{n\times n}$, then
    9: Calculate $V_{AB_e}/V_{AB_o}$, the GH-representation matrix of the Skew-Bisymmetric matrix, then calculate $\breve{B}_e/\breve{B}_o$;
    10: Calculate the minimal norm least squares Skew-Bisymmetric solution according to (4.12)/(4.14);
    11: end if

    Example 5.1. Let $m=n=p=5K$, $K=1,\ldots,10$. For fixed $A\in\mathbb{Q}_{RB}^{m\times n}$, $B\in\mathbb{Q}_{RB}^{n\times p}$ and $X\in AH_{RB}^{n\times n}/AP_{RB}^{n\times n}/AB_{RB}^{n\times n}$, compute

    $$C=AXB.$$

    For $AXB=C$ with unknown $X$, Algorithm 1 yields a numerical solution $X^{\ast}$. Denote the error between the computed solution $X^{\ast}$ and the exact solution $X$ by $\varepsilon=\log_{10}\|X^{\ast}-X\|_F$; $\varepsilon$ is recorded in Figure 1.

    Figure 1.  Errors of Problems 1, 2 and 3.

    It can be seen from the error analysis charts that the method proposed in this paper is effective.

    Next, we will make a comparison between the method in this paper and the real vector representation method [19].

    Example 5.2. Let $m=n=p=K$, $K=1,\ldots,14$. For fixed $A\in\mathbb{Q}_{RB}^{m\times n}$, $B\in\mathbb{Q}_{RB}^{n\times p}$ and $X\in AH_{RB}^{n\times n}$, compute

    $$C=AXB.$$

    For $AXB=C$ with unknown $X$, a numerical solution is obtained by the method of this paper and by the method of [19], respectively, and the CPU times of the two methods are recorded. Detailed results are shown in Figure 2.

    Figure 2.  Time comparison of anti-Hermitian solution calculated by two methods.
    Figure 3.  64×64 Symmetric color image restoration.
    Figure 4.  64×64 Persymmetric color image restoration.
    Figure 5.  64×64 Bisymmetric color image restoration.

    From Figure 2, we observe that the operation time of our method is significantly better than that of the method in [19].

    With the increasing role of color images in daily life, color image restoration has become a hot research field. In recent years, reduced biquaternion has been widely used in color image processing because of its good structural characteristics [6,9,17,25].

    In 2004, Pei [6] applied the reduced biquaternion model to image processing. A reduced biquaternion consists of one real part and three imaginary parts, whereas each pixel of a color image consists of three basic components: red, green and blue. Therefore, a color image is usually modeled as a pure imaginary reduced biquaternion, that is,

    $$q(x,y)=r(x,y)i+g(x,y)j+b(x,y)k,$$

    where $r(x,y)$, $g(x,y)$ and $b(x,y)$ are the red, green and blue values of the pixel $(x,y)$, respectively. Thus a color image with $m$ rows and $n$ columns can be represented by a pure imaginary reduced biquaternion matrix

    $$Q=(q_{ij})_{m\times n}=Ri+Gj+Bk,\quad q_{ij}\in\mathbb{Q}_{RB}.$$

    Image restoration aims to retrieve information from degraded images, that is, to remove or reduce the degradation caused by noise, out-of-focus blurring and other factors in the process of image acquisition. A linear discrete model of image restoration is the matrix-vector equation

    $$g=Km+n,$$

    where $g$ is the observed image, $m$ is the true or ideal image, $n$ is additive noise, and $K$ is a matrix that represents the blurring phenomenon. Given $g$, $K$ and, in some cases, statistical information about the noise, image restoration methods aim to construct an approximation to $m$. However, in most cases the noise $n$ is unknown, and we wish to find $m^{\ast}$ such that

    $$\|n\|_F=\|Km^{\ast}-g\|_F=\min_{m}\|Km-g\|_F.$$

    The problem described by the above model is exactly the minimal norm least squares problem for the reduced biquaternion matrix equation $\sum_{p=1}^{l}A_pXB_p=C$ with $l=1$ and $B_1$ equal to the identity matrix.
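    As an illustration of this special case, the sketch below restores a synthetic RGB image channel by channel with the Moore-Penrose inverse of a stand-in blurring matrix; it does not reproduce the MATLAB motion-blur setup of Example 6.1.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
m_true = rng.random((n, n, 3))                         # synthetic "ideal" RGB image (r, g, b)
K = np.eye(n) + 0.1 * rng.standard_normal((n, n))      # stand-in for the blurring matrix
g = np.stack([K @ m_true[:, :, c] for c in range(3)], axis=2)   # observed image g = K m

# Least squares restoration channel by channel: m* = K^+ g (noise-free case).
K_pinv = np.linalg.pinv(K)
m_rest = np.stack([K_pinv @ g[:, :, c] for c in range(3)], axis=2)
for c, name in enumerate("rgb"):
    err = np.linalg.norm(m_rest[:, :, c] - m_true[:, :, c])
    print(f"channel {name}: restoration error {err:.2e}")
```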

    Algorithm 2 Calculate the minimal norm least squares pure imaginary anti-Hermitian / Skew-Persymmetric / Skew-Bisymmetric solution of the reduced biquaternion matrix equation $AX=C$.
    Require: $A\in\mathbb{Q}_{RB}^{m\times n}$, $C\in\mathbb{Q}_{RB}^{m\times q}$; $H_S$; $H_P$; $H_{B_1}/H_{B_2}$; $\vartheta=\begin{pmatrix}iI_{n^2}&0&0\\0&I_{n^2}&iI_{n^2}\end{pmatrix}$;
    Ensure: $V_c(\vec{X}_{ah})$ / $V_c(\vec{X}_{ap})$ / $V_c(\vec{X}_{ab})$;
    1: Fix the form of $\chi$ satisfying Definition 3.2;
    2: Calculate $W$, $\hat{A}=I_n\otimes A$ and $u=\begin{pmatrix}\mathrm{Re}(\chi(\hat{A})\vartheta)\\ \mathrm{Im}(\chi(\hat{A})\vartheta)\end{pmatrix}$;
    3: if $X$ is a pure imaginary anti-Hermitian matrix, then
    4: Calculate $V_{AH}=\mathrm{blkdiag}(H_S,H_S,H_S)$, and then calculate $\breve{h}=uV_{AH}$;
    5: Calculate the minimal norm least squares pure imaginary anti-Hermitian solution $X_{ah}$, which satisfies
          $V_c(\vec{X}_{ah})=V_{AH}\breve{h}^{\dagger}W$;
    6: else if $X$ is a pure imaginary Skew-Persymmetric matrix, then
    7: Calculate $V_{AP}=\mathrm{blkdiag}(H_P,H_P,H_P)$, and then calculate $\breve{p}=uV_{AP}$;
    8: Calculate the minimal norm least squares pure imaginary Skew-Persymmetric solution $X_{ap}$, which satisfies
          $V_c(\vec{X}_{ap})=V_{AP}\breve{p}^{\dagger}W$;
    9: else if $X$ is a pure imaginary Skew-Bisymmetric matrix, then
    10: Calculate $V_{AB_e}=\mathrm{blkdiag}(H_{B_1},H_{B_1},H_{B_1})$ / $V_{AB_o}=\mathrm{blkdiag}(H_{B_2},H_{B_2},H_{B_2})$, and then calculate $\breve{b}_e=uV_{AB_e}$ / $\breve{b}_o=uV_{AB_o}$;
    11: Calculate the minimal norm least squares pure imaginary Skew-Bisymmetric solution $X_{ab}$, which satisfies
          $V_c(\vec{X}_{ab})=V_{AB_e}\breve{b}_e^{\dagger}W$ / $V_{AB_o}\breve{b}_o^{\dagger}W$;
    12: end if

    Example 6.1. Three $64\times 64$ ideal color images are given. Let $m=(m_r,m_g,m_b)$ be the image matrix; $m$ can be represented as the pure imaginary matrix $m=m_ri+m_gj+m_bk$. Using LEN=15; THETA=30; PSF=fspecial('motion',LEN,THETA) in MATLAB, we blur the channel $m_r$ and obtain the disturbed image matrix $g_r$; the blurring matrix $K$ is determined by $g_r=Km_r$. Using the matrix $K$, we obtain the disturbed image $g=(g_r,g_g,g_b)=Km=K(m_r,m_g,m_b)$. Through the "reshape" command of MATLAB, we obtain the corresponding restored color image $m^{\ast}=(m_r^{\ast},m_g^{\ast},m_b^{\ast})$. The error of each channel is denoted by $\epsilon_r$, $\epsilon_g$, $\epsilon_b$, respectively, and the results are shown in Table 1.

    Table 1.  The errors between the computed $m_r^{\ast},m_g^{\ast},m_b^{\ast}$ and the original $m_r,m_g,m_b$.

                  $\epsilon_r$      $\epsilon_g$      $\epsilon_b$
    Figure 6.1    3.5112e-10        5.4348e-11        5.0430e-11
    Figure 6.2    6.7334e-11        1.4514e-11        1.9030e-11
    Figure 6.3    7.4626e-12        1.1468e-11        1.1538e-11

    In this paper, we use the semi-tensor product of reduced biquaternion matrices to obtain an algebraic expression of the isomorphism between the set of reduced biquaternion matrices and the corresponding set of complex representation matrices, and we obtain some new conclusions for reduced biquaternion matrices under the vector operator, so that the reduced biquaternion matrix equation problem can be transformed equivalently into a problem of reduced biquaternion linear equations, and further into real linear equations. Through the proposed GH-representation method, the number of variables in the real linear equations is reduced and the computation is simplified. Finally, the proposed method is applied to color image restoration.

    This work is supported by the National Natural Science Foundation of China under grant 62176112, the Natural Science Foundation of Shandong Province under grants ZR2020MA053, ZR2022MA030, and the Discipline with Strong Characteristics of Liaocheng University–Intelligent Science and Technology under grant 319462208. The authors are grateful to the referees for their careful reading and helpful suggestion, which have led to considerable improvement of the presentation of this paper.

    The authors declare that there is no conflict of interest.



    [1] L. Cohen, The uncertainty principle in signal analysis, Proceedings of the IEEE-SP International Symposium on Time-Frequency and Time-Scale Analysis, 1994,182–185. http://dx.doi.org/10.1109/TFSA.1994.467263
    [2] L. Cohen, Time-frequency analysis: theory and application, New Jersey: Prentice-Hall Inc., 1995.
    [3] P. Dang, Tighter uncertainty principles for periodic signals in terms of frequency, Math. Method. Appl. Sci., 38 (2015), 365–379. http://dx.doi.org/10.1002/mma.3075 doi: 10.1002/mma.3075
    [4] P. Dang, G. Deng, T. Qian, A sharper uncertainty principle, J. Funct. Anal., 265 (2013), 2239–2266. http://dx.doi.org/10.1016/j.jfa.2013.07.023 doi: 10.1016/j.jfa.2013.07.023
    [5] P. Dang, W. Mai, W. Pan, Uncertainty principle in random quaternion domains, Digit. Signal Process., 136 (2023), 103988. http://dx.doi.org/10.1016/j.dsp.2023.103988 doi: 10.1016/j.dsp.2023.103988
    [6] P. Dang, T. Qian, Y. Yang, Extra-string uncertainty principles in relation to phase derivative for signals in euclidean spaces, J. Math. Anal. Appl., 437 (2016), 912–940. http://dx.doi.org/10.1016/j.jmaa.2016.01.039 doi: 10.1016/j.jmaa.2016.01.039
    [7] P. Dang, T. Qian, Z. You, Hardy-Sobolev spaces decomposition in signal analysis, J. Fourier Anal. Appl., 17 (2011), 36–64. http://dx.doi.org/10.1007/s00041-010-9132-7 doi: 10.1007/s00041-010-9132-7
    [8] P. Dang, S. Wang, Uncertainty principles for images defined on the square, Math. Method. Appl. Sci., 40 (2017), 2475–2490. http://dx.doi.org/10.1002/mma.4170 doi: 10.1002/mma.4170
    [9] Y. Ding, Modern analysis foundation (Chinese), Beijing: Beijing Normal University Press, 2008.
    [10] D. Gabor, Theory of communication, Journal of the Institution of Electrical Engineers-Part Ⅲ: Radio and Communication Engineering, 93 (1946), 429–457.
    [11] S. Goh, C. Micchelli, Uncertainty principle in Hilbert spaces, J. Fourier Anal. Appl., 8 (2002), 335–374. http://dx.doi.org/10.1007/s00041-002-0017-2 doi: 10.1007/s00041-002-0017-2
    [12] Y. Katznelson, An introduction to harmonic analysis, 3 Eds., Cambridge: Cambridge University Press, 2004. http://dx.doi.org/10.1017/CBO9781139165372
    [13] K. Kou, Y. Yang, C. Zou, Uncertainty principle for measurable sets and signal recovery in quaternion domains, Math. Method. Appl. Sci., 40 (2017), 3892–3900. http://dx.doi.org/10.1002/mma.4271 doi: 10.1002/mma.4271
    [14] F. Qu, G. Deng, A shaper uncertainty principle for L2(Rn) space (Chinese), Acta Math. Sci., 38 (2018), 631–640.
    [15] X. Wei, F. Qu, H. Liu, X. Bian, Uncertainty principles for doubly periodic functions, Math. Method. Appl. Sci., 45 (2022), 6499–6514. http://dx.doi.org/10.1002/mma.8182 doi: 10.1002/mma.8182
    [16] Y. Yang, P. Dang, T. Qian, Stronger uncertainty principles for hypercomplex signals, Complex Var. Elliptic, 60 (2015), 1696–1711. http://dx.doi.org/10.1080/17476933.2015.1041938 doi: 10.1080/17476933.2015.1041938
    [17] Y. Yang, P. Dang, T. Qian, Tighter uncertainty principles based on quaternion Fourier transform, Adv. Appl. Clifford Algebras, 26 (2016), 479–497. http://dx.doi.org/10.1007/s00006-015-0579-0 doi: 10.1007/s00006-015-0579-0
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
