
The uncertainty principle for vector-valued functions in $L^2(\mathbb{R}^n,\mathbb{R}^m)$ with $n\ge 2$ is studied. We provide a stronger uncertainty principle than the existing one in the literature when $m\ge 2$. The phase and amplitude derivatives, in the sense of the Fourier transform, are considered when $m=1$. Based on these definitions, a generalized uncertainty principle is given.
Citation: Feifei Qu, Xin Wei, Juan Chen. Uncertainty principle for vector-valued functions[J]. AIMS Mathematics, 2024, 9(5): 12494-12510. doi: 10.3934/math.2024611
In 1843, the Irish mathematician Hamilton proposed the concept of the quaternion, one of his greatest contributions to mathematical science. This discovery extended the complex number field to a higher-dimensional space. Quaternions have been widely used in many fields, such as color image processing, modern physics, geostatics and so on [1,2,3,4]. However, processing some complex discrete-time signals requires higher-order complex number systems, and quaternions, as a generalization of complex numbers, naturally come to mind. Because of its non-commutative structure, however, the quaternion is not well suited to digital signal processing. To address this problem, Schütte and Wenzel introduced the reduced biquaternion and proposed its application to the implementation of a digital filter in 1990 [5]. The reduced biquaternion is a kind of commutative quaternion. Thanks to this commutativity, reduced biquaternions and reduced biquaternion matrices have been applied successfully to many practical problems. For example, [6] applied reduced biquaternions in digital signal and image processing; [7] investigated two types of multistate Hopfield neural networks based on reduced biquaternions; [8] defined the reduced biquaternion canonical transform, which can be used in color image processing; [9] proposed an algorithm for computing eigenvalues, eigenvectors, and the singular value decomposition of reduced biquaternion matrices and applied it in color image processing.
Matrix equations are an important branch of matrix theory, and many engineering problems are modeled as matrix equation problems [10]. Linear matrix equations play an important role in the stability analysis of linear dynamic systems and in the theoretical development of nonlinear systems. For example, the Sylvester matrix equation is widely used in control theory [11,12], model reduction [13], image processing [14] and so on. The Lyapunov matrix equation is closely related to the $H_2$ norm of discrete-time linear systems [15] and plays an important role in studying the stability and exact observability of such systems [16]. As the applications of reduced biquaternions and reduced biquaternion matrices become more extensive, many scholars have become interested in solving reduced biquaternion matrix equations. [17] studied the minimal norm least squares solution of the reduced biquaternion matrix equation $AX=B$ using the $e_1$-$e_2$ representation and applied it to color image restoration; [18] studied the Hermitian solution of the reduced biquaternion matrix equation $(AXB,CXD)=(E,G)$ by complex representation; [19] proposed the real vector representation of reduced biquaternions, based on the semi-tensor product of real matrices, to solve the least squares (anti-)Hermitian solution of the reduced biquaternion matrix equation $\sum_{i=1}^{k}A_iXB_i=C$. In this paper, we also use the semi-tensor product as a basic tool to study matrix equation problems.
The semi-tensor product of real matrices was proposed by Cheng [20], which is a generalization of ordinary matrix multiplication and has quasi-commutativity under certain conditions. In this paper, we extend the semi-tensor product of real matrices to reduced biquaternion matrices, and then some new conclusions of reduced biquaternion matrix under vector operator are proposed by using semi-tensor product of reduced biquaternion matrices. Using these new conclusions, we study the reduced biquaternion matrix equation
$$\sum_{p=1}^{l}A_pXB_p=C.\qquad(1.1)$$
Some contributions are summarized as follows:
1. Semi-tensor product of real matrices is generalized to reduced biquaternion matrices, and then some new results of reduced biquaternion matrices under vector operator are proposed, so that the reduced biquaternion matrix equation is directly transformed into reduced biquaternion linear equations.
2. Inspired by the H-representation method, we define the GH-representation method to eliminate redundant elements in reduced biquaternion matrices with special structure, so as to improve operation efficiency. We give the GH-representation of anti-Hermitian matrix, Skew-Persymmetric matrix and Skew-Bisymmetric matrix, respectively.
3. Using the semi-tensor product of matrices and the structure matrix of reduced biquaternion multiplication, a more general complex representation matrix of a reduced biquaternion matrix is defined, called the LC-representation.
4. Compared with the real vector representation method in [19], the method proposed in this paper is faster. The proposed method is applied to color image restoration.
The remainder of this paper is organized as follows: Section 2 introduces the basic knowledge of reduced biquaternions, reduced biquaternion matrices, and the semi-tensor product of reduced biquaternion matrices. Some new results are stated and proved in Section 3, including the vector operator of the reduced biquaternion matrix, the LC-representation and the GH-representation. Section 4 gives the expressions of the least squares solutions of Problems 1, 2 and 3; the necessary and sufficient conditions for compatibility and the expressions of the general solutions are obtained in corollaries. In Section 5, the corresponding algorithms are given, their effectiveness is verified by numerical examples, and a comparison between the method in this paper and the existing one is made. Section 6 applies the proposed method to color image restoration. Section 7 summarizes the content of this paper.
Notations: $\mathbb{R}/\mathbb{C}/\mathbb{Q}_{RB}$ denote the sets of real numbers/complex numbers/reduced biquaternions, respectively. $\mathbb{R}^n/\mathbb{C}^n$ denote the sets of all real/complex column vectors of order $n$, respectively. $\mathbb{R}^{m\times n}/\mathbb{C}^{m\times n}/\mathbb{Q}_{RB}^{m\times n}$ denote the sets of all $m\times n$ real/complex/reduced biquaternion matrices, respectively. $\bar{A}/A^T/A^H/A^{\dagger}$ denote the conjugate/transpose/conjugate transpose/Moore-Penrose inverse of a matrix $A$, respectively. $\mathrm{Re}(A)$ and $\mathrm{Im}(A)$ denote the real and imaginary parts of a matrix $A$, respectively. $\bar{a}_{ij}$ denotes the conjugate of $a_{ij}$. $\delta_n^i$ is the $i$th column of the identity matrix $I_n$. $\otimes$ denotes the Kronecker product of matrices. $\ltimes/\rtimes$ denote the left and right semi-tensor products of matrices, respectively. $\|\cdot\|_F$ denotes the Frobenius norm of a matrix or the Euclidean norm of a vector.
In this section, we give some necessary preliminaries, which will be used throughout this paper.
Definition 2.1. [6] The set of reduced biquaternions is expressed as
$$\mathbb{Q}_{RB}=\{q=q_{11}+q_{12}i+q_{13}j+q_{14}k \mid q_{11},q_{12},q_{13},q_{14}\in\mathbb{R}\},$$
where $i,j,k$ satisfy
$$i^2=k^2=-1,\quad j^2=1,\quad ij=ji=k,\quad ik=ki=-j,\quad jk=kj=i.$$
A reduced biquaternion $q$ can be uniquely represented as $q=q_1+q_2j$, where $q_1=q_{11}+q_{12}i$, $q_2=q_{13}+q_{14}i\in\mathbb{C}$. The modulus of $q$ is defined as $|q|=\sqrt{|q_{11}|^2+|q_{12}|^2+|q_{13}|^2+|q_{14}|^2}$. Similarly, a reduced biquaternion matrix $A=A_{11}+A_{12}i+A_{13}j+A_{14}k$ can also be uniquely represented as $A=A_1+A_2j$, where $A_1=A_{11}+A_{12}i$, $A_2=A_{13}+A_{14}i\in\mathbb{C}^{m\times n}$. The norm of $A$ is defined as $\|A\|_{(F)}=\sqrt{\|A_{11}\|_F^2+\|A_{12}\|_F^2+\|A_{13}\|_F^2+\|A_{14}\|_F^2}$.
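To make the complex-pair form $q=q_1+q_2j$ concrete, here is a minimal Python sketch (not from the paper; function and variable names are ours) that multiplies two reduced biquaternions using the rule $(a_1+a_2j)(b_1+b_2j)=(a_1b_1+a_2b_2)+(a_1b_2+a_2b_1)j$, which follows from $j^2=1$ and the commutativity of the basis elements. Python's `1j` plays the role of the complex unit $i$ inside each slot; the reduced biquaternion unit $j$ is carried implicitly by the second slot of the pair.

```python
def rb_mul(a, b):
    """Multiply two reduced biquaternions given as complex pairs (a1, a2),
    i.e. a = a1 + a2*j with a1, a2 complex and j**2 = 1 (illustrative helper, not from the paper)."""
    a1, a2 = a
    b1, b2 = b
    return (a1 * b1 + a2 * b2, a1 * b2 + a2 * b1)

# q = 1 + 2i + 3j + 4k  corresponds to the pair  q1 = 1 + 2i,  q2 = 3 + 4i
q = (1 + 2j, 3 + 4j)
r = (0.5 - 1j, 2 + 0j)
print(rb_mul(q, r))
```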
For semi-tensor product of real matrices, please refer to [21,22] for details. Now, we generalize semi-tensor product of real matrices to reduced biquaternion matrices.
Definition 2.2. Suppose $A\in\mathbb{Q}_{RB}^{m\times n}$, $B\in\mathbb{Q}_{RB}^{p\times q}$. The left semi-tensor product of $A$ and $B$ is defined as
$$A\ltimes B=(A\otimes I_{t/n})(B\otimes I_{t/p}),$$
and the right semi-tensor product of $A$ and $B$ is defined as
$$A\rtimes B=(I_{t/n}\otimes A)(I_{t/p}\otimes B),$$
where $t=\mathrm{lcm}(n,p)$ is the least common multiple of $n$ and $p$.
Remark 2.1. The left and right semi-tensor products of reduced biquaternion matrices are collectively called the semi-tensor product of reduced biquaternion matrices. When $n=p$, the semi-tensor product of reduced biquaternion matrices reduces to ordinary reduced biquaternion matrix multiplication.
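As a rough illustration of Definition 2.2, the following NumPy sketch implements the two products for ordinary complex matrices, which stand in for reduced biquaternion entries here; the function names are ours, not the paper's.

```python
import numpy as np
from math import lcm

def stp_left(A, B):
    """Left semi-tensor product: (A kron I_{t/n})(B kron I_{t/p}), t = lcm(n, p)."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

def stp_right(A, B):
    """Right semi-tensor product: (I_{t/n} kron A)(I_{t/p} kron B), t = lcm(n, p)."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(np.eye(t // n), A) @ np.kron(np.eye(t // p), B)

# When n == p both products reduce to the ordinary matrix product.
A = np.random.rand(2, 3) + 1j * np.random.rand(2, 3)
B = np.random.rand(3, 2) + 1j * np.random.rand(3, 2)
assert np.allclose(stp_left(A, B), A @ B)
```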
Example 2.1. Let $A=\begin{pmatrix}1+i & 2-j & 3k & i+j\end{pmatrix}$, $B=\begin{pmatrix}i & k\end{pmatrix}^T$. Then
$$A\ltimes B=A(B\otimes I_2)=\begin{pmatrix}1+i & 2-j & 3k & i+j\end{pmatrix}\begin{pmatrix}i&0\\0&i\\k&0\\0&k\end{pmatrix}=\begin{pmatrix}i-4 & 3i-j-k\end{pmatrix}=\begin{pmatrix}1+i & 2-j\end{pmatrix}i+\begin{pmatrix}3k & i+j\end{pmatrix}k,$$
$$A\rtimes B=A(I_2\otimes B)=\begin{pmatrix}1+i & 2-j & 3k & i+j\end{pmatrix}\begin{pmatrix}i&0\\k&0\\0&i\\0&k\end{pmatrix}=\begin{pmatrix}-1+2k & i-4j\end{pmatrix}\neq\begin{pmatrix}1+i & 2-j\end{pmatrix}i+\begin{pmatrix}3k & i+j\end{pmatrix}k.$$
It can be seen from Example 2.1 that the left semi-tensor product of reduced biquaternion matrices is compatible with block-matrix multiplication, while the right semi-tensor product is not. This is the biggest difference between the two products, and it makes the range of application of the left semi-tensor product wider than that of the right one.
Since the left semi-tensor product of reduced biquaternion matrices is used more widely, the semi-tensor product of reduced biquaternion matrices mentioned below refers to left semi-tensor product of reduced biquaternion matrices.
Definition 2.3. Suppose $A=(a_{ij})\in\mathbb{Q}_{RB}^{m\times n}$. Denote
$$V_c(A)=(a_{11},a_{21},\cdots,a_{m1},\cdots,a_{1n},a_{2n},\cdots,a_{mn})^T\in\mathbb{Q}_{RB}^{mn\times 1},$$
$$V_r(A)=(a_{11},a_{12},\cdots,a_{1n},\cdots,a_{m1},a_{m2},\cdots,a_{mn})^T\in\mathbb{Q}_{RB}^{mn\times 1}.$$
Theorem 2.1. Suppose $A=(a_{ij})\in\mathbb{Q}_{RB}^{m\times n}$, $B=(b_{ij})\in\mathbb{Q}_{RB}^{s\times t}$, and let $p$ be any positive integer. Then
(1) $W_{[m,n]}V_r(A)=V_c(A)$, $W_{[n,m]}V_c(A)=V_r(A)$;
(2) $W_{[s,p]}\ltimes B\ltimes W_{[p,t]}\ltimes A=(I_p\otimes B)\ltimes A$,
where $W_{[m,n]}=(I_n\otimes\delta_m^1,\ I_n\otimes\delta_m^2,\cdots,\ I_n\otimes\delta_m^m)$ is called the swap matrix.
The above equations are easily obtained by direct calculation.
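For readers who want to check Theorem 2.1(1) numerically, here is a small NumPy sketch; the swap matrix is built entrywise in our own way rather than from the $\delta_m^i$ blocks, but it satisfies the same identities, which the assertions verify.

```python
import numpy as np

def Vc(A):
    return A.flatten(order='F').reshape(-1, 1)   # stack the columns of A

def Vr(A):
    return A.flatten(order='C').reshape(-1, 1)   # stack the rows of A

def swap_matrix(m, n):
    """W_[m,n]: permutation matrix with W_[m,n] Vr(A) = Vc(A) for A of size m x n."""
    W = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            W[j * m + i, i * n + j] = 1.0   # maps the row-ordering index to the column-ordering index
    return W

A = np.arange(6).reshape(2, 3)
assert np.allclose(swap_matrix(2, 3) @ Vr(A), Vc(A))   # W_[m,n] Vr(A) = Vc(A)
assert np.allclose(swap_matrix(3, 2) @ Vc(A), Vr(A))   # W_[n,m] Vc(A) = Vr(A)
```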
First, several kinds of reduced biquaternion matrices with symmetric structure are introduced.
Definition 2.4. Let $A=(a_{ij})\in\mathbb{Q}_{RB}^{n\times n}$. Denote $A^H=(\bar{a}_{ji})\in\mathbb{Q}_{RB}^{n\times n}$ and $A^{(H)}=(\bar{a}_{n-j+1,\,n-i+1})\in\mathbb{Q}_{RB}^{n\times n}$, so that $A^{(H)}=V_nA^HV_n$, where $V_n$ is the $n\times n$ exchange matrix with ones on the anti-diagonal and zeros elsewhere.
(1) $A\in\mathbb{Q}_{RB}^{n\times n}$ is called an anti-Hermitian matrix if $A=-A^H$; the set of such matrices is denoted by $\mathrm{AH}_{RB}^{n\times n}$.
(2) $A\in\mathbb{Q}_{RB}^{n\times n}$ is called a Skew-Persymmetric matrix if $A=-A^{(H)}$; the set is denoted by $\mathrm{AP}_{RB}^{n\times n}$.
(3) $A\in\mathbb{Q}_{RB}^{n\times n}$ is called a Skew-Bisymmetric matrix if $a_{ij}=a_{n-i+1,\,n-j+1}=-\bar{a}_{ji}$; the set is denoted by $\mathrm{AB}_{RB}^{n\times n}$.
For the above-mentioned special symmetric matrices, this paper studies the following problems.
Problem 1. Suppose $A_p\in\mathbb{Q}_{RB}^{m\times n}$, $B_p\in\mathbb{Q}_{RB}^{n\times q}$ $(p=1,\cdots,l)$, $C\in\mathbb{Q}_{RB}^{m\times q}$, and
$$S_{AH}=\Big\{X\ \Big|\ X\in \mathrm{AH}_{RB}^{n\times n},\ \Big\|\sum_{p=1}^{l}A_pXB_p-C\Big\|_{(F)}=\min\Big\};$$
find $X_{AH}\in S_{AH}$ such that
$$\|X_{AH}\|_{(F)}=\min_{X\in S_{AH}}\|X\|_{(F)}.$$
Problem 2. Suppose $A_p\in\mathbb{Q}_{RB}^{m\times n}$, $B_p\in\mathbb{Q}_{RB}^{n\times q}$ $(p=1,\cdots,l)$, $C\in\mathbb{Q}_{RB}^{m\times q}$, and
$$S_{AP}=\Big\{X\ \Big|\ X\in \mathrm{AP}_{RB}^{n\times n},\ \Big\|\sum_{p=1}^{l}A_pXB_p-C\Big\|_{(F)}=\min\Big\};$$
find $X_{AP}\in S_{AP}$ such that
$$\|X_{AP}\|_{(F)}=\min_{X\in S_{AP}}\|X\|_{(F)}.$$
Problem 3. Suppose $A_p\in\mathbb{Q}_{RB}^{m\times n}$, $B_p\in\mathbb{Q}_{RB}^{n\times q}$ $(p=1,\cdots,l)$, $C\in\mathbb{Q}_{RB}^{m\times q}$, and
$$S_{AB}=\Big\{X\ \Big|\ X\in \mathrm{AB}_{RB}^{n\times n},\ \Big\|\sum_{p=1}^{l}A_pXB_p-C\Big\|_{(F)}=\min\Big\};$$
find $X_{AB}\in S_{AB}$ such that
$$\|X_{AB}\|_{(F)}=\min_{X\in S_{AB}}\|X\|_{(F)}.$$
Using the semi-tensor product of reduced biquaternion matrices, we can obtain some new properties of vector operators.
Theorem 3.1. Suppose $A\in\mathbb{Q}_{RB}^{m\times n}$, $X\in\mathbb{Q}_{RB}^{n\times q}$, $Y\in\mathbb{Q}_{RB}^{p\times m}$. Then
(1) $V_c(AX)=A\rtimes V_c(X)$, $V_r(AX)=A\ltimes V_r(X)$;
(2) $V_c(YA)=A^T\ltimes V_c(Y)$, $V_r(YA)=A^T\rtimes V_r(Y)$.
Proof. (1) For the identity $V_r(AX)=A\ltimes V_r(X)$, let $C=AX$, let $a^i$ $(i=1,2,\cdots,m)$ denote the $i$-th row of $A$, $x^k$ $(k=1,2,\cdots,n)$ the $k$-th row of $X$, and $c^i$ $(i=1,2,\cdots,m)$ the $i$-th row of $C$. Then the $i$-th block of $A\ltimes V_r(X)$ is
$$a^i\ltimes V_r(X)=a^i\ltimes\begin{pmatrix}(x^1)^T\\ \vdots\\ (x^n)^T\end{pmatrix}=\begin{pmatrix}\sum_{k=1}^{n}a_{ik}x_{k1}\\ \vdots\\ \sum_{k=1}^{n}a_{ik}x_{kq}\end{pmatrix}=(c^i)^T,$$
and therefore $V_r(AX)=A\ltimes V_r(X)$.
Applying Theorem 2.1, we have
$$V_c(AX)=W_{[m,q]}V_r(AX)=W_{[m,q]}\ltimes A\ltimes V_r(X)=W_{[m,q]}\ltimes A\ltimes W_{[q,n]}\ltimes V_c(X)=(I_q\otimes A)V_c(X)=A\rtimes V_c(X).$$
(2) Since $V_r(AX)=A\ltimes V_r(X)$, we have
$$V_c(YA)=V_r(A^TY^T)=A^T\ltimes V_r(Y^T)=A^T\ltimes V_c(Y).$$
Applying Theorem 2.1, we have
$$V_r(YA)=W_{[n,p]}V_c(YA)=W_{[n,p]}\ltimes A^T\ltimes V_c(Y)=W_{[n,p]}\ltimes A^T\ltimes W_{[p,m]}\ltimes V_r(Y)=(I_p\otimes A^T)V_r(Y)=A^T\rtimes V_r(Y).$$
Yuan et al. [18] claimed that $V_c(ABC)=(C^T\otimes A)V_c(B)$ cannot hold in the reduced biquaternion algebra. However, the vector-operator results obtained above via the semi-tensor product of reduced biquaternion matrices show that this claim is incorrect: the identity does hold.
Proposition 3.1. Let $A\in\mathbb{Q}_{RB}^{m\times n}$, $B\in\mathbb{Q}_{RB}^{n\times n}$, $C\in\mathbb{Q}_{RB}^{n\times p}$. Then
$$V_c(ABC)=(C^T\otimes A)V_c(B).$$
Proof. Using Theorem 3.1, we obtain
$$V_c(ABC)=C^T\ltimes V_c(AB)=C^T\ltimes\big(A\rtimes V_c(B)\big)=(C^T\otimes I_m)(I_n\otimes A)V_c(B)=(C^T\otimes A)V_c(B).$$
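Proposition 3.1 matches the classical vec identity for matrices over a commutative ring. As a sanity check, under the assumption that complex matrices stand in for reduced biquaternion ones, the following sketch verifies $V_c(ABC)=(C^T\otimes A)V_c(B)$ numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
C = rng.standard_normal((4, 5)) + 1j * rng.standard_normal((4, 5))

vec = lambda M: M.flatten(order='F')          # column-stacking operator Vc
lhs = vec(A @ B @ C)
rhs = np.kron(C.T, A) @ vec(B)
assert np.allclose(lhs, rhs)
```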
Using semi-tensor product of matrices, we can find the isomorphism between the set of m×n reduced biquaternion matrices and the corresponding set of 2m×2n complex matrices, and give the computable algebraic expression of this isomorphism.
Definition 3.1. [22] Let $W_i$ $(i=0,1,\cdots,n)$ be vector spaces. A mapping $F:\prod_{i=1}^{n}W_i\to W_0$ is called a multilinear mapping if, for any $1\le i\le n$ and $\alpha,\beta\in\mathbb{R}$,
$$F(x_1,\cdots,\alpha x_i+\beta y_i,\cdots,x_n)=\alpha F(x_1,\cdots,x_i,\cdots,x_n)+\beta F(x_1,\cdots,y_i,\cdots,x_n),$$
where $x_i,y_i\in W_i$ $(1\le i\le n)$. Suppose $\dim(W_i)=k_i$ $(i=0,1,\cdots,n)$ and $(\delta_{k_i}^1,\delta_{k_i}^2,\cdots,\delta_{k_i}^{k_i})$ is a basis of $W_i$. Denote
$$F(\delta_{k_1}^{j_1},\delta_{k_2}^{j_2},\cdots,\delta_{k_n}^{j_n})=\sum_{s=1}^{k_0}c_{j_1j_2\cdots j_n}^{s}\,\delta_{k_0}^{s};$$
then
$$\{c_{j_1j_2\cdots j_n}^{s}\mid j_t=1,\cdots,k_t,\ t=1,\cdots,n;\ s=1,\cdots,k_0\}$$
is called the structure constant set of $F$. Arranging these structure constants as
$$M_F=\begin{pmatrix}c_{11\cdots 1}^{1}&\cdots&c_{11\cdots k_n}^{1}&\cdots&c_{k_1k_2\cdots k_n}^{1}\\ c_{11\cdots 1}^{2}&\cdots&c_{11\cdots k_n}^{2}&\cdots&c_{k_1k_2\cdots k_n}^{2}\\ \vdots& &\vdots& &\vdots\\ c_{11\cdots 1}^{k_0}&\cdots&c_{11\cdots k_n}^{k_0}&\cdots&c_{k_1k_2\cdots k_n}^{k_0}\end{pmatrix},$$
$M_F$ is called the structure matrix of $F$.
Let $1\sim\delta_2^1$, $j\sim\delta_2^2$, and let the symbol $\times$ denote reduced biquaternion multiplication. The multiplication rule of the basis elements satisfies Definition 2.1. According to Definition 3.1, we can obtain the structure matrix of reduced biquaternion multiplication, denoted by $M$:
$$M=\begin{pmatrix}1&0&0&1\\0&1&1&0\end{pmatrix}.$$
Example 3.1. Suppose $a,b\in\mathbb{Q}_{RB}$; they can be represented as $a=a_1+a_2j\sim\binom{a_1}{a_2}$, $b=b_1+b_2j\sim\binom{b_1}{b_2}$, where $a_1=a_{11}+a_{12}i$, $a_2=a_{21}+a_{22}i$, $b_1=b_{11}+b_{12}i$, $b_2=b_{21}+b_{22}i\in\mathbb{C}$. Considering the multiplication $a\times b$ on $\mathbb{Q}_{RB}$, we obtain
$$a\times b=(a_1+a_2j)(b_1+b_2j)=(a_1b_1+a_2b_2)+(a_1b_2+a_2b_1)j\sim\begin{pmatrix}a_1b_1+a_2b_2\\a_1b_2+a_2b_1\end{pmatrix}=M\ltimes\begin{pmatrix}a_1\\a_2\end{pmatrix}\ltimes\begin{pmatrix}b_1\\b_2\end{pmatrix}.$$
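The computation in Example 3.1 can be checked numerically. The sketch below (complex pairs standing in for reduced biquaternions; helper names are ours) builds the structure matrix $M$ and verifies that $M\ltimes\binom{a_1}{a_2}\ltimes\binom{b_1}{b_2}$ reproduces the complex-pair product:

```python
import numpy as np
from math import lcm

# Structure matrix of reduced biquaternion multiplication in the basis (1, j).
M = np.array([[1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=complex)

def stp_left(A, B):
    """Left semi-tensor product, as in Definition 2.2 (same sketch as earlier)."""
    t = lcm(A.shape[1], B.shape[0])
    return np.kron(A, np.eye(t // A.shape[1])) @ np.kron(B, np.eye(t // B.shape[0]))

a = np.array([[1 + 2j], [3 + 4j]])      # a = a1 + a2*j with a1 = 1+2i, a2 = 3+4i
b = np.array([[0.5 - 1j], [2 + 0j]])
prod = stp_left(stp_left(M, a), b)       # M |x (a1; a2) |x (b1; b2)
expected = np.array([[a[0, 0] * b[0, 0] + a[1, 0] * b[1, 0]],
                     [a[0, 0] * b[1, 0] + a[1, 0] * b[0, 0]]])
assert np.allclose(prod, expected)
```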
Suppose $A=A_1+A_2j$; we denote
$$\overleftarrow{A}=\begin{pmatrix}A_1\\A_2\end{pmatrix},\qquad \dot{E}_2=\begin{pmatrix}\pm 1&0\\0&\pm 1\end{pmatrix}.$$
Definition 3.2. Let $A=A_1+A_2j\in\mathbb{Q}_{RB}^{m\times n}$, where $A_1,A_2\in\mathbb{C}^{m\times n}$, and define a mapping from $\mathbb{Q}_{RB}^{m\times n}$ to a subspace of $\mathbb{C}^{2m\times 2n}$ by
$$\chi(A)=M\ltimes\big(I_2\otimes(\dot{E}_2\ltimes\overleftarrow{A})\big),$$
called a complex matrix representation of the reduced biquaternion matrix. If, for all $A\in\mathbb{Q}_{RB}^{m\times n}$ and $B\in\mathbb{Q}_{RB}^{n\times p}$, $\chi$ satisfies
(1) $\chi(AB)=\chi(A)\chi(B)$,
(2) $\chi_c(AB)=\chi(A)\chi_c(B)$,
where $\chi_c(A)=\chi(A)\ltimes\delta_2^1$, then $\chi$ is called an LC-representation of the reduced biquaternion matrix.
Next, using the semi-tensor product of reduced biquaternion matrices, we give the algebraic form of LC-representation of reduced biquaternion matrix.
Proposition 3.2. Let $A\in\mathbb{Q}_{RB}^{m\times n}$, $B\in\mathbb{Q}_{RB}^{n\times p}$. Then $\chi$ is an LC-representation of the reduced biquaternion matrix if and only if
(1) $(M\otimes I_m)\big(I_2\otimes(\dot{E}_2\ltimes\overleftarrow{AB})\big)=(M\otimes I_m)\big(M\otimes(\dot{E}_2\ltimes\overleftarrow{A})\big)\big(I_2\otimes(\dot{E}_2\ltimes\overleftarrow{B})\big)$,
(2) $(M\otimes I_m)\big(\delta_2^1\otimes(\dot{E}_2\ltimes\overleftarrow{AB})\big)=(M\otimes I_m)\big(M\otimes(\dot{E}_2\ltimes\overleftarrow{A})\big)\big(\delta_2^1\otimes(\dot{E}_2\ltimes\overleftarrow{B})\big)$.
Proof. The proof is straightforward: each equation in Proposition 3.2 is equivalent to the corresponding equation in Definition 3.2. Consider the first one. By the LC-representation of the reduced biquaternion matrix, $\chi(AB)=\chi(A)\chi(B)$ holds if and only if
$$M\ltimes\big(I_2\otimes(\dot{E}_2\ltimes\overleftarrow{AB})\big)=M\ltimes\big(I_2\otimes(\dot{E}_2\ltimes\overleftarrow{A})\big)\ltimes M\ltimes\big(I_2\otimes(\dot{E}_2\ltimes\overleftarrow{B})\big),$$
which is equivalent to
$$(M\otimes I_m)\big(I_2\otimes(\dot{E}_2\ltimes\overleftarrow{AB})\big)=(M\otimes I_m)\big(M\otimes(\dot{E}_2\ltimes\overleftarrow{A})\big)\big(I_2\otimes(\dot{E}_2\ltimes\overleftarrow{B})\big).$$
Remark 3.1. The LC-representation of a reduced biquaternion matrix is not unique, in the sense that the structure matrix may differ depending on the vectorization chosen for $1$ and $j$ and on the choice of $\dot{E}_2$.
Let us take a simple example to illustrate Remark 3.1.
Example 3.2. Fix $M=\begin{pmatrix}1&0&0&1\\0&1&1&0\end{pmatrix}$. If we select $\dot{E}_2=\begin{pmatrix}1&0\\0&1\end{pmatrix}$, we obtain
$$\chi_1(A)=M\ltimes\big(I_2\otimes(\dot{E}_2\ltimes\overleftarrow{A})\big)=\begin{pmatrix}A_1&A_2\\A_2&A_1\end{pmatrix};$$
if we select $\dot{E}_2=\begin{pmatrix}1&0\\0&-1\end{pmatrix}$, we obtain
$$\chi_2(A)=M\ltimes\big(I_2\otimes(\dot{E}_2\ltimes\overleftarrow{A})\big)=\begin{pmatrix}A_1&-A_2\\-A_2&A_1\end{pmatrix}.$$
Testing the equations in Proposition 3.2 for $\chi_1(A)$ and $\chi_2(A)$, respectively, one finds that both $\chi_1$ and $\chi_2$ are LC-representations.
Remark 3.2. For convenience, χ used below is χ1.
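As an illustration of the multiplicativity property (1) in Definition 3.2 for the choice $\chi_1$, the following sketch (our own, with randomly generated complex blocks) checks $\chi_1(AB)=\chi_1(A)\chi_1(B)$ using the complex-pair form of the reduced biquaternion matrix product:

```python
import numpy as np

def chi1(A1, A2):
    """chi_1(A) = [[A1, A2], [A2, A1]] for A = A1 + A2*j with complex blocks A1, A2."""
    return np.block([[A1, A2], [A2, A1]])

def rb_matmul(A1, A2, B1, B2):
    """Product of A = A1 + A2*j and B = B1 + B2*j in complex-pair form (uses j**2 = 1)."""
    return A1 @ B1 + A2 @ B2, A1 @ B2 + A2 @ B1

rng = np.random.default_rng(1)
A1 = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
A2 = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
B1 = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
B2 = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
C1, C2 = rb_matmul(A1, A2, B1, B2)
assert np.allclose(chi1(C1, C2), chi1(A1, A2) @ chi1(B1, B2))   # chi(AB) = chi(A) chi(B)
```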
The GH-representation method can represent a matrix with a special structure by its independent elements. This method is a generalization of the H-representation method proposed by Zhang [23].
Definition 3.3. [23] Let $L\subset\mathbb{R}^{n\times n}$ be a $p$-dimensional matrix subspace $(p\le n^2)$ with basis $e_1,e_2,\cdots,e_p$, and define $H=[V_c(e_1),V_c(e_2),\cdots,V_c(e_p)]$. For every $X\in L$ there exist unique $l_1,l_2,\cdots,l_p\in\mathbb{R}$ such that $X=\sum_{i=1}^{p}l_ie_i$. Consider the mapping $\varphi:X\in L\mapsto V_c(X)$; then
$$\varphi(X)=V_c(X)=H\tilde{X},$$
where $\tilde{X}=[l_1,l_2,\cdots,l_p]^T\in\mathbb{R}^p$. $H\tilde{X}$ is called the H-representation of $\varphi(X)$, and $H$ is called the H-representation matrix of $\varphi(X)$.
The H-representation method can transform a matrix-valued equation into a standard vector-valued equation with independent coordinates. [23] used the H-representation method to study the properties of a class of generalized Lyapunov equations, the observability of linear stochastic time-varying systems, and stochastic stability and stabilization. A reduced biquaternion matrix has one real part and three imaginary parts, and the real matrices of these parts need not share the same structural characteristics, so the H-representation method cannot be applied directly. We therefore extend it to a GH-representation method suitable for reduced biquaternion matrices.
Definition 3.4. Consider a reduced biquaternion matrix subspace $L\subset\mathbb{Q}_{RB}^{n\times n}$. For each $X=X_{11}+X_{12}i+X_{13}j+X_{14}k\in L$, let $\overrightarrow{X}=[X_{11}\ X_{12}\ X_{13}\ X_{14}]$. If we express
$$\phi(X)=V_c(\overrightarrow{X})=G_H\bar{\bar{X}},\qquad \bar{\bar{X}}=\begin{pmatrix}\tilde{X}_{11}\\ \tilde{X}_{12}\\ \tilde{X}_{13}\\ \tilde{X}_{14}\end{pmatrix},\qquad G_H=\begin{pmatrix}H_{X_1}&0&0&0\\0&H_{X_2}&0&0\\0&0&H_{X_3}&0\\0&0&0&H_{X_4}\end{pmatrix},$$
then $G_H\bar{\bar{X}}$ is called the GH-representation of $\phi(X)$ and $G_H$ the GH-representation matrix of $\phi(X)$, where $H_{X_i}$ denotes the H-representation matrix of the $i$-th real part of $X$, $i=1,2,3,4$.
It is easy to see that the key to construct GH-representation matrix is to find the H-representation matrix of real matrix corresponding to four parts of reduced biquaternion matrix. Next, we give the GH-representation matrix of anti-Hermitian matrix, Skew-Persymmetric matrix and Skew-Bisymmetric matrix, respectively.
First we consider anti-Hermitian matrix.
When $X=X_{11}+X_{12}i+X_{13}j+X_{14}k\in \mathrm{AH}_{RB}^{n\times n}$, $X_{11}$ is an anti-symmetric matrix and $X_{12},X_{13},X_{14}$ are symmetric matrices. Let $S_{\mathbb{R}}^{n\times n}$ denote the set of real symmetric matrices and $AS_{\mathbb{R}}^{n\times n}$ the set of real anti-symmetric matrices. For $L=S_{\mathbb{R}}^{n\times n}$ we select the basis
$$\{E_{11},\cdots,E_{n1},\ E_{22},\cdots,E_{n2},\cdots,E_{nn}\},$$
where $E_{ij}=(e_{ij})_{n\times n}$ with $e_{ij}=e_{ji}=1$ and all other elements zero.
Similarly, for $L=AS_{\mathbb{R}}^{n\times n}$ we select the basis
$$\{F_{21},\cdots,F_{n1},\ F_{32},\cdots,F_{n2},\cdots,F_{n,n-1}\},$$
where $F_{ij}=(f_{ij})_{n\times n}$ with $f_{ij}=-f_{ji}=1$ and all other elements zero.
With these bases, for $L=S_{\mathbb{R}}^{n\times n}$ and $L=AS_{\mathbb{R}}^{n\times n}$ we have, respectively,
$$\tilde{X}_S=(x_{11},\cdots,x_{n1},\ x_{22},\cdots,x_{n2},\cdots,x_{nn})^T,$$
$$\tilde{X}_{AS}=(x_{21},\cdots,x_{n1},\ x_{32},\cdots,x_{n2},\cdots,x_{n,n-1})^T.$$
We write $H_S$ and $H_{AS}$ for the H-representation matrices of $L=S_{\mathbb{R}}^{n\times n}$ and $L=AS_{\mathbb{R}}^{n\times n}$, respectively.
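A hypothetical construction of $H_S$ and $H_{AS}$ for small $n$, following the basis ordering above, might look as follows (NumPy sketch, names ours); the assertion checks $V_c(X)=H_S\tilde{X}_S$ for a symmetric test matrix:

```python
import numpy as np

def basis_sym(n):
    """Basis E_ij (i >= j) of n x n real symmetric matrices, ordered as in the text."""
    Es = []
    for j in range(n):
        for i in range(j, n):
            E = np.zeros((n, n))
            E[i, j] = 1.0
            E[j, i] = 1.0
            Es.append(E)
    return Es

def basis_antisym(n):
    """Basis F_ij (i > j) of n x n real anti-symmetric matrices."""
    Fs = []
    for j in range(n):
        for i in range(j + 1, n):
            F = np.zeros((n, n))
            F[i, j] = 1.0
            F[j, i] = -1.0
            Fs.append(F)
    return Fs

def h_matrix(basis):
    """H = [Vc(e1), ..., Vc(ep)] for the given basis."""
    return np.column_stack([E.flatten(order='F') for E in basis])

n = 3
HS, HAS = h_matrix(basis_sym(n)), h_matrix(basis_antisym(n))
S = np.array([[0., 1, 2], [1, 3, 4], [2, 4, 5]])             # symmetric test matrix
coords, *_ = np.linalg.lstsq(HS, S.flatten(order='F'), rcond=None)
assert np.allclose(HS @ coords, S.flatten(order='F'))         # Vc(S) = HS @ S_tilde
```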
Theorem 3.2. For $X=X_{11}+X_{12}i+X_{13}j+X_{14}k\in \mathrm{AH}_{RB}^{n\times n}$, the GH-representation of $X$ is expressed as
$$\phi(X)=V_c(\overrightarrow{X})=\begin{pmatrix}H_{AS}&0&0&0\\0&H_S&0&0\\0&0&H_S&0\\0&0&0&H_S\end{pmatrix}\bar{\bar{X}}\triangleq V_{AH}\bar{\bar{X}}.$$
Similarly, we use the above idea to consider the other two classes of special matrices.
$P_{\mathbb{R}}^{n\times n}$ denotes the set of real matrices whose elements satisfy $a_{ij}=a_{n-j+1,n-i+1}$, and $AP_{\mathbb{R}}^{n\times n}$ the set of real matrices whose elements satisfy $a_{ij}=-a_{n-j+1,n-i+1}$. When $X=X_{11}+X_{12}i+X_{13}j+X_{14}k\in \mathrm{AP}_{RB}^{n\times n}$, we have $X_{11}\in AP_{\mathbb{R}}^{n\times n}$ and $X_{12},X_{13},X_{14}\in P_{\mathbb{R}}^{n\times n}$. For $L=P_{\mathbb{R}}^{n\times n}$ we can select the basis
$$\{M_{11},\cdots,M_{n1},\ M_{12},\cdots,M_{n-1,2},\cdots,M_{1n}\},$$
where $M_{ij}=(m_{ij})_{n\times n}$ with $m_{ij}=m_{n+1-j,n+1-i}=1$ and all other elements zero.
For $L=AP_{\mathbb{R}}^{n\times n}$ we take the basis
$$\{Z_{11},\cdots,Z_{n-1,1},\ Z_{12},\cdots,Z_{n-2,2},\cdots,Z_{1,n-1}\},$$
where $Z_{ij}=(z_{ij})_{n\times n}$ with $z_{ij}=-z_{n+1-j,n+1-i}=1$ and all other elements zero.
With these bases, for $L=P_{\mathbb{R}}^{n\times n}$ and $L=AP_{\mathbb{R}}^{n\times n}$ we have, respectively,
$$\tilde{X}_P=(x_{11},\cdots,x_{n1},\ x_{12},\cdots,x_{n-1,2},\cdots,x_{1n})^T,$$
$$\tilde{X}_{AP}=(x_{11},\cdots,x_{n-1,1},\ x_{12},\cdots,x_{n-2,2},\cdots,x_{1,n-1})^T.$$
We denote the H-representation matrix corresponding to $L=P_{\mathbb{R}}^{n\times n}$ by $H_P$ and that corresponding to $L=AP_{\mathbb{R}}^{n\times n}$ by $H_{AP}$.
Theorem 3.3. For $X=X_{11}+X_{12}i+X_{13}j+X_{14}k\in \mathrm{AP}_{RB}^{n\times n}$, the GH-representation of $X$ is expressed as
$$\phi(X)=V_c(\overrightarrow{X})=\begin{pmatrix}H_{AP}&0&0&0\\0&H_P&0&0\\0&0&H_P&0\\0&0&0&H_P\end{pmatrix}\bar{\bar{X}}\triangleq V_{AP}\bar{\bar{X}}.$$
$B_{\mathbb{R}}^{n\times n}$ denotes the set of real matrices whose elements satisfy $a_{ij}=a_{n-i+1,n-j+1}=a_{ji}$, and $AB_{\mathbb{R}}^{n\times n}$ the set of real matrices whose elements satisfy $a_{ij}=a_{n-i+1,n-j+1}=-a_{ji}$. When $X=X_{11}+X_{12}i+X_{13}j+X_{14}k\in \mathrm{AB}_{RB}^{n\times n}$, we have $X_{11}\in AB_{\mathbb{R}}^{n\times n}$ and $X_{12},X_{13},X_{14}\in B_{\mathbb{R}}^{n\times n}$. For $L=B_{\mathbb{R}}^{n\times n}$, when $n$ is even we can select the basis
$$\{S_{11},\cdots,S_{n1},\ S_{22},\cdots,S_{n-1,2},\cdots,S_{\frac{n}{2},\frac{n}{2}},\ S_{\frac{n}{2}+1,\frac{n}{2}}\},$$
and when $n$ is odd the basis
$$\{S_{11},\cdots,S_{n1},\ S_{22},\cdots,S_{n-1,2},\cdots,S_{\frac{n+1}{2},\frac{n+1}{2}}\},$$
where $S_{ij}=(s_{ij})_{n\times n}$ with $s_{ij}=s_{n-i+1,n-j+1}=s_{ji}=1$ and all other elements zero. With this basis, when $n$ is even we have
$$\tilde{X}_B=(x_{11},\cdots,x_{n1},\ x_{22},\cdots,x_{n-1,2},\cdots,x_{\frac{n}{2},\frac{n}{2}},\ x_{\frac{n}{2}+1,\frac{n}{2}})^T,$$
and when $n$ is odd
$$\tilde{X}_B=(x_{11},\cdots,x_{n1},\ x_{22},\cdots,x_{n-1,2},\cdots,x_{\frac{n+1}{2},\frac{n+1}{2}})^T.$$
For $L=AB_{\mathbb{R}}^{n\times n}$, when $n$ is even we can select the basis
$$\{T_{21},\cdots,T_{n-1,1},\cdots,T_{\frac{n}{2},\frac{n}{2}-1},\ T_{\frac{n}{2}+1,\frac{n}{2}-1}\},$$
and when $n$ is odd the basis
$$\{T_{21},\cdots,T_{n-1,1},\ T_{32},\cdots,T_{n-2,2},\cdots,T_{\frac{n+1}{2},\frac{n-1}{2}}\},$$
where $T_{ij}=(t_{ij})_{n\times n}$ with $t_{ij}=t_{n-i+1,n-j+1}=-t_{ji}=1$ and all other elements zero. With this basis, when $n$ is even we have
$$\tilde{X}_{AB}=(x_{21},\cdots,x_{n-1,1},\cdots,x_{\frac{n}{2},\frac{n}{2}-1},\ x_{\frac{n}{2}+1,\frac{n}{2}-1})^T,$$
and when $n$ is odd
$$\tilde{X}_{AB}=(x_{21},\cdots,x_{n-1,1},\ x_{32},\cdots,x_{n-2,2},\cdots,x_{\frac{n+1}{2},\frac{n-1}{2}})^T.$$
When n is even, we denote the H-representation matrix corresponding to L=Bn×nR by HB1, and denote the H-representation matrix corresponding to L=ABn×nR by HAB1.
When n is odd, we denote the H-representation matrix corresponding to L=Bn×nR by HB2, and denote the H-representation matrix corresponding to L=ABn×nR by HAB2.
Theorem 3.4. For $X=X_{11}+X_{12}i+X_{13}j+X_{14}k\in \mathrm{AB}_{RB}^{n\times n}$, when $n$ is even the GH-representation of $X$ is expressed as
$$\phi(X)=V_c(\overrightarrow{X})=\begin{pmatrix}H_{AB_1}&0&0&0\\0&H_{B_1}&0&0\\0&0&H_{B_1}&0\\0&0&0&H_{B_1}\end{pmatrix}\bar{\bar{X}}\triangleq V_{AB_e}\bar{\bar{X}},$$
and when $n$ is odd the GH-representation of $X$ is expressed as
$$\phi(X)=V_c(\overrightarrow{X})=\begin{pmatrix}H_{AB_2}&0&0&0\\0&H_{B_2}&0&0\\0&0&H_{B_2}&0\\0&0&0&H_{B_2}\end{pmatrix}\bar{\bar{X}}\triangleq V_{AB_o}\bar{\bar{X}}.$$
Using the semi-tensor product of reduced biquaternion matrices and LC-representation method, we can transform the reduced biquaternion matrix equation into complex linear equations, and then, according to the special structure of the solution, the redundant elements are eliminated using the GH-representation method, so as to simplify the operation. Finally, we can use the following existing classical results of matrix equations to solve the equation.
Lemma 4.1. [24] The least squares solutions of the matrix equation $Ax=b$ with $A\in\mathbb{R}^{m\times n}$ and $b\in\mathbb{R}^m$ can be represented as
$$x=A^{\dagger}b+(I-A^{\dagger}A)y,$$
where $y\in\mathbb{R}^n$ is an arbitrary vector. The minimal norm least squares solution of the matrix equation $Ax=b$ is $A^{\dagger}b$.
Lemma 4.2. [24] The matrix equation $Ax=b$ with $A\in\mathbb{R}^{m\times n}$ and $b\in\mathbb{R}^m$ has a solution $x\in\mathbb{R}^n$ if and only if
$$AA^{\dagger}b=b.$$
In that case it has the general solution
$$x=A^{\dagger}b+(I-A^{\dagger}A)y,$$
where $y\in\mathbb{R}^n$ is an arbitrary vector. The minimal norm solution of the matrix equation $Ax=b$ is $A^{\dagger}b$.
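Both lemmas correspond directly to what a numerical pseudoinverse delivers. A minimal sketch (ours) illustrating the minimal norm least squares solution and the general-solution parametrization:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 5))
b = rng.standard_normal(8)

x_min = np.linalg.pinv(A) @ b                  # minimal norm least squares solution A^+ b
# General least squares solution: x = A^+ b + (I - A^+ A) y for an arbitrary y.
y = rng.standard_normal(5)
x_gen = x_min + (np.eye(5) - np.linalg.pinv(A) @ A) @ y
assert np.allclose(A.T @ (A @ x_gen - b), 0)   # both satisfy the normal equations
```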
For convenience of exposition, we introduce the following notation. Let
$$X=X_{11}+X_{12}i+X_{13}j+X_{14}k=X_1+X_2j,\qquad \overrightarrow{X}=[X_{11}\ X_{12}\ X_{13}\ X_{14}],\qquad \gamma=\chi(B_p^T\otimes A_p),$$
$$\breve{H}=UV_{AH},\quad \breve{P}=UV_{AP},\quad \breve{B}_e=UV_{AB_e},\quad \breve{B}_o=UV_{AB_o},$$
$$\vartheta=\begin{pmatrix}I_{n^2}&i\,I_{n^2}&0&0\\0&0&I_{n^2}&i\,I_{n^2}\end{pmatrix},\qquad U=\begin{pmatrix}\sum_{p=1}^{l}\mathrm{Re}(\gamma\vartheta)\\ \sum_{p=1}^{l}\mathrm{Im}(\gamma\vartheta)\end{pmatrix},\qquad W=\begin{pmatrix}\mathrm{Re}(\chi_c(V_c(C)))\\ \mathrm{Im}(\chi_c(V_c(C)))\end{pmatrix}.$$
Theorem 4.1. Suppose $A_p\in\mathbb{Q}_{RB}^{m\times n}$, $B_p\in\mathbb{Q}_{RB}^{n\times q}$ $(p=1,\cdots,l)$, $C\in\mathbb{Q}_{RB}^{m\times q}$. Then the set $S_{AH}$ of Problem 1 can be represented as
$$S_{AH}=\Big\{X\in \mathrm{AH}_{RB}^{n\times n}\ \Big|\ V_c(\overrightarrow{X})=V_{AH}\breve{H}^{\dagger}W+V_{AH}(I_{2n^2+n}-\breve{H}^{\dagger}\breve{H})y\Big\},\qquad(4.1)$$
where $y\in\mathbb{R}^{2n^2+n}$ is arbitrary, and the minimal norm least squares anti-Hermitian solution $X_{AH}$ satisfies
$$V_c(\overrightarrow{X}_{AH})=V_{AH}\breve{H}^{\dagger}W.\qquad(4.2)$$
Proof.
$$\begin{aligned}
\Big\|\sum_{p=1}^{l}A_pXB_p-C\Big\|_{(F)}
&=\Big\|\sum_{p=1}^{l}V_c(A_pXB_p)-V_c(C)\Big\|_{(F)}
=\Big\|\sum_{p=1}^{l}(B_p^T\otimes A_p)V_c(X)-V_c(C)\Big\|_{(F)}\\
&=\Big\|\sum_{p=1}^{l}\chi_c\big((B_p^T\otimes A_p)V_c(X)\big)-\chi_c(V_c(C))\Big\|_F
=\Big\|\sum_{p=1}^{l}\chi(B_p^T\otimes A_p)\chi_c(V_c(X))-\chi_c(V_c(C))\Big\|_F\\
&=\Big\|\sum_{p=1}^{l}\chi(B_p^T\otimes A_p)\begin{pmatrix}I_{n^2}&i\,I_{n^2}&0&0\\0&0&I_{n^2}&i\,I_{n^2}\end{pmatrix}\begin{pmatrix}V_c(X_{11})\\V_c(X_{12})\\V_c(X_{13})\\V_c(X_{14})\end{pmatrix}-\chi_c(V_c(C))\Big\|_F\\
&=\Big\|\sum_{p=1}^{l}\gamma\vartheta\,V_c(\overrightarrow{X})-\chi_c(V_c(C))\Big\|_F\\
&=\Big\|\sum_{p=1}^{l}\big(\mathrm{Re}(\gamma\vartheta)+\mathrm{Im}(\gamma\vartheta)i\big)V_c(\overrightarrow{X})-\big(\mathrm{Re}(\chi_c(V_c(C)))+\mathrm{Im}(\chi_c(V_c(C)))i\big)\Big\|_F\\
&=\Big\|\begin{pmatrix}\sum_{p=1}^{l}\mathrm{Re}(\gamma\vartheta)V_c(\overrightarrow{X})-\mathrm{Re}(\chi_c(V_c(C)))\\ \sum_{p=1}^{l}\mathrm{Im}(\gamma\vartheta)V_c(\overrightarrow{X})-\mathrm{Im}(\chi_c(V_c(C)))\end{pmatrix}\Big\|_F\\
&=\Big\|\begin{pmatrix}\sum_{p=1}^{l}\mathrm{Re}(\gamma\vartheta)\\ \sum_{p=1}^{l}\mathrm{Im}(\gamma\vartheta)\end{pmatrix}V_c(\overrightarrow{X})-\begin{pmatrix}\mathrm{Re}(\chi_c(V_c(C)))\\ \mathrm{Im}(\chi_c(V_c(C)))\end{pmatrix}\Big\|_F.
\end{aligned}$$
From the GH-representation matrix of the anti-Hermitian matrix, we obtain
$$V_c(\overrightarrow{X})=\begin{pmatrix}V_c(X_{11})\\V_c(X_{12})\\V_c(X_{13})\\V_c(X_{14})\end{pmatrix}=\begin{pmatrix}H_{AS}&0&0&0\\0&H_S&0&0\\0&0&H_S&0\\0&0&0&H_S\end{pmatrix}\bar{\bar{X}}\triangleq V_{AH}\bar{\bar{X}}.$$
Then
$$\Big\|\begin{pmatrix}\sum_{p=1}^{l}\mathrm{Re}(\gamma\vartheta)\\ \sum_{p=1}^{l}\mathrm{Im}(\gamma\vartheta)\end{pmatrix}V_c(\overrightarrow{X})-\begin{pmatrix}\mathrm{Re}(\chi_c(V_c(C)))\\ \mathrm{Im}(\chi_c(V_c(C)))\end{pmatrix}\Big\|_F=\|UV_{AH}\bar{\bar{X}}-W\|_F=\|\breve{H}\bar{\bar{X}}-W\|_F,$$
and thus
$$\Big\|\sum_{p=1}^{l}A_pXB_p-C\Big\|_{(F)}=\min\quad\Longleftrightarrow\quad \|\breve{H}\bar{\bar{X}}-W\|_F=\min.$$
For the real linear equations
$$\breve{H}\bar{\bar{X}}=W,$$
Lemma 4.1 gives the least squares solution
$$\bar{\bar{X}}=\breve{H}^{\dagger}W+(I_{2n^2+n}-\breve{H}^{\dagger}\breve{H})y,\qquad(4.3)$$
where $y\in\mathbb{R}^{2n^2+n}$ is arbitrary; (4.1) is obtained by multiplying both sides of (4.3) by $V_{AH}$. Noticing that
$$\min_{X\in \mathrm{AH}_{RB}^{n\times n}}\|X\|_{(F)}=\min_{V_c(\overrightarrow{X})\in\mathbb{R}^{4n^2}}\|V_c(\overrightarrow{X})\|_F,$$
we obtain that the minimal norm least squares anti-Hermitian solution $X_{AH}$ of the reduced biquaternion matrix equation (1.1) satisfies
$$V_c(\overrightarrow{X}_{AH})=V_{AH}\breve{H}^{\dagger}W.\qquad(4.4)$$
From the above proof process, we can obtain the compatible condition for the anti-Hermitian solution of reduced biquaternion matrix equation (1.1).
Corollary 4.1. Suppose $A_p\in\mathbb{Q}_{RB}^{m\times n}$, $B_p\in\mathbb{Q}_{RB}^{n\times q}$ $(p=1,\cdots,l)$, $C\in\mathbb{Q}_{RB}^{m\times q}$, and let $\breve{H}$ be as in Theorem 4.1. Then equation (1.1) has a solution $X\in \mathrm{AH}_{RB}^{n\times n}$ if and only if
$$(\breve{H}\breve{H}^{\dagger}-I_{4mq})W=0.\qquad(4.5)$$
In this case, the general solution of equation (1.1) can be expressed as
$$V_c(\overrightarrow{X})=V_{AH}\breve{H}^{\dagger}W+V_{AH}(I_{2n^2+n}-\breve{H}^{\dagger}\breve{H})y,\quad \forall y\in\mathbb{R}^{2n^2+n},$$
and the minimal norm anti-Hermitian solution $\ddot{X}_{AH}$ satisfies
$$V_c(\overrightarrow{\ddot{X}}_{AH})=V_{AH}\breve{H}^{\dagger}W.\qquad(4.6)$$
Proof. Since
$$\Big\|\sum_{p=1}^{l}A_pXB_p-C\Big\|_{(F)}=\|\breve{H}\bar{\bar{X}}-W\|_F,$$
and at a least squares solution $\bar{\bar{X}}=\breve{H}^{\dagger}W+(I-\breve{H}^{\dagger}\breve{H})y$ the residual equals
$$\|\breve{H}\breve{H}^{\dagger}W-W\|_F=\|(\breve{H}\breve{H}^{\dagger}-I_{4mq})W\|_F,$$
we have
$$\Big\|\sum_{p=1}^{l}A_pXB_p-C\Big\|_{(F)}=0\ \Longleftrightarrow\ \|(\breve{H}\breve{H}^{\dagger}-I_{4mq})W\|_F=0\ \Longleftrightarrow\ (\breve{H}\breve{H}^{\dagger}-I_{4mq})W=0,$$
which gives (4.5). Moreover, using Lemma 4.2, we obtain the expression of the general solution and the minimal norm solution.
From the proof of Theorem 4.1 we see that the only essential difference among Problems 1, 2 and 3 lies in the GH-representation matrix of the solution. Therefore, for Problems 2 and 3 we readily obtain the following conclusions.
Theorem 4.2. Suppose $A_p\in\mathbb{Q}_{RB}^{m\times n}$, $B_p\in\mathbb{Q}_{RB}^{n\times q}$ $(p=1,\cdots,l)$, $C\in\mathbb{Q}_{RB}^{m\times q}$. Then the set $S_{AP}$ of Problem 2 can be represented as
$$S_{AP}=\Big\{X\in \mathrm{AP}_{RB}^{n\times n}\ \Big|\ V_c(\overrightarrow{X})=V_{AP}\breve{P}^{\dagger}W+V_{AP}(I_{2n^2+n}-\breve{P}^{\dagger}\breve{P})y\Big\},\qquad(4.7)$$
where $y\in\mathbb{R}^{2n^2+n}$ is arbitrary, and the minimal norm least squares Skew-Persymmetric solution $X_{AP}$ satisfies
$$V_c(\overrightarrow{X}_{AP})=V_{AP}\breve{P}^{\dagger}W.\qquad(4.8)$$
Corollary 4.2. Suppose $A_p\in\mathbb{Q}_{RB}^{m\times n}$, $B_p\in\mathbb{Q}_{RB}^{n\times q}$ $(p=1,\cdots,l)$, $C\in\mathbb{Q}_{RB}^{m\times q}$, and let $\breve{P}$ be as in Theorem 4.2. Then equation (1.1) has a solution $X\in \mathrm{AP}_{RB}^{n\times n}$ if and only if
$$(\breve{P}\breve{P}^{\dagger}-I_{4mq})W=0.\qquad(4.9)$$
In this case, the general solution of equation (1.1) can be expressed as
$$V_c(\overrightarrow{X})=V_{AP}\breve{P}^{\dagger}W+V_{AP}(I_{2n^2+n}-\breve{P}^{\dagger}\breve{P})y,\quad \forall y\in\mathbb{R}^{2n^2+n},$$
and the minimal norm Skew-Persymmetric solution $\ddot{X}_{AP}$ satisfies
$$V_c(\overrightarrow{\ddot{X}}_{AP})=V_{AP}\breve{P}^{\dagger}W.\qquad(4.10)$$
Theorem 4.3. Suppose $A_p\in\mathbb{Q}_{RB}^{m\times n}$, $B_p\in\mathbb{Q}_{RB}^{n\times q}$ $(p=1,\cdots,l)$, $C\in\mathbb{Q}_{RB}^{m\times q}$. When $n$ is even, the set $S_{AB}$ of Problem 3 can be represented as
$$S_{AB}=\Big\{X\in \mathrm{AB}_{RB}^{n\times n}\ \Big|\ V_c(\overrightarrow{X})=V_{AB_e}\breve{B}_e^{\dagger}W+V_{AB_e}(I_{n^2+n}-\breve{B}_e^{\dagger}\breve{B}_e)y\Big\},\qquad(4.11)$$
where $y\in\mathbb{R}^{n^2+n}$ is arbitrary, and the minimal norm least squares Skew-Bisymmetric solution $X_{AB}$ satisfies
$$V_c(\overrightarrow{X}_{AB})=V_{AB_e}\breve{B}_e^{\dagger}W.\qquad(4.12)$$
When $n$ is odd, the set $S_{AB}$ of Problem 3 can be represented as
$$S_{AB}=\Big\{X\in \mathrm{AB}_{RB}^{n\times n}\ \Big|\ V_c(\overrightarrow{X})=V_{AB_o}\breve{B}_o^{\dagger}W+V_{AB_o}(I_{n^2+n+1}-\breve{B}_o^{\dagger}\breve{B}_o)y\Big\},\qquad(4.13)$$
where $y\in\mathbb{R}^{n^2+n+1}$ is arbitrary, and the minimal norm least squares Skew-Bisymmetric solution $X_{AB}$ satisfies
$$V_c(\overrightarrow{X}_{AB})=V_{AB_o}\breve{B}_o^{\dagger}W.\qquad(4.14)$$
Corollary 4.3. Suppose $A_p\in\mathbb{Q}_{RB}^{m\times n}$, $B_p\in\mathbb{Q}_{RB}^{n\times q}$ $(p=1,\cdots,l)$, $C\in\mathbb{Q}_{RB}^{m\times q}$. When $n$ is even and $\breve{B}_e$ is as in Theorem 4.3, equation (1.1) has a solution $X\in \mathrm{AB}_{RB}^{n\times n}$ if and only if
$$(\breve{B}_e\breve{B}_e^{\dagger}-I_{4mq})W=0.\qquad(4.15)$$
In this case, the general solution of equation (1.1) can be expressed as
$$V_c(\overrightarrow{X})=V_{AB_e}\breve{B}_e^{\dagger}W+V_{AB_e}(I_{n^2+n}-\breve{B}_e^{\dagger}\breve{B}_e)y,\quad \forall y\in\mathbb{R}^{n^2+n},$$
and the minimal norm Skew-Bisymmetric solution $\ddot{X}_{AB}$ satisfies
$$V_c(\overrightarrow{\ddot{X}}_{AB})=V_{AB_e}\breve{B}_e^{\dagger}W.\qquad(4.16)$$
When $n$ is odd and $\breve{B}_o$ is as in Theorem 4.3, equation (1.1) has a solution $X\in \mathrm{AB}_{RB}^{n\times n}$ if and only if
$$(\breve{B}_o\breve{B}_o^{\dagger}-I_{4mq})W=0.\qquad(4.17)$$
In this case, the general solution of equation (1.1) can be expressed as
$$V_c(\overrightarrow{X})=V_{AB_o}\breve{B}_o^{\dagger}W+V_{AB_o}(I_{n^2+n+1}-\breve{B}_o^{\dagger}\breve{B}_o)y,\quad \forall y\in\mathbb{R}^{n^2+n+1},$$
and the minimal norm Skew-Bisymmetric solution $\ddot{X}_{AB}$ satisfies
$$V_c(\overrightarrow{\ddot{X}}_{AB})=V_{AB_o}\breve{B}_o^{\dagger}W.\qquad(4.18)$$
In this section, we give an algorithm for calculating the minimal norm least squares anti-Hermitian/Skew-Persymmetric/Skew-Bisymmetric solution of the reduced biquaternion matrix equation (1.1) and verify the effectiveness of the method proposed in this paper through numerical examples. Then, we compare the proposed method with the real vector representation method in [19] to illustrate the improvement of our algorithm.
Algorithm 1 Calculate the minimal norm least squares anti-Hermitian/Skew-Persymmetric/Skew-Bisymmetric solution of reduced biquaternion matrix equation (1.1). |
Require: $A_p\in\mathbb{Q}_{RB}^{m\times n}$, $B_p\in\mathbb{Q}_{RB}^{n\times q}$, $C\in\mathbb{Q}_{RB}^{m\times q}$; $H_S/H_{AS}$; $H_P/H_{AP}$; $H_{B_1}/H_{AB_1}$, $H_{B_2}/H_{AB_2}$; $\vartheta$; |
Ensure: Vc(→XAH)/Vc(→XAP)/Vc(→XAB); |
1: Fix the form of χ satisfying the Definition 3.2 and calculate the matrix U; |
2: if X∈AHn×nRB, then |
3: Calculate the VAH of GH-representation matrix of anti-Hermitian matrix, then calculate ˘H; |
4: Calculate the minimal norm least squares anti-Hermitian solution according to (4.2); |
5: else if X∈APn×nRB, then |
6: Calculate the VAP of GH-representation matrix of Skew-Persymmetric matrix, then calculate ˘P; |
7: Calculate the minimal norm least squares Skew-Persymmetric solution according to (4.8); |
8: else if X∈ABn×nRB, then |
9: Calculate the VABe/VABo of GH-representation matrix of Skew-Bisymmetric matrix, then calculate ˘Be/˘Bo; |
10: Calculate the minimal norm least squares Skew-Bisymmetric solution according to (4.12)/ (4.14); |
11: end if |
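To convey the shape of Algorithm 1 without the reduced biquaternion machinery, here is a simplified complex analogue (entirely our own sketch, not the paper's code): it parametrizes complex anti-Hermitian matrices by real coordinates, vectorizes $AXB=C$ with the Kronecker identity, stacks real and imaginary parts into one real system, and solves it by a pseudoinverse. "Minimal norm" here refers to the chosen coordinates, which differ from the Frobenius norm only by fixed basis scalings.

```python
import numpy as np

def antiherm_basis(n):
    """Real basis of complex anti-Hermitian n x n matrices: X = K + i*S with K real anti-symmetric, S real symmetric."""
    basis = []
    for j in range(n):
        for i in range(j + 1, n):              # anti-symmetric part
            E = np.zeros((n, n), dtype=complex)
            E[i, j], E[j, i] = 1.0, -1.0
            basis.append(E)
    for j in range(n):
        for i in range(j, n):                  # i * symmetric part
            E = np.zeros((n, n), dtype=complex)
            E[i, j] = E[j, i] = 1j
            basis.append(E)
    return basis

def ls_antiherm(A, B, C):
    """Least squares anti-Hermitian X for ||A X B - C||_F via a real parametrization
    (a simplified complex stand-in for the pipeline of Algorithm 1)."""
    n = A.shape[1]
    Phi = np.column_stack([E.flatten(order='F') for E in antiherm_basis(n)])
    M = np.kron(B.T, A) @ Phi                  # vec(A X B) = (B^T kron A) vec(X),  vec(X) = Phi @ theta
    Mr = np.vstack([M.real, M.imag])           # stacked real system in the real coordinates theta
    cr = np.concatenate([C.flatten(order='F').real, C.flatten(order='F').imag])
    theta = np.linalg.pinv(Mr) @ cr            # minimal norm least squares coordinates
    return (Phi @ theta).reshape(n, n, order='F')

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
B = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))
X_true = antiherm_basis(3)[0] + 2 * antiherm_basis(3)[4]
X_hat = ls_antiherm(A, B, A @ X_true @ B)
assert np.allclose(X_hat, -X_hat.conj().T)            # the computed solution is anti-Hermitian
assert np.allclose(A @ X_hat @ B, A @ X_true @ B)     # a consistent system is solved exactly
```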
Example 5.1. Let m=n=p=5K,K=1:10, for fixed A∈Qm×nRB, B∈Qn×pRB, X∗∈AHn×nRB/APn×nRB/ABn×nRB, compute
C=AX∗B. |
For AXB=C with unknown X, by Algorithm 1, we can obtain the numerical solution X. Denote the error between calculated solution X and the exact solution X∗ as ε=log10‖X−X∗‖(F) and ε is recorded in Figure 1.
It can be seen from the error analysis charts that the method proposed in this paper is effective.
Next, we will make a comparison between the method in this paper and the real vector representation method [19].
Example 5.2. Let m=n=p=K,K=1:14, for fixed A∈Qm×nRB, B∈Qn×pRB, X∗∈AHn×nRB, compute
C=AX∗B. |
For AXB=C with unknown X, numerical solution X is obtained by using the method in this paper and the method in [19], respectively. Note down the CPU times of two methods. Detailed results are shown in Figure 2.
From Figure 2, we observe that the operation time of our method is significantly better than that of the method in [19].
With the increasing role of color images in daily life, color image restoration has become a hot research field. In recent years, reduced biquaternion has been widely used in color image processing because of its good structural characteristics [6,9,17,25].
In 2004, Pei [6] applied the reduced biquaternion model to image processing. A reduced biquaternion consists of one real part and three imaginary parts, while each pixel of a color image consists of three basic components: red, green and blue. Therefore, an image is usually modeled as a pure imaginary reduced biquaternion, that is
q(x,y)=r(x,y)i+g(x,y)j+b(x,y)k, |
where r(x,y),g(x,y) and b(x,y) are the red, green and blue values of the pixel (x,y), respectively. Thus a color image with m rows and n columns can be represented by a pure imaginary reduced biquaternion matrix
Q=(qij)m×n=Ri+Gj+Bk, qij∈QRB. |
Image restoration aims to retrieve information from degraded images, that is, to remove or reduce the degradation caused by noise, out-of-focus blurring and other factors introduced in the process of image acquisition. A linear discrete model of image restoration is the matrix-vector equation
g=Km+n, |
where g is an observed image, m is the true or ideal image, n is additive noise, and K is a matrix that represents the blurring phenomena. Given g, K, and in some cases, statistical information about the noise, the methods used in image restoration aim to construct an approximation to m. However, in most cases, the noise n is unknown. We wish to find m′ such that
$$\|n\|_F=\|Km'-g\|_F=\min_{m}\|Km-g\|_F.$$
The problem described by the above model is the problem of the minimal norm least squares solution of the reduced biquaternion matrix equation $\sum_{p=1}^{l}A_pXB_p=C$ in the special case $l=1$ with $B_1$ equal to the identity matrix.
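Under the simplifying assumption that the blurring matrix $K$ is real and acts channel-wise, the restoration reduces to three ordinary least squares problems solved with a pseudoinverse; a rough sketch (ours, with synthetic data in place of real images, and illustrative sizes only):

```python
import numpy as np

def restore_channels(K, g_channels):
    """Least squares restoration m' = K^+ g applied channel-wise (the l = 1, B = I case)."""
    K_pinv = np.linalg.pinv(K)
    return [K_pinv @ g for g in g_channels]

rng = np.random.default_rng(4)
K = rng.standard_normal((64, 64))                      # stand-in for the blurring matrix
m_true = [rng.random((64, 64)) for _ in range(3)]      # red, green, blue channels
g = [K @ m for m in m_true]                            # blurred observations (noise-free here)
m_hat = restore_channels(K, g)
print(max(np.linalg.norm(mh - mt) for mh, mt in zip(m_hat, m_true)))
```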
Algorithm 2 Calculate the minimal norm least squares pure imaginary anti-Hermitian/ Skew-Persymmetric/ Skew-Bisymmetric solution of reduced biquaternion matrix equation AX=C. |
Require: $A\in\mathbb{Q}_{RB}^{m\times n}$, $C\in\mathbb{Q}_{RB}^{n\times q}$; $H_S$; $H_P$; $H_{B_1}/H_{B_2}$; $\vartheta'=\begin{pmatrix}i\,I_{n^2}&0&0\\0&I_{n^2}&i\,I_{n^2}\end{pmatrix}$; |
Ensure: Vc(→Xah)/Vc(→Xap)/Vc(→Xab); |
1: Fix the form of χ satisfying the Definition 3.2; |
2: Calculate $W$, $\hat{A}=I_n\otimes A$, $u'=\begin{pmatrix}\mathrm{Re}(\chi(\hat{A})\vartheta')\\ \mathrm{Im}(\chi(\hat{A})\vartheta')\end{pmatrix}$; |
3: if X is pure imaginary anti-Hermitian matrix, then |
4: Calculate VAH′=blkdiag(HS,HS,HS), and then calculate ˘h′=u′VAH′; |
5: Calculate the minimal norm least squares pure imaginary anti-Hermitian solution Xah satisfies |
Vc(→Xah)=VAH′˘h′†W; |
6: else if X is pure imaginary Skew-Persymmetric matrix, then |
7: Calculate VAP′=blkdiag(HP,HP,HP), and then calculate ˘p′=u′VAP′; |
8: Calculate the minimal norm least squares pure imaginary Skew-Persymmetric solution Xap satisfies |
Vc(→Xap)=VAP′˘p′†W; |
9: else if X is pure imaginary Skew-Bisymmetric matrix, then |
10: Calculate the VABe′=blkdiag(HB1,HB1,HB1)/VABo′=blkdiag(HB2,HB2,HB2), and then calculate ˘be′=u′VABe′/˘bo′=u′VABo′; |
11: Calculate the minimal norm least squares pure imaginary Skew-Bisymmetric solution Xab, which satisfies |
Vc(→Xab)=VABe′˘be′†W/VABo′˘bo′†W; |
12: end if |
Example 6.1. Three 64×64 ideal color images are given. Let $m=(m_r,m_g,m_b)$ be the image matrix; $m$ can be represented as the pure imaginary matrix $m=m_ri+m_gj+m_bk$. Using LEN=15, THETA=30, PSF=fspecial('motion',LEN,THETA), we blur the channel $m_r$ and obtain the disturbed matrix $g_r$; then $K=g_rm_r^{\dagger}$. Using the matrix $K$, we obtain the disturbed image $g=(g_r,g_g,g_b)=Km=K(m_r,m_g,m_b)$. Through MATLAB's "reshape" command, we obtain the corresponding restored color image $m'=(m'_r,m'_g,m'_b)$. The errors of the three channels are denoted by $\epsilon_r$, $\epsilon_g$, $\epsilon_b$, respectively, and the results are shown in Table 1.
 | ϵr | ϵg | ϵb |
Figure 6.1 | 3.5112e−10 | 5.4348e−11 | 5.0430e−11 | |
Figure 6.2 | 6.7334e−11 | 1.4514e−11 | 1.9030e−11 | |
Figure 6.3 | 7.4626e−12 | 1.1468e−11 | 1.1538e−11 |
In this paper, we use the semi-tensor product of reduced biquaternion matrices to obtain the algebraic expression of the isomorphism between the set of reduced biquaternion matrices and the corresponding set of complex representation matrices, and obtain some new conclusions of reduced biquaternion matrix under the vector operator, so that the problem of the reduced biquaternion matrix equation can be equivalently transformed into the problem of the reduced biquaternion linear equations, further transformed into real linear equations. Through the GH-representation method we proposed, the number of variables in the real linear equations can be reduced, and the operation can be simplified. Finally, the proposed method is applied to color image restoration.
This work is supported by the National Natural Science Foundation of China under grant 62176112, the Natural Science Foundation of Shandong Province under grants ZR2020MA053, ZR2022MA030, and the Discipline with Strong Characteristics of Liaocheng University–Intelligent Science and Technology under grant 319462208. The authors are grateful to the referees for their careful reading and helpful suggestion, which have led to considerable improvement of the presentation of this paper.
The authors declare that there is no conflict of interest.
[1] L. Cohen, The uncertainty principle in signal analysis, Proceedings of the IEEE-SP International Symposium on Time-Frequency and Time-Scale Analysis, 1994, 182–185. http://dx.doi.org/10.1109/TFSA.1994.467263
[2] L. Cohen, Time-frequency analysis: theory and application, New Jersey: Prentice-Hall Inc., 1995.
[3] P. Dang, Tighter uncertainty principles for periodic signals in terms of frequency, Math. Method. Appl. Sci., 38 (2015), 365–379. http://dx.doi.org/10.1002/mma.3075
[4] P. Dang, G. Deng, T. Qian, A sharper uncertainty principle, J. Funct. Anal., 265 (2013), 2239–2266. http://dx.doi.org/10.1016/j.jfa.2013.07.023
[5] P. Dang, W. Mai, W. Pan, Uncertainty principle in random quaternion domains, Digit. Signal Process., 136 (2023), 103988. http://dx.doi.org/10.1016/j.dsp.2023.103988
[6] P. Dang, T. Qian, Y. Yang, Extra-strong uncertainty principles in relation to phase derivative for signals in Euclidean spaces, J. Math. Anal. Appl., 437 (2016), 912–940. http://dx.doi.org/10.1016/j.jmaa.2016.01.039
[7] P. Dang, T. Qian, Z. You, Hardy-Sobolev spaces decomposition in signal analysis, J. Fourier Anal. Appl., 17 (2011), 36–64. http://dx.doi.org/10.1007/s00041-010-9132-7
[8] P. Dang, S. Wang, Uncertainty principles for images defined on the square, Math. Method. Appl. Sci., 40 (2017), 2475–2490. http://dx.doi.org/10.1002/mma.4170
[9] Y. Ding, Modern analysis foundation (Chinese), Beijing: Beijing Normal University Press, 2008.
[10] D. Gabor, Theory of communication, Journal of the Institution of Electrical Engineers-Part III: Radio and Communication Engineering, 93 (1946), 429–457.
[11] S. Goh, C. Micchelli, Uncertainty principle in Hilbert spaces, J. Fourier Anal. Appl., 8 (2002), 335–374. http://dx.doi.org/10.1007/s00041-002-0017-2
[12] Y. Katznelson, An introduction to harmonic analysis, 3 Eds., Cambridge: Cambridge University Press, 2004. http://dx.doi.org/10.1017/CBO9781139165372
[13] K. Kou, Y. Yang, C. Zou, Uncertainty principle for measurable sets and signal recovery in quaternion domains, Math. Method. Appl. Sci., 40 (2017), 3892–3900. http://dx.doi.org/10.1002/mma.4271
[14] F. Qu, G. Deng, A sharper uncertainty principle for L2(Rn) space (Chinese), Acta Math. Sci., 38 (2018), 631–640.
[15] X. Wei, F. Qu, H. Liu, X. Bian, Uncertainty principles for doubly periodic functions, Math. Method. Appl. Sci., 45 (2022), 6499–6514. http://dx.doi.org/10.1002/mma.8182
[16] Y. Yang, P. Dang, T. Qian, Stronger uncertainty principles for hypercomplex signals, Complex Var. Elliptic, 60 (2015), 1696–1711. http://dx.doi.org/10.1080/17476933.2015.1041938
[17] Y. Yang, P. Dang, T. Qian, Tighter uncertainty principles based on quaternion Fourier transform, Adv. Appl. Clifford Algebras, 26 (2016), 479–497. http://dx.doi.org/10.1007/s00006-015-0579-0