In this paper, we mainly investigate the random convolution sampling stability for signals in a multiply generated shift invariant subspace of a weighted mixed Lebesgue space. Under some restrictive conditions on the generators and the convolution function, we show that the defined multiply generated shift invariant subspace can be approximated by a finite dimensional subspace. Furthermore, with overwhelming probability, the random convolution sampling stability holds for signals in some subset of the defined multiply generated shift invariant subspace when the sampling size is large enough.
Citation: Suping Wang. The random convolution sampling stability in multiply generated shift invariant subspace of weighted mixed Lebesgue space[J]. AIMS Mathematics, 2022, 7(2): 1707-1725. doi: 10.3934/math.2022098
Random sampling problems arise widely in compressed sensing [5,7], learning theory [17] and image processing [6]. Recently, many random sampling results for signals in different subspaces of the classical Lebesgue space have been presented, such as the bandlimited space [2,3], shift invariant subspaces [19], multiply generated shift invariant subspaces [18] and reproducing kernel subspaces [14,16]. However, an obvious shortcoming of the classical Lebesgue space is that it imposes the same control over all the variables of a function [12]. Thus, when considering random sampling problems for time-varying signals which depend on independent quantities with different properties, the mixed Lebesgue space, which allows separate integrability conditions for each variable, is a much more suitable tool to model such signals.
To date, abundant sampling results for many subspaces of a mixed Lebesgue space have been presented [11,12,13]. However, most of those results are obtained for a pre-given relatively separated sampling set whose sampling gap satisfies certain restrictive conditions. As for random sampling results, which are based on a randomly selected sampling set, there are very few. Furthermore, an obvious precondition of the existing sampling results is that the signals are integrable in the corresponding spaces, which fails for non-decaying or infinitely growing signals. Based on these two facts and the properties of moderated weight functions, which can control the growth or decay of signals [1,9], we use the weighted mixed Lebesgue space to model such non-decaying or infinitely growing signals so that the corresponding random sampling problem can be well solved.
Generally, the sampling problem consists of two aspects. The first is to find proper conditions which ensure that a given sampling set yields sampling stability. The second is to design an efficient reconstruction algorithm to recover the signals. In this paper, we focus on the investigation of sampling stability; the reconstruction algorithm will be the subject of future work. In the following, some essential definitions and properties are presented so that the random convolution sampling problem is well understood.
The weighted mixed Lebesgue space L^{p,q}_{\nu}(\mathbb{R}^{d+1}) consists of all measurable functions f = f(x,y) defined on \mathbb{R}\times\mathbb{R}^{d} such that
\begin{equation} \|f\|_{L^{p,q}_{\nu}(\mathbb{R}^{d+1})} = \Big\|\big\|f(x,y)\nu(x,y)\big\|_{L^{q}_{y}(\mathbb{R}^{d})}\Big\|_{L^{p}_{x}(\mathbb{R})} < \infty, \quad 1\leq p,q\leq\infty. \end{equation} | (1.1) |
The corresponding weighted sequence space \ell^{p,q}_{\nu}(\mathbb{Z}^{d+1}) is defined by
\begin{equation} \ell^{p,q}_{\nu}(\mathbb{Z}^{d+1}) = \Big\{c: \|c\|_{\ell^{p,q}_{\nu}(\mathbb{Z}^{d+1})} = \Big\|\big\|c(k_{1},k_{2})\nu(k_{1},k_{2})\big\|_{\ell^{q}_{k_{2}}(\mathbb{Z}^{d})}\Big\|_{\ell^{p}_{k_{1}}(\mathbb{Z})} < \infty\Big\}, \quad 1\leq p,q\leq\infty. \end{equation} | (1.2) |
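The mixed norm measures the two groups of variables separately: an inner \ell^{q} (or L^{q}) norm in the second variable followed by an outer \ell^{p} (or L^{p}) norm in the first. The short Python sketch below is a minimal illustration of the discrete norm (1.2) for d = 1; the sequence c and the weight \nu in it are illustrative choices of ours, not objects from the paper.

```python
import numpy as np

# Minimal sketch of the weighted mixed (p, q)-norm of Eq. (1.2) for d = 1.
# The sequence c and the weight nu below are illustrative choices only.

def mixed_norm(c, nu, p, q):
    """Compute || || c(k1, k2) nu(k1, k2) ||_{l^q_{k2}} ||_{l^p_{k1}}."""
    weighted = np.abs(c) * nu                            # |c(k1, k2)| nu(k1, k2)
    inner = np.sum(weighted ** q, axis=1) ** (1.0 / q)   # l^q norm in k2 (rows indexed by k1)
    return np.sum(inner ** p) ** (1.0 / p)               # l^p norm in k1

k1 = np.arange(-10, 11)
k2 = np.arange(-10, 11)
K1, K2 = np.meshgrid(k1, k2, indexing="ij")
c = np.exp(-0.1 * (K1 ** 2 + K2 ** 2))                   # a rapidly decaying test sequence
nu = (1.0 + np.abs(K1) + np.abs(K2)) ** 0.5              # a polynomially growing weight

print(mixed_norm(c, nu, p=2, q=3))                       # finite, so c belongs to the weighted space
```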
In this paper, the weight function \nu(x,y) is assumed to be continuous, positive, symmetric and moderated with respect to the weight function \omega(x,y), i.e., there exists a constant C > 0 such that
\begin{equation} 0 < \nu(x+x', y+y') \leq C\nu(x,y)\omega(x',y'), \quad (x,y), (x',y')\in\mathbb{R}\times\mathbb{R}^{d}. \end{equation} | (1.3) |
Unless otherwise specified, the constant C without any subscript or superscript in this paper always stands for the above mentioned constant in (1.3). Furthermore, we also assume that the weight function \omega(x,y) is continuous, positive, symmetric and submultiplicative, that is, for all (x,y), (x',y')\in\mathbb{R}\times\mathbb{R}^{d},
0 < \omega(x+x', y+y') \leq \omega(x,y)\omega(x',y').
A classical example of such weight functions is
m(x) = e^{a|x|^{b}}(1+|x|)^{s}\big(\log(e+|x|)\big)^{t}, \quad x\in\mathbb{R}^{d+1}.
If a, s, t\geq 0 and 0\leq b\leq 1, then the weight function m(x) is submultiplicative. If a, s, t\in\mathbb{R} and 0\leq b\leq 1, then the weight function m(x) is moderated. For further information about weight functions, we refer to [8].
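As a quick numerical sanity check of the submultiplicative property, the following sketch evaluates the ratio m(x+x')/(m(x)m(x')) for the weight above; the parameter choices a = 0.5, b = 1, s = 1, t = 1 and the restriction to scalar arguments are assumptions made for the illustration, not values from the paper.

```python
import numpy as np

# Check m(x + x') <= m(x) m(x') numerically for a = 0.5, b = 1, s = 1, t = 1
# (illustrative parameters) on random scalar pairs.

def m(x, a=0.5, b=1.0, s=1.0, t=1.0):
    x = np.abs(x)
    return np.exp(a * x ** b) * (1.0 + x) ** s * np.log(np.e + x) ** t

rng = np.random.default_rng(0)
x, xp = rng.uniform(-5, 5, size=(2, 10_000))
ratio = m(x + xp) / (m(x) * m(xp))
print("max ratio m(x+x')/(m(x)m(x')):", ratio.max())   # stays <= 1 up to rounding
```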
In addition, the sampling set X = \{(x_{j}, y_{k})\}_{j = 1,\cdots,m; k = 1,\cdots,n} in this paper is assumed to consist of sampling points randomly selected in C_{R_{1},R_{2}} according to a density function \rho(x,y) satisfying
\begin{equation} 0 < C_{\rho,l} \leq \rho(x,y) \leq C_{\rho,u}, \quad (x,y)\in C_{R_{1},R_{2}}, \end{equation} | (1.4) |
where C_{R_{1},R_{2}} := [-R_{1},R_{1}]\times[-R_{2},R_{2}]^{d} with R_{1}, R_{2} > 0. Meanwhile, due to the limitations of sampling devices, the measured value at each sampling point is not the exact value of the signal but rather a local average near the corresponding sampling location. Thus, we assume that the sampling values are obtained in the following convolution form
\big\{(f*\psi)(x_{j}, y_{k}), \ (x_{j}, y_{k})\in X\big\},
where the convolution function \psi satisfies
\begin{equation} \psi\in L^{1,1}_{\omega}(\mathbb{R}^{d+1}), \quad \operatorname{supp}\psi\subset C_{R_{1},R_{2}}. \end{equation} | (1.5) |
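The following sketch illustrates this sampling model in the simplest setting d = 1, with a uniform density \rho on C_{R_{1},R_{2}}, a toy signal f and a normalized box-shaped averaging kernel \psi; all of these concrete choices are assumptions made for the illustration and do not come from the paper.

```python
import numpy as np

# Illustrative generation of random convolution (local-average) samples
# {(f*psi)(x_j, y_k)} for d = 1, uniform density, a toy f and a box kernel psi.

rng = np.random.default_rng(1)
R1, R2 = 2.0, 2.0
m, n = 30, 30                                    # numbers of random samples in x and y

f = lambda x, y: np.cos(x) * np.exp(-y ** 2)     # toy signal
h = 0.1                                          # half-width of the averaging window
xs = rng.uniform(-R1, R1, size=m)                # x_j drawn with uniform density on [-R1, R1]
ys = rng.uniform(-R2, R2, size=n)                # y_k drawn with uniform density on [-R2, R2]

def conv_sample(xj, yk, grid=21):
    """Approximate (f*psi)(xj, yk) with psi = (2h)^{-2} 1_{[-h,h]^2}."""
    t = np.linspace(-h, h, grid)
    T, S = np.meshgrid(t, t, indexing="ij")
    return f(xj - T, yk - S).mean()              # quadrature approximation of the local average

samples = np.array([[conv_sample(xj, yk) for yk in ys] for xj in xs])
print(samples.shape)                             # (m, n) convolution samples
```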
In this paper, we mainly consider the random convolution sampling stability for signals in the multiply generated shift-invariant subspace of the weighted mixed Lebesgue space, which has the form
\begin{equation} V^{p,q}_{\nu} := \Big\{\sum\limits_{i = 1}^{r}\sum\limits_{k_{1}\in\mathbb{Z}}\sum\limits_{k_{2}\in\mathbb{Z}^{d}}c_{i}(k_{1},k_{2})\phi_{i}(x-k_{1}, y-k_{2}), \ c_{i}\in\ell^{p,q}_{\nu}(\mathbb{Z}^{d+1})\Big\}, \quad 1\leq p,q\leq\infty, \end{equation} | (1.6) |
where the generators \phi_{i}, i = 1,\cdots,r, satisfy the following conditions:
● For any (x,y)\in\mathbb{R}\times\mathbb{R}^{d},
\begin{equation} |(\phi_{i}\omega)(x,y)| \leq \frac{1}{(1+|x|)^{n_{1}}(1+|y|)^{n_{2}}}, \quad n_{1} > d+1, \ n_{2} > d+1, \end{equation} | (1.7) |
where |\cdot| denotes the Euclidean norm.
● There exist constants c_{p,q}, C_{p,q} > 0 such that
\begin{equation} c_{p,q}\sum\limits_{i = 1}^{r}\|c_{i}\|_{\ell^{p,q}_{\nu}(\mathbb{Z}^{d+1})} \leq \Big\|\sum\limits_{i = 1}^{r}\sum\limits_{k_{1}\in\mathbb{Z}}\sum\limits_{k_{2}\in\mathbb{Z}^{d}}c_{i}(k_{1},k_{2})\phi_{i}(x-k_{1}, y-k_{2})\Big\|_{L^{p,q}_{\nu}(\mathbb{R}^{d+1})} \leq C_{p,q}\sum\limits_{i = 1}^{r}\|c_{i}\|_{\ell^{p,q}_{\nu}(\mathbb{Z}^{d+1})}. \end{equation} | (1.8) |
● There exists a constant 0 < \beta < 1 such that
\begin{equation} \beta\|\psi\|_{L^{1,1}_{\omega}(C_{R_{1},R_{2}})}\|f\|_{L^{p,q}_{\nu}(\mathbb{R}^{d+1})} \leq \|f*\psi\|_{L^{p,q}_{\nu}(C_{R_{1},R_{2}})}. \end{equation} | (1.9) |
Besides these, for a well-defined function f in practice, the values at sampling points located far away may not be significant. Thus, we only consider the following subset
\begin{equation} V^{p,q}_{\nu,R_{1},R_{2}} := \Big\{f\in V^{p,q}_{\nu}: (1-\delta)\|f\|_{L^{p,q}_{\nu}(\mathbb{R}^{d+1})} \leq \|f\|_{L^{p,q}_{\nu}(C_{R_{1},R_{2}})}\Big\} \end{equation} | (1.10) |
or its normalization
\begin{equation} V^{p,q,*}_{\nu,R_{1},R_{2}} := \Big\{f\in V^{p,q}_{\nu,R_{1},R_{2}}: \|f\|_{L^{p,q}_{\nu}(\mathbb{R}^{d+1})} = 1\Big\}, \end{equation} | (1.11) |
where 0\leq\delta < 1. Obviously, the sets V^{p,q}_{\nu,R_{1},R_{2}} and V^{p,q,*}_{\nu,R_{1},R_{2}} consist of those functions whose energy is mainly concentrated in C_{R_{1},R_{2}}.
This paper is organized as follows. In Section 2, we prove that the defined multiply generated shift invariant subspace could be approximated by a finite dimensional subspace. In Section 3, we present some essential results which will contribute to the proof of the sampling stability. In Section 4, we prove that with overwhelming probability, the sampling stability holds for the signals in some subset of the defined multiply generated shift invariant subspace when the sampling size is large enough.
Define a finite dimensional subspace by
\begin{equation} V^{p,q}_{\nu,N} := \Big\{\sum\limits_{i = 1}^{r}\sum\limits_{|k_{1}|\leq N}\sum\limits_{|k_{2}|\leq N}c_{i}(k_{1},k_{2})\phi_{i}(x-k_{1}, y-k_{2}), \ c_{i}\in\ell^{p,q}_{\nu}([-N,N]^{d+1})\Big\}, \quad 1\leq p,q\leq\infty \end{equation} | (2.1) |
and its normalization
\begin{equation} V^{p,q,*}_{\nu,N} := \Big\{f\in V^{p,q}_{\nu,N}: \|f\|_{L^{p,q}_{\nu}(\mathbb{R}^{d+1})} = 1\Big\}, \quad 1\leq p,q\leq\infty. \end{equation} | (2.2) |
Lemma 2.1. [15] Let \alpha > 0, x\in\mathbb{R}^{d} and I_{1}(M,\alpha) = \int_{|x|\geq M}|x|^{-d-\alpha}dx. Then
\begin{equation} I_{1}(M,\alpha) = 2\pi^{\frac{d}{2}}\Gamma\Big(\frac{d}{2}\Big)^{-1}\frac{1}{\alpha}M^{-\alpha}. \end{equation} | (2.3) |
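A quick numerical confirmation of this closed form can be obtained by passing to polar coordinates, where the angular integration contributes the surface area of the unit sphere. The sketch below does this for the illustrative case d = 2 and example values of M and \alpha; it is only a sanity check, not part of the proof.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Numerical check of Lemma 2.1 for d = 2: compare the radial integral of
# |x|^{-d-alpha} over {|x| >= M} with 2 pi^{d/2} Gamma(d/2)^{-1} M^{-alpha}/alpha.

d, M, alpha = 2, 1.5, 0.7

# polar coordinates: the angular part contributes the circumference 2*pi of the unit circle
numeric, _ = quad(lambda r: 2 * np.pi * r ** (d - 1) * r ** (-d - alpha), M, np.inf)
closed = 2 * np.pi ** (d / 2) / gamma(d / 2) * M ** (-alpha) / alpha

print(numeric, closed)   # the two values agree up to quadrature error
```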
Lemma 2.2. Let 1 < p,q < \infty, R = \max\{R_{1}, \sqrt{d}R_{2}\} and s = \min\{n_{1}-1-d, n_{2}-1-d\}. Assume that f\in V^{p,q}_{\nu} with \|f\|_{L^{p,q}_{\nu}(\mathbb{R}^{d+1})} = 1. Then for given \varepsilon_{1}, \varepsilon_{2} > 0, there exists a function f_{N}\in V^{p,q}_{\nu,N} such that
\begin{equation} \|f-f_{N}\|_{L^{p,q}_{\nu}(C_{2R_{1},2R_{2}})}\leq\varepsilon_{1}, \end{equation} | (2.4) |
if
\begin{equation} N\geq 2R+1+\Bigg\{\frac{C(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}}{c_{p,q}\varepsilon_{1}}\Bigg[\frac{8C_{d}\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}(1+\sqrt{d}R_{2})}{(n_{2}-d)}+\frac{2^{2d+1}C_{1}(1+R_{1})^{d}}{(n_{1}-1)}+\frac{4C_{1}C_{d}\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}}{(n_{1}-1)(n_{2}-d)}\Bigg]\Bigg\}^{1/s} := 2R+1+N_{1}\{\varepsilon_{1}\} \end{equation} | (2.5) |
and
\begin{equation} \|f-f_{N}\|_{L^{\infty,\infty}_{\nu}(C_{2R_{1},2R_{2}})}\leq\varepsilon_{2}, \end{equation} | (2.6) |
if
\begin{equation} N\geq 2R+1+\Bigg\{\frac{C}{c_{p,q}\varepsilon_{2}}\Bigg[\frac{8C_{d}\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}(1+\sqrt{d}R_{2})}{(n_{2}-d)}+\frac{2^{2d+1}C_{1}(1+R_{1})^{d}}{(n_{1}-1)}+\frac{4C_{1}C_{d}\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}}{(n_{1}-1)(n_{2}-d)}\Bigg]\Bigg\}^{1/s} := 2R+1+N_{2}\{\varepsilon_{2}\}, \end{equation} | (2.7) |
where C_{1} and C_{d} are constants which depend on the dimension of the corresponding space.
Proof. By the definitions of V^{p,q}_{\nu} and V^{p,q}_{\nu,N} in (1.6) and (2.1),
\begin{align} f-f_{N} = &\sum\limits_{i = 1}^{r}\sum\limits_{|k_{1}|\leq N}\sum\limits_{|k_{2}| > N}c_{i}(k_{1},k_{2})\phi_{i}(x-k_{1}, y-k_{2})+\sum\limits_{i = 1}^{r}\sum\limits_{|k_{1}| > N}\sum\limits_{|k_{2}|\leq N}c_{i}(k_{1},k_{2})\phi_{i}(x-k_{1}, y-k_{2})\\
&+\sum\limits_{i = 1}^{r}\sum\limits_{|k_{1}| > N}\sum\limits_{|k_{2}| > N}c_{i}(k_{1},k_{2})\phi_{i}(x-k_{1}, y-k_{2}) := I_{1}+I_{2}+I_{3}. \end{align} | (2.8) |
Next, we separately estimate \|I_{1}\|_{L^{p,q}_{\nu}(C_{2R_{1},2R_{2}})}, \|I_{2}\|_{L^{p,q}_{\nu}(C_{2R_{1},2R_{2}})} and \|I_{3}\|_{L^{p,q}_{\nu}(C_{2R_{1},2R_{2}})}.
First, by (1.3), (1.7) and (1.8), we obtain
\begin{align} \|I_{1}\|_{L^{p,q}_{\nu}(C_{2R_{1},2R_{2}})} &\leq C\sum\limits_{i = 1}^{r}\Big\|\sum\limits_{|k_{1}|\leq N}\sum\limits_{|k_{2}| > N}|(c_{i}\nu)(k_{1},k_{2})||(\phi_{i}\omega)(x-k_{1}, y-k_{2})|\Big\|_{L^{p,q}(C_{2R_{1},2R_{2}})}\\
&\leq C\sum\limits_{i = 1}^{r}\|c_{i}\|_{\ell^{p,q}_{\nu}(\mathbb{Z}^{d+1})}\Big\|\sum\limits_{|k_{1}|\leq N}\sum\limits_{|k_{2}| > N}|(\phi_{i}\omega)(x-k_{1}, y-k_{2})|\Big\|_{L^{p,q}(C_{2R_{1},2R_{2}})}\\
&\leq C\sum\limits_{i = 1}^{r}\|c_{i}\|_{\ell^{p,q}_{\nu}(\mathbb{Z}^{d+1})}\Big\|\sum\limits_{|k_{1}|\leq N}\sum\limits_{|k_{2}| > N}\frac{1}{(1+|x-k_{1}|)^{n_{1}}(1+|y-k_{2}|)^{n_{2}}}\Big\|_{L^{p,q}(C_{2R_{1},2R_{2}})}\\
&\leq \frac{C}{c_{p,q}}\Big\|\sum\limits_{|k_{1}|\leq N}\sum\limits_{|k_{2}| > N}\frac{1}{(1+|x-k_{1}|)^{n_{1}}(1+|y-k_{2}|)^{n_{2}}}\Big\|_{L^{p,q}(C_{2R_{1},2R_{2}})}. \end{align} | (2.9) |
With the help of Lemma 2.1 and the fact that (a+b)^{d}\leq a^{d}(1+b)^{d} for a\geq 1 and b > 0,
\begin{align} &\quad\sum\limits_{|k_{1}|\leq N}\sum\limits_{|k_{2}| > N}\frac{1}{(1+|x-k_{1}|)^{n_{1}}(1+|y-k_{2}|)^{n_{2}}}\\
& = \Big(\sum\limits_{|k_{1}|\leq N}\frac{1}{(1+|x-k_{1}|)^{n_{1}}}\Big)\Big(\sum\limits_{|k_{2}| > N}\frac{1}{(1+|y-k_{2}|)^{n_{2}}}\Big)\\
&\leq \Big(\sum\limits_{|k_{1}|\leq N}\frac{1}{(1+|x-k_{1}|)^{n_{1}}}\Big)\Big(\sum\limits_{|k_{2}-y|\geq N-|y|}\frac{1}{(1+|y-k_{2}|)^{n_{2}}}\Big)\\
&\leq (2N+1)C_{d}\int_{|u-y|\geq N-|y|}\frac{1}{|u-y|^{n_{2}}}du = (2N+1)C_{d}\int_{|u-y|\geq N-|y|}|u-y|^{-d-(n_{2}-d)}du\\
&\leq \frac{2C_{d}(N-|y|+|y|+1)2\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}}{(n_{2}-d)(N-|y|)^{n_{2}-d}} \leq \frac{2C_{d}(N-|y|)(|y|+2)2\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}}{(n_{2}-d)(N-|y|)^{n_{2}-d}}\\
&\leq \frac{2C_{d}(2\sqrt{d}R_{2}+2)2\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}}{(n_{2}-d)(N-2\sqrt{d}R_{2})^{n_{2}-d-1}}. \end{align} | (2.10) |
Combining the results of (2.9) and (2.10),
\begin{equation} \|I_{1}\|_{L^{p,q}_{\nu}(C_{2R_{1},2R_{2}})} \leq \frac{8CC_{d}(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}(1+\sqrt{d}R_{2})}{c_{p,q}(n_{2}-d)(N-2\sqrt{d}R_{2})^{n_{2}-d-1}}. \end{equation} | (2.11) |
By a similar method, we obtain
\begin{equation} \|I_{2}\|_{L^{p,q}_{\nu}(C_{2R_{1},2R_{2}})} \leq \frac{2^{2d+1}CC_{1}(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}(1+R_{1})^{d}}{c_{p,q}(n_{1}-1)(N-2R_{1})^{n_{1}-d-1}}, \end{equation} | (2.12) |
\begin{equation} \|I_{3}\|_{L^{p,q}_{\nu}(C_{2R_{1},2R_{2}})} \leq \frac{4CC_{1}C_{d}\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}}{c_{p,q}(n_{1}-1)(n_{2}-d)(N-2R)^{n_{1}+n_{2}-d-1}}. \end{equation} | (2.13) |
Thus, the result (2.4) follows from (2.11)–(2.13).
When p = q = \infty,
\begin{equation} \|I_{1}\|_{L^{\infty,\infty}_{\nu}(C_{2R_{1},2R_{2}})} \leq \frac{8CC_{d}\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}(1+\sqrt{d}R_{2})}{c_{p,q}(n_{2}-d)(N-2\sqrt{d}R_{2})^{n_{2}-d-1}}, \end{equation} | (2.14) |
\begin{equation} \|I_{2}\|_{L^{\infty,\infty}_{\nu}(C_{2R_{1},2R_{2}})} \leq \frac{2^{2d+1}CC_{1}(1+R_{1})^{d}}{c_{p,q}(n_{1}-1)(N-2R_{1})^{n_{1}-d-1}}, \end{equation} | (2.15) |
\begin{equation} \|I_{3}\|_{L^{\infty,\infty}_{\nu}(C_{2R_{1},2R_{2}})} \leq \frac{4CC_{1}C_{d}\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}}{c_{p,q}(n_{1}-1)(n_{2}-d)(N-2R)^{n_{1}+n_{2}-d-1}}. \end{equation} | (2.16) |
Thus, the result (2.6) follows from (2.14)–(2.16).
To deal with the random convolution sampling stability for signals in the multiply generated shift invariant subspace of the weighted mixed Lebesgue space, some essential conclusions are presented in the following.
Lemma 3.1. For f\in L^{p,q}_{\nu}(\mathbb{R}^{d+1}) and h\in L^{1,1}_{\omega}(\mathbb{R}^{d+1}),
\begin{equation} \|f*h\|_{L^{p,q}_{\nu}(\mathbb{R}^{d+1})} \leq C\|f\|_{L^{p,q}_{\nu}(\mathbb{R}^{d+1})}\|h\|_{L^{1,1}_{\omega}(\mathbb{R}^{d+1})}. \end{equation} | (3.1) |
Proof. For any f\in L^{p,q}_{\nu}(\mathbb{R}^{d+1}) and h\in L^{1,1}_{\omega}(\mathbb{R}^{d+1}),
\begin{align} |(f*h)(x,y)|\nu(x,y) &\leq \int_{\mathbb{R}}\int_{\mathbb{R}^{d}}|f(t,s)h(x-t, y-s)|dtds\,\nu(x,y)\\
&\leq C\int_{\mathbb{R}}\int_{\mathbb{R}^{d}}(|f|\nu)(t,s)(|h|\omega)(x-t, y-s)dtds = C\big(|f|\nu*|h|\omega\big)(x,y). \end{align} | (3.2) |
By the result in [4],
\begin{equation} \|f*h\|_{L^{p,q}_{\nu}(\mathbb{R}^{d+1})} \leq C\|(|f|\nu)*(|h|\omega)\|_{L^{p,q}(\mathbb{R}^{d+1})} \leq C\|f\|_{L^{p,q}_{\nu}(\mathbb{R}^{d+1})}\|h\|_{L^{1,1}_{\omega}(\mathbb{R}^{d+1})}. \end{equation} | (3.3) |
Furthermore, we also need the covering number, which is a powerful tool for estimating the probability error or the number of samples required for a given confidence and error bound [20].
Lemma 3.2. Let V^{p,q,*}_{\nu,N} be defined by (2.2). Then for any \eta > 0, the covering number of V^{p,q,*}_{\nu,N} with respect to the norm \|\cdot\|_{L^{p,q}_{\nu}(\mathbb{R}^{d+1})} is bounded by
\begin{equation} \mathcal{N}(V^{p,q,*}_{\nu,N}, \eta) \leq \exp\Big(r(2N+1)^{d+1}\ln\Big(\frac{2}{\eta}+1\Big)\Big). \end{equation} | (3.4) |
For the proof, we refer to [10]; we omit it here.
Lemma 3.3. Let V^{p,q,*}_{\nu,N} be defined by (2.2). Then for any \eta > 0, the covering number of V^{p,q,*}_{\nu,N} with respect to the norm \|\cdot\|_{L^{\infty,\infty}_{\nu}(\mathbb{R}^{d+1})} is bounded by
\begin{equation} \mathcal{N}(V^{p,q,*}_{\nu,N}, \eta) \leq \exp\Big(r(2N+1)^{d+1}\ln\Big(\frac{2C^{*}}{\eta}+1\Big)\Big), \end{equation} | (3.5) |
where
\begin{equation} C^{*} := \frac{C}{c_{p,q}}\Big(\sum\limits_{k_{1}\in\mathbb{Z}}\frac{2}{(1+|k_{1}|)^{n_{1}}}\Big)\Big(\sum\limits_{k_{2}\in\mathbb{Z}^{d}}\frac{2}{(1+|k_{2}|)^{n_{2}}}\Big). \end{equation} | (3.6) |
Proof. For any f\in V^{p,q,*}_{\nu,N},
\begin{align} \|f\|_{L^{\infty,\infty}_{\nu}(\mathbb{R}^{d+1})} & = \Big\|\sum\limits_{i = 1}^{r}\sum\limits_{|k_{1}|\leq N}\sum\limits_{|k_{2}|\leq N}c_{i}(k_{1},k_{2})\phi_{i}(x-k_{1}, y-k_{2})\Big\|_{L^{\infty,\infty}_{\nu}(\mathbb{R}^{d+1})}\\
&\leq C\Big\|\sum\limits_{i = 1}^{r}\sum\limits_{|k_{1}|\leq N}\sum\limits_{|k_{2}|\leq N}(c_{i}\nu)(k_{1},k_{2})(\phi_{i}\omega)(x-k_{1}, y-k_{2})\Big\|_{L^{\infty,\infty}(\mathbb{R}^{d+1})}\\
&\leq C\Big\|\sum\limits_{i = 1}^{r}\sum\limits_{|k_{1}|\leq N}\sum\limits_{|k_{2}|\leq N}\frac{|(c_{i}\nu)(k_{1},k_{2})|}{(1+|x-k_{1}|)^{n_{1}}(1+|y-k_{2}|)^{n_{2}}}\Big\|_{L^{\infty,\infty}(\mathbb{R}^{d+1})}\\
&\leq C\sum\limits_{i = 1}^{r}\|c_{i}\|_{\ell^{p,q}_{\nu}([-N,N]^{d+1})}\Big\|\sum\limits_{|k_{1}|\leq N}\sum\limits_{|k_{2}|\leq N}\frac{1}{(1+|x-k_{1}|)^{n_{1}}(1+|y-k_{2}|)^{n_{2}}}\Big\|_{L^{\infty,\infty}(\mathbb{R}^{d+1})}\\
& = C\sum\limits_{i = 1}^{r}\|c_{i}\|_{\ell^{p,q}_{\nu}([-N,N]^{d+1})}\Big\|\Big(\sum\limits_{|k_{1}|\leq N}\frac{1}{(1+|x-k_{1}|)^{n_{1}}}\Big)\Big(\sum\limits_{|k_{2}|\leq N}\frac{1}{(1+|y-k_{2}|)^{n_{2}}}\Big)\Big\|_{L^{\infty,\infty}(\mathbb{R}^{d+1})}\\
&\leq C\sum\limits_{i = 1}^{r}\|c_{i}\|_{\ell^{p,q}_{\nu}([-N,N]^{d+1})}\Big(\sum\limits_{k_{1}\in\mathbb{Z}}\frac{2}{(1+|k_{1}|)^{n_{1}}}\Big)\Big(\sum\limits_{k_{2}\in\mathbb{Z}^{d}}\frac{2}{(1+|k_{2}|)^{n_{2}}}\Big)\\
&\leq \frac{C\|f\|_{L^{p,q}_{\nu}(\mathbb{R}^{d+1})}}{c_{p,q}}\Big(\sum\limits_{k_{1}\in\mathbb{Z}}\frac{2}{(1+|k_{1}|)^{n_{1}}}\Big)\Big(\sum\limits_{k_{2}\in\mathbb{Z}^{d}}\frac{2}{(1+|k_{2}|)^{n_{2}}}\Big) = C^{*}. \end{align} | (3.7) |
Let \mathcal{F} be the corresponding \frac{\eta}{C^{*}}-net for the space V^{p,q,*}_{\nu,N} with respect to the norm \|\cdot\|_{L^{p,q}_{\nu}(\mathbb{R}^{d+1})}. Then for any f\in V^{p,q,*}_{\nu,N}, there exists a function \tilde{f}\in\mathcal{F} such that \|f-\tilde{f}\|_{L^{p,q}_{\nu}(\mathbb{R}^{d+1})}\leq\frac{\eta}{C^{*}}. Furthermore,
\begin{equation} \|f-\tilde{f}\|_{L^{\infty,\infty}_{\nu}(\mathbb{R}^{d+1})} \leq C^{*}\|f-\tilde{f}\|_{L^{p,q}_{\nu}(\mathbb{R}^{d+1})} \leq \eta. \end{equation} | (3.8) |
Thus, \mathcal{F} is also an \eta-net of V^{p,q,*}_{\nu,N} with respect to the norm \|\cdot\|_{L^{\infty,\infty}_{\nu}(\mathbb{R}^{d+1})}, and the cardinality of \mathcal{F} is at most the bound in (3.5).
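For concreteness, the sketch below evaluates the constant C^{*} of (3.6) and the logarithm of the resulting covering number bound (3.5). All numerical values of C, c_{p,q}, n_{1}, n_{2}, d, r, N and \eta are illustrative placeholders chosen by us, not constants derived in the paper.

```python
import numpy as np

# Illustrative evaluation of C* from (3.6) and of the covering number bound (3.5)
# for example constants; the series over Z (and over Z^d with d = 1) are truncated.

C, c_pq = 2.0, 0.5
n1, n2, d = 3.0, 3.0, 1
r, N, eta = 2, 5, 0.25

k = np.arange(-10_000, 10_001)
sum1 = np.sum(2.0 / (1.0 + np.abs(k)) ** n1)     # sum over k1 in Z (truncated)
sum2 = np.sum(2.0 / (1.0 + np.abs(k)) ** n2)     # for d = 1 the second sum has the same form

C_star = C / c_pq * sum1 * sum2
log_cov_bound = r * (2 * N + 1) ** (d + 1) * np.log(2 * C_star / eta + 1)
print(C_star, log_cov_bound)                     # C* and the log of the bound in (3.5)
```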
For a function f\in V^{p,q}_{\nu}, we introduce the random variable
\begin{equation} Z_{j,k}(f) = |(f*\psi)(x_{j}, y_{k})|\nu(x_{j}, y_{k})-\int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}\rho(x,y)|(f*\psi)(x,y)|\nu(x,y)dxdy, \end{equation} | (3.9) |
where the sampling set \{(x_{j}, y_{k})\}_{j = 1,\cdots,m; k = 1,\cdots,n} is a sequence of independent random variables drawn from a general probability distribution over C_{R_{1},R_{2}} with the density function \rho satisfying (1.4). Obviously, \{Z_{j,k}(f), j = 1,\cdots,m; k = 1,\cdots,n\} is a sequence of independent random variables with E[Z_{j,k}(f)] = 0. Its further properties are elaborated in the following lemma.
Lemma 3.4. Let the density function \rho satisfy the condition (1.4) and the convolution function \psi satisfy (1.5). Then for any f, g\in V^{p,q}_{\nu},
\begin{align} &(1)\ \|Z_{j,k}(f)\|_{\ell^{\infty,\infty}}\leq C\|f\|_{L^{\infty,\infty}_{\nu}(C_{2R_{1},2R_{2}})}\|\psi\|_{L^{1,1}_{\omega}(C_{R_{1},R_{2}})};\\
&(2)\ \|Z_{j,k}(f)-Z_{j,k}(g)\|_{\ell^{\infty,\infty}}\leq 2C\|f-g\|_{L^{\infty,\infty}_{\nu}(C_{2R_{1},2R_{2}})}\|\psi\|_{L^{1,1}_{\omega}(C_{R_{1},R_{2}})};\\
&(3)\ \mathrm{Var}(Z_{j,k}(f))\leq C^{2}\|f\|^{2}_{L^{\infty,\infty}_{\nu}(C_{2R_{1},2R_{2}})}\|\psi\|^{2}_{L^{1,1}_{\omega}(C_{R_{1},R_{2}})};\\
&(4)\ \mathrm{Var}(Z_{j,k}(f)-Z_{j,k}(g))\leq C^{2}\|\psi\|^{2}_{L^{1,1}_{\omega}(C_{R_{1},R_{2}})}\|f-g\|_{L^{\infty,\infty}_{\nu}(C_{2R_{1},2R_{2}})}\Big(\|f\|_{L^{\infty,\infty}_{\nu}(C_{2R_{1},2R_{2}})}+\|g\|_{L^{\infty,\infty}_{\nu}(C_{2R_{1},2R_{2}})}\Big). \end{align}
Proof.
(1)
\begin{align} \|Z_{j,k}(f)\|_{\ell^{\infty,\infty}} & = \sup\limits_{x_{j}\in[-R_{1},R_{1}]}\sup\limits_{y_{k}\in[-R_{2},R_{2}]^{d}}\Big||(f*\psi)(x_{j}, y_{k})|\nu(x_{j}, y_{k})-\int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}\rho(x,y)|(f*\psi)(x,y)|\nu(x,y)dxdy\Big|\\
&\leq \max\Big\{\sup\limits_{x\in[-R_{1},R_{1}]}\sup\limits_{y\in[-R_{2},R_{2}]^{d}}|(f*\psi)(x,y)|\nu(x,y), \int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}\rho(x,y)|(f*\psi)(x,y)|\nu(x,y)dxdy\Big\}\\
&\leq \sup\limits_{x\in[-R_{1},R_{1}]}\sup\limits_{y\in[-R_{2},R_{2}]^{d}}|(f*\psi)(x,y)|\nu(x,y)\\
&\leq C\sup\limits_{x\in[-R_{1},R_{1}]}\sup\limits_{y\in[-R_{2},R_{2}]^{d}}\int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}(|f|\nu)(x-t, y-s)(|\psi|\omega)(t,s)dtds\\
&\leq C\|f\|_{L^{\infty,\infty}_{\nu}(C_{2R_{1},2R_{2}})}\|\psi\|_{L^{1,1}_{\omega}(C_{R_{1},R_{2}})}. \end{align}
(2)
\begin{align} \|Z_{j,k}(f)-Z_{j,k}(g)\|_{\ell^{\infty,\infty}} & = \sup\limits_{x_{j}\in[-R_{1},R_{1}]}\sup\limits_{y_{k}\in[-R_{2},R_{2}]^{d}}\Big||(f*\psi)(x_{j}, y_{k})|\nu(x_{j}, y_{k})-|(g*\psi)(x_{j}, y_{k})|\nu(x_{j}, y_{k})\\
&\quad-\int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}\rho(x,y)\Big(|(f*\psi)(x,y)|\nu(x,y)-|(g*\psi)(x,y)|\nu(x,y)\Big)dxdy\Big|\\
&\leq \sup\limits_{x\in[-R_{1},R_{1}]}\sup\limits_{y\in[-R_{2},R_{2}]^{d}}|((f-g)*\psi)(x,y)|\nu(x,y)+\int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}\rho(x,y)|((f-g)*\psi)(x,y)|\nu(x,y)dxdy\\
&\leq 2\sup\limits_{x\in[-R_{1},R_{1}]}\sup\limits_{y\in[-R_{2},R_{2}]^{d}}\Big(|((f-g)*\psi)(x,y)|\nu(x,y)\Big)\\
&\leq 2C\sup\limits_{x\in[-R_{1},R_{1}]}\sup\limits_{y\in[-R_{2},R_{2}]^{d}}\int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}(|f-g|\nu)(x-t, y-s)(|\psi|\omega)(t,s)dtds\\
&\leq 2C\|f-g\|_{L^{\infty,\infty}_{\nu}(C_{2R_{1},2R_{2}})}\|\psi\|_{L^{1,1}_{\omega}(C_{R_{1},R_{2}})}. \end{align}
(3)
\begin{align} \mathrm{Var}(Z_{j,k}(f)) & = E[Z_{j,k}(f)]^{2}-(E[Z_{j,k}(f)])^{2} = E[Z_{j,k}(f)]^{2}\\
& = \int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}\rho(t,s)\Big(|(f*\psi)(t,s)|\nu(t,s)-\int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}\rho(x,y)|(f*\psi)(x,y)|\nu(x,y)dxdy\Big)^{2}dtds\\
&\leq \int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}\rho(t,s)\Big(|(f*\psi)(t,s)|\nu(t,s)\Big)^{2}dtds\\
&\leq C^{2}\int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}\rho(t,s)\Big(\int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}(|f|\nu)(t-x, s-y)(|\psi|\omega)(x,y)dxdy\Big)^{2}dtds\\
&\leq C^{2}\|f\|^{2}_{L^{\infty,\infty}_{\nu}(C_{2R_{1},2R_{2}})}\|\psi\|^{2}_{L^{1,1}_{\omega}(C_{R_{1},R_{2}})}. \end{align}
(4)
\begin{align} \mathrm{Var}(Z_{j,k}(f)-Z_{j,k}(g)) & = E[Z_{j,k}(f)-Z_{j,k}(g)]^{2}-(E[Z_{j,k}(f)-Z_{j,k}(g)])^{2} = E[Z_{j,k}(f)-Z_{j,k}(g)]^{2}\\
& = \int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}\rho(t,s)\Big[\big(|(f*\psi)(t,s)|-|(g*\psi)(t,s)|\big)\nu(t,s)\\
&\quad-\int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}\rho(x,y)\big(|(f*\psi)(x,y)|-|(g*\psi)(x,y)|\big)\nu(x,y)dxdy\Big]^{2}dtds\\
&\leq \int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}\rho(t,s)\Big[\big(|(f*\psi)(t,s)|-|(g*\psi)(t,s)|\big)\nu(t,s)\Big]^{2}dtds\\
&\leq \int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}\rho(t,s)\Big(|((f-g)*\psi)(t,s)|\nu(t,s)\Big)\Big(|((f+g)*\psi)(t,s)|\nu(t,s)\Big)dtds\\
&\leq C^{2}\|f-g\|_{L^{\infty,\infty}_{\nu}(C_{2R_{1},2R_{2}})}\Big(\|f\|_{L^{\infty,\infty}_{\nu}(C_{2R_{1},2R_{2}})}+\|g\|_{L^{\infty,\infty}_{\nu}(C_{2R_{1},2R_{2}})}\Big)\|\psi\|^{2}_{L^{1,1}_{\omega}(C_{R_{1},R_{2}})}. \end{align}
Lemma 3.5. [10] Let Z_{j,k} be independent random variables with expected values E[Z_{j,k}] = 0, \mathrm{Var}(Z_{j,k})\leq\sigma^{2} and |Z_{j,k}|\leq M for all j = 1,\cdots,m and k = 1,\cdots,n. Then for any \gamma\geq 0,
\begin{equation} Prob\Big(\Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}Z_{j,k}\Big|\geq\gamma\Big) \leq 2\exp\Big(-\frac{\gamma^{2}}{2mn\sigma^{2}+\frac{2}{3}M\gamma}\Big). \end{equation} | (3.10) |
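The next sketch illustrates this Bernstein-type inequality by Monte Carlo simulation for bounded, centered random variables; the uniform distribution, the sizes m, n and the thresholds \gamma are illustrative assumptions of ours, not quantities from the paper.

```python
import numpy as np

# Monte Carlo illustration of Lemma 3.5: compare the empirical tail probability of
# |sum Z_{j,k}| with 2*exp(-gamma^2 / (2*m*n*sigma^2 + (2/3)*M*gamma)).

rng = np.random.default_rng(2)
m, n = 20, 20
M = 1.0                                        # |Z_{j,k}| <= M
trials = 20_000

Z = rng.uniform(-M, M, size=(trials, m * n))   # centered, bounded samples
sigma2 = M ** 2 / 3.0                          # variance of Uniform(-M, M)
S = Z.sum(axis=1)

for gamma in (10.0, 20.0, 30.0):
    empirical = np.mean(np.abs(S) >= gamma)
    bound = 2 * np.exp(-gamma ** 2 / (2 * m * n * sigma2 + 2.0 / 3.0 * M * gamma))
    print(gamma, empirical, bound)             # the empirical tail stays below the bound
```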
Lemma 3.6. Assume that \{(x_{j}, y_{k})\}_{j = 1,\cdots,m; k = 1,\cdots,n} is a sequence of independent random variables which are drawn from a general probability distribution over C_{R_{1},R_{2}} with the density function \rho satisfying (1.4). Then for any m, n\in\mathbb{N}, there exist positive constants A, B > 0 such that
\begin{equation} Prob\Big(\sup\limits_{f\in V^{p,q,*}_{\nu,N}}\Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}Z_{j,k}(f)\Big|\geq\gamma\Big) \leq A\exp\Big(-B\frac{\gamma^{2}}{12mnC^{*}D+2\gamma}\Big), \end{equation} | (3.11) |
where A is of order \exp(C'(2N+1)^{d+1}), C' = (5r)\ln 2+2r\ln(C^{*}+1)+r\ln(4C^{*}+1), B = \min\Big\{\frac{3}{2C^{*}D}, \frac{\sqrt{2}}{1296D}\Big\} and D = C\|\psi\|_{L^{1,1}_{\omega}(C_{R_{1},R_{2}})}.
Proof. For given l\in\mathbb{N}, we construct a 2^{-l}-covering for V^{p,q,*}_{\nu,N} with respect to the norm \|\cdot\|_{L^{\infty,\infty}_{\nu}(\mathbb{R}^{d+1})}. Let \mathcal{A}(2^{-l}) be the corresponding 2^{-l}-net for l = 1,2,\cdots. By Lemma 3.3, \mathcal{A}(2^{-l}) has cardinality at most \mathcal{N}(V^{p,q,*}_{\nu,N}, 2^{-l}). Suppose that f_{l} is the function in \mathcal{A}(2^{-l}) that is closest to f\in V^{p,q,*}_{\nu,N} with respect to the norm \|\cdot\|_{L^{\infty,\infty}_{\nu}(\mathbb{R}^{d+1})}; then \|f-f_{l}\|_{L^{\infty,\infty}_{\nu}(\mathbb{R}^{d+1})}\leq 2^{-l}\to 0 as l\to\infty. Define
Z_{j,k}(f) = Z_{j,k}(f_{1})+\sum\limits_{l = 2}^{\infty}\big(Z_{j,k}(f_{l})-Z_{j,k}(f_{l-1})\big).
By Lemma 3.4, the random variable Z_{j,k}(f) is well defined.
If \sup_{f\in V^{p,q,*}_{\nu,N}}\big|\sum_{j = 1}^{m}\sum_{k = 1}^{n}Z_{j,k}(f)\big|\geq\gamma, then the event \omega_{l} must hold for some l\geq 1, where
\omega_{1} = \Big\{\text{there exists } f_{1}\in\mathcal{A}(2^{-1}) \text{ such that } \Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}Z_{j,k}(f_{1})\Big|\geq\gamma/2\Big\}
and for l\geq 2,
\omega_{l} = \Big\{\text{there exist } f_{l}\in\mathcal{A}(2^{-l}), f_{l-1}\in\mathcal{A}(2^{-(l-1)}) \text{ with } \|f_{l}-f_{l-1}\|_{L^{\infty,\infty}_{\nu}(\mathbb{R}^{d+1})}\leq 3\cdot 2^{-l}, \text{ such that } \Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}\big(Z_{j,k}(f_{l})-Z_{j,k}(f_{l-1})\big)\Big|\geq\frac{\gamma}{2l^{2}}\Big\}.
If this were not the case, then with f_{0} = 0 we would have
\Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}Z_{j,k}(f)\Big| \leq \sum\limits_{l = 1}^{\infty}\Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}\big(Z_{j,k}(f_{l})-Z_{j,k}(f_{l-1})\big)\Big| \leq \sum\limits_{l = 1}^{\infty}\frac{\gamma}{2l^{2}} = \frac{\pi^{2}\gamma}{12} < \gamma.
In the following, we estimate the probability of \omega_{1}. By Lemma 3.5, for each fixed function f\in\mathcal{A}(2^{-1}),
\begin{align} Prob\Big(\Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}Z_{j,k}(f)\Big|\geq\frac{\gamma}{2}\Big) &\leq 2\exp\Big(-\frac{(\gamma/2)^{2}}{2mn\mathrm{Var}(Z_{j,k}(f))+\frac{2}{3}\cdot\frac{\gamma}{2}\cdot\|Z_{j,k}(f)\|_{\ell^{\infty,\infty}}}\Big)\\
&\leq 2\exp\Big(-\frac{3\gamma^{2}}{2\big(12mn(C^{*}D)^{2}+2\gamma C^{*}D\big)}\Big) = 2\exp\Big(-\frac{3\gamma^{2}}{2C^{*}D(12mnC^{*}D+2\gamma)}\Big). \end{align}
Moreover, by the result of Lemma 3.3, there are at most
\mathcal{N}\Big(V^{p,q,*}_{\nu,N}, \frac{1}{2}\Big) \leq \exp\Big(r(2N+1)^{d+1}\ln(4C^{*}+1)\Big)
functions in \mathcal{A}(2^{-1}). Therefore, the probability of \omega_{1} is bounded by
\begin{equation} Prob(\omega_{1}) \leq 2\exp\Big(r(2N+1)^{d+1}\ln(4C^{*}+1)\Big)\exp\Big(-\frac{3\gamma^{2}}{2C^{*}D(12mnC^{*}D+2\gamma)}\Big). \end{equation} | (3.12) |
By a similar method, we obtain the following estimates for the probabilities of \omega_{l}, l\geq 2. In fact, for f_{l}\in\mathcal{A}(2^{-l}), f_{l-1}\in\mathcal{A}(2^{-(l-1)}) and \|f_{l}-f_{l-1}\|_{L^{\infty,\infty}_{\nu}(\mathbb{R}^{d+1})}\leq 3\cdot 2^{-l},
\begin{align} &\quad Prob\Big(\Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}\big(Z_{j,k}(f_{l})-Z_{j,k}(f_{l-1})\big)\Big|\geq\frac{\gamma}{2l^{2}}\Big)\\
&\leq 2\exp\Big(-\frac{\big(\frac{\gamma}{2l^{2}}\big)^{2}}{2mn\mathrm{Var}\big(Z_{j,k}(f_{l})-Z_{j,k}(f_{l-1})\big)+\frac{2}{3}\|Z_{j,k}(f_{l})-Z_{j,k}(f_{l-1})\|_{\ell^{\infty,\infty}}\frac{\gamma}{2l^{2}}}\Big)\\
&\leq 2\exp\Big(-\frac{\big(\frac{\gamma}{2l^{2}}\big)^{2}}{2mn\cdot 3\cdot 2^{-l}\cdot D^{2}\cdot 2C^{*}+\frac{2}{3}\cdot 2D\cdot 3\cdot 2^{-l}\cdot\frac{\gamma}{2l^{2}}}\Big)\\
&\leq 2\exp\Big(-\frac{\vartheta 2^{l}}{l^{4}}\Big), \end{align} | (3.13) |
where \vartheta := \frac{\gamma^{2}}{4D(12mnDC^{*}+2\gamma)}. There are at most \mathcal{N}(V^{p,q,*}_{\nu,N}, 2^{-l}) functions in \mathcal{A}(2^{-l}) and \mathcal{N}(V^{p,q,*}_{\nu,N}, 2^{-l+1}) functions in \mathcal{A}(2^{-(l-1)}). Therefore, we have
\begin{align} Prob\Big(\bigcup\limits_{l = 2}^{\infty}\omega_{l}\Big) &\leq \sum\limits_{l = 2}^{\infty}\mathcal{N}(V^{p,q,*}_{\nu,N}, 2^{-l})\mathcal{N}(V^{p,q,*}_{\nu,N}, 2^{-l+1})\cdot 2\exp\Big(-\frac{\vartheta 2^{l}}{l^{4}}\Big)\\
&\leq \sum\limits_{l = 2}^{\infty}2\exp\Big(r(2N+1)^{d+1}\ln(2^{l+1}C^{*}+1)\Big)\exp\Big(r(2N+1)^{d+1}\ln(2^{l}C^{*}+1)\Big)\exp\Big(-\frac{\vartheta 2^{l}}{l^{4}}\Big)\\
&\leq \sum\limits_{l = 2}^{\infty}2\exp\Big(2r(2N+1)^{d+1}\big[(l+1)\ln 2+\ln(C^{*}+1)\big]-\frac{\vartheta 2^{l}}{l^{4}}\Big)\\
& = \sum\limits_{l = 2}^{\infty}2\exp\Big(\big[(2r\ln 2)(2N+1)^{d+1}\big]l+(2r\ln 2)(2N+1)^{d+1}+2r(2N+1)^{d+1}\ln(C^{*}+1)-\frac{\vartheta 2^{l}}{l^{4}}\Big)\\
& = C_{1}\sum\limits_{l = 2}^{\infty}\exp\Big(C_{2}l-\frac{\vartheta 2^{l}}{l^{4}}\Big) = C_{1}\sum\limits_{l = 2}^{\infty}\exp\Big(-\vartheta 2^{\frac{l}{2}}\Big(\frac{2^{\frac{l}{2}}}{l^{4}}-\frac{C_{2}l}{\vartheta 2^{\frac{l}{2}}}\Big)\Big), \end{align}
where C_{1} = 2\exp\big((2r\ln 2)(2N+1)^{d+1}+2r(2N+1)^{d+1}\ln(C^{*}+1)\big) and C_{2} = (2r\ln 2)(2N+1)^{d+1}.
Notice that
\min\limits_{l\geq 2}\frac{2^{l/2}}{l^{4}} = \frac{1}{324}, \quad \max\limits_{l\geq 2}\frac{l}{2^{l/2}} = \frac{3\sqrt{2}}{4},
then
\frac{2^{l/2}}{l^{4}}-\frac{C_{2}l}{\vartheta 2^{l/2}} \geq \frac{1}{324}-\frac{(6\sqrt{2}\ln 2)Dr(2N+1)^{d+1}(12mnDC^{*}+2\gamma)}{\gamma^{2}}.
We first consider the case that
\begin{equation} \frac{1}{324}-\frac{(6\sqrt{2}\ln 2)Dr(2N+1)^{d+1}(12mnDC^{*}+2\gamma)}{\gamma^{2}} > 0. \end{equation} | (3.14) |
Notice that for s, a > 0 one has \sum\limits_{l = 2}^{\infty}e^{-sa^{l}}\leq\frac{e^{-as}}{sa\ln a} (see [17]). Letting
s = \vartheta\Big(\frac{2^{l/2}}{l^{4}}-\frac{C_{2}l}{\vartheta 2^{l/2}}\Big), \quad a = 2^{\frac{1}{2}},
we can obtain
\begin{align} Prob\Big(\bigcup\limits_{l = 2}^{\infty}\omega_{l}\Big) &\leq C_{1}\frac{\exp\Big(-\sqrt{2}\vartheta\Big(\frac{1}{324}-\frac{(6\sqrt{2}\ln 2)Dr(2N+1)^{d+1}(12mnDC^{*}+2\gamma)}{\gamma^{2}}\Big)\Big)}{(\sqrt{2}\ln\sqrt{2})\vartheta\Big(\frac{1}{324}-\frac{(6\sqrt{2}\ln 2)Dr(2N+1)^{d+1}(12mnDC^{*}+2\gamma)}{\gamma^{2}}\Big)}\\
& = \frac{C_{1}}{(\sqrt{2}\ln\sqrt{2})\vartheta\Big(\frac{1}{324}-\frac{(6\sqrt{2}\ln 2)Dr(2N+1)^{d+1}(12mnDC^{*}+2\gamma)}{\gamma^{2}}\Big)}\times\exp\Big(-\sqrt{2}\vartheta\Big(\frac{1}{324}-\frac{(6\sqrt{2}\ln 2)Dr(2N+1)^{d+1}(12mnDC^{*}+2\gamma)}{\gamma^{2}}\Big)\Big)\\
& = \frac{C_{1}\exp\Big(\sqrt{2}\vartheta\frac{(6\sqrt{2}\ln 2)Dr(2N+1)^{d+1}(12mnDC^{*}+2\gamma)}{\gamma^{2}}\Big)}{(\sqrt{2}\ln\sqrt{2})\vartheta\Big(\frac{1}{324}-\frac{(6\sqrt{2}\ln 2)Dr(2N+1)^{d+1}(12mnDC^{*}+2\gamma)}{\gamma^{2}}\Big)}\exp\Big(-\frac{\sqrt{2}\vartheta}{324}\Big)\\
& = \frac{2\exp\Big(\big(2r\ln 2+2r\ln(C^{*}+1)+3r\ln 2\big)(2N+1)^{d+1}\Big)}{(\sqrt{2}\ln\sqrt{2})\vartheta\Big(\frac{1}{324}-\frac{(6\sqrt{2}\ln 2)Dr(2N+1)^{d+1}(12mnDC^{*}+2\gamma)}{\gamma^{2}}\Big)}\exp\Big(-\frac{\sqrt{2}\vartheta}{324}\Big)\\
& = \frac{2\exp\Big(\big((5r)\ln 2+2r\ln(C^{*}+1)\big)(2N+1)^{d+1}\Big)}{(\sqrt{2}\ln\sqrt{2})\vartheta\Big(\frac{1}{324}-\frac{(6\sqrt{2}\ln 2)Dr(2N+1)^{d+1}(12mnDC^{*}+2\gamma)}{\gamma^{2}}\Big)}\exp\Big(-\frac{\sqrt{2}\gamma^{2}}{1296D(12mnDC^{*}+2\gamma)}\Big). \end{align} | (3.15) |
Combining the results of (3.12) and (3.15), we can obtain that
\begin{equation} Prob\Big(\sup\limits_{f\in V^{p,q,*}_{\nu,N}}\Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}Z_{j,k}(f)\Big|\geq\gamma\Big) \leq A\exp\Big(-B\frac{\gamma^{2}}{12mnDC^{*}+2\gamma}\Big), \end{equation} | (3.16) |
where A is of order \exp(C'(2N+1)^{d+1}), C' = (5r)\ln 2+2r\ln(C^{*}+1)+r\ln(4C^{*}+1) and B = \min\Big\{\frac{3}{2C^{*}D}, \frac{\sqrt{2}}{1296D}\Big\}.
If
\begin{equation} \frac{1}{324}-\frac{(6\sqrt{2}\ln 2)Dr(2N+1)^{d+1}(12mnDC^{*}+2\gamma)}{\gamma^{2}} \leq 0, \end{equation} | (3.17) |
then we could choose C'\geq 324B(6\sqrt{2}\ln 2)Dr such that A\exp\Big(-B\frac{\gamma^{2}}{12mnDC^{*}+2\gamma}\Big)\geq 1.
In order to obtain the random convolution sampling stability for signals in the multiply generated shift invariant subspace of the weighted mixed Lebesgue space, we also need the corresponding sampling stability for a subset of V^{p,q}_{\nu,N}.
Theorem 4.1. Assume that \{(x_{j}, y_{k})\}_{j = 1,\cdots,m; k = 1,\cdots,n} is a sequence of independent random variables which are drawn from a general probability distribution over C_{R_{1},R_{2}} with the density function \rho satisfying (1.4) and the convolution function \psi satisfying (1.5). Then for any m, n\in\mathbb{N}, there exist positive constants 0 < \theta, \gamma < 1 such that
A_{1}\{\theta\} := m^{\frac{1}{p}}n^{\frac{1}{q}}(1-\gamma)(C_{\rho,l}\theta G^{-1})^{pq} > 0,
B_{1}\{\theta\} := mn\Big(\gamma(C_{\rho,l}\theta G^{-1})^{pq}+CC^{*}\|\psi\|_{L^{1,1}_{\omega}(C_{R_{1},R_{2}})}\Big) > 0,
where
G := (2R_{1})^{\frac{q-1}{pq}}(2R_{2})^{\frac{d(p-1)}{pq}}\Big(CC_{\rho,u}C^{*}\|\psi\|_{L^{1,1}_{\omega}(C_{R_{1},R_{2}})}\Big)^{\frac{pq-1}{pq}}.
Furthermore, the random convolution sampling stability
\begin{equation} A_{1}\{\theta\}\|f\|_{L^{p,q}_{\nu}(\mathbb{R}^{d+1})} \leq \Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1,\cdots,m; k = 1,\cdots,n}\Big\|_{\ell^{p,q}_{\nu}} \leq B_{1}\{\theta\}\|f\|_{L^{p,q}_{\nu}(\mathbb{R}^{d+1})} \end{equation} | (4.1) |
holds for all f\in V^{p,q,\diamond}_{\nu,N} := \{f\in V^{p,q}_{\nu,N}: \|f*\psi\|_{L^{p,q}_{\nu}(C_{R_{1},R_{2}})}\geq\theta\} with probability at least
1-A\exp\Big(-B\frac{\big(\gamma mn(C_{\rho,l}\theta G^{-1})^{pq}\big)^{2}}{12mnC^{*}C\|\psi\|_{L^{1,1}_{\omega}(C_{R_{1},R_{2}})}+2\gamma mn(C_{\rho,l}\theta G^{-1})^{pq}}\Big),
where A,B are defined as in Lemma 3.6.
Proof. Obviously, every function f\in V^{p,q,\diamond}_{\nu,N} satisfies the random convolution sampling stability (4.1) if and only if f/\|f\|_{L^{p,q}_{\nu}(\mathbb{R}^{d+1})} also does. Thus we assume that f\in V^{p,q,\diamond,*}_{\nu,N} := \{f\in V^{p,q,\diamond}_{\nu,N}: \|f\|_{L^{p,q}_{\nu}(\mathbb{R}^{d+1})} = 1\}.
Define the event
\begin{equation} \mathcal{H} = \Big\{\sup\limits_{f\in V^{p,q,\diamond,*}_{\nu,N}}\Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}Z_{j,k}(f)\Big|\geq\gamma mn(C_{\rho,l}\theta G^{-1})^{pq}\Big\}. \end{equation} | (4.2) |
Its complement is
\begin{align} \widetilde{\mathcal{H}} = \Big\{&-\gamma mn(C_{\rho,l}\theta G^{-1})^{pq}+mn\int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}\rho(x,y)|(f*\psi)(x,y)|\nu(x,y)dxdy\\
&\leq \sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}|(f*\psi)(x_{j}, y_{k})|\nu(x_{j}, y_{k})\\
&\leq \gamma mn(C_{\rho,l}\theta G^{-1})^{pq}+mn\int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}\rho(x,y)|(f*\psi)(x,y)|\nu(x,y)dxdy, \quad f\in V^{p,q,\diamond,*}_{\nu,N}\Big\}. \end{align} | (4.3) |
Writing g(x,y) := \rho(x,y)|(f*\psi)(x,y)|\nu(x,y) with f\in V^{p,q,\diamond,*}_{\nu,N}, we can obtain
\begin{align} \int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}|g(x,y)|dxdy & = \int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}\Big|\rho(x,y)\Big[\Big|\int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}f(x-t, y-s)\psi(t,s)dtds\Big|\Big]\nu(x,y)\Big|dxdy\\
&\leq C\int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}\Big(\rho(x,y)\int_{-R_{1}}^{R_{1}}\int_{[-R_{2},R_{2}]^{d}}|(f\nu)(x-t, y-s)||(\psi\omega)(t,s)|dtds\Big)dxdy\\
&\leq C\|f\|_{L^{\infty,\infty}_{\nu}(C_{2R_{1},2R_{2}})}\|\psi\|_{L^{1,1}_{\omega}(C_{R_{1},R_{2}})} \leq C\|f\|_{L^{\infty,\infty}_{\nu}(\mathbb{R}^{d+1})}\|\psi\|_{L^{1,1}_{\omega}(C_{R_{1},R_{2}})} \leq CC^{*}\|\psi\|_{L^{1,1}_{\omega}(C_{R_{1},R_{2}})}. \end{align} | (4.4) |
Furthermore,
\begin{equation} \Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1,\cdots,m; k = 1,\cdots,n}\Big\|_{\ell^{p,q}_{\nu}} \leq \Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1,\cdots,m; k = 1,\cdots,n}\Big\|_{\ell^{1,1}_{\nu}}. \end{equation} | (4.5) |
Combining the results of (4.4) and (4.5), we can obtain
\begin{equation} \Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1,\cdots,m; k = 1,\cdots,n}\Big\|_{\ell^{p,q}_{\nu}} \leq mn\Big(\gamma(C_{\rho,l}\theta G^{-1})^{pq}+CC^{*}\|\psi\|_{L^{1,1}_{\omega}(C_{R_{1},R_{2}})}\Big). \end{equation} | (4.6) |
In addition,
\begin{align} \|g\|_{L^{p,q}(C_{R_{1},R_{2}})} &\leq (2R_{1})^{\frac{q-1}{pq}}\Big[\Big(\int_{-R_{1}}^{R_{1}}\Big(\int_{[-R_{2},R_{2}]^{d}}|g(x,y)|^{q}dy\Big)^{p}dx\Big)^{\frac{1}{q}}\Big]^{\frac{1}{p}}\\
&\leq (2R_{1})^{\frac{q-1}{pq}}\Big(\|g\|_{L^{\infty,\infty}(C_{R_{1},R_{2}})}\Big)^{\frac{q-1}{q}}\Big[\Big(\int_{-R_{1}}^{R_{1}}\Big(\int_{[-R_{2},R_{2}]^{d}}|g(x,y)|dy\Big)^{p}dx\Big)^{\frac{1}{q}}\Big]^{\frac{1}{p}}\\
&\leq (2R_{1})^{\frac{q-1}{pq}}\Big(\|g\|_{L^{\infty,\infty}(C_{R_{1},R_{2}})}\Big)^{\frac{q-1}{q}}\Big(\int_{[-R_{2},R_{2}]^{d}}\Big(\int_{-R_{1}}^{R_{1}}|g(x,y)|^{p}dx\Big)^{\frac{1}{p}}dy\Big)^{\frac{1}{q}}\\
&\leq (2R_{1})^{\frac{q-1}{pq}}(2R_{2})^{\frac{d(p-1)}{pq}}\Big(\|g\|_{L^{\infty,\infty}(C_{R_{1},R_{2}})}\Big)^{\frac{q-1}{q}}\Big(\int_{[-R_{2},R_{2}]^{d}}\Big(\int_{-R_{1}}^{R_{1}}|g(x,y)|^{p}dx\Big)dy\Big)^{\frac{1}{pq}}\\
&\leq (2R_{1})^{\frac{q-1}{pq}}(2R_{2})^{\frac{d(p-1)}{pq}}\Big(\|g\|_{L^{\infty,\infty}(C_{R_{1},R_{2}})}\Big)^{\frac{q-1}{q}}\Big(\|g\|_{L^{\infty,\infty}(C_{R_{1},R_{2}})}\Big)^{\frac{p-1}{pq}}\|g\|^{\frac{1}{pq}}_{L^{1,1}(C_{R_{1},R_{2}})}\\
&\leq (2R_{1})^{\frac{q-1}{pq}}(2R_{2})^{\frac{d(p-1)}{pq}}\Big(CC_{\rho,u}\|f\|_{L^{\infty,\infty}_{\nu}(C_{2R_{1},2R_{2}})}\|\psi\|_{L^{1,1}_{\omega}(C_{R_{1},R_{2}})}\Big)^{\frac{pq-1}{pq}}\|g\|^{\frac{1}{pq}}_{L^{1,1}(C_{R_{1},R_{2}})}\\
&\leq (2R_{1})^{\frac{q-1}{pq}}(2R_{2})^{\frac{d(p-1)}{pq}}\Big(CC_{\rho,u}C^{*}\|\psi\|_{L^{1,1}_{\omega}(C_{R_{1},R_{2}})}\Big)^{\frac{pq-1}{pq}}\|g\|^{\frac{1}{pq}}_{L^{1,1}(C_{R_{1},R_{2}})} = G\|g\|^{\frac{1}{pq}}_{L^{1,1}(C_{R_{1},R_{2}})}. \end{align}
Thus,
\begin{align} \|g\|_{L^{1, 1}(C_{R_{1}, R_{2}})}&\geq\frac{\|g\|^{pq}_{L^{p, q}(C_{R_{1}, R_{2}})}}{G^{pq}}\\ &\geq\frac{\Big[C_{\rho, l}\Big(\int_{[-R_{1}, R_{1}]}\Big(\int_{[-R_{2}, R_{2}]^{d}}\Big(|(f*\psi)(x, y)|\nu(x, y)\Big)^{q}dy\Big)^{\frac{p}{q}}dx\Big)^{\frac{1}{p}}\Big]^{pq}}{G^{pq}}\\ & = \frac{\Big[C_{\rho, l}\|(f*\psi)\|_{L^{p, q}_{\nu}(C_{R_{1}, R_{2}})}\Big]^{pq}}{G^{pq}}\geq (C_{\rho, l}\theta G^{-1})^{pq}. \end{align} | (4.7) |
By Hölder inequality, we have
\begin{equation} \Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1\cdots, n}\Big\|_{\ell^{1, 1}_{\nu}} \leq m^{\frac{p-1}{p}}n^{\frac{q-1}{q}}\Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1\cdots, n}\Big\|_{\ell^{p, q}_{\nu}}. \end{equation} | (4.8) |
Combining the results of (4.7) and (4.8), we can obtain
\begin{equation} m^{-\frac{p-1}{p}}n^{-\frac{q-1}{q}}\Big[-\gamma mn(C_{\rho, l}\theta G^{-1})^{pq} +mn(C_{\rho, l}\theta G^{-1})^{pq}\Big]\leq \Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1\cdots, n}\Big\|_{\ell^{p, q}_{\nu}}. \end{equation} | (4.9) |
Followed by the results of (4.3), (4.6) and (4.9), the event
\begin{align} \overline{\mathcal{H}} = \Big\{&m^{-\frac{p-1}{p}}n^{-\frac{q-1}{q}}\Big[mn(C_{\rho, l}\theta G^{-1})^{pq}-\gamma mn(C_{\rho, l}\theta G^{-1})^{pq}\Big]\\ &\leq \Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}\\ &\leq \Big[\gamma mn (C_{\rho, l}\theta G^{-1})^{pq}+mnCC^{*}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\Big], \quad f\in V^{p, q, \diamond, *}_{\nu, N}\Big\}\\ = \Big\{&m^{-\frac{p-1}{p}}n^{-\frac{q-1}{q}}\Big[mn(C_{\rho, l}\theta G^{-1})^{pq}-\gamma mn(C_{\rho, l}\theta G^{-1})^{pq}\Big]\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\\ &\leq \Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}\\ &\leq \Big[\gamma mn (C_{\rho, l}\theta G^{-1})^{pq}+mnCC^{*}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\Big]\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}, \quad f\in V^{p, q, \diamond}_{\nu, N}\Big\} \end{align} | (4.10) |
contains the event \widetilde{\mathcal{H}} .
By Lemma 3.6, we can obtain
\begin{align} Prob(\overline{\mathcal{H}})&\geq Prob(\widetilde{\mathcal{H}})\geq 1-Prob(\mathcal{H})\\ &\geq 1-Prob\Big(\sup\limits_{f\in V_{\nu, N}^{p, q, \diamond, *}}\Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}Z_{j, k}(f)\Big|\geq \gamma mn (C_{\rho, l}\theta G^{-1})^{pq}\Big) \\ &\geq 1-Prob\Big(\sup\limits_{f\in V_{\nu, N}^{p, q, *}}\Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}Z_{j, k}(f)\Big|\geq \gamma mn(C_{\rho, l}\theta G^{-1})^{pq}\Big) \\ &\geq 1-A\exp\Big(-B\frac{\Big(\gamma mn(C_{\rho, l}\theta G^{-1})^{pq}\Big)^{2}}{12mnC^{*}C\|\psi\|_{L^{1, 1}(C_{R_{1}, R_{2}})}+2\gamma mn(C_{\rho, l}\theta G^{-1})^{pq}}\Big). \end{align} |
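To see how quickly this probability approaches one, the following sketch evaluates the lower bound of Theorem 4.1 for increasing sample sizes mn. Every constant in it (A, B, C^{*}, D, C_{\rho,l}, \theta, G, \gamma, p, q) is an illustrative placeholder chosen only to display the trend, not a value computed from the paper; for small mn the bound is vacuous (negative), while for large mn it tends to 1 exponentially fast.

```python
import numpy as np

# Illustrative evaluation of the success probability lower bound of Theorem 4.1:
# 1 - A*exp(-B*(gamma*mn*t)^2 / (12*mn*C_star*D + 2*gamma*mn*t)), t = (C_rho_l*theta/G)^{pq}.
# All constants are placeholders; only the dependence on mn is of interest.

A, B = 10.0, 0.5
C_star, D = 2.0, 1.0
C_rho_l, theta, G, gamma, p, q = 0.9, 0.9, 1.0, 0.5, 2, 2

t = (C_rho_l * theta / G) ** (p * q)
for mn in (1_000, 3_000, 10_000, 30_000):
    expo = B * (gamma * mn * t) ** 2 / (12 * mn * C_star * D + 2 * gamma * mn * t)
    bound = 1 - A * np.exp(-expo)
    print(mn, bound)   # vacuous for small mn, then rapidly approaching 1
```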
Theorem 4.2. Assume that \{(x_{j}, y_{k})\}_{j = 1, \cdots, m; k = 1, \cdots, n} is a sequence of independent random variables which are drawn from a general probability distribution over C_{R_{1}, R_{2}} with the density function \rho satisfying (1.4) and the convolution function \psi satisfies (1.5). Then for any m, n\in\mathbb{N} , there exist positive constants 0 < \gamma, \varepsilon < 1 such that
A_{2}: = A_{1}\Big\{(\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\Big\}(1-\delta-\varepsilon)-\frac{C\varepsilon mn\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}} > 0 |
and
B_{2}: = B_{1}\Big\{(\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\Big\}\frac{C_{p, q}}{c_{p, q}}+\frac{C\varepsilon mn\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}} > 0, |
where A_{1}, B_{1} are defined as in Theorem 4.1. Furthermore, the random convolution sampling stability
\begin{equation} A_{2}\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\leq\Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}\leq B_{2}\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})} \end{equation} | (4.11) |
holds for any f\in V_{\nu, R_{1}, R_{2}}^{p, q} with the probability at least
\begin{equation} 1-A\exp\Big(-B\frac{\Big(\gamma mn \Big[C_{\rho, l}(\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})} G^{-1}\Big]^{pq}\Big)^{2}}{12mnC^{*}C\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}+2\gamma mn \Big[C_{\rho, l}(\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})} G^{-1}\Big]^{pq}}\Big), \end{equation} | (4.12) |
where the constants A, B, G are defined as in Theorem 4.1 with
N\geq\max\{N_{1}(\varepsilon)+2R+1, N_{2}(\frac{\varepsilon}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}})+2R+1\}, \quad R = \max\{R_{1}, \sqrt{d}R_{2}\}. |
Proof. It is obvious that every function f\in V_{\nu, R_{1}, R_{2}}^{p, q} satisfies the random convolution sampling stability (4.11) if and only if f/\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})} also does. Thus, we assume that f\in V_{\nu, R_{1}, R_{2}}^{p, q, *} .
By Lemma 2.2, for any f\in V_{\nu, R_{1}, R_{2}}^{p, q, *} and \varepsilon > 0 ,
\begin{align} &\|f-f_{N}\|_{L^{\infty, \infty}_{\nu}(C_{2R_{1}, 2R_{2}})}\leq \frac{\varepsilon}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}}, \end{align} | (4.13) |
\begin{align} &\|f-f_{N}\|_{L^{p, q}_{\nu}(C_{2R_{1}, 2R_{2}})}\leq \varepsilon, \end{align} | (4.14) |
if N\geq\max\{N_{1}(\varepsilon)+2R+1, N_{2}(\frac{\varepsilon}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}})+2R+1\} and R = \max\{R_{1}, \sqrt{d}R_{2}\}.
Moreover, by the result of Lemma 3.1,
\begin{align} &\quad\|f*\psi-f_{N}*\psi\|_{L^{p, q}_{\nu}(C_{R_{1}, R_{2}})}\\ &\leq C\|f-f_{N}\|_{L^{p, q}_{\nu}(C_{R_{1}, R_{2}})}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\\ &\leq C\|f-f_{N}\|_{L^{p, q}_{\nu}(C_{2R_{1}, 2R_{2}})}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\\ &\leq C\varepsilon\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}. \end{align} |
Thus,
\begin{align} &\quad\|f_{N}*\psi\|_{L^{p, q}_{\nu}(C_{R_{1}, R_{2}})}\\ &\leq C\varepsilon\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}+\|f*\psi\|_{L^{p, q}_{\nu}(C_{R_{1}, R_{2}})}\\ &\leq C\varepsilon\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}+\|f*\psi\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\\ &\leq C\varepsilon\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}+C\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}. \end{align} | (4.15) |
Furthermore, by inequality (1.9),
\begin{align} &\quad\|f_{N}*\psi\|_{L^{p, q}_{\nu}(C_{R_{1}, R_{2}})}\\ &\geq -C\varepsilon\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}+\|f*\psi\|_{L^{p, q}_{\nu}(C_{R_{1}, R_{2}})}\\ &\geq -C\varepsilon\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}+\beta\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}. \end{align} | (4.16) |
Combining the results of (4.15) and (4.16), we can obtain for any function f\in V_{\nu, R_{1}, R_{2}}^{p, q, *} ,
\begin{equation} (\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\leq\|f_{N}*\psi\|_{L^{p, q}_{\nu}(C_{R_{1}, R_{2}})}\leq C\varepsilon\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}+C\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}. \end{equation} | (4.17) |
By Theorem 4.1, the inequality
\begin{align*} A_{1}\Big\{(\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\Big\}&\|f_{N}\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\leq\Big\|\Big\{(f_{N}*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}\nonumber\\ &\leq B_{1}\Big\{(\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\Big\}\|f_{N}\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\nonumber \end{align*} |
holds with the probability at least
\begin{equation} 1-A\exp\Big(-B\frac{\Big(\gamma mn \Big[C_{\rho, l}(\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})} G^{-1}\Big]^{pq}\Big)^{2}}{12mnC^{*}C\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}+2\gamma mn \Big[C_{\rho, l}(\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})} G^{-1}\Big]^{pq}}\Big), \end{equation} | (4.18) |
At the same time, by inequality (4.13),
\begin{align} &\quad\Big\|\Big\{(f*\psi)(x_{j}, y_{k})-(f_{N}*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}\\ &\leq\Big\|\Big\{(|f-f_{N}|*|\psi|)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}\\ & = \Big(\sum\limits_{j = 1}^{m}\Big(\sum\limits_{k = 1}^{n}\Big|(|f-f_{N}|*|\psi|)(x_{j}, y_{k})\nu(x_{j}, y_{k})\Big|^{q}\Big)^{\frac{p}{q}}\Big)^{1/p}\\ &\leq\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}\Big|(|f-f_{N}|*|\psi|)(x_{j}, y_{k})\nu(x_{j}, y_{k})\Big|\\ &\leq \frac{C\varepsilon}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}}\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}\Big|\int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}|\psi(t, s)|\omega(t, s)dtds\Big|\\ & = \frac{C\varepsilon mn\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}}, \end{align} | (4.19) |
which is equivalent to
\begin{align} &\Big\|\Big\{(f_{N}*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}-\frac{C\varepsilon mn\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}}\\ &\leq\Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}\\ &\leq\Big\|\Big\{(f_{N}*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m;k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}+\frac{C\varepsilon mn\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}}. \end{align} |
Furthermore,
\begin{equation*} (1-\delta)\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}-\varepsilon\leq\|f\|_{L^{p, q}_{\nu}(C_{R_{1}, R_{2}})}-\varepsilon\leq \|f\|_{L^{p, q}_{\nu}(C_{2R_{1}, 2R_{2}})}-\varepsilon\leq \|f_{N}\|_{L^{p, q}_{\nu}(C_{2R_{1}, 2R_{2}})}\leq\|f_{N}\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})} \end{equation*} |
and
\begin{equation} \|f_{N}\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\leq C_{p, q}\sum\limits_{i = 1}^{r}\|c_{i}\|_{\ell^{p, q}_{\nu}([-N, N]^{d+1})}\leq C_{p, q}\sum\limits_{i = 1}^{r}\|c_{i}\|_{\ell^{p, q}_{\nu}(\mathbb{Z}^{d+1})}\leq \frac{C_{p, q}}{c_{p, q}}\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}. \end{equation} | (4.20) |
Thus, for any function f\in V^{p, q, *}_{\nu, R_{1}, R_{2}} ,
\begin{align} &A_{1}\Big\{(\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\Big\}\Big(1-\delta-\varepsilon\Big)-\frac{C\varepsilon mn\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}}\\ &\leq \Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m;k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}\\ &\leq B_{1}\Big\{(\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\Big\}\frac{C_{p, q}}{c_{p, q}}+\frac{C\varepsilon mn\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}} \end{align} |
holds with the probability at least (4.12). Thus the random convolution sampling stability is proved.
This paper is aimed at studying the random convolution sampling stability in multiply generated shift invariant subspace of weighted mixed Lebesgue space. Under some restricted conditions and essential results, we prove that with overwhelming probability, the random convolution sampling stability holds for signals in some subset of the defined multiply generated shift invariant subspace when the sampling size is large enough.
The author declares no conflicts of interest in this paper.
[1] A. Aldroubi, K. Gröchenig, Nonuniform sampling and reconstruction in shift-invariant spaces, SIAM Rev., 43 (2001), 585–620. doi: 10.1137/s0036144501386986.
[2] R. F. Bass, K. Gröchenig, Random sampling of bandlimited functions, Israel J. Math., 177 (2010), 1–28. doi: 10.1007/s11856-010-0036-7.
[3] R. F. Bass, K. Gröchenig, Relevant sampling of bandlimited functions, Illinois J. Math., 57 (2013), 43–58. doi: 10.1215/ijm/1403534485.
[4] A. Benedek, R. Panzone, The space L^{p} with mixed norm, Duke Math. J., 28 (1961), 301–324. doi: 10.1215/s0012-7094-61-02828-9.
[5] E. J. Candès, J. Romberg, T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inf. Theory, 52 (2006), 489–509. doi: 10.1109/TIT.2005.862083.
[6] S. H. Chan, T. Zickler, Y. M. Lu, Monte Carlo non-local means: Random sampling for large-scale image filtering, IEEE Trans. Image Process., 23 (2014), 3711–3725. doi: 10.1109/tip.2014.2327813.
[7] Y. C. Eldar, Compressed sensing of analog signals in shift-invariant spaces, IEEE Trans. Signal Process., 57 (2009), 2986–2997. doi: 10.1109/TSP.2009.2020750.
[8] K. Gröchenig, Weight functions in time-frequency analysis, 2006. Available from: https://arXiv.org/abs/math/0611174.
[9] Y. Han, B. Liu, Q. Y. Zhang, A sampling theory for non-decaying signals in mixed Lebesgue spaces L^{p, q}(\mathbb{R}\times \mathbb{R}^{d}), Appl. Anal., 2020. doi: 10.1080/00036811.2020.1736286.
[10] Y. C. Jiang, W. Li, Random sampling in multiply generated shift-invariant subspaces of mixed Lebesgue spaces L^{p, q}(\mathbb{R}\times \mathbb{R}^{d}), J. Comput. Appl. Math., 386 (2021), 113237. doi: 10.1016/j.cam.2020.113237.
[11] A. Kumar, D. Patel, S. Sampath, Sampling and reconstruction in reproducing kernel subspaces of mixed Lebesgue spaces, J. Pseudo-Differ. Oper. Appl., 11 (2020), 843–868. doi: 10.1007/s11868-019-00315-0.
[12] R. Li, B. Liu, R. Liu, Q. Y. Zhang, Nonuniform sampling in principle shift-invariant subspaces of mixed Lebesgue spaces L^{p, q}(\mathbb{R}^{d+1}), J. Math. Anal. Appl., 453 (2017), 928–941. doi: 10.1016/j.jmaa.2017.04.036.
[13] R. Li, B. Liu, R. Liu, Q. Y. Zhang, The L^{p, q}-stability of the shifts of finitely many functions in mixed Lebesgue spaces L^{p, q}(\mathbb{R}^{d+1}), Acta Math. Sin., Engl. Ser., 34 (2018), 1001–1014. doi: 10.1007/s10114-018-7333-1.
[14] Y. X. Li, Q. Y. Sun, J. Xian, Random sampling and reconstruction of concentrated signals in a reproducing kernel space, Appl. Comput. Harmon. Anal., 54 (2021), 273–302. doi: 10.1016/j.acha.2021.03.006.
[15] S. P. Luo, Error estimation for non-uniform sampling in shift invariant space, Appl. Anal., 86 (2007), 483–496. doi: 10.1080/00036810701259236.
[16] D. Patel, S. Sampath, Random sampling in reproducing kernel subspaces of L^{p}(\mathbb{R}^{n}), J. Math. Anal. Appl., 491 (2020), 124270. doi: 10.1016/j.jmaa.2020.124270.
[17] S. Smale, D. X. Zhou, Online learning with Markov sampling, Anal. Appl., 7 (2009), 87–113. doi: 10.1142/S0219530509001293.
[18] J. B. Yang, Random sampling and reconstruction in multiply generated shift-invariant spaces, Anal. Appl., 17 (2019), 323–347. doi: 10.1142/S0219530518500185.
[19] J. B. Yang, W. Wei, Random sampling in shift invariant spaces, J. Math. Anal. Appl., 398 (2013), 26–34. doi: 10.1016/j.jmaa.2012.08.030.
[20] D. X. Zhou, The covering number in learning theory, J. Complexity, 18 (2002), 739–767. doi: 10.1006/jcom.2002.0635.