
The random convolution sampling stability in multiply generated shift invariant subspace of weighted mixed Lebesgue space

In this paper, we mainly investigate the random convolution sampling stability for signals in a multiply generated shift invariant subspace of a weighted mixed Lebesgue space. Under some restricted conditions on the generators and the convolution function, we show that the defined multiply generated shift invariant subspace can be approximated by a finite dimensional subspace. Furthermore, with overwhelming probability, the random convolution sampling stability holds for signals in some subset of the defined multiply generated shift invariant subspace when the sampling size is large enough.

    Citation: Suping Wang. The random convolution sampling stability in multiply generated shift invariant subspace of weighted mixed Lebesgue space[J]. AIMS Mathematics, 2022, 7(2): 1707-1725. doi: 10.3934/math.2022098




Random sampling problems arise widely in compressed sensing [5,7], learning theory [17] and image processing [6]. Recently, many random sampling results for signals in different subspaces of the classical Lebesgue space have been presented, such as the bandlimited space [2,3], shift invariant subspaces [19], multiply generated shift invariant subspaces [18] and reproducing kernel subspaces [14,16]. However, an obvious shortcoming of the classical Lebesgue space is that it imposes the same control over all the variables of a function [12]. Thus, when considering random sampling problems for time-varying signals which depend on independent quantities with different properties, the mixed Lebesgue space, which allows separate integrability in each variable, is a much more suitable tool to model such signals.

To date, abundant sampling results for many subspaces of a mixed Lebesgue space have been presented [11,12,13]. However, we should notice that most of those sampling results are obtained for a pre-given relatively separated sampling set whose sampling gap satisfies some restricted conditions. As for random sampling results, which are based on a randomly selected sampling set, very few are available. Furthermore, an obvious precondition of these existing sampling results is that the signals are integrable in the corresponding spaces, which fails for some non-decaying or infinitely growing signals. Based on these two facts and the properties of moderate weight functions, which can control the growth or decay of signals [1,9], we use a weighted mixed Lebesgue space to model such non-decaying or infinitely growing signals so that the corresponding random sampling problem can be well solved.

Generally, the sampling problem consists of the following two aspects: first, finding proper conditions which ensure that the given sampling set satisfies the sampling stability; second, designing an efficient reconstruction algorithm to restore the signals. In this paper, we mainly focus on the investigation of sampling stability; the reconstruction algorithm will be the goal of our future work. In the following, some essential definitions and properties are presented so that the random convolution sampling problem is well understood.

The weighted mixed Lebesgue space $L^{p, q}_{\nu}(\mathbb{R}^{d+1})$ consists of all measurable functions $f = f(x, y)$ defined on $\mathbb{R}\times\mathbb{R}^{d}$ such that

\begin{equation} \|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})} = \Big\|\big\|f(x, y)\nu(x, y)\big\|_{L^{q}_{y}(\mathbb{R}^{d})}\Big\|_{L^{p}_{x}(\mathbb{R})} < \infty, \quad 1\leq p, q\leq\infty. \end{equation} (1.1)

The corresponding weighted sequence space $\ell^{p, q}_{\nu}(\mathbb{Z}^{d+1})$ is defined by

\begin{equation} \ell^{p, q}_{\nu}(\mathbb{Z}^{d+1}) = \Big\{c: \|c\|_{\ell^{p, q}_{\nu}(\mathbb{Z}^{d+1})} = \Big\|\big\|c(k_{1}, k_{2})\nu(k_{1}, k_{2})\big\|_{\ell^{q}_{k_{2}}(\mathbb{Z}^{d})}\Big\|_{\ell^{p}_{k_{1}}(\mathbb{Z})} < \infty\Big\}, \quad 1\leq p, q\leq\infty. \end{equation} (1.2)
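The iterated structure of the norm in (1.2) can be illustrated with a small numerical sketch (an illustration only, not part of the paper): for a finitely supported coefficient array, the inner $\ell^{q}$ norm is taken in $k_{2}$ for each fixed $k_{1}$, and the $\ell^{p}$ norm is then taken in $k_{1}$. The toy weight $\nu(k_{1}, k_{2}) = 1+|k_{1}|+|k_{2}|$ below is an assumption chosen for the example.

```python
def mixed_norm(c, nu, p, q):
    """Discrete mixed norm ||c||_{l^{p,q}_nu}: for each k1, take the l^q norm of
    c(k1, .) * nu(k1, .) over k2, then take the l^p norm over k1.
    `c` and `nu` are dicts mapping (k1, k2) -> float with finite support."""
    k1s = sorted({k1 for (k1, _) in c})
    inner = []
    for k1 in k1s:
        vals = [abs(c[(a, k2)]) * nu[(a, k2)] for (a, k2) in c if a == k1]
        inner.append(sum(v ** q for v in vals) ** (1.0 / q))
    return sum(u ** p for u in inner) ** (1.0 / p)

# toy coefficients supported on {-1,0,1}^2 with weight nu(k1,k2) = 1+|k1|+|k2|
grid = [(k1, k2) for k1 in (-1, 0, 1) for k2 in (-1, 0, 1)]
c = {k: 1.0 for k in grid}
nu = {(k1, k2): 1.0 + abs(k1) + abs(k2) for (k1, k2) in grid}

print(mixed_norm(c, nu, p=1, q=1))  # for p = q = 1 this is just sum(|c| * nu) = 21
```

For $p = q$ the mixed norm collapses to an ordinary weighted $\ell^{p}$ norm, which gives a quick sanity check on the implementation.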

In this paper, the weight function $\nu(x, y)$ is assumed to be continuous, positive, symmetric and moderated with respect to the weight function $\omega(x, y)$, i.e., there exists a constant $C > 0$ such that

\begin{equation} 0 < \nu(x+x', y+y')\leq C\nu(x, y)\omega(x', y'), \quad \forall (x, y), (x', y')\in\mathbb{R}\times\mathbb{R}^{d}. \end{equation} (1.3)

Unless otherwise specified, the constant $C$ without any subscript or superscript in this paper always stands for the above-mentioned constant in (1.3). Furthermore, we also assume that the weight function $\omega(x, y)$ is continuous, positive, symmetric and submultiplicative, that is, for all $(x, y), (x', y')\in\mathbb{R}\times\mathbb{R}^{d}$,

0 < \omega(x+x', y+y')\leq\omega(x, y)\omega(x', y').

A classical example of such weight functions is

m(x) = e^{a|x|^{b}}(1+|x|)^{s}\big(\log(e+|x|)\big)^{t}, \quad x\in\mathbb{R}^{d+1}.

If $a, s, t\geq 0$ and $0\leq b\leq 1$, then the weight function $m(x)$ is submultiplicative. If $a, s, t\in\mathbb{R}$ and $0\leq b\leq 1$, then the weight function $m(x)$ is moderated. For further information about weight functions, please refer to [8].
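Submultiplicativity of $m$ can be spot-checked numerically (a minimal sketch, not from the paper): the parameter choice $a = s = t = 1$, $b = 1/2$ below is an assumption that falls in the submultiplicative range just stated, and the check is run on random scalar arguments.

```python
import math
import random

def m(x, a, b, s, t):
    # the classical weight m(x) = e^{a|x|^b} (1+|x|)^s (log(e+|x|))^t, scalar case
    ax = abs(x)
    return math.exp(a * ax ** b) * (1 + ax) ** s * math.log(math.e + ax) ** t

random.seed(0)
# check m(x + y) <= m(x) m(y) on 1000 random pairs for a=s=t=1, b=1/2
ok = all(
    m(x + y, 1, 0.5, 1, 1) <= m(x, 1, 0.5, 1, 1) * m(y, 1, 0.5, 1, 1) + 1e-9
    for x, y in ((random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(1000))
)
print(ok)
```

Each factor of $m$ is individually submultiplicative in this parameter range (e.g. $(1+|x+y|)^{s}\leq(1+|x|)^{s}(1+|y|)^{s}$), which is why the random check passes.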

In addition, the sampling set $X = \{(x_{j}, y_{k})\}_{j = 1, \cdots, m; k = 1, \cdots, n}$ in this paper is assumed to consist of sampling points randomly selected in $C_{R_{1}, R_{2}}$ with the density function $\rho(x, y)$ satisfying

\begin{equation} 0 < C_{\rho, l}\leq\rho(x, y)\leq C_{\rho, u}, \quad (x, y)\in C_{R_{1}, R_{2}}, \end{equation} (1.4)

where $C_{R_{1}, R_{2}}: = [-R_{1}, R_{1}]\times[-R_{2}, R_{2}]^{d}$ with $R_{1}, R_{2} > 0$. Meanwhile, due to the limitation of sampling devices, the obtained sampling value is not the exact value of the signal at each sampling point, but a local average near the corresponding sampling location. Thus, we assume that the sampling values are obtained in the following convolution form

\big\{(f*\psi)(x_{j}, y_{k}), (x_{j}, y_{k})\in X\big\},

where the convolution function $\psi$ satisfies

\begin{equation} \psi\in L^{1, 1}_{\omega}(\mathbb{R}^{d+1}), \quad \mathrm{supp}\,\psi\subseteq C_{R_{1}, R_{2}}. \end{equation} (1.5)

In this paper, we mainly consider the random convolution sampling stability for signals in the multiply generated shift invariant subspace of the weighted mixed Lebesgue space, which has the form

\begin{equation} V^{p, q}_{\nu}: = \Big\{\sum\limits_{i = 1}^{r}\sum\limits_{k_{1}\in\mathbb{Z}}\sum\limits_{k_{2}\in\mathbb{Z}^{d}}c_{i}(k_{1}, k_{2})\phi_{i}(x-k_{1}, y-k_{2}), \ c_{i}\in\ell^{p, q}_{\nu}(\mathbb{Z}^{d+1})\Big\}, \quad 1\leq p, q\leq\infty, \end{equation} (1.6)

where the generators $\phi_{i}$, $i = 1, \cdots, r$, satisfy the following conditions:

● For any $(x, y)\in\mathbb{R}\times\mathbb{R}^{d}$,

\begin{equation} |(\phi_{i}\omega)(x, y)|\leq\frac{1}{(1+|x|)^{n_{1}}(1+|y|)^{n_{2}}}, \quad n_{1} > d+1, \ n_{2} > d+1, \end{equation} (1.7)

where $|\cdot|$ denotes the usual Euclidean norm.

● There exist constants $c_{p, q}, C_{p, q} > 0$ such that

\begin{equation} c_{p, q}\sum\limits_{i = 1}^{r}\|c_{i}\|_{\ell^{p, q}_{\nu}(\mathbb{Z}^{d+1})}\leq\Big\|\sum\limits_{i = 1}^{r}\sum\limits_{k_{1}\in\mathbb{Z}}\sum\limits_{k_{2}\in\mathbb{Z}^{d}}c_{i}(k_{1}, k_{2})\phi_{i}(x-k_{1}, y-k_{2})\Big\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\leq C_{p, q}\sum\limits_{i = 1}^{r}\|c_{i}\|_{\ell^{p, q}_{\nu}(\mathbb{Z}^{d+1})}. \end{equation} (1.8)

● There exists a constant $0 < \beta < 1$ such that

\begin{equation} \beta\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\leq\|f*\psi\|_{L^{p, q}_{\nu}(C_{R_{1}, R_{2}})}. \end{equation} (1.9)

Besides these, for a well-defined function $f$ in practice, the values at sampling points located very far away may not be significant. Thus, we only consider the following subset

\begin{equation} V^{p, q}_{\nu, R_{1}, R_{2}}: = \Big\{f\in V^{p, q}_{\nu}: (1-\delta)\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\leq\|f\|_{L^{p, q}_{\nu}(C_{R_{1}, R_{2}})}\Big\} \end{equation} (1.10)

    or its normalization

\begin{equation} V^{p, q, *}_{\nu, R_{1}, R_{2}}: = \Big\{f\in V^{p, q}_{\nu, R_{1}, R_{2}}: \|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})} = 1\Big\}, \end{equation} (1.11)

where $0\leq\delta < 1$. Obviously, the sets $V^{p, q}_{\nu, R_{1}, R_{2}}$ and $V^{p, q, *}_{\nu, R_{1}, R_{2}}$ consist of those functions whose energy is mainly concentrated in $C_{R_{1}, R_{2}}$.

This paper is organized as follows. In Section 2, we prove that the defined multiply generated shift invariant subspace can be approximated by a finite dimensional subspace. In Section 3, we present some essential results which contribute to the proof of the sampling stability. In Section 4, we prove that, with overwhelming probability, the sampling stability holds for signals in some subset of the defined multiply generated shift invariant subspace when the sampling size is large enough.

    Define a finite dimensional subspace by

\begin{equation} V^{p, q}_{\nu, N}: = \Big\{\sum\limits_{i = 1}^{r}\sum\limits_{|k_{1}|\leq N}\sum\limits_{|k_{2}|\leq N}c_{i}(k_{1}, k_{2})\phi_{i}(x-k_{1}, y-k_{2}), \ c_{i}\in\ell^{p, q}_{\nu}([-N, N]^{d+1})\Big\}, \quad 1\leq p, q\leq\infty \end{equation} (2.1)

    and its normalization

\begin{equation} V^{p, q, *}_{\nu, N}: = \Big\{f\in V^{p, q}_{\nu, N}: \|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})} = 1\Big\}, \quad 1\leq p, q\leq\infty. \end{equation} (2.2)

Lemma 2.1. [15] Let $\alpha > 0$, $x\in\mathbb{R}^{d}$ and $I_{1}(M, \alpha) = \int_{|x|\geq M}|x|^{-d-\alpha}dx$. Then

\begin{equation} I_{1}(M, \alpha) = 2\pi^{\frac{d}{2}}\Gamma\Big(\frac{d}{2}\Big)^{-1}\frac{1}{\alpha}M^{-\alpha}. \end{equation} (2.3)
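For $d = 1$, where $2\pi^{1/2}\Gamma(1/2)^{-1} = 2$, Lemma 2.1 reads $\int_{|x|\geq M}|x|^{-1-\alpha}dx = 2M^{-\alpha}/\alpha$. A quick midpoint-rule check of this case (an illustration only, not part of the proof; the cutoff and step count are arbitrary choices):

```python
import math

def I1_formula(M, alpha, d):
    # right-hand side of (2.3): (2 pi^{d/2} / Gamma(d/2)) * M^{-alpha} / alpha
    return 2 * math.pi ** (d / 2) / math.gamma(d / 2) * M ** (-alpha) / alpha

def I1_numeric_d1(M, alpha, cutoff=1e4, steps=200_000):
    # midpoint rule for 2 * integral_M^cutoff r^{-1-alpha} dr, i.e. the d = 1 case
    h = (cutoff - M) / steps
    total = sum((M + (i + 0.5) * h) ** (-1 - alpha) for i in range(steps))
    return 2 * h * total

print(I1_formula(2.0, 1.5, d=1))   # exact value 2 * 2^{-1.5} / 1.5
print(I1_numeric_d1(2.0, 1.5))     # numerical value, close to the above
```

The factor $2\pi^{d/2}\Gamma(d/2)^{-1}$ is the surface area of the unit sphere in $\mathbb{R}^{d}$, which is where the closed form comes from after passing to polar coordinates.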

Lemma 2.2. Let $1 < p, q < \infty$, $R = \max\{R_{1}, \sqrt{d}R_{2}\}$ and $s = \min\{n_{1}-1-d, n_{2}-1-d\}$. Assume that the function $f\in V^{p, q}_{\nu}$ with $\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})} = 1$. Then for given $\varepsilon_{1}, \varepsilon_{2} > 0$, there exists a function $f_{N}\in V^{p, q}_{\nu, N}$ such that

\begin{equation} \|f-f_{N}\|_{L^{p, q}_{\nu}(C_{2R_{1}, 2R_{2}})}\leq\varepsilon_{1}, \end{equation} (2.4)

    if

\begin{equation} N\geq 2R+1+\Big\{\frac{C(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}}{c_{p, q}\varepsilon_{1}}\Big[\frac{8C_{d}\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}(1+\sqrt{d}R_{2})}{(n_{2}-d)}+\frac{2^{2d+1}C_{1}(1+R_{1})^{d}}{(n_{1}-1)}+\frac{4C_{1}C_{d}\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}}{(n_{1}-1)(n_{2}-d)}\Big]\Big\}^{1/s}: = 2R+1+N_{1}(\varepsilon_{1}) \end{equation} (2.5)

    and

\begin{equation} \|f-f_{N}\|_{L^{\infty, \infty}_{\nu}(C_{2R_{1}, 2R_{2}})}\leq\varepsilon_{2}, \end{equation} (2.6)

    if

\begin{equation} N\geq 2R+1+\Big\{\frac{C}{c_{p, q}\varepsilon_{2}}\Big[\frac{8C_{d}\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}(1+\sqrt{d}R_{2})}{(n_{2}-d)}+\frac{2^{2d+1}C_{1}(1+R_{1})^{d}}{(n_{1}-1)}+\frac{4C_{1}C_{d}\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}}{(n_{1}-1)(n_{2}-d)}\Big]\Big\}^{1/s}: = 2R+1+N_{2}(\varepsilon_{2}), \end{equation} (2.7)

where $C_{1}$ and $C_{d}$ are constants which depend only on the dimension of the corresponding space.

Proof. By the definitions of $V^{p, q}_{\nu}$ and $V^{p, q}_{\nu, N}$ in (1.6) and (2.1),

\begin{align} f-f_{N} = &\sum\limits_{i = 1}^{r}\sum\limits_{|k_{1}|\leq N}\sum\limits_{|k_{2}| > N}c_{i}(k_{1}, k_{2})\phi_{i}(x-k_{1}, y-k_{2})+\sum\limits_{i = 1}^{r}\sum\limits_{|k_{1}| > N}\sum\limits_{|k_{2}|\leq N}c_{i}(k_{1}, k_{2})\phi_{i}(x-k_{1}, y-k_{2})\\ &+\sum\limits_{i = 1}^{r}\sum\limits_{|k_{1}| > N}\sum\limits_{|k_{2}| > N}c_{i}(k_{1}, k_{2})\phi_{i}(x-k_{1}, y-k_{2}): = I_{1}+I_{2}+I_{3}. \end{align} (2.8)

Next, we will separately estimate $\|I_{1}\|_{L^{p, q}_{\nu}(C_{2R_{1}, 2R_{2}})}$, $\|I_{2}\|_{L^{p, q}_{\nu}(C_{2R_{1}, 2R_{2}})}$ and $\|I_{3}\|_{L^{p, q}_{\nu}(C_{2R_{1}, 2R_{2}})}$.

Firstly, by (1.3), (1.7) and (1.8), we obtain

\begin{align} \|I_{1}\|_{L^{p, q}_{\nu}(C_{2R_{1}, 2R_{2}})}&\leq C\Big\|\sum\limits_{i = 1}^{r}\sum\limits_{|k_{1}|\leq N}\sum\limits_{|k_{2}| > N}|(c_{i}\nu)(k_{1}, k_{2})||(\phi_{i}\omega)(x-k_{1}, y-k_{2})|\Big\|_{L^{p, q}(C_{2R_{1}, 2R_{2}})}\\ &\leq C\sum\limits_{i = 1}^{r}\|c_{i}\|_{\ell^{p, q}_{\nu}(\mathbb{Z}^{d+1})}\Big\|\sum\limits_{|k_{1}|\leq N}\sum\limits_{|k_{2}| > N}|(\phi_{i}\omega)(x-k_{1}, y-k_{2})|\Big\|_{L^{p, q}(C_{2R_{1}, 2R_{2}})}\\ &\leq C\sum\limits_{i = 1}^{r}\|c_{i}\|_{\ell^{p, q}_{\nu}(\mathbb{Z}^{d+1})}\Big\|\sum\limits_{|k_{1}|\leq N}\sum\limits_{|k_{2}| > N}\frac{1}{(1+|x-k_{1}|)^{n_{1}}(1+|y-k_{2}|)^{n_{2}}}\Big\|_{L^{p, q}(C_{2R_{1}, 2R_{2}})}\\ &\leq\frac{C}{c_{p, q}}\Big\|\sum\limits_{|k_{1}|\leq N}\sum\limits_{|k_{2}| > N}\frac{1}{(1+|x-k_{1}|)^{n_{1}}(1+|y-k_{2}|)^{n_{2}}}\Big\|_{L^{p, q}(C_{2R_{1}, 2R_{2}})}. \end{align} (2.9)

With the help of Lemma 2.1 and the fact $(a+b)^{d}\leq a^{d}(1+b)^{d}$ for $a\geq 1$ and $b > 0$,

\begin{align} &\quad\sum\limits_{|k_{1}|\leq N}\sum\limits_{|k_{2}| > N}\frac{1}{(1+|x-k_{1}|)^{n_{1}}(1+|y-k_{2}|)^{n_{2}}}\\ & = \Big(\sum\limits_{|k_{1}|\leq N}\frac{1}{(1+|x-k_{1}|)^{n_{1}}}\Big)\Big(\sum\limits_{|k_{2}| > N}\frac{1}{(1+|y-k_{2}|)^{n_{2}}}\Big)\leq(2N+1)\sum\limits_{|k_{2}-y|\geq N-|y|}\frac{1}{(1+|y-k_{2}|)^{n_{2}}}\\ &\leq(2N+1)C_{d}\int_{|u-y|\geq N-|y|}\frac{1}{|u-y|^{n_{2}}}du = (2N+1)C_{d}\int_{|u-y|\geq N-|y|}|u-y|^{-d-(n_{2}-d)}du\\ &\leq\frac{2C_{d}(N-|y|+|y|+1)\cdot 2\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}}{(n_{2}-d)(N-|y|)^{n_{2}-d}}\leq\frac{2C_{d}(N-|y|)(|y|+2)\cdot 2\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}}{(n_{2}-d)(N-|y|)^{n_{2}-d}}\\ &\leq\frac{2C_{d}(2\sqrt{d}R_{2}+2)\cdot 2\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}}{(n_{2}-d)(N-2\sqrt{d}R_{2})^{n_{2}-d-1}}. \end{align} (2.10)

    Combining the results of (2.9) and (2.10),

\begin{equation} \|I_{1}\|_{L^{p, q}_{\nu}(C_{2R_{1}, 2R_{2}})}\leq\frac{8CC_{d}(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}(1+\sqrt{d}R_{2})}{c_{p, q}(n_{2}-d)(N-2\sqrt{d}R_{2})^{n_{2}-d-1}}. \end{equation} (2.11)

By a similar method, we obtain

\begin{align} \|I_{2}\|_{L^{p, q}_{\nu}(C_{2R_{1}, 2R_{2}})}&\leq\frac{2^{2d+1}CC_{1}(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}(1+R_{1})^{d}}{c_{p, q}(n_{1}-1)(N-2R_{1})^{n_{1}-d-1}}, \end{align} (2.12)
\begin{align} \|I_{3}\|_{L^{p, q}_{\nu}(C_{2R_{1}, 2R_{2}})}&\leq\frac{4CC_{1}C_{d}\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}}{c_{p, q}(n_{1}-1)(n_{2}-d)(N-2R)^{n_{1}+n_{2}-d-1}}. \end{align} (2.13)

Thus, the result (2.4) follows from (2.11)–(2.13).

When $p = q = \infty$,

\begin{align} \|I_{1}\|_{L^{\infty, \infty}_{\nu}(C_{2R_{1}, 2R_{2}})}&\leq\frac{8CC_{d}\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}(1+\sqrt{d}R_{2})}{c_{p, q}(n_{2}-d)(N-2\sqrt{d}R_{2})^{n_{2}-d-1}}, \end{align} (2.14)
\begin{align} \|I_{2}\|_{L^{\infty, \infty}_{\nu}(C_{2R_{1}, 2R_{2}})}&\leq\frac{2^{2d+1}CC_{1}(1+R_{1})^{d}}{c_{p, q}(n_{1}-1)(N-2R_{1})^{n_{1}-d-1}}, \end{align} (2.15)
\begin{align} \|I_{3}\|_{L^{\infty, \infty}_{\nu}(C_{2R_{1}, 2R_{2}})}&\leq\frac{4CC_{1}C_{d}\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})^{-1}}{c_{p, q}(n_{1}-1)(n_{2}-d)(N-2R)^{n_{1}+n_{2}-d-1}}. \end{align} (2.16)

Thus, the result (2.6) follows from (2.14)–(2.16).

To handle the random convolution sampling stability for signals in the multiply generated shift invariant subspace of the weighted mixed Lebesgue space, some essential conclusions are presented in the following.

Lemma 3.1. For $f\in L^{p, q}_{\nu}(\mathbb{R}^{d+1})$ and $h\in L^{1, 1}_{\omega}(\mathbb{R}^{d+1})$,

\begin{equation} \|f*h\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\leq C\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\|h\|_{L^{1, 1}_{\omega}(\mathbb{R}^{d+1})}. \end{equation} (3.1)

Proof. For any $f\in L^{p, q}_{\nu}(\mathbb{R}^{d+1})$ and $h\in L^{1, 1}_{\omega}(\mathbb{R}^{d+1})$,

\begin{align} |(f*h)(x, y)|\nu(x, y)&\leq\int_{\mathbb{R}}\int_{\mathbb{R}^{d}}|f(t, s)h(x-t, y-s)|dtds\,\nu(x, y)\\ &\leq C\int_{\mathbb{R}}\int_{\mathbb{R}^{d}}(|f|\nu)(t, s)(|h|\omega)(x-t, y-s)dtds = C\big((|f|\nu)*(|h|\omega)\big)(x, y). \end{align} (3.2)

    By the result in [4],

\begin{equation} \|f*h\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\leq C\|(|f|\nu)*(|h|\omega)\|_{L^{p, q}(\mathbb{R}^{d+1})}\leq C\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\|h\|_{L^{1, 1}_{\omega}(\mathbb{R}^{d+1})}. \end{equation} (3.3)

Furthermore, we also need the covering number, which is a powerful tool for estimating the probability error or the number of samples required for a given confidence and error bound [20].

Lemma 3.2. Let $V^{p, q, *}_{\nu, N}$ be defined by (2.2). Then for any $\eta > 0$, the covering number of $V^{p, q, *}_{\nu, N}$ with respect to the norm $\|\cdot\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}$ is bounded by

\begin{equation} \mathcal{N}(V^{p, q, *}_{\nu, N}, \eta)\leq\exp\Big(r(2N+1)^{d+1}\ln\Big(\frac{2}{\eta}+1\Big)\Big). \end{equation} (3.4)

The proof can be found in [10]; we omit it here.

Lemma 3.3. Let $V^{p, q, *}_{\nu, N}$ be defined by (2.2). Then for any $\eta > 0$, the covering number of $V^{p, q, *}_{\nu, N}$ with respect to the norm $\|\cdot\|_{L^{\infty, \infty}_{\nu}(\mathbb{R}^{d+1})}$ is bounded by

\begin{equation} \mathcal{N}(V^{p, q, *}_{\nu, N}, \eta)\leq\exp\Big(r(2N+1)^{d+1}\ln\Big(\frac{2C^{*}}{\eta}+1\Big)\Big), \end{equation} (3.5)

    where

\begin{equation} C^{*}: = \frac{C}{c_{p, q}}\Big(\sum\limits_{k_{1}\in\mathbb{Z}}\frac{2}{(1+|k_{1}|)^{n_{1}}}\Big)\Big(\sum\limits_{k_{2}\in\mathbb{Z}^{d}}\frac{2}{(1+|k_{2}|)^{n_{2}}}\Big). \end{equation} (3.6)

Proof. For any $f\in V^{p, q, *}_{\nu, N}$,

\begin{align} \|f\|_{L^{\infty, \infty}_{\nu}(\mathbb{R}^{d+1})}& = \Big\|\sum\limits_{i = 1}^{r}\sum\limits_{|k_{1}|\leq N}\sum\limits_{|k_{2}|\leq N}c_{i}(k_{1}, k_{2})\phi_{i}(x-k_{1}, y-k_{2})\Big\|_{L^{\infty, \infty}_{\nu}(\mathbb{R}^{d+1})}\\ &\leq C\Big\|\sum\limits_{i = 1}^{r}\sum\limits_{|k_{1}|\leq N}\sum\limits_{|k_{2}|\leq N}(c_{i}\nu)(k_{1}, k_{2})(\phi_{i}\omega)(x-k_{1}, y-k_{2})\Big\|_{L^{\infty, \infty}(\mathbb{R}^{d+1})}\\ &\leq C\Big\|\sum\limits_{i = 1}^{r}\sum\limits_{|k_{1}|\leq N}\sum\limits_{|k_{2}|\leq N}\frac{|(c_{i}\nu)(k_{1}, k_{2})|}{(1+|x-k_{1}|)^{n_{1}}(1+|y-k_{2}|)^{n_{2}}}\Big\|_{L^{\infty, \infty}(\mathbb{R}^{d+1})}\\ &\leq C\sum\limits_{i = 1}^{r}\|c_{i}\|_{\ell^{p, q}_{\nu}([-N, N]^{d+1})}\Big\|\Big(\sum\limits_{|k_{1}|\leq N}\frac{1}{(1+|x-k_{1}|)^{n_{1}}}\Big)\Big(\sum\limits_{|k_{2}|\leq N}\frac{1}{(1+|y-k_{2}|)^{n_{2}}}\Big)\Big\|_{L^{\infty, \infty}(\mathbb{R}^{d+1})}\\ &\leq C\sum\limits_{i = 1}^{r}\|c_{i}\|_{\ell^{p, q}_{\nu}([-N, N]^{d+1})}\Big(\sum\limits_{k_{1}\in\mathbb{Z}}\frac{2}{(1+|k_{1}|)^{n_{1}}}\Big)\Big(\sum\limits_{k_{2}\in\mathbb{Z}^{d}}\frac{2}{(1+|k_{2}|)^{n_{2}}}\Big)\\ &\leq\frac{C\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}}{c_{p, q}}\Big(\sum\limits_{k_{1}\in\mathbb{Z}}\frac{2}{(1+|k_{1}|)^{n_{1}}}\Big)\Big(\sum\limits_{k_{2}\in\mathbb{Z}^{d}}\frac{2}{(1+|k_{2}|)^{n_{2}}}\Big) = C^{*}. \end{align} (3.7)

Let $\mathcal{F}$ be the corresponding $\frac{\eta}{C^{*}}$-net for the space $V^{p, q, *}_{\nu, N}$ with respect to the norm $\|\cdot\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}$. Then for any $f\in V^{p, q, *}_{\nu, N}$, there exists a function $\tilde{f}\in\mathcal{F}$ such that $\|f-\tilde{f}\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\leq\frac{\eta}{C^{*}}$. Furthermore,

\begin{equation} \|f-\tilde{f}\|_{L^{\infty, \infty}_{\nu}(\mathbb{R}^{d+1})}\leq C^{*}\|f-\tilde{f}\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\leq\eta. \end{equation} (3.8)

Thus, $\mathcal{F}$ is also an $\eta$-net of $V^{p, q, *}_{\nu, N}$ with respect to the norm $\|\cdot\|_{L^{\infty, \infty}_{\nu}(\mathbb{R}^{d+1})}$, and the cardinality of $\mathcal{F}$ is at most (3.5).

Based on the function $f\in V^{p, q}_{\nu}$, we introduce the random variable

\begin{equation} Z_{j, k}(f) = |(f*\psi)(x_{j}, y_{k})|\nu(x_{j}, y_{k})-\int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}\rho(x, y)|(f*\psi)(x, y)|\nu(x, y)dxdy, \end{equation} (3.9)

where the sampling set $\{(x_{j}, y_{k})\}_{j = 1, \cdots, m; k = 1, \cdots, n}$ is a sequence of independent random variables drawn from a general probability distribution over $C_{R_{1}, R_{2}}$ with the density function $\rho$ satisfying (1.4). Obviously, $\{Z_{j, k}(f), j = 1, \cdots, m; k = 1, \cdots, n\}$ is a sequence of independent random variables with $E[Z_{j, k}(f)] = 0$. Regarding the other properties, we elaborate on them in the following.
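The centering in (3.9) can be illustrated with a one-dimensional toy version (a sketch only: the function `F` and the uniform density are stand-ins, not the paper's $f*\psi$ or $\rho$): subtracting the $\rho$-weighted integral makes the empirical mean of the samples vanish.

```python
import random

random.seed(1)

def F(x):
    # stands in for |(f*psi)(x)| nu(x), restricted to one variable on [-1, 1]
    return 1.0 / (1.0 + x * x)

# uniform density rho = 1/2 on [-1, 1]; E[F(X)] computed by a fine midpoint sum
steps = 100_000
mean_F = sum(F(-1 + (i + 0.5) * (2 / steps)) * (2 / steps) for i in range(steps)) / 2

# Z = F(X) - E[F(X)] is centered: its empirical mean is close to zero
samples = [F(random.uniform(-1, 1)) - mean_F for _ in range(100_000)]
empirical_mean = sum(samples) / len(samples)
print(abs(empirical_mean) < 0.01)
```

Here `mean_F` plays the role of the integral term in (3.9), so each sample of `Z` has expectation zero by construction.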

Lemma 3.4. Let the density function $\rho$ satisfy the condition (1.4) and the convolution function $\psi$ satisfy (1.5). Then for any $f, g\in V^{p, q}_{\nu}$,

\begin{align} &(1)\ \|Z_{j, k}(f)\|_{\infty}\leq C\|f\|_{L^{\infty, \infty}_{\nu}(C_{2R_{1}, 2R_{2}})}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})};\\ &(2)\ \|Z_{j, k}(f)-Z_{j, k}(g)\|_{\infty}\leq 2C\|f-g\|_{L^{\infty, \infty}_{\nu}(C_{2R_{1}, 2R_{2}})}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})};\\ &(3)\ \mathrm{Var}(Z_{j, k}(f))\leq C^{2}\|f\|^{2}_{L^{\infty, \infty}_{\nu}(C_{2R_{1}, 2R_{2}})}\|\psi\|^{2}_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})};\\ &(4)\ \mathrm{Var}(Z_{j, k}(f)-Z_{j, k}(g))\leq C^{2}\|\psi\|^{2}_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\|f-g\|_{L^{\infty, \infty}_{\nu}(C_{2R_{1}, 2R_{2}})}\Big(\|f\|_{L^{\infty, \infty}_{\nu}(C_{2R_{1}, 2R_{2}})}+\|g\|_{L^{\infty, \infty}_{\nu}(C_{2R_{1}, 2R_{2}})}\Big). \end{align}

    Proof.

\begin{align} (1)\quad \|Z_{j, k}(f)\|_{\infty}& = \sup\limits_{x_{j}\in[-R_{1}, R_{1}]}\sup\limits_{y_{k}\in[-R_{2}, R_{2}]^{d}}\Big||(f*\psi)(x_{j}, y_{k})|\nu(x_{j}, y_{k})-\int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}\rho(x, y)|(f*\psi)(x, y)|\nu(x, y)dxdy\Big|\\ &\leq\max\Big\{\sup\limits_{x\in[-R_{1}, R_{1}]}\sup\limits_{y\in[-R_{2}, R_{2}]^{d}}|(f*\psi)(x, y)|\nu(x, y), \int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}\rho(x, y)|(f*\psi)(x, y)|\nu(x, y)dxdy\Big\}\\ &\leq\sup\limits_{x\in[-R_{1}, R_{1}]}\sup\limits_{y\in[-R_{2}, R_{2}]^{d}}|(f*\psi)(x, y)|\nu(x, y)\\ &\leq C\sup\limits_{x\in[-R_{1}, R_{1}]}\sup\limits_{y\in[-R_{2}, R_{2}]^{d}}\int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}(|f|\nu)(x-t, y-s)(|\psi|\omega)(t, s)dtds\\ &\leq C\|f\|_{L^{\infty, \infty}_{\nu}(C_{2R_{1}, 2R_{2}})}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}. \end{align}
\begin{align} (2)\quad \|Z_{j, k}(f)-Z_{j, k}(g)\|_{\infty}& = \sup\limits_{x_{j}\in[-R_{1}, R_{1}]}\sup\limits_{y_{k}\in[-R_{2}, R_{2}]^{d}}\Big||(f*\psi)(x_{j}, y_{k})|\nu(x_{j}, y_{k})-|(g*\psi)(x_{j}, y_{k})|\nu(x_{j}, y_{k})\\ &\quad-\int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}\rho(x, y)\big(|(f*\psi)(x, y)|\nu(x, y)-|(g*\psi)(x, y)|\nu(x, y)\big)dxdy\Big|\\ &\leq\sup\limits_{x\in[-R_{1}, R_{1}]}\sup\limits_{y\in[-R_{2}, R_{2}]^{d}}|((f-g)*\psi)(x, y)|\nu(x, y)+\int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}\rho(x, y)|((f-g)*\psi)(x, y)|\nu(x, y)dxdy\\ &\leq 2\sup\limits_{x\in[-R_{1}, R_{1}]}\sup\limits_{y\in[-R_{2}, R_{2}]^{d}}\big(|((f-g)*\psi)(x, y)|\nu(x, y)\big)\\ &\leq 2C\sup\limits_{x\in[-R_{1}, R_{1}]}\sup\limits_{y\in[-R_{2}, R_{2}]^{d}}\int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}(|f-g|\nu)(x-t, y-s)(|\psi|\omega)(t, s)dtds\\ &\leq 2C\|f-g\|_{L^{\infty, \infty}_{\nu}(C_{2R_{1}, 2R_{2}})}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}. \end{align}
\begin{align} (3)\quad \mathrm{Var}(Z_{j, k}(f))& = E\big[Z_{j, k}(f)^{2}\big]-\big(E[Z_{j, k}(f)]\big)^{2} = E\big[Z_{j, k}(f)^{2}\big]\\ & = \int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}\rho(t, s)\Big(|(f*\psi)(t, s)|\nu(t, s)-\int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}\rho(x, y)|(f*\psi)(x, y)|\nu(x, y)dxdy\Big)^{2}dtds\\ &\leq\int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}\rho(t, s)\big(|(f*\psi)(t, s)|\nu(t, s)\big)^{2}dtds\\ &\leq C^{2}\int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}\rho(t, s)\Big(\int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}(|f|\nu)(t-x, s-y)(|\psi|\omega)(x, y)dxdy\Big)^{2}dtds\\ &\leq C^{2}\|f\|^{2}_{L^{\infty, \infty}_{\nu}(C_{2R_{1}, 2R_{2}})}\|\psi\|^{2}_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}. \end{align}
\begin{align} (4)\quad \mathrm{Var}(Z_{j, k}(f)-Z_{j, k}(g))& = E\big[(Z_{j, k}(f)-Z_{j, k}(g))^{2}\big]-\big(E[Z_{j, k}(f)-Z_{j, k}(g)]\big)^{2} = E\big[(Z_{j, k}(f)-Z_{j, k}(g))^{2}\big]\\ & = \int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}\rho(t, s)\Big[\big(|(f*\psi)(t, s)|-|(g*\psi)(t, s)|\big)\nu(t, s)\\ &\quad-\int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}\rho(x, y)\big(|(f*\psi)(x, y)|-|(g*\psi)(x, y)|\big)\nu(x, y)dxdy\Big]^{2}dtds\\ &\leq\int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}\rho(t, s)\Big[\big(|(f*\psi)(t, s)|-|(g*\psi)(t, s)|\big)\nu(t, s)\Big]^{2}dtds\\ &\leq\int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}\rho(t, s)\big(|((f-g)*\psi)(t, s)|\nu(t, s)\big)\Big(\big(|(f*\psi)(t, s)|+|(g*\psi)(t, s)|\big)\nu(t, s)\Big)dtds\\ &\leq C^{2}\|f-g\|_{L^{\infty, \infty}_{\nu}(C_{2R_{1}, 2R_{2}})}\Big(\|f\|_{L^{\infty, \infty}_{\nu}(C_{2R_{1}, 2R_{2}})}+\|g\|_{L^{\infty, \infty}_{\nu}(C_{2R_{1}, 2R_{2}})}\Big)\|\psi\|^{2}_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}. \end{align}

Lemma 3.5. [10] Let $Z_{j, k}$ be independent random variables with expected values $E[Z_{j, k}] = 0$, $\mathrm{Var}(Z_{j, k})\leq\sigma^{2}$ and $|Z_{j, k}|\leq M$ for all $j = 1, \cdots, m$ and $k = 1, \cdots, n$. Then for any $\gamma\geq 0$,

\begin{equation} \mathrm{Prob}\Big(\Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}Z_{j, k}\Big|\geq\gamma\Big)\leq 2\exp\Big(-\frac{\gamma^{2}}{2mn\sigma^{2}+\frac{2}{3}M\gamma}\Big). \end{equation} (3.10)
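The Bernstein-type bound (3.10) can be compared against a Monte Carlo estimate (an illustration, not from the paper; the choice of uniform variables on $[-1, 1]$, for which $\sigma^{2} = 1/3$ and $M = 1$, is an assumption for the experiment):

```python
import math
import random

random.seed(0)
mn, M, sigma2, gamma = 200, 1.0, 1.0 / 3.0, 20.0  # mn centered uniforms on [-1, 1]

def bernstein_bound(gamma, mn, sigma2, M):
    # right-hand side of (3.10)
    return 2 * math.exp(-gamma ** 2 / (2 * mn * sigma2 + (2.0 / 3.0) * M * gamma))

trials = 10_000
hits = sum(
    1 for _ in range(trials)
    if abs(sum(random.uniform(-1, 1) for _ in range(mn))) >= gamma
)
empirical = hits / trials
print(empirical, bernstein_bound(gamma, mn, sigma2, M))
```

As expected, the empirical tail probability sits well below the Bernstein bound, which is loose but dimension-free in the way the proofs here need.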

Lemma 3.6. Assume that $\{(x_{j}, y_{k})\}_{j = 1, \cdots, m; k = 1, \cdots, n}$ is a sequence of independent random variables drawn from a general probability distribution over $C_{R_{1}, R_{2}}$ with the density function $\rho$ satisfying (1.4). Then for any $m, n\in\mathbb{N}$, there exist positive constants $A, B > 0$ such that

\begin{equation} \mathrm{Prob}\Big(\sup\limits_{f\in V^{p, q, *}_{\nu, N}}\Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}Z_{j, k}(f)\Big|\geq\gamma\Big)\leq A\exp\Big(-B\frac{\gamma^{2}}{12mnC^{*}D+2\gamma}\Big), \end{equation} (3.11)

where $A$ is of order $\exp(\mathcal{C}(2N+1)^{d+1})$ with $\mathcal{C} = (5r)\ln 2+2r\ln(C^{*}+1)+r\ln(4C^{*}+1)$, $B = \min\big\{\frac{3}{2C^{*}D}, \frac{\sqrt{2}}{1296D}\big\}$ and $D = C\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}$.

Proof. For given $l\in\mathbb{N}$, we construct a $2^{-l}$-covering of $V^{p, q, *}_{\nu, N}$ with respect to the norm $\|\cdot\|_{L^{\infty, \infty}_{\nu}(\mathbb{R}^{d+1})}$. Let $\mathcal{A}(2^{-l})$ be the corresponding $2^{-l}$-net for $l = 1, 2, \cdots$. By Lemma 3.3, $\mathcal{A}(2^{-l})$ has cardinality at most $\mathcal{N}(V^{p, q, *}_{\nu, N}, 2^{-l})$. Suppose that $f_{l}$ is the function in $\mathcal{A}(2^{-l})$ that is closest to $f\in V^{p, q, *}_{\nu, N}$ with respect to the norm $\|\cdot\|_{L^{\infty, \infty}_{\nu}(\mathbb{R}^{d+1})}$; then $\|f-f_{l}\|_{L^{\infty, \infty}_{\nu}(\mathbb{R}^{d+1})}\leq 2^{-l}\rightarrow 0$ as $l\rightarrow\infty$. Define

Z_{j, k}(f) = Z_{j, k}(f_{1})+\sum\limits_{l = 2}^{\infty}\big(Z_{j, k}(f_{l})-Z_{j, k}(f_{l-1})\big).

By Lemma 3.4, the random variable $Z_{j, k}(f)$ is well defined.

If $\sup_{f\in V^{p, q, *}_{\nu, N}}\big|\sum_{j = 1}^{m}\sum_{k = 1}^{n}Z_{j, k}(f)\big|\geq\gamma$, then the event $\omega_{l}$ must hold for some $l\geq 1$, where

\omega_{1} = \Big\{\text{there exists } f_{1}\in\mathcal{A}(2^{-1}) \text{ such that } \Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}Z_{j, k}(f_{1})\Big|\geq\gamma/2\Big\}

and for $l\geq 2$,

\omega_{l} = \Big\{\text{there exist } f_{l}\in\mathcal{A}(2^{-l}), f_{l-1}\in\mathcal{A}(2^{-(l-1)}) \text{ with } \|f_{l}-f_{l-1}\|_{L^{\infty, \infty}_{\nu}(\mathbb{R}^{d+1})}\leq 3\cdot 2^{-l}, \text{ such that } \Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}\big(Z_{j, k}(f_{l})-Z_{j, k}(f_{l-1})\big)\Big|\geq\frac{\gamma}{2l^{2}}\Big\}.

If this were not the case, then with $f_{0} = 0$ we would have

\Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}Z_{j, k}(f)\Big|\leq\sum\limits_{l = 1}^{\infty}\Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}\big(Z_{j, k}(f_{l})-Z_{j, k}(f_{l-1})\big)\Big| < \sum\limits_{l = 1}^{\infty}\frac{\gamma}{2l^{2}} = \frac{\pi^{2}\gamma}{12} < \gamma.

In the following, we estimate the probability of $\omega_{1}$. By Lemma 3.5, for each fixed function $f\in\mathcal{A}(2^{-1})$,

\begin{align} \mathrm{Prob}\Big(\Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}Z_{j, k}(f)\Big|\geq\frac{\gamma}{2}\Big)&\leq 2\exp\Big(-\frac{(\gamma/2)^{2}}{2mn\mathrm{Var}(Z_{j, k}(f))+\frac{2}{3}\cdot\frac{\gamma}{2}\|Z_{j, k}(f)\|_{\infty}}\Big)\\ &\leq 2\exp\Big(-\frac{3\gamma^{2}}{2\big(12mn(C^{*}D)^{2}+2\gamma C^{*}D\big)}\Big) = 2\exp\Big(-\frac{3\gamma^{2}}{2C^{*}D(12mnC^{*}D+2\gamma)}\Big). \end{align}

    Moreover, by the result of Lemma 3.3, there are at most

\mathcal{N}\Big(V^{p, q, *}_{\nu, N}, \frac{1}{2}\Big)\leq\exp\Big(r(2N+1)^{d+1}\ln(4C^{*}+1)\Big)

functions in $\mathcal{A}(2^{-1})$. Therefore, the probability of $\omega_{1}$ is bounded by

\begin{equation} \mathrm{Prob}(\omega_{1})\leq 2\exp\Big(r(2N+1)^{d+1}\ln(4C^{*}+1)\Big)\exp\Big(-\frac{3\gamma^{2}}{2C^{*}D(12mnC^{*}D+2\gamma)}\Big). \end{equation} (3.12)

By a similar method, we obtain the following estimates for the probabilities of $\omega_{l}$, $l\geq 2$. In fact, for $f_{l}\in\mathcal{A}(2^{-l})$, $f_{l-1}\in\mathcal{A}(2^{-(l-1)})$ with $\|f_{l}-f_{l-1}\|_{L^{\infty, \infty}_{\nu}(\mathbb{R}^{d+1})}\leq 3\cdot 2^{-l}$,

\begin{align} \mathrm{Prob}\Big(\Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}\big(Z_{j, k}(f_{l})-Z_{j, k}(f_{l-1})\big)\Big|\geq\frac{\gamma}{2l^{2}}\Big)&\leq 2\exp\Big(-\frac{(\frac{\gamma}{2l^{2}})^{2}}{2mn\mathrm{Var}(Z_{j, k}(f_{l})-Z_{j, k}(f_{l-1}))+\frac{2}{3}\|Z_{j, k}(f_{l})-Z_{j, k}(f_{l-1})\|_{\infty}\frac{\gamma}{2l^{2}}}\Big)\\ &\leq 2\exp\Big(-\frac{(\frac{\gamma}{2l^{2}})^{2}}{2mn\cdot 3\cdot 2^{-l}D^{2}\cdot 2C^{*}+\frac{2}{3}\cdot 2D\cdot 3\cdot 2^{-l}\cdot\frac{\gamma}{2l^{2}}}\Big)\\ &\leq 2\exp\Big(-\vartheta\frac{2^{l}}{l^{4}}\Big), \end{align} (3.13)

where $\vartheta: = \frac{\gamma^{2}}{4D(12mnDC^{*}+2\gamma)}$. There are at most $\mathcal{N}(V^{p, q, *}_{\nu, N}, 2^{-l})$ functions in $\mathcal{A}(2^{-l})$ and $\mathcal{N}(V^{p, q, *}_{\nu, N}, 2^{-l+1})$ functions in $\mathcal{A}(2^{-(l-1)})$. Therefore, we have

\begin{align} \mathrm{Prob}\Big(\bigcup\limits_{l = 2}^{\infty}\omega_{l}\Big)&\leq\sum\limits_{l = 2}^{\infty}\mathcal{N}(V^{p, q, *}_{\nu, N}, 2^{-l})\mathcal{N}(V^{p, q, *}_{\nu, N}, 2^{-l+1})\cdot 2\exp\Big(-\vartheta\frac{2^{l}}{l^{4}}\Big)\\ &\leq\sum\limits_{l = 2}^{\infty}2\exp\Big(r(2N+1)^{d+1}\ln(2^{l+1}C^{*}+1)\Big)\exp\Big(r(2N+1)^{d+1}\ln(2^{l}C^{*}+1)\Big)\exp\Big(-\vartheta\frac{2^{l}}{l^{4}}\Big)\\ &\leq\sum\limits_{l = 2}^{\infty}2\exp\Big(2r(2N+1)^{d+1}\big[(l+1)\ln 2+\ln(C^{*}+1)\big]-\vartheta\frac{2^{l}}{l^{4}}\Big)\\ & = \sum\limits_{l = 2}^{\infty}2\exp\Big(\big[(2r\ln 2)(2N+1)^{d+1}\big]l+(2r\ln 2)(2N+1)^{d+1}+2r(2N+1)^{d+1}\ln(C^{*}+1)-\vartheta\frac{2^{l}}{l^{4}}\Big)\\ & = \mathcal{C}_{1}\sum\limits_{l = 2}^{\infty}\exp\Big(\mathcal{C}_{2}l-\vartheta\frac{2^{l}}{l^{4}}\Big) = \mathcal{C}_{1}\sum\limits_{l = 2}^{\infty}\exp\Big(-\vartheta 2^{\frac{l}{2}}\Big(\frac{2^{\frac{l}{2}}}{l^{4}}-\frac{\mathcal{C}_{2}l}{\vartheta 2^{\frac{l}{2}}}\Big)\Big), \end{align}

where $\mathcal{C}_{1} = 2\exp\big((2r\ln 2)(2N+1)^{d+1}+2r(2N+1)^{d+1}\ln(C^{*}+1)\big)$ and $\mathcal{C}_{2} = (2r\ln 2)(2N+1)^{d+1}$.

    Notice that

\min\limits_{l\geq 2}\frac{2^{\frac{l}{2}}}{l^{4}} = \frac{1}{324}, \qquad \max\limits_{l\geq 2}\frac{l}{2^{\frac{l}{2}}} = \frac{3\sqrt{2}}{4},

    then

\frac{2^{\frac{l}{2}}}{l^{4}}-\frac{\mathcal{C}_{2}l}{\vartheta 2^{\frac{l}{2}}}\geq\frac{1}{324}-\frac{(6\sqrt{2}\ln 2)Dr(2N+1)^{d+1}(12mnDC^{*}+2\gamma)}{\gamma^{2}}.
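The two extrema used in this step can be checked numerically (an illustration only): the minimum of $2^{l/2}/l^{4}$ over integers $l\geq 2$ is attained at $l = 12$, where it equals $2^{6}/12^{4} = 1/324$, and the maximum of $l/2^{l/2}$ is attained at $l = 3$, where it equals $3/2^{3/2} = 3\sqrt{2}/4$.

```python
import math

# scan a wide integer range; both sequences are eventually monotone,
# so the extrema over l >= 2 are found well inside the range
vals_min = {l: 2 ** (l / 2) / l ** 4 for l in range(2, 200)}  # for min 2^{l/2}/l^4
vals_max = {l: l / 2 ** (l / 2) for l in range(2, 200)}       # for max l/2^{l/2}
l_min = min(vals_min, key=vals_min.get)
l_max = max(vals_max, key=vals_max.get)

print(l_min, vals_min[l_min], 1 / 324)             # minimizer and value vs 1/324
print(l_max, vals_max[l_max], 3 * math.sqrt(2) / 4)  # maximizer and value vs 3*sqrt(2)/4
```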

    We first consider the case that

\begin{equation} \frac{1}{324}-\frac{(6\sqrt{2}\ln 2)Dr(2N+1)^{d+1}(12mnDC^{*}+2\gamma)}{\gamma^{2}} > 0. \end{equation} (3.14)

Notice that for $s, a > 0$ with $a > 1$, one has $\sum_{l = 2}^{\infty}e^{-sa^{l}}\leq\frac{e^{-sa}}{sa\ln a}$ (see [17]), and let

s = \vartheta\Big(\frac{1}{324}-\frac{(6\sqrt{2}\ln 2)Dr(2N+1)^{d+1}(12mnDC^{*}+2\gamma)}{\gamma^{2}}\Big), \quad a = 2^{\frac{1}{2}};

    we can obtain

\begin{align} \mathrm{Prob}\Big(\bigcup\limits_{l = 2}^{\infty}\omega_{l}\Big)&\leq\mathcal{C}_{1}\exp\Big(-\sqrt{2}\vartheta\Big(\frac{1}{324}-\frac{(6\sqrt{2}\ln 2)Dr(2N+1)^{d+1}(12mnDC^{*}+2\gamma)}{\gamma^{2}}\Big)\Big)\Big[\frac{\ln 2}{\sqrt{2}}\vartheta\Big(\frac{1}{324}-\frac{(6\sqrt{2}\ln 2)Dr(2N+1)^{d+1}(12mnDC^{*}+2\gamma)}{\gamma^{2}}\Big)\Big]^{-1}\\ & = \mathcal{C}_{1}\exp\Big(\frac{\sqrt{2}\vartheta(6\sqrt{2}\ln 2)Dr(2N+1)^{d+1}(12mnDC^{*}+2\gamma)}{\gamma^{2}}\Big)\Big[\frac{\ln 2}{\sqrt{2}}\vartheta\Big(\frac{1}{324}-\frac{(6\sqrt{2}\ln 2)Dr(2N+1)^{d+1}(12mnDC^{*}+2\gamma)}{\gamma^{2}}\Big)\Big]^{-1}\exp\Big(-\frac{\sqrt{2}\vartheta}{324}\Big)\\ & = 2\exp\Big(\big((2r)\ln 2+2r\ln(C^{*}+1)+(3r)\ln 2\big)(2N+1)^{d+1}\Big)\Big[\frac{\ln 2}{\sqrt{2}}\vartheta\Big(\frac{1}{324}-\frac{(6\sqrt{2}\ln 2)Dr(2N+1)^{d+1}(12mnDC^{*}+2\gamma)}{\gamma^{2}}\Big)\Big]^{-1}\exp\Big(-\frac{\sqrt{2}\vartheta}{324}\Big)\\ & = 2\exp\Big(\big((5r)\ln 2+2r\ln(C^{*}+1)\big)(2N+1)^{d+1}\Big)\Big[\frac{\ln 2}{\sqrt{2}}\vartheta\Big(\frac{1}{324}-\frac{(6\sqrt{2}\ln 2)Dr(2N+1)^{d+1}(12mnDC^{*}+2\gamma)}{\gamma^{2}}\Big)\Big]^{-1}\exp\Big(-\frac{\sqrt{2}\gamma^{2}}{1296D(12mnDC^{*}+2\gamma)}\Big). \end{align} (3.15)

    Combining the results of (3.12) and (3.15), we can obtain that

\begin{equation} \mathrm{Prob}\Big(\sup\limits_{f\in V^{p, q, *}_{\nu, N}}\Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}Z_{j, k}(f)\Big|\geq\gamma\Big)\leq A\exp\Big(-B\frac{\gamma^{2}}{12mnDC^{*}+2\gamma}\Big), \end{equation} (3.16)

where $A$ is of order $\exp(\mathcal{C}(2N+1)^{d+1})$ with $\mathcal{C} = (5r)\ln 2+2r\ln(C^{*}+1)+r\ln(4C^{*}+1)$ and $B = \min\big\{\frac{3}{2C^{*}D}, \frac{\sqrt{2}}{1296D}\big\}$.

    If

\begin{equation} \frac{1}{324}-\frac{(6\sqrt{2}\ln 2)Dr(2N+1)^{d+1}(12mnDC^{*}+2\gamma)}{\gamma^{2}}\leq 0, \end{equation} (3.17)

then we can choose $\mathcal{C}\geq 324B(6\sqrt{2}\ln 2)Dr$ such that $A\exp\big(-B\frac{\gamma^{2}}{12mnDC^{*}+2\gamma}\big)\geq 1$, so that (3.16) holds trivially.

In order to obtain the random convolution sampling stability for signals in multiply generated subspaces of the weighted mixed Lebesgue space, we also need the corresponding sampling stability for some subset of $V^{p, q}_{\nu, N}$.

Theorem 4.1. Assume that $\{(x_{j}, y_{k})\}_{j = 1, \cdots, m; k = 1, \cdots, n}$ is a sequence of independent random variables drawn from a general probability distribution over $C_{R_{1}, R_{2}}$ with the density function $\rho$ satisfying (1.4) and the convolution function $\psi$ satisfying (1.5). Then for any $m, n\in\mathbb{N}$, there exist positive constants $0 < \theta, \gamma < 1$ such that

A_{1}\{\theta\}: = m^{\frac{1}{p}}n^{\frac{1}{q}}(1-\gamma)(C_{\rho, l}\theta G^{-1})^{pq} > 0,
B_{1}\{\theta\}: = mn\Big(\gamma(C_{\rho, l}\theta G^{-1})^{pq}+CC^{*}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\Big) > 0,

    where

G: = (2R_{1})^{\frac{q-1}{pq}}(2R_{2})^{\frac{d(p-1)}{pq}}\Big(CC_{\rho, u}C^{*}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\Big)^{\frac{pq-1}{pq}}.

    Furthermore, the random convolution sampling stability

\begin{equation} A_{1}\{\theta\}\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\leq\Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}\leq B_{1}\{\theta\}\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})} \end{equation} (4.1)

holds for all $f\in V^{p, q, \diamond}_{\nu, N}: = \{f\in V^{p, q}_{\nu, N}: \|f*\psi\|_{L^{p, q}_{\nu}(C_{R_{1}, R_{2}})}\geq\theta\}$ with probability at least

1-A\exp\Big(-B\frac{\big(\gamma mn(C_{\rho, l}\theta G^{-1})^{pq}\big)^{2}}{12mnC^{*}C\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}+2\gamma mn(C_{\rho, l}\theta G^{-1})^{pq}}\Big),

    where A,B are defined as in Lemma 3.6.

Proof. Obviously, every function $f\in V^{p, q, \diamond}_{\nu, N}$ satisfies the random convolution sampling stability (4.1) if and only if $f/\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}$ does. Thus, we assume that $f\in V^{p, q, \diamond, *}_{\nu, N}: = \{f\in V^{p, q, \diamond}_{\nu, N}: \|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})} = 1\}$.

    Define the event

\begin{equation} \mathcal{H} = \Big\{\sup\limits_{f\in V^{p, q, \diamond, *}_{\nu, N}}\Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}Z_{j, k}(f)\Big|\geq\gamma mn(C_{\rho, l}\theta G^{-1})^{pq}\Big\}. \end{equation} (4.2)

    Its complement is

\begin{align} \widetilde{\mathcal{H}} = \Big\{&-\gamma mn(C_{\rho, l}\theta G^{-1})^{pq}+mn\int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}\rho(x, y)|(f*\psi)(x, y)|\nu(x, y)dxdy\\ &\leq\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}|(f*\psi)(x_{j}, y_{k})|\nu(x_{j}, y_{k})\\ &\leq\gamma mn(C_{\rho, l}\theta G^{-1})^{pq}+mn\int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}\rho(x, y)|(f*\psi)(x, y)|\nu(x, y)dxdy, \quad f\in V^{p, q, \diamond, *}_{\nu, N}\Big\}. \end{align} (4.3)

Writing $g(x, y): = \rho(x, y)|(f*\psi)(x, y)|\nu(x, y)$ with $f\in V^{p, q, \diamond, *}_{\nu, N}$, we can obtain

\begin{align} \int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}|g(x, y)|dxdy& = \int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}\Big|\rho(x, y)\Big[\Big|\int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}f(x-t, y-s)\psi(t, s)dtds\Big|\Big]\nu(x, y)\Big|dxdy\\ &\leq C\int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}\Big(\rho(x, y)\int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}|(f\nu)(x-t, y-s)||(\psi\omega)(t, s)|dtds\Big)dxdy\\ &\leq C\|f\|_{L^{\infty, \infty}_{\nu}(C_{2R_{1}, 2R_{2}})}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\\ &\leq C\|f\|_{L^{\infty, \infty}_{\nu}(\mathbb{R}^{d+1})}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\leq CC^{*}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}. \end{align} (4.4)

    Furthermore,

\begin{equation} \Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}\leq\Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{1, 1}_{\nu}}. \end{equation} (4.5)

    Combining the results of (4.4) and (4.5), we can obtain

\begin{equation} \Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}\leq mn\Big(\gamma(C_{\rho, l}\theta G^{-1})^{pq}+CC^{*}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\Big). \end{equation} (4.6)

    In addition,

\begin{align} \|g\|_{L^{p, q}(C_{R_{1}, R_{2}})}& = \Big(\int_{-R_{1}}^{R_{1}}\Big(\int_{[-R_{2}, R_{2}]^{d}}|g(x, y)|^{q}dy\Big)^{\frac{p}{q}}dx\Big)^{\frac{1}{p}}\\ &\leq\big(\|g\|_{L^{\infty, \infty}(C_{R_{1}, R_{2}})}\big)^{\frac{q-1}{q}}\Big(\int_{-R_{1}}^{R_{1}}\Big(\int_{[-R_{2}, R_{2}]^{d}}|g(x, y)|dy\Big)^{\frac{p}{q}}dx\Big)^{\frac{1}{p}}\\ &\leq(2R_{1})^{\frac{q-1}{pq}}\big(\|g\|_{L^{\infty, \infty}(C_{R_{1}, R_{2}})}\big)^{\frac{q-1}{q}}\Big(\int_{-R_{1}}^{R_{1}}\Big(\int_{[-R_{2}, R_{2}]^{d}}|g(x, y)|dy\Big)^{p}dx\Big)^{\frac{1}{pq}}\\ &\leq(2R_{1})^{\frac{q-1}{pq}}(2R_{2})^{\frac{d(p-1)}{pq}}\big(\|g\|_{L^{\infty, \infty}(C_{R_{1}, R_{2}})}\big)^{\frac{q-1}{q}}\big(\|g\|_{L^{\infty, \infty}(C_{R_{1}, R_{2}})}\big)^{\frac{p-1}{pq}}\|g\|^{\frac{1}{pq}}_{L^{1, 1}(C_{R_{1}, R_{2}})}\\ &\leq(2R_{1})^{\frac{q-1}{pq}}(2R_{2})^{\frac{d(p-1)}{pq}}\Big(CC_{\rho, u}\|f\|_{L^{\infty, \infty}_{\nu}(C_{2R_{1}, 2R_{2}})}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\Big)^{\frac{pq-1}{pq}}\|g\|^{\frac{1}{pq}}_{L^{1, 1}(C_{R_{1}, R_{2}})}\\ &\leq(2R_{1})^{\frac{q-1}{pq}}(2R_{2})^{\frac{d(p-1)}{pq}}\Big(CC_{\rho, u}C^{*}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\Big)^{\frac{pq-1}{pq}}\|g\|^{\frac{1}{pq}}_{L^{1, 1}(C_{R_{1}, R_{2}})} = G\|g\|^{\frac{1}{pq}}_{L^{1, 1}(C_{R_{1}, R_{2}})}. \end{align}

    Thus,

    \begin{align} \|g\|_{L^{1, 1}(C_{R_{1}, R_{2}})}&\geq\frac{\|g\|^{pq}_{L^{p, q}(C_{R_{1}, R_{2}})}}{G^{pq}}\\ &\geq\frac{\Big[C_{\rho, l}\Big(\int_{[-R_{1}, R_{1}]}\Big(\int_{[-R_{2}, R_{2}]^{d}}\Big(|(f*\psi)(x, y)|\nu(x, y)\Big)^{q}dy\Big)^{\frac{p}{q}}dx\Big)^{\frac{1}{p}}\Big]^{pq}}{G^{pq}}\\ & = \frac{\Big[C_{\rho, l}\|(f*\psi)\|_{L^{p, q}_{\nu}(C_{R_{1}, R_{2}})}\Big]^{pq}}{G^{pq}}\geq (C_{\rho, l}\theta G^{-1})^{pq}. \end{align} (4.7)

    By Hölder inequality, we have

\begin{equation} \Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{1, 1}_{\nu}} \leq m^{\frac{p-1}{p}}n^{\frac{q-1}{q}}\Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}. \end{equation} (4.8)
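The Hölder estimate (4.8) compares the $\ell^{1, 1}$ norm of an $m\times n$ array with its $\ell^{p, q}$ norm, and can be verified on random data (an illustration only; the array, $m$, $n$, $p$, $q$ below are arbitrary choices):

```python
import random

def mixed_seq_norm(c, p, q):
    # ||c||_{l^{p,q}}: l^q norm over each row (index k), then l^p norm over rows (index j)
    inner = [sum(abs(v) ** q for v in row) ** (1.0 / q) for row in c]
    return sum(u ** p for u in inner) ** (1.0 / p)

random.seed(0)
m, n, p, q = 7, 5, 2.0, 3.0
c = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]

lhs = mixed_seq_norm(c, 1, 1)  # l^{1,1} norm
rhs = m ** ((p - 1) / p) * n ** ((q - 1) / q) * mixed_seq_norm(c, p, q)
print(lhs <= rhs + 1e-12)
```

The factor $m^{(p-1)/p}n^{(q-1)/q}$ comes from applying Hölder's inequality once in each index, which is exactly how (4.8) is used in the lower bound (4.9).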

    Combining the results of (4.7) and (4.8), we can obtain

\begin{equation} m^{-\frac{p-1}{p}}n^{-\frac{q-1}{q}}\Big[-\gamma mn(C_{\rho, l}\theta G^{-1})^{pq} +mn(C_{\rho, l}\theta G^{-1})^{pq}\Big]\leq \Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}. \end{equation} (4.9)

    Followed by the results of (4.3), (4.6) and (4.9), the event

    \begin{align} \overline{\mathcal{H}} = \Big\{&m^{-\frac{p-1}{p}}n^{-\frac{q-1}{q}}\Big[mn(C_{\rho, l}\theta G^{-1})^{pq}-\gamma mn(C_{\rho, l}\theta G^{-1})^{pq}\Big]\\ &\leq \Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}\\ &\leq \Big[\gamma mn (C_{\rho, l}\theta G^{-1})^{pq}+mnCC^{*}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\Big], \quad f\in V^{p, q, \diamond, *}_{\nu, N}\Big\}\\ = \Big\{&m^{-\frac{p-1}{p}}n^{-\frac{q-1}{q}}\Big[mn(C_{\rho, l}\theta G^{-1})^{pq}-\gamma mn(C_{\rho, l}\theta G^{-1})^{pq}\Big]\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\\ &\leq \Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}\\ &\leq \Big[\gamma mn (C_{\rho, l}\theta G^{-1})^{pq}+mnCC^{*}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\Big]\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}, \quad f\in V^{p, q, \diamond}_{\nu, N}\Big\} \end{align} (4.10)

    contains the event \widetilde{\mathcal{H}} .

    By Lemma 3.6, we can obtain

\begin{align} Prob(\overline{\mathcal{H}})&\geq Prob(\widetilde{\mathcal{H}})\geq 1-Prob(\mathcal{H})\\ &\geq 1-Prob\Big(\sup\limits_{f\in V_{\nu, N}^{p, q, \diamond, *}}\Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}Z_{j, k}(f)\Big|\geq \gamma mn (C_{\rho, l}\theta G^{-1})^{pq}\Big) \\ &\geq 1-Prob\Big(\sup\limits_{f\in V_{\nu, N}^{p, q, *}}\Big|\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}Z_{j, k}(f)\Big|\geq \gamma mn(C_{\rho, l}\theta G^{-1})^{pq}\Big) \\ &\geq 1-A\exp\Big(-B\frac{\Big(\gamma mn(C_{\rho, l}\theta G^{-1})^{pq}\Big)^{2}}{12mnC^{*}C\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}+2\gamma mn(C_{\rho, l}\theta G^{-1})^{pq}}\Big). \end{align}

    Theorem 4.2. Assume that \{(x_{j}, y_{k})\}_{j = 1, \cdots, m; k = 1, \cdots, n} is a sequence of independent random variables which are drawn from a general probability distribution over C_{R_{1}, R_{2}} with the density function \rho satisfying (1.4) and the convolution function \psi satisfies (1.5). Then for any m, n\in\mathbb{N} , there exist positive constants 0 < \gamma, \varepsilon < 1 such that

    A_{2}: = A_{1}\Big\{(\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\Big\}(1-\delta-\varepsilon)-\frac{C\varepsilon mn\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}} > 0

    and

    B_{2}: = B_{1}\Big\{(\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\Big\}\frac{C_{p, q}}{c_{p, q}}+\frac{C\varepsilon mn\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}} > 0,

    where A_{1}, B_{1} are defined as in Theorem 4.1. Furthermore, the random convolution sampling stability

    \begin{equation} A_{2}\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\leq\Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}\leq B_{2}\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})} \end{equation} (4.11)

    holds for any f\in V_{\nu, R_{1}, R_{2}}^{p, q} with the probability at least

    \begin{equation} 1-A\exp\Big(-B\frac{\Big(\gamma mn \Big[C_{\rho, l}(\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})} G^{-1}\Big]^{pq}\Big)^{2}}{12mnC^{*}C\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}+2\gamma mn \Big[C_{\rho, l}(\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})} G^{-1}\Big]^{pq}}\Big), \end{equation} (4.12)

    where the constants A, B, G are defined as in Theorem 4.1 with

    N\geq\max\{N_{1}(\varepsilon)+2R+1, N_{2}(\frac{\varepsilon}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}})+2R+1\}, \quad R = \max\{R_{1}, \sqrt{d}R_{2}\}.

    Proof. It is obvious that every function f\in V_{\nu, R_{1}, R_{2}}^{p, q} satisfies the random convolution sampling stability (4.11) if and only if f/\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})} also does. Thus, we assume that f\in V_{\nu, R_{1}, R_{2}}^{p, q, *} .

    By Lemma 2.2, for any f\in V_{\nu, R_{1}, R_{2}}^{p, q, *} and \varepsilon > 0 ,

    \begin{align} &\|f-f_{N}\|_{L^{\infty, \infty}_{\nu}(C_{2R_{1}, 2R_{2}})}\leq \frac{\varepsilon}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}}, \end{align} (4.13)
    \begin{align} &\|f-f_{N}\|_{L^{p, q}_{\nu}(C_{2R_{1}, 2R_{2}})}\leq \varepsilon, \end{align} (4.14)

    if N\geq\max\{N_{1}(\varepsilon)+2R+1, N_{2}(\frac{\varepsilon}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}})+2R+1\} and R = \max\{R_{1}, \sqrt{d}R_{2}\}.

    Moreover, by the result of Lemma 3.1,

    \begin{align} &\quad\|f*\psi-f_{N}*\psi\|_{L^{p, q}_{\nu}(C_{R_{1}, R_{2}})}\\ &\leq C\|f-f_{N}\|_{L^{p, q}_{\nu}(C_{R_{1}, R_{2}})}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\\ &\leq C\|f-f_{N}\|_{L^{p, q}_{\nu}(C_{2R_{1}, 2R_{2}})}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\\ &\leq C\varepsilon\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}. \end{align}

    Thus,

    \begin{align} &\quad\|f_{N}*\psi\|_{L^{p, q}_{\nu}(C_{R_{1}, R_{2}})}\\ &\leq C\varepsilon\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}+\|f*\psi\|_{L^{p, q}_{\nu}(C_{R_{1}, R_{2}})}\\ &\leq C\varepsilon\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}+\|f*\psi\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\\ &\leq C\varepsilon\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}+C\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}. \end{align} (4.15)

    Furthermore, by inequality (1.9),

    \begin{align} &\quad\|f_{N}*\psi\|_{L^{p, q}_{\nu}(C_{R_{1}, R_{2}})}\\ &\geq -C\varepsilon\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}+\|f*\psi\|_{L^{p, q}_{\nu}(C_{R_{1}, R_{2}})}\\ &\geq -C\varepsilon\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}+\beta\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}. \end{align} (4.16)

    Combining the results of (4.15) and (4.16), we can obtain for any function f\in V_{\nu, R_{1}, R_{2}}^{p, q, *} ,

    \begin{equation} (\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\leq\|f_{N}*\psi\|_{L^{p, q}_{\nu}(C_{R_{1}, R_{2}})}\leq C\varepsilon\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}+C\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}. \end{equation} (4.17)

    By Theorem 4.1, the inequality

    \begin{align*} A_{1}\Big\{(\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\Big\}&\|f_{N}\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\leq\Big\|\Big\{(f_{N}*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}\nonumber\\ &\leq B_{1}\Big\{(\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\Big\}\|f_{N}\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\nonumber \end{align*}

    holds with the probability at least

\begin{equation} 1-A\exp\Big(-B\frac{\Big(\gamma mn \Big[C_{\rho, l}(\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})} G^{-1}\Big]^{pq}\Big)^{2}}{12mnC^{*}C\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}+2\gamma mn \Big[C_{\rho, l}(\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})} G^{-1}\Big]^{pq}}\Big). \end{equation} (4.18)

    At the same time, by inequality (4.13),

    \begin{align} &\quad\Big\|\Big\{(f*\psi)(x_{j}, y_{k})-(f_{N}*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}\\ &\leq\Big\|\Big\{(|f-f_{N}|*|\psi|)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}\\ & = \Big(\sum\limits_{j = 1}^{m}\Big(\sum\limits_{k = 1}^{n}\Big|(|f-f_{N}|*|\psi|)(x_{j}, y_{k})\nu(x_{j}, y_{k})\Big|^{q}\Big)^{\frac{p}{q}}\Big)^{1/p}\\ &\leq\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}\Big|(|f-f_{N}|*|\psi|)(x_{j}, y_{k})\nu(x_{j}, y_{k})\Big|\\ &\leq \frac{C\varepsilon}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}}\sum\limits_{j = 1}^{m}\sum\limits_{k = 1}^{n}\Big|\int_{-R_{1}}^{R_{1}}\int_{[-R_{2}, R_{2}]^{d}}|\psi(t, s)|\omega(t, s)dtds\Big|\\ & = \frac{C\varepsilon mn\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}}, \end{align} (4.19)

    which, by the triangle inequality, implies

    \begin{align} &\Big\|\Big\{(f_{N}*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}-\frac{C\varepsilon mn\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}}\\ &\leq\Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m; k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}\\ &\leq\Big\|\Big\{(f_{N}*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m;k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}+\frac{C\varepsilon mn\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}}. \end{align}
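    The step above is an instance of the reverse triangle inequality: if the sampled sequence of $f*\psi$ differs from that of $f_{N}*\psi$ by at most the quantity in (4.19), then the two norms differ by at most that same quantity. A finite-dimensional sketch, with an ordinary Euclidean norm standing in for $\ell^{p, q}_{\nu}$ and synthetic data in place of the actual samples:

```python
import numpy as np

# a plays the role of the samples of f*psi, b of f_N*psi;
# the Euclidean norm is a stand-in for the ell^{p,q}_nu norm.
rng = np.random.default_rng(0)
a = rng.normal(size=20)                 # toy samples of f*psi
b = a + 0.01 * rng.normal(size=20)      # toy samples of f_N*psi (small perturbation)

err = np.linalg.norm(a - b)             # analogue of the bound in (4.19)

# Reverse triangle inequality: | ||a|| - ||b|| | <= ||a - b||,
# i.e. ||b|| - err <= ||a|| <= ||b|| + err.
assert np.linalg.norm(b) - err <= np.linalg.norm(a) <= np.linalg.norm(b) + err
```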

    Furthermore,

    \begin{equation*} (1-\delta)\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}-\varepsilon\leq\|f\|_{L^{p, q}_{\nu}(C_{R_{1}, R_{2}})}-\varepsilon\leq \|f\|_{L^{p, q}_{\nu}(C_{2R_{1}, 2R_{2}})}-\varepsilon\leq \|f_{N}\|_{L^{p, q}_{\nu}(C_{2R_{1}, 2R_{2}})}\leq\|f_{N}\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})} \end{equation*}

    and

    \begin{equation} \|f_{N}\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}\leq C_{p, q}\sum\limits_{i = 1}^{r}\|c_{i}\|_{\ell^{p, q}_{\nu}([-N, N]^{d+1})}\leq C_{p, q}\sum\limits_{i = 1}^{r}\|c_{i}\|_{\ell^{p, q}_{\nu}(\mathbb{Z}^{d+1})}\leq \frac{C_{p, q}}{c_{p, q}}\|f\|_{L^{p, q}_{\nu}(\mathbb{R}^{d+1})}. \end{equation} (4.20)

    Thus, for any function f\in V^{p, q, *}_{\nu, R_{1}, R_{2}} ,

    \begin{align} &A_{1}\Big\{(\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\Big\}\Big(1-\delta-\varepsilon\Big)-\frac{C\varepsilon mn\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}}\\ &\leq \Big\|\Big\{(f*\psi)(x_{j}, y_{k})\Big\}_{j = 1, \cdots, m;k = 1, \cdots, n}\Big\|_{\ell^{p, q}_{\nu}}\\ &\leq B_{1}\Big\{(\beta-C\varepsilon)\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}\Big\}\frac{C_{p, q}}{c_{p, q}}+\frac{C\varepsilon mn\|\psi\|_{L^{1, 1}_{\omega}(C_{R_{1}, R_{2}})}}{(4R_{1})^{\frac{1}{p}}(4R_{2})^{\frac{d}{q}}} \end{align}

    holds with probability at least (4.12). This establishes the random convolution sampling stability.

    This paper studied the random convolution sampling stability in a multiply generated shift invariant subspace of a weighted mixed Lebesgue space. Under suitable conditions on the generators and the convolution function, we proved that, with overwhelming probability, the random convolution sampling stability holds for signals in a subset of the defined multiply generated shift invariant subspace whenever the sampling size is large enough.

    The author declares no conflicts of interest in this paper.



  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)