Research article

A new approach for Cauchy noise removal

  • In this paper, a new total generalized variation (TGV) model for restoring images corrupted by Cauchy noise is proposed; it combines a non-convex fidelity term with a TGV regularization term. In order to obtain a strictly convex model, we add an appropriate proximal term to the non-convex fidelity term. We prove that the solution of the proposed model exists and is unique. Exploiting this convexity, we solve the model with an alternating minimization algorithm and prove its convergence. Finally, numerical examples demonstrate the performance of our scheme and show that the proposed algorithm significantly outperforms several previous methods for Cauchy noise removal.

    Citation: Lufeng Bai. A new approach for Cauchy noise removal[J]. AIMS Mathematics, 2021, 6(9): 10296-10312. doi: 10.3934/math.2021596




    In many imaging applications, images inevitably contain noise. Most of the literature deals with the reconstruction of images corrupted by additive Gaussian noise, for instance [1,2,3,4,5,6]. However, in many engineering applications the noise has an impulsive character that cannot be modeled by a Gaussian distribution. Based on [7], one type of impulsive degradation is Cauchy noise, which follows the Cauchy distribution and appears frequently in radar and sonar applications, atmospheric and underwater acoustic images, and wireless communication systems; for more details, we refer to [8,9]. Recently, much attention has been paid to Cauchy noise and several approaches have been proposed. In [10], Chang et al. employed recursive Markov random field models to reconstruct images corrupted by Cauchy noise. Based on non-Gaussian distributions, Loza et al. [11] proposed a statistical approach in the wavelet domain. By combining statistical methods with denoising techniques, Wan et al. [12] developed a segmentation approach for RGB images corrupted by Cauchy noise. Sciacchitano et al. [13] proposed a total variation (TV)-based variational method for reconstructing images corrupted by Cauchy noise. The variational model in [13] (referred to as the SDZ model) is

    $$\min_{u\in \mathrm{BV}(\Omega)} \int_\Omega |Du| + \frac{\lambda}{2}\left(\int_\Omega \log\bigl(\gamma^2+(u-f)^2\bigr)\,dx + \eta\int_\Omega (u-u_0)^2\,dx\right), \tag{1.1}$$

    where $\Omega$ is a bounded connected domain in $\mathbb{R}^2$, $\mathrm{BV}(\Omega)$ is the space of functions of bounded variation, $u\in \mathrm{BV}(\Omega)$ (for more details, see (2.1)) represents the restored image, and $\gamma>0$ is the scale parameter of the Cauchy distribution. In (1.1), $\lambda$ is a positive number that controls the trade-off between the TV regularization term and the fidelity term, $u_0$ is the image obtained by applying the median filter [14] to the noisy image $f$, and $\eta>0$ is a penalty parameter. If $8\eta\gamma^2 \geq 1$, the objective functional in (1.1) is strictly convex and its solution is unique. The term $\eta\|u-u_0\|_2^2$ in (1.1) forces the solution to stay close to the median-filter result, but the median filter does not always perform well for Cauchy noise removal. To avoid this, the authors of [15] developed the alternating direction method of multipliers (ADMM) to directly solve the following non-convex variational model (referred to as the MDH model)

    $$\min_{u\in \mathrm{BV}(\Omega)} \int_\Omega |Du| + \frac{\lambda}{2}\int_\Omega \log\bigl(\gamma^2+(Ku-f)^2\bigr)\,dx, \tag{1.2}$$

    where $K$ represents a linear operator. It is well known that solutions of variational problems with TV regularization have many desirable properties, such as preserving sharp edges. However, these solutions are often accompanied by blocking (staircasing) artifacts, a consequence of the structure of the $\mathrm{BV}$ space.

    In order to overcome blocking artifacts, we employ TGV as a regularization term. Bredies et al. [16] introduced the concept of TGV and applied it to mathematical imaging problems to overcome blocking artifacts; for more details on TGV, we refer interested readers to [17,18]. To overcome the shortcomings of the median-filter constraint, and following the idea of proximal algorithms, we use the term $\|u-z\|_2^2$ to convexify the non-convex fidelity term $\int_\Omega \log(\gamma^2+(u-f)^2)\,dx$. To simplify the computation, we also apply the proximal method to the TGV regularization term. Based on these considerations, we propose the following model

    $$\min_{z\in \mathrm{BGV}^2_\alpha(\Omega),\, u\in L^2(\Omega)}\left\{ \mathrm{TGV}^2_\alpha(z) + \lambda\int_\Omega \log\bigl(\gamma^2+(u-f)^2\bigr)\,dx \right\} \tag{1.3}$$

    subject to the constraint $u = z$. Meanwhile, we compare the proposed model (1.3) with the following model

    $$\min_{u\in \mathrm{BGV}^2_\alpha(\Omega)}\left\{ \mathrm{TGV}^2_\alpha(u) + \lambda\left(\int_\Omega \log\bigl(\gamma^2+(u-f)^2\bigr)\,dx + \frac{\eta}{2}\int_\Omega (u-u_0)^2\,dx\right) \right\}, \tag{1.4}$$

    where $u_0$ is the image obtained by applying the median filter [14] to the noisy image. According to Table 1, the numerical results show that the proposed model (1.3) outperforms the model (1.4). Compared with previous work, the main novelty of the proposed approach can be summarized in the following points:

    Table 1.  SSIM and PSNR measures for different methods, γ = 5.

                   SSIM                            PSNR
    Image      Noisy    Model (1.4)  Ours      Noisy   Model (1.4)  Ours
    Montage    0.3230   0.9213       0.9312    19.14   28.70        30.25
    Lena       0.5377   0.9252       0.9287    17.94   31.01        31.24
    Vehicle    0.5707   0.9278       0.9322    19.20   30.83        31.14
    Saturn     0.2080   0.8729       0.9125    19.04   36.01        36.49
    Parrot     0.3999   0.8732       0.8757    19.08   28.41        29.28


    1. Compared with the BV regularization term, we employ the TGV regularization term, which better preserves image structure, and we prove that the proposed model admits a unique solution.

    2. Instead of constraining the solution with a median-filter result, we constrain it with a proximal term, and experimental results show better performance.

    3. Whereas previous work used the ADMM algorithm, we employ non-expansive operators and a fixed-point argument, which makes the convergence of the proposed algorithm easier to prove.

    The rest of the paper is organized as follows. In Section 2 we propose the new model and show that it has a unique solution. In Section 3 we present an alternating minimization scheme for the new model. We prove the convergence of the proposed algorithm in Section 4. The performance of the new method is demonstrated by numerical results in Section 5. Concluding remarks are given in Section 6.

    Similar to [13], we propose a new non-convex TGV model for removing Cauchy noise. For completeness, we first review the BV space and the TGV space. For more details on TGV models and Cauchy noise removal, we refer to [19,20,21,22].

    For convenience, we introduce the following notation. A function $u \in \mathrm{BV}(\Omega)$ if and only if $u\in L^1(\Omega)$ and its total variation is finite, where the TV of $u$ is

    $$\int_\Omega |Du| = \sup\left\{ \int_\Omega u\,\mathrm{div}\,\phi\,dx : \phi\in C_0^1(\Omega,\mathbb{R}^2),\ \|\phi\|_\infty \leq 1 \right\}. \tag{2.1}$$

    The space $\mathrm{BV}(\Omega)$ is a Banach space with the norm $\|u\|_{\mathrm{BV}(\Omega)} = \|u\|_{L^1(\Omega)} + \int_\Omega |Du|$ [23,24].

    Throughout the paper, we denote the dimension by $d$, which is typically 2 or 3. $C_c^k(\Omega,\mathrm{Sym}^k(\mathbb{R}^d))$ denotes the space of compactly supported symmetric tensor fields, where $\mathrm{Sym}^k(\mathbb{R}^d)$ is the space of symmetric $k$-tensors on $\mathbb{R}^d$, which can be written as [16]

    $$\mathrm{Sym}^k(\mathbb{R}^d) = \Bigl\{ w : \underbrace{\mathbb{R}^d\times\cdots\times\mathbb{R}^d}_{k} \to \mathbb{R} \;\Big|\; w \text{ is multilinear and symmetric} \Bigr\}. \tag{2.2}$$

    The TGV of order $k$ with positive weights $\alpha = (\alpha_0,\alpha_1,\ldots,\alpha_{k-1})$ is defined as [16]

    $$\mathrm{TGV}^k_\alpha(u) = \sup_\phi\left\{ \int_\Omega u\,\mathrm{div}^k\phi\,dx \;\middle|\; \phi\in C_c^k(\Omega,\mathrm{Sym}^k(\mathbb{R}^d)),\ \|\mathrm{div}^j\phi\|_\infty \leq \alpha_j,\ j=0,\ldots,k-1 \right\}. \tag{2.3}$$

    When $k=1$ and $\alpha=1$, $\mathrm{Sym}^1(\mathbb{R}^d)=\mathbb{R}^d$ and $\mathrm{TGV}^1_\alpha(u)=\mathrm{TV}(u)$. When $k=2$, $\mathrm{Sym}^2(\mathbb{R}^d)$ is the space of symmetric $d\times d$ matrices, i.e., for $\xi\in \mathrm{Sym}^2(\mathbb{R}^d)$,

    $$\xi = \begin{pmatrix} \xi_{11} & \cdots & \xi_{1d} \\ \vdots & \ddots & \vdots \\ \xi_{d1} & \cdots & \xi_{dd} \end{pmatrix};$$

    for more details, we refer to [16]. In the following, we mainly use the second-order TGV:

    $$\mathrm{TGV}^2_\alpha(u) = \sup_\phi\left\{ \int_\Omega u\,\mathrm{div}^2\phi\,dx \;\middle|\; \phi\in C_c^2(\Omega,\mathrm{Sym}^2(\mathbb{R}^d)),\ \|\phi\|_\infty \leq \alpha_0,\ \|\mathrm{div}\,\phi\|_\infty \leq \alpha_1 \right\}, \tag{2.4}$$

    where

    $$(\mathrm{div}\,\phi)_i = \sum_{j=1}^{d} \frac{\partial \phi_{ij}}{\partial x_j}, \qquad \mathrm{div}^2\phi = \sum_{i,j=1}^{d} \frac{\partial^2 \phi_{ij}}{\partial x_i \partial x_j}, \qquad \|\phi\|_\infty = \sup_{x\in\Omega}\left(\sum_{i,j=1}^{d} |\phi_{ij}(x)|^2\right)^{1/2},$$

    and

    $$\|\mathrm{div}\,\phi\|_\infty = \sup_{x\in\Omega}\left(\sum_{i=1}^{d} |(\mathrm{div}\,\phi)_i(x)|^2\right)^{1/2}.$$

    Following the notation in [25], we define the discretized grid as

    $$\Omega_h = \{(ih,jh) \mid i,j\in\mathbb{N},\ 1\leq i\leq N_1,\ 1\leq j\leq N_2\},$$

    for some positive $N_1,N_2\in\mathbb{N}$, where $h$ denotes the grid width; we take $h=1$ for convenience. We define the spaces $U,W,Z$ as

    $$U = \{u:\Omega_h\to\mathbb{R}\}, \qquad W = \{w:\Omega_h\to\mathbb{R}^2\}, \qquad Z = \{z:\Omega_h\to\mathbb{R}^{2\times 2}\}. \tag{2.5}$$

    For simplicity, the $\mathrm{TGV}^2_\alpha$ functional is discretized by finite differences with step size 1. Based on [16], $\mathrm{TGV}^2_\alpha(u)$ can be reformulated as

    $$\mathrm{TGV}^2_\alpha(u) = \min_{w\in W}\left\{ \alpha_0\|\nabla u - w\|_1 + \alpha_1\|\varepsilon(w)\|_1 \right\}, \tag{2.6}$$

    where $w=(w_1,w_2)^T\in W$ and $\varepsilon(w) = \frac{1}{2}(\nabla w + \nabla^T w)$ is the symmetrized gradient. The operators $\nabla$ and $\varepsilon$ are, respectively,

    $$\nabla: U\to W, \qquad \nabla u = \begin{pmatrix} \partial_x^+ u \\ \partial_y^+ u \end{pmatrix},$$

    $$\varepsilon: W\to Z, \qquad \varepsilon(w) = \begin{pmatrix} \partial_x^+ w_1 & \frac{1}{2}(\partial_y^+ w_1 + \partial_x^+ w_2) \\ \frac{1}{2}(\partial_y^+ w_1 + \partial_x^+ w_2) & \partial_y^+ w_2 \end{pmatrix},$$

    and the corresponding divergence operators are

    $$\mathrm{div}: W\to U, \qquad \mathrm{div}\,w = \partial_x^- w_1 + \partial_y^- w_2,$$

    $$\mathrm{div}_h: Z\to W, \qquad \mathrm{div}_h z = \begin{pmatrix} \partial_x^- z_{11} + \partial_y^- z_{12} \\ \partial_x^- z_{21} + \partial_y^- z_{22} \end{pmatrix},$$

    where $\partial^+$ and $\partial^-$ denote forward and backward differences.

    For more details on the above discretization, we refer to [26].
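As an illustration of the discrete operators above, the sketch below (our own implementation, not the paper's code; names and the Neumann boundary convention are assumptions) builds the forward-difference gradient and the backward-difference divergence, and numerically checks the adjointness relation $\langle \nabla u, w\rangle = -\langle u, \mathrm{div}\,w\rangle$ that the primal-dual scheme later relies on.

```python
import numpy as np

def grad(u):
    """Forward differences with Neumann boundary: (∂x⁺u, ∂y⁺u)."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(w1, w2):
    """Backward-difference divergence, the negative adjoint of grad."""
    d = np.zeros_like(w1)
    d[0, :] = w1[0, :]
    d[1:-1, :] = w1[1:-1, :] - w1[:-2, :]
    d[-1, :] = -w1[-2, :]
    d[:, 0] += w2[:, 0]
    d[:, 1:-1] += w2[:, 1:-1] - w2[:, :-2]
    d[:, -1] += -w2[:, -2]
    return d

# Adjointness check: <grad u, w> = -<u, div w> for random u and w.
rng = np.random.default_rng(1)
u = rng.standard_normal((8, 8))
w1 = rng.standard_normal((8, 8))
w2 = rng.standard_normal((8, 8))
gx, gy = grad(u)
lhs = (gx * w1).sum() + (gy * w2).sum()
rhs = -(u * div(w1, w2)).sum()
print(abs(lhs - rhs))  # essentially zero (floating-point roundoff)
```

The same pattern extends to $\varepsilon$ and $\mathrm{div}_h$, applied component-wise to the matrix field $z$.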

    By [27], the Cauchy distribution can be written as

    $$P(x) = \frac{\gamma}{\pi\bigl((x-\mu)^2+\gamma^2\bigr)}, \tag{2.7}$$

    where $x$ represents a random variable obeying the Cauchy distribution, $\mu$ is the peak location, and $\gamma>0$ is a scale parameter whose role is similar to that of the variance. We denote this Cauchy distribution by $C(\mu,\gamma)$.

    Following [13], we denote random variables in bold ($\mathbf{f},\mathbf{u},\mathbf{v}$) and their respective instances by $f,u,v$. The noisy image is $f = u + v$, where $v$ denotes Cauchy noise. We assume that $v$ follows the Cauchy distribution with $\mu = 0$, so its density function is

    $$g_V(v) = \frac{1}{\pi}\,\frac{\gamma}{\gamma^2+v^2}.$$
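The degradation model $f = u + v$ with $v\sim C(0,\gamma)$ can be simulated in a few lines; the sketch below is our own illustration (the ratio-of-Gaussians sampler and the constant test image are assumptions, not from the paper). Note the heavy tails: the sample mean of Cauchy noise does not stabilize, so we check the location with the median instead.

```python
import numpy as np

def add_cauchy_noise(u, gamma, rng):
    """Corrupt image u with additive Cauchy noise C(0, gamma).

    A standard Cauchy sample is the ratio of two independent standard
    normal variables; scaling by gamma gives C(0, gamma).
    """
    n1 = rng.standard_normal(u.shape)
    n2 = rng.standard_normal(u.shape)
    v = gamma * n1 / n2  # Cauchy(0, gamma) samples
    return u + v

rng = np.random.default_rng(0)
u = np.full((64, 64), 128.0)  # constant test image, gray level 128
f = add_cauchy_noise(u, gamma=5.0, rng=rng)

# Cauchy noise has no finite variance: the sample median is a stable
# location estimate, while the sample mean is not.
print(np.median(f - u))  # close to 0
```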

    The MAP estimator of $u$ is obtained by maximizing the conditional probability of $u$ given $f$. By Bayes' rule, we have

    $$\arg\max_u P(u\mid f) = \arg\max_u \frac{P(f\mid u)\,P(u)}{P(f)}. \tag{2.8}$$

    Equation (2.8) is equivalent to

    $$\arg\min_u\; -\log P(f\mid u) - \log P(u) \tag{2.9}$$
    $$= \arg\min_u\; -\int_\Omega \log P(f(x)\mid u(x))\,dx - \log P(u), \tag{2.10}$$

    where the term $-\log P(f(x)\mid u(x))$ represents the degradation process between $f$ and $u$, and $-\log P(u)$ encodes the prior information on $u$. For the Cauchy distribution $C(0,\gamma)$ and each $x\in\Omega$, we have

    $$P(f(x)\mid u(x)) = \frac{\gamma}{\pi\bigl((u(x)-f(x))^2+\gamma^2\bigr)}.$$

    In order to overcome blocking artifacts, we employ the prior $P(u)\propto\exp\bigl(-\mathrm{TGV}^2_\alpha(u)/\lambda\bigr)$. We then obtain the TGV model for denoising:

    $$\min_{u\in \mathrm{BGV}^2_\alpha(\Omega)}\left\{ \mathrm{TGV}^2_\alpha(u) + \lambda\int_\Omega \log\bigl(\gamma^2+(u-f)^2\bigr)\,dx \right\}, \tag{2.11}$$

    where λ>0 is the regularization parameter.

    Next, we show that problem (2.11) admits at least one solution.

    Theorem 2.1. If $\gamma \geq 1$ and $\lambda > 0$, problem (2.11) has at least one solution in $\mathrm{BGV}^2_\alpha(\Omega)$.

    Proof. Clearly, if $\gamma\geq 1$, the functional in (2.11) is bounded from below. Let $\{u_k\}_{k\in\mathbb{N}}$ be a minimizing sequence for problem (2.11).

    We show by contradiction that $\{u_k\}$ is bounded in $L^2(\Omega)$ and therefore bounded in $L^1(\Omega)$. Suppose $\|u_k\|_2 \to +\infty$; then there exists a set $E\subset\Omega$ of positive measure such that $u_k(x)\to +\infty$ for all $x\in E$. Since $f\in L^2(\Omega)$, this gives $\log(\gamma^2+(u_k-f)^2)\to +\infty$ on $E$, which contradicts the fact that $\int_\Omega \log(\gamma^2+(u_k-f)^2)\,dx$ remains finite along a minimizing sequence.

    Noting that $\|\nabla u_k - w_k\|_1$ and $\|\varepsilon(w_k)\|_1$ are both bounded, we obtain that $\{u_k\}$ is a bounded sequence in $\mathrm{BGV}^2_\alpha(\Omega)$. By the Rellich–Kondrachov compactness theorem, there exists a function $u\in L^1(\Omega)$ such that $u_k\to u$. Because $\mathrm{TGV}^2_\alpha$ is proper, lower semi-continuous and convex on $\mathrm{BGV}^2_\alpha(\Omega)$ [16], we obtain $\liminf_{k\to+\infty}\mathrm{TGV}^2_\alpha(u_k)\geq \mathrm{TGV}^2_\alpha(u)$. Meanwhile, by Fatou's lemma, we can deduce that

    $$\inf\left\{\mathrm{TGV}^2_\alpha(v)+\lambda\int_\Omega \log\bigl(\gamma^2+(v-f)^2\bigr)\,dx\right\} = \liminf_{k\to\infty}\left\{\mathrm{TGV}^2_\alpha(u_k)+\lambda\int_\Omega \log\bigl(\gamma^2+(u_k-f)^2\bigr)\,dx\right\} \geq \mathrm{TGV}^2_\alpha(u)+\lambda\int_\Omega \log\bigl(\gamma^2+(u-f)^2\bigr)\,dx,$$

    which means that $u$ is a minimizer of (2.11), i.e., problem (2.11) has at least one solution in $\mathrm{BGV}^2_\alpha(\Omega)$. Noting that the model (2.11) is strictly convex, standard arguments in convex analysis [28,29] show that the minimizer $u$ is unique.

    In order to obtain a convergent algorithm, we employ an alternating minimization algorithm for the variational model (2.11). The model can be discretized as

    $$\min_{z,u}\left\{ E(z,u) = \mathrm{TGV}^2_\alpha(z) + \lambda\left( \sum_{i=1}^{N} \log\bigl(\gamma^2+(u_i-f_i)^2\bigr) + \frac{\eta}{2}\|u-z\|_2^2 \right) \right\}. \tag{3.1}$$
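The data part of the discrete energy $E(z,u)$ is straightforward to evaluate; the sketch below (our own naming, not from the paper) computes the fidelity-plus-coupling term for given images, with the $\mathrm{TGV}^2_\alpha(z)$ term omitted.

```python
import numpy as np

def data_term(u, f, z, gamma, lam, eta):
    """lam * ( sum_i log(gamma^2 + (u_i - f_i)^2) + (eta/2) * ||u - z||_2^2 ),
    i.e., the data part of E(z, u) in (3.1); TGV^2_alpha(z) is omitted here."""
    log_part = np.sum(np.log(gamma**2 + (u - f)**2))
    coupling = 0.5 * eta * np.sum((u - z)**2)
    return lam * (log_part + coupling)

u = np.zeros(4)
f = np.zeros(4)
z = np.ones(4)
# log part is sum(log(1)) = 0, coupling is 0.5 * 2 * 4 = 4.
print(data_term(u, f, z, gamma=1.0, lam=1.0, eta=2.0))  # 4.0
```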

    Remark 3.1. The proximal operator [30] $\mathrm{prox}_f:\mathbb{R}^n\to\mathbb{R}^n$ of $f$ is defined as

    $$\mathrm{prox}_f(v) = \arg\min_x \left( f(x) + \frac{1}{2}\|x-v\|_2^2 \right).$$

    The definition indicates that $\mathrm{prox}_f(v)$ trades off minimizing $f$ against staying close to $v$. Based on this idea, we convexify the model (2.11) by adding a proximal term. The advantages are as follows:

    (1) A strictly convex model is obtained due to the proximal term.

    (2) The result of each iteration is near to the previous one.
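As a concrete example of this trade-off (not from the paper; the $\ell_1$ case is a standard textbook illustration), the proximal operator of $f(x)=t\|x\|_1$ has the closed-form soft-thresholding solution:

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of f(x) = t*||x||_1:

        prox_f(v) = argmin_x ( t*||x||_1 + 0.5*||x - v||_2^2 )

    which has the closed form sign(v) * max(|v| - t, 0) (soft-thresholding).
    """
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

v = np.array([-3.0, -0.5, 0.2, 2.0])
# Entries with |v_i| <= t are set to zero; larger ones shrink toward 0 by t.
print(prox_l1(v, 1.0))
```

Each component of the output stays as close to $v$ as the penalty allows, exactly the "minimize $f$ but stay near $v$" behavior described above.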

    In order to simplify the alternating minimization algorithm, we first introduce the following notations and definitions:

    $$z^k = S(u^{k-1}) = \arg\min_z\; \mathrm{TGV}^2_\alpha(z) + \frac{\lambda\eta}{2}\|z-u^{k-1}\|_2^2, \tag{3.2}$$
    $$u^k = L(z^k) = \arg\min_u\; \lambda\left( \sum_{i=1}^{N} \log\bigl(\gamma^2+(u_i-f_i)^2\bigr) + \frac{\eta}{2}\|u-z^k\|_2^2 \right). \tag{3.3}$$

    Now we solve the $z$-subproblem (3.2). Based on [31], $\int_\Omega |Du|$ can be represented by

    $$\int_\Omega |Du| = \sup\left\{ \langle u, \mathrm{div}\,p\rangle : \|p\|_\infty \leq 1 \right\}. \tag{3.4}$$

    Equation (3.4) makes the calculation in the primal-dual method straightforward. Note that $\mathrm{TGV}^2_\alpha(u) = \inf_{w\in W}\{\alpha_0\|\nabla u - w\|_1 + \alpha_1\|\varepsilon(w)\|_1\}$ (for more details, see [2,32]), where $\alpha_0,\alpha_1$ are positive constant parameters. Therefore, the min-max formulation of (3.2) reads

    $$\min_{z,w}\max_{p,q}\left\{ \frac{\eta}{2}\|z-u\|_2^2 + \langle \nabla z - w, p\rangle + \langle \varepsilon(w), q\rangle - I_{\{\|\cdot\|_\infty\leq\alpha_0\}}(p) - I_{\{\|\cdot\|_\infty\leq\alpha_1\}}(q) \right\}, \tag{3.5}$$

    where p,q are the dual variables associated with the sets given by

    $$P = \{p\in W \mid \|p\|_\infty \leq \alpha_0\}, \qquad Q = \{q\in Z \mid \|q\|_\infty \leq \alpha_1\}.$$

    Similar to [25], the min-max problem (3.5) can be solved by the following iterations:

    $$p^{k,l} = \mathrm{Proj}_{\|p\|_\infty\leq\alpha_0}\bigl(p^{k,l-1} + \sigma(\nabla \tilde z^{k,l-1} - \tilde w^{k,l-1})\bigr), \tag{3.6}$$
    $$q^{k,l} = \mathrm{Proj}_{\|q\|_\infty\leq\alpha_1}\bigl(q^{k,l-1} + \sigma\,\varepsilon(\tilde w^{k,l-1})\bigr), \tag{3.7}$$
    $$z^{k,l} = u^k + \frac{\tau}{\lambda\eta}\,\mathrm{div}\,p^{k,l}, \tag{3.8}$$
    $$w^{k,l} = w^{k,l-1} + \tau\bigl(p^{k,l} + \mathrm{div}_h q^{k,l}\bigr), \tag{3.9}$$
    $$\begin{pmatrix} \tilde z^{k,l} \\ \tilde w^{k,l} \end{pmatrix} = 2\begin{pmatrix} z^{k,l} \\ w^{k,l} \end{pmatrix} - \begin{pmatrix} z^{k,l-1} \\ w^{k,l-1} \end{pmatrix}, \tag{3.10}$$

    where the projection can be computed as

    $$\mathrm{Proj}_{\|p\|_\infty\leq\alpha_0}(p) = \frac{p}{\max(1,\, |p|/\alpha_0)}, \qquad \mathrm{Proj}_{\|q\|_\infty\leq\alpha_1}(q) = \frac{q}{\max(1,\, |q|/\alpha_1)},$$

    where $\sigma,\tau$ are positive parameters satisfying $\sigma\tau \leq 1/12$, and $k,l$ are iteration indices.
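The pointwise projections in (3.6) and (3.7) can be sketched as follows (our own implementation; the pointwise Euclidean magnitude follows the definition of $\|\phi\|_\infty$ above, and the two-component field shape is an assumption):

```python
import numpy as np

def proj_linf_ball(p1, p2, alpha):
    """Pointwise projection onto {p : |p(x)| <= alpha}, i.e.
    p / max(1, |p|/alpha) applied at every pixel, where
    |p(x)| = sqrt(p1(x)^2 + p2(x)^2)."""
    mag = np.sqrt(p1**2 + p2**2)
    scale = np.maximum(1.0, mag / alpha)
    return p1 / scale, p2 / scale

p1 = np.array([[3.0, 0.1]])
p2 = np.array([[4.0, 0.1]])
q1, q2 = proj_linf_ball(p1, p2, alpha=1.0)

# The (3, 4) vector has magnitude 5 and is rescaled to magnitude 1;
# the small (0.1, 0.1) vector lies inside the ball and is unchanged.
print(np.sqrt(q1**2 + q2**2))
```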

    The optimality condition for (3.3) is

    $$\frac{2\lambda(u-f)}{\gamma^2+(u-f)^2} + \lambda(u-v) - \eta(z-u) = 0. \tag{3.11}$$

    Based on the proximal-operator idea, we take $v=u^k$ so that each iterate stays close to the previous one. Multiplying both sides of (3.11) by $\gamma^2+(u-f)^2$, one obtains that (3.11) is equivalent to

    $$au^3 + bu^2 + cu + d = 0, \tag{3.12}$$

    where

    $$a = \lambda + \eta, \tag{3.13}$$
    $$b = -(\eta z + \lambda u^k) - 2(\lambda+\eta)f, \tag{3.14}$$
    $$c = (\lambda+\eta)f^2 + \gamma^2(\lambda+\eta) + 2\lambda + 2(\eta z + \lambda u^k)f, \tag{3.15}$$
    $$d = -(\eta z + \lambda u^k)(\gamma^2+f^2) - 2\lambda f. \tag{3.16}$$

    In order to solve (3.12), we need the following proposition.

    Proposition 3.2. [33] A generic cubic equation with real coefficients

    $$ax^3 + bx^2 + cx + d = 0, \qquad a\neq 0, \tag{3.17}$$

    has at least one solution among the real numbers. Let

    $$q = \frac{3ac - b^2}{9a^2}, \qquad r = \frac{9abc - 27a^2 d - 2b^3}{54a^3}. \tag{3.18}$$

    If there exists a unique real solution of (3.17), the discriminant $\Delta = q^3 + r^2$ has to be positive. Furthermore, if $\Delta \geq 0$, the only real root of (3.17) is given by

    $$x = \sqrt[3]{r+\sqrt{\Delta}} + \sqrt[3]{r-\sqrt{\Delta}} - \frac{b}{3a}. \tag{3.19}$$

    Since problem (3.3) is strictly convex with respect to $u$, it has a unique real solution, which can be obtained by (3.19). Alternatively, the $u$-subproblem (3.3) can be solved by Newton's method, because the objective function in (3.3) is twice continuously differentiable.
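A direct implementation of the root formula (3.19) is short; the sketch below is our own, and the test polynomial $(x-2)(x^2+x+1)=x^3-x^2-x-2$ is an assumption chosen because it has a unique real root, so the formula applies.

```python
import numpy as np

def real_cubic_root(a, b, c, d):
    """Real root of a*x^3 + b*x^2 + c*x + d = 0 via (3.18)-(3.19);
    valid when the discriminant Delta = q^3 + r^2 is nonnegative
    (i.e., the cubic has a unique real root)."""
    q = (3*a*c - b**2) / (9*a**2)
    r = (9*a*b*c - 27*a**2*d - 2*b**3) / (54*a**3)
    delta = q**3 + r**2
    assert delta >= 0, "formula (3.19) requires a unique real root"
    # np.cbrt handles negative arguments, unlike x**(1/3).
    return np.cbrt(r + np.sqrt(delta)) + np.cbrt(r - np.sqrt(delta)) - b / (3*a)

# (x - 2)(x^2 + x + 1) = x^3 - x^2 - x - 2 has the unique real root x = 2.
x = real_cubic_root(1.0, -1.0, -1.0, -2.0)
print(x)  # ≈ 2.0
```

In the denoising algorithm this function would be applied pixel-wise with the coefficients (3.13)–(3.16).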

    The alternating minimization algorithm for Cauchy noise removal is given in Algorithm 1, where K represents the maximum iteration number.

    Algorithm 1. The alternating minimization algorithm for (3.1).
    Input: $K, f, u^0=f, p^{0,0}, q^{0,0}, \lambda, \eta, \tau, \alpha_0, \alpha_1, \sigma$.
    Repeat
      Step 1: Update $z^k$. Initialization: $p^{k,0}=p^{k-1,K_z}$, $q^{k,0}=q^{k-1,K_z}$, $z^{k,0}=z^{k-1,K_z}$, $w^{k,0}=w^{k-1,K_z}$ (when $k-1=0$, take $K_z=0$).
        Repeat for $l = 1,\ldots,K_z$:
          Step 1.1: Update $p^{k,l}$ by (3.6).
          Step 1.2: Update $q^{k,l}$ by (3.7).
          Step 1.3: Update $z^{k,l}$ by (3.8).
          Step 1.4: Update $w^{k,l}$ by (3.9).
          Step 1.5: Update $\tilde z^{k,l}, \tilde w^{k,l}$ by (3.10).
        Set $z^k = z^{k,K_z}$.
      Step 2: Update $u^k$ by (3.3).
    Until $\|u^k - u^{k-1}\|_2 / \|u^{k-1}\|_2 < 10^{-5}$ or $k > K$.
    Output: $\hat z$, an optimal solution of (3.1).


    In the following section, we prove the convergence of the proposed Algorithm 1.

    Definition 4.1 ([34]). An operator $Q:\mathbb{R}^N\to\mathbb{R}^N$ is non-expansive if, for all $y_1,y_2\in\mathbb{R}^N$, $\|Q(y_1)-Q(y_2)\|_2 \leq \|y_1-y_2\|_2$.

    Clearly, the identity map $I(x)=x$ is non-expansive. One can easily check that the composition and the sum of two non-expansive operators are also non-expansive. For any fixed $v\in\mathbb{R}^N$, the maps $Q(y)=y+v$ and $Q(y)=y-v$ are non-expansive.

    Definition 4.2 ([34]). Given a non-expansive operator $P$, the operator $T=(1-\beta)I+\beta P$, for some $\beta\in(0,1)$, is said to be $\beta$-averaged non-expansive.

    Definition 4.3 ([34]). An operator $G:\mathbb{R}^N\to\mathbb{R}^N$ is called firmly non-expansive if, for any $x_1,x_2\in\mathbb{R}^N$,

    $$(G(x_1)-G(x_2))^T(x_1-x_2) \geq \|G(x_1)-G(x_2)\|_2^2.$$

    Remark 4.4. An operator $G$ is firmly non-expansive if and only if it is $\frac{1}{2}$-averaged non-expansive.

    Lemma 4.5 ([35]). Let $\varphi$ be convex and lower semi-continuous, and let $\beta>0$. Define $S$ by

    $$S(y) = \hat x = \arg\min_x\; \|y-x\|_2^2 + \beta\varphi(x).$$

    Then $S$ is $\frac{1}{2}$-averaged non-expansive.

    Since $\mathrm{TGV}^2_\alpha$ is convex and lower semi-continuous, Lemma 4.5 implies that $S$ is $\frac{1}{2}$-averaged non-expansive. Note that

    $$\sum_{i=1}^{N}\log\bigl(\gamma^2+(u_i-f_i)^2\bigr) + \frac{\eta}{2}\|u-z^k\|_2^2 = \sum_{i=1}^{N}\log\bigl(\gamma^2+(u_i-f_i)^2\bigr) + \frac{1}{2}\|u-z^k\|_2^2 + \frac{\eta-1}{2}\|u-z^k\|_2^2.$$

    Let $\varphi(u) = \sum_{i=1}^{N}\log\bigl(\gamma^2+(u_i-f_i)^2\bigr) + \frac{1}{2}\|u-z^k\|_2^2$; then we have

    $$\sum_{i=1}^{N}\log\bigl(\gamma^2+(u_i-f_i)^2\bigr) + \frac{\eta}{2}\|u-z^k\|_2^2 = \varphi(u) + \frac{\eta-1}{2}\|u-z^k\|_2^2.$$

    Noting that $\varphi(u)$ is convex, by Lemma 4.5 we conclude that $L$ is $\frac{1}{2}$-averaged non-expansive.

    Lemma 4.6 ([36]). Let $P_1$ and $P_2$ be $\beta_1$-averaged and $\beta_2$-averaged non-expansive operators, respectively. Then the composition $P_1 P_2$ is $(\beta_1+\beta_2-\beta_1\beta_2)$-averaged non-expansive.

    By Lemma 4.6, we obtain that $L\circ S$ is $\frac{3}{4}$-averaged non-expansive.

    Definition 4.7 ([37]). A function $\phi:\mathbb{R}^n\to[-\infty,+\infty]$ is proper over a set $X\subset\mathbb{R}^n$ if $\phi(x)<+\infty$ for at least one $x\in X$ and $\phi(x)>-\infty$ for all $x\in X$.

    Definition 4.8 ([37]). A function $\phi:\mathbb{R}^n\to[-\infty,+\infty]$ is coercive over a set $X\subset\mathbb{R}^n$ if for every sequence $\{x_k\}\subset X$ with $\|x_k\|\to\infty$, we have $\lim_{k\to\infty}\phi(x_k)=+\infty$.

    The following Lemma 4.9 can be shown easily, and we omit its proof here.

    Lemma 4.9. The functional E(z,u) in (3.1) is coercive.

    Lemma 4.10. ([28]). Let ϕ:RNR be a closed, proper and coercive function. Then the set of the minimizers of ϕ over RN is nonempty and compact.

    Lemma 4.11. The set of fixed points of $L\circ S$ is non-empty.

    Proof. By Lemma 4.9, the objective function $E(z,u)$ is coercive. Based on Lemma 4.10, the set of minimizers of $E(z,u)$ is non-empty. Let $(\hat z,\hat u)$ be a minimizer of $E(z,u)$. Then

    $$\frac{\partial E}{\partial u}(\hat z,\hat u) = 0, \qquad \frac{\partial E}{\partial z}(\hat z,\hat u) = 0.$$

    It indicates that

    $$\hat u = L(\hat z) = \arg\min_u E(\hat z,u), \qquad \hat z = S(\hat u) = \arg\min_z E(z,\hat u).$$

    Thus we have $\hat u = (L\circ S)(\hat u)$.

    According to the Krasnoselskii–Mann (KM) theorem [38], since $L\circ S$ is averaged non-expansive and its set of fixed points is non-empty, the sequence $\{u^k\}$ converges to a fixed point of $L\circ S$ for any initial point $u^0$. Since $E(z,u)$ is strictly convex and differentiable with respect to $u$, the minimizer of $E(z,u)$ is unique. Clearly, the fixed points of $L\circ S$ are exactly the minimizers of $E(z,u)$. Thus the sequence $\{u^k\}$ converges to the unique minimizer of $E(z,u)$. Therefore, we have the following theorem.

    Theorem 4.12. For any initial point $u^0$, the sequence $\{u^k\}$ converges to the unique minimizer of $E(z,u)$ as $k\to\infty$.
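The KM iteration underlying Theorem 4.12 can be illustrated on a toy averaged operator (entirely our own example, not the paper's $L\circ S$): iterating $T=(1-\beta)I+\beta P$ for a non-expansive $P$ drives the iterates to a fixed point of $P$.

```python
import numpy as np

# P is affine with linear part [[0, 0.5], [0.5, 0]] (operator norm 0.5),
# hence non-expansive; its unique fixed point solves
#   y0 = 0.5*y1,  y1 = 0.5*y0 + 1   =>   y = (2/3, 4/3).
P = lambda y: np.array([0.5 * y[1], 0.5 * y[0] + 1.0])

beta = 0.5
T = lambda y: (1 - beta) * y + beta * P(y)  # 1/2-averaged operator

y = np.array([10.0, -10.0])
for _ in range(200):
    y = T(y)  # Krasnoselskii-Mann iteration

print(y)  # ≈ [0.6667, 1.3333]
```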

    In this section, we provide numerical results to show the performance of the proposed method for image restoration under Cauchy noise. We compare our method with the following existing models: the ROF model [1], the median filter [39], the SDZ model [13] and the MDH model [15]. For the ROF model, we use the primal-dual method proposed in [31]. For the SDZ and MDH models, we use the source codes of [13] and [15], respectively.

    To assess the quality of the restoration results, we use two evaluation metrics, the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM), defined as

    $$\mathrm{PSNR}(u_0,u) = 20\log_{10}\frac{\max(u)}{\mathrm{RMSE}}, \qquad \mathrm{RMSE} = \sqrt{\frac{\sum_\Omega (u-u_0)^2}{M\times N}},$$

    where $u_0$ denotes the original image, $u$ is the denoised image, and $M\times N$ is the image size, and

    $$\mathrm{SSIM} = \frac{(2\mu_u\mu_{u_0}+c_1)(2\sigma_{uu_0}+c_2)}{(\mu_u^2+\mu_{u_0}^2+c_1)(\sigma_u^2+\sigma_{u_0}^2+c_2)},$$

    where $\mu_u,\mu_{u_0}$, $\sigma_u^2,\sigma_{u_0}^2$ and $\sigma_{uu_0}$ denote, respectively, the means, variances and covariance of the images $u$ and $u_0$, and $c_1,c_2$ are small positive constants. For a fair comparison, we use the same stopping criterion for all algorithms, namely

    $$\frac{\|u^k - u^{k-1}\|_2}{\|u^{k-1}\|_2} < 10^{-5},$$

    or the maximum number of iterations.
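The two metrics above can be sketched directly from their formulas (our own implementation; note that the PSNR peak is taken as the maximum of the reference image here, and this single-window SSIM omits the local-window averaging used by the full SSIM index):

```python
import numpy as np

def psnr(u0, u):
    """PSNR with the peak taken as the maximum of the reference image."""
    rmse = np.sqrt(np.mean((u - u0) ** 2))
    return 20 * np.log10(u0.max() / rmse)

def global_ssim(u0, u, c1=1e-4, c2=9e-4):
    """Single-window SSIM, exactly as in the formula above; the usual
    SSIM averages this index over local windows. c1, c2 assume images
    in [0, 1] (the common (0.01)^2 and (0.03)^2 constants)."""
    mu0, mu = u0.mean(), u.mean()
    var0, var = u0.var(), u.var()
    cov = ((u0 - mu0) * (u - mu)).mean()
    return ((2 * mu * mu0 + c1) * (2 * cov + c2)) / \
           ((mu**2 + mu0**2 + c1) * (var + var0 + c2))

rng = np.random.default_rng(0)
u0 = rng.random((32, 32))
print(global_ssim(u0, u0))  # ≈ 1.0 for identical images
```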

    Guided by the relevant reports [13,15,16,17,25,26], we tuned the parameters one by one. For each image in Figure 1, we tuned the parameters of the compared algorithms to obtain the highest PSNR and SSIM. Based on hundreds of experiments, we observe that $\tau$ is the key parameter controlling restoration quality and convergence speed. For the proposed model, the ROF model, the MDH model and the median filter, the grey-level range is $[0,255]$; for the SDZ model, it is normalized to $[0,1]$. For the proposed model, $\tau\in[0.3,0.7]$, $\sigma=\tau/12$, $\lambda\in[15,30]$ and $\eta\in[0.9,3]$. For the MDH model, $\lambda\in[25,50]$. For the SDZ model, $\lambda\in[2,9]$ and $\eta\in[0.3,3]$. For the ROF model, $\lambda\in[1,8]$.

    Figure 1.  Original images.

    According to Figures 2–4, the visual quality of the images restored by our method is better than that of the others. Compared with the TV-based methods, blocking effects are significantly reduced by our method. The reason is that the kernel of second-order TGV consists of first-order polynomials, rather than the piecewise constant functions associated with the BV space. Tables 2 and 3 list the PSNR and SSIM values of the numerical results; clearly, the values obtained by our method are the best. According to the zoomed images in Figures 2–4, our method enhances image quality and reduces noise more effectively, while considerable residual noise remains in the images produced by the other methods. In Figure 4, the structure around the eye is better preserved by the proposed method than by the others.

    Figure 2.  The noisy image, restored images and the locally zoomed images, respectively. For ROF method, λ=2.2. For SDZ method, η=0.66,λ=5.0. For MDH method, λ=42. For the proposed method, λ=20,τ=0.58.
    Figure 3.  The noisy image, restored images and the locally zoomed images, respectively. For ROF method, λ=2.6. For SDZ method, η=0.68,λ=5.2. For MDH method, λ=45. For the proposed method, λ=22,τ=0.65.
    Figure 4.  The noisy image, restored images and the locally zoomed images, respectively. For ROF method, λ=1.8. For SDZ method, η=0.65,λ=4.5. For MDH method, λ=42. For the proposed method, λ=20,τ=0.55.
    Table 2.  PSNR measures for different methods, γ=5.
    Image Noisy ROF Median SDZ MDH Ours
    Lena 18.31 26.52 27.91 28.38 30.26 30.81
    Boat 18.01 24.62 26.03 27.12 28.07 29.44
    Montage 19.14 25.88 27.52 28.06 29.88 30.25
    Bridge 19.18 22.17 22.63 24.32 25.25 26.12
    House 17.94 24.56 24.84 25.69 26.71 27.48
    Vehicle 19.20 28.54 28.05 30.98 30.68 31.14
    Saturn 19.04 32.24 34.15 35.65 35.42 36.49
    Parrot 19.08 24.02 27.20 27.19 29.06 29.28

    Table 3.  SSIM measures for different methods, γ=5.
    Image Noisy ROF Median SDZ MDH Ours
    Lena 0.5377 0.8187 0.8766 0.9061 0.9126 0.9211
    Boat 0.3252 0.4659 0.7782 0.8276 0.8545 0.8662
    Montage 0.3230 0.8671 0.8772 0.9210 0.9152 0.9312
    Bridge 0.4354 0.7354 0.6325 0.7857 0.8112 0.8893
    House 0.2356 0.7332 0.7510 0.7786 0.8326 0.8627
    Vehicle 0.5707 0.8129 0.9012 0.9121 0.9236 0.9322
    Saturn 0.2080 0.8376 0.8636 0.9063 0.9041 0.9125
    Parrot 0.3999 0.7736 0.8353 0.8471 0.8729 0.8757


    Based on the idea of the Moreau envelope [30] and TGV regularization, we have proposed a new approach to Cauchy noise removal. We showed that the solution of the proposed model is unique. To solve the new model, an alternating minimization method was employed and its convergence was proved. Numerical results demonstrate that the image quality achieved by our method is better than that of some earlier restoration methods.

    We thank the authors of [13] and [15] for providing their codes.

    We declare that we have no commercial or associative interests that represent a conflict of interest in connection with the submitted work.



    [1] L. I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms, Phys. D, 60 (1992), 259–268. doi: 10.1016/0167-2789(92)90242-F
    [2] K. Bredies, H. P. Sun, Preconditioned Douglas-Rachford algorithms for TV and TGV-regularized variational imaging problems, J. Math. Imaging Vis., 53 (2015), 317–344.
    [3] S. Wang, T. Z. Huang, J. Liu, X. G. Lv, An alternating iterative algorithm for image deblurring and denoising problems, Commun. Nonlinear Sci. Numer. Simul., 19 (2014), 617–626. doi: 10.1016/j.cnsns.2013.07.004
    [4] Y. L. Wang, J. F. Yang, W. Yin, Y. Zhang, A new alternating minimization algorithm for total variation image reconstruction, SIAM J. Imaging Sci., 1 (2008), 248–272. doi: 10.1137/080724265
    [5] A. Chambolle, An algorithm for total variation minimization and applications, J. Math. Imaging Vis., 20 (2004), 89–97. doi: 10.1023/B:JMIV.0000011321.19549.88
    [6] A. Beck, M. Teboulle, Fast gradient-based algorithm for constrained total variation denoising and deblurring problems, IEEE Trans. Image Process., 18 (2009), 2419–2434. doi: 10.1109/TIP.2009.2028250
    [7] G. A. Tsihrintzis, Statistical modeling and receiver design for multi-user communication networks, In: R. J. Adler, R. E. Feldman, M. S. Taqqu, A practical guide to heavy tails: Statistical techniques and applications, Birkhäuser, 1998.
    [8] B. Kosko, Noise, New York: Viking Press, 2006.
    [9] T. Pander, New polynomial approach to myriad filter computation, Signal Process., 90 (2010), 1991–2001. doi: 10.1016/j.sigpro.2010.01.001
    [10] Y. C. Chang, S. R. Kadaba, P. C. Doerschuk, S. B. Gelfand, Image restoration using recursive Markov random field models driven by Cauchy distributed noise, IEEE Signal Process. Lett., 8 (2001), 65–66. doi: 10.1109/97.905941
    [11] A. Loza, D. Bull, N. Canagarajah, A. Achim, Non-Gaussian model-based fusion of noisy images in the wavelet domain, Comput. Vis. Image Und., 114 (2010), 54–65. doi: 10.1016/j.cviu.2009.09.002
    [12] T. Wan, N. Canagarajah, A. Achim, Segmentation of noisy colour images using Cauchy distribution in the complex wavelet domain, IET Image Process., 5 (2011), 159–170. doi: 10.1049/iet-ipr.2009.0300
    [13] F. Sciacchitano, Y. Q. Dong, T. Y. Zeng, Variational approach for restoring blurred images with Cauchy noise, SIAM J. Imaging Sci., 8 (2015), 1894–1922. doi: 10.1137/140997816
    [14] A. C. Bovik, Handbook of image and video processing, New York: Academic Press, 2000.
    [15] J. J. Mei, Y. Dong, T. Z. Huang, W. Yin, Cauchy noise removal by non-convex ADMM with convergence guarantees, J. Sci. Comput., 74 (2018), 743–766. doi: 10.1007/s10915-017-0460-5
    [16] K. Bredies, K. Kunisch, T. Pock, Total generalized variation, SIAM J. Imaging Sci., 3 (2010), 492–526.
    [17] F. Knoll, K. Bredies, T. Pock, R. Stollberger, Second order total generalized variation (TGV) for MRI, Magn. Reson. Med., 65 (2011), 480–491. doi: 10.1002/mrm.22595
    [18] K. Bredies, K. Kunisch, T. Valkonen, Properties of L1-TGV2: The one-dimensional case, J. Math. Anal. Appl., 398 (2013), 438–454. doi: 10.1016/j.jmaa.2012.08.053
    [19] Q. X. Zhong, C. S. Wu, Q. L. Shu, R. W. Liu, Spatially adaptive total generalized variation-regularized image deblurring with impulse noise, J. Electron. Imaging, 27 (2018), 053006.
    [20] W. Q. Lu, J. M. Duan, Z. W. Qiu, Z. K. Pan, R. W. Liu, L. Bai, Implementation of high-order variational models made easy for image processing, Math. Methods Appl. Sci., 39 (2016), 4208–4233. doi: 10.1002/mma.3858
    [21] R. W. Liu, L. Shi, W. H. Huang, J. Xu, S. C. H. Yu, D. F. Wang, Generalized total variation-based MRI Rician denoising model with spatially adaptive regularization parameters, Mag. Reson. Imaging, 32 (2014), 702–720. doi: 10.1016/j.mri.2014.03.004
    [22] R. W. Liu, L. Shi, S. C. H. Yu, D. F. Wang, Box-constrained second-order total generalized variation minimization with a combined L1, 2 data-fidelity term for image reconstruction, J. Electron. Imaging, 24 (2015), 033026. doi: 10.1117/1.JEI.24.3.033026
    [23] Y. Dong, T. Zeng, A convex variational model for restoring blurred images with multiplicative noise, SIAM J. Imaging Sci., 6 (2013), 1598–1625. doi: 10.1137/120870621
    [24] G. Aubert, J. F. Aujol, A variational approach to removing multiplicative noise, SIAM J. Appl. Math., 68 (2008), 925–946. doi: 10.1137/060671814
    [25] K. Bredies, Recovering piecewise smooth multichannel images by minimization of convex functionals with total generalized variation penalty, In: A. Bruhn, T. Pock, X. C. Tai, Efficient algorithms for global optimization methods in computer vision, Lecture Notes in Computer Science, Berlin: Springer, 8293 (2014), 44–77.
    [26] L. F. Bai, A new non-convex approach for image restoration with Gamma noise, Comput. Math. Appl., 77 (2019), 2627–2639. doi: 10.1016/j.camwa.2018.12.045
    [27] W. Feller, An introduction to probability theory and its applications, John Wiley & Sons, 2008.
    [28] D. P. Bertsekas, A. Nedić, A. E. Ozdaglar, Convex analysis and optimization, Athena Scientific, 2003.
    [29] R. Bergmann, A. Weinmann, A second-order TV-type approach for inpainting and denoising higher dimensional combined cyclic and vector space data, J. Math. Imaging Vis., 55 (2016), 401–427. doi: 10.1007/s10851-015-0627-3
    [30] N. Parikh, S. Boyd, Proximal algorithms, Found. Trends Optim., 1 (2014), 127–239.
    [31] A. Chambolle, T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, J. Math. Imaging Vis., 40 (2011), 120–145. doi: 10.1007/s10851-010-0251-1
    [32] W. H. Guo, J. Qin, W. T. Yin, A new detail-preserving regularity scheme, SIAM J. Imaging Sci., 7 (2014), 1309–1334. doi: 10.1137/120904263
    [33] N. Jacobson, Basic algebra II, San Francisco: Freeman Company, 1980.
    [34] C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Prob., 20 (2004), 103–120. doi: 10.1088/0266-5611/20/1/006
    [35] P. L. Combettes, V. R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Model. Simul., 4 (2005), 1168–1200. doi: 10.1137/050626090
    [36] Y. W. Wen, M. K. Ng, W. K. Ching, Iterative algorithms based on decoupling of deblurring and denoising for image restoration, SIAM J. Sci. Comput., 30 (2007), 2655–2674.
    [37] Y. M. Huang, M. K. Ng, Y. W. Wen, A fast total variation minimization method for image restoration, Multiscale Model. Simul., 7 (2008), 774–795. doi: 10.1137/070703533
    [38] C. L. Byrne, Applied iterative methods, New York: AK Peters/CRC Press, 2007.
    [39] B. R. Frieden, A new restoring algorithm for the preferential enhancement of edge gradients, J. Opt. Soc. Am., 66 (1976), 116–123.
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)