Research article

A variational image denoising model under mixed Cauchy and Gaussian noise

  • In this article, we propose a novel variational model for restoring images in the presence of a mixture of Cauchy and Gaussian noise. The model involves a novel data-fidelity term that characterizes the mixed noise as an infimal convolution of two noise distributions, together with total variation regularization. This data-fidelity term contributes to a suitable separation of the Cauchy and Gaussian noise components, facilitating simultaneous removal of the mixed noise. In addition, the total variation regularization enables adequate denoising in homogeneous regions while preserving edges. Despite the nonconvexity of the model, the existence of a solution is proven. By employing an alternating minimization approach and the alternating direction method of multipliers, we present an iterative algorithm for solving the proposed model. Experimental results validate the effectiveness of the proposed model compared to other existing models in terms of both visual quality and several image quality measures.

    Citation: Miyoun Jung. A variational image denoising model under mixed Cauchy and Gaussian noise[J]. AIMS Mathematics, 2022, 7(11): 19696-19726. doi: 10.3934/math.20221080




    Different image acquisition and transmission factors cause the observed images to be corrupted by a mixture of noise statistics. Image denoising aims to retrieve a clean image from the observed noisy image, which is an essential problem in image processing. We here focus on the image denoising problem in the presence of mixed Cauchy and additive white Gaussian noise. In real circumstances, there are noises with a strong impulsive nature which the Gaussian model fails to describe, such as atmospheric noise caused by lightning, picture noise, radar noise and so on. Cauchy noise is a type of alpha-stable noise that is impulsive in nature and has a heavy tail. It can occur in low-frequency atmospheric signals [1], underwater acoustic signals [2,3], radar clutter [4,5], multiple access interference in wireless communication systems [6], powerline communication channels [7], air turbulence [8], biomedical images, and synthetic aperture radar images [9,10]. Thus, the removal of Cauchy noise has been of great importance in many applications such as sonar, radar, image processing and communications [11,12,13]. On the other hand, additive white Gaussian noise frequently appears due to the temperature of the sensor and the level of illumination in the environment, and it corrupts every pixel. These noises can appear simultaneously in practice. Indeed, mixed Cauchy and Gaussian noise can occur in real-world applications such as communication systems, where the receiver observes the sum of Gaussian noise due to electronic components and impulsive noise due to environmental effects [14], or astrophysical image processing [15], where the cosmic microwave background radiation is described as the sum of Gaussian distributed random variables from the antenna beam and symmetric alpha-stable distributed random variables from galaxies. Thus, suppressing this mixed noise is a necessary problem.

    Let Ω ⊂ ℝ² be a connected bounded image domain with a Lipschitz boundary. We consider a noisy image f: Ω → ℝ given by

    f=u+n, (1.1)

    where n is the mixture of Cauchy noise and Gaussian noise. The Gaussian noise is assumed to follow a Gaussian distribution, N(0,σ2), with zero mean and standard deviation σ, and the Cauchy noise is assumed to follow a Cauchy distribution, C(0,γ). Specifically, the Cauchy noise, w, is a random variable following a Cauchy distribution, denoted by C(δ,γ), with the probability density function (PDF) [16,17]

    P(w; δ, γ) = γ / (π((w − δ)² + γ²)), (1.2)

    where δ ∈ ℝ is the parameter representing the location of the peak, and γ > 0 is the scale parameter that determines the level of noise. The Cauchy distribution looks similar to a Gaussian distribution with a bell-shaped curve, but it is a heavy-tailed distribution, as shown in Figure 1(a). Its tail heaviness is determined by the parameter γ, and it increases as the value of γ increases, which can be seen in Figure 1(b). Owing to the tail heaviness, the Cauchy distribution is more prone to producing values that fall far from its center. Hence, the noise generated from the Cauchy distribution is more impulsive than the Gaussian one. For instance, Cauchy noise tends to contain much larger noise spikes than Gaussian noise. In this work, we intend to recover a clean image u from the noisy image f, which is an ill-posed inverse problem.
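To make the tail comparison concrete, both PDFs can be evaluated directly; the following is an illustrative sketch (the function names are ours, not from the paper):

```python
import numpy as np

def cauchy_pdf(w, delta=0.0, gam=1.0):
    # PDF (1.2): gamma / (pi * ((w - delta)^2 + gamma^2))
    return gam / (np.pi * ((w - delta)**2 + gam**2))

def gauss_pdf(w, sigma=1.0):
    # PDF of N(0, sigma^2)
    return np.exp(-w**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
```

At w = 4, for instance, cauchy_pdf(4.0) ≈ 1.87e-2 while gauss_pdf(4.0) ≈ 1.34e-4, so a standard Cauchy sample lands four units from the center over a hundred times more often than a standard Gaussian one.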

    Figure 1.  Comparison of Cauchy distribution C(0,γ) and Gaussian distribution N(0,σ2). (a) PDFs of C(0,1) (red) and N(0,1) (blue), (b) PDFs of C(0,γ) with γ=5 (green), γ=10 (blue), γ=15 (red).

    To solve the ill-posed inverse problem (1.1), one of the well-known approaches is to solve a minimization problem of the following form:

    min_u E(u) = Φ(u, f) + R(u), (1.3)

    where Φ is a data-fidelity term that depends on the type of noise, and R is a regularization term that controls the smoothness of u. The most popular regularization is the total variation (TV) [18], due to its convexity and edge-preserving property. Many TV-based variational models have been proposed for restoring images with various types of noise, such as Gaussian noise [18], impulse noise [19,20,21], multiplicative noise [22,23,24,25], Poisson noise [26], Rician noise [27,28], and Cauchy noise [29,30,31]. Different noise distributions yield different data-fidelity terms, so one can attain suitable variational image denoising models depending on the noise type. For instance, the L2 data-fidelity term, Φ(u, f) = ∫_Ω (f − u)² dx, is typically used for image denoising models under Gaussian noise [18], and Φ(u, f) = ∫_Ω |f − u| dx is more appropriate for those under impulse noise [19].

    While there are numerous denoising methods for Gaussian noise, comparatively few approaches for eliminating Cauchy noise have been suggested. In addition to the Markov random field or wavelet-based denoising methods [32,33,34], a TV-based model was proposed in [29], with a nonconvex data-fidelity term derived from the Cauchy distribution (1.2). The same authors also suggested a convex model by inserting a quadratic penalty term that involves a pre-denoised image obtained by applying median filtering to the noisy data. However, median filtering does not always yield satisfactory denoising results. In [30], Mei et al. showed the effectiveness of the nonconvex model in [29] combined with the nonconvex alternating direction method of multipliers (ADMM) [35]. Moreover, a convex TV model was proposed in [31] for restoring images with α-stable noise with α ∈ (0,2) (Cauchy noise when α = 1, Gaussian noise when α = 2). The experimental results showed that the model outperformed the L1-TV [36] and Cauchy [29] models in impulsive noisy environments (i.e., α ∈ (0,1.5)), while providing comparable performance in less impulsive noisy environments (i.e., α ∈ (1.5,2)). The aforementioned nonconvex or convex TV models in [29] have also been extended in various works [37,38,39,40,41,42,43] by adopting other regularization terms instead of TV.

    Removing a mixture of noise is more challenging because of the unique nature of each of the two types of noise. For the removal of mixed impulse and Gaussian noise, various efficient two-phase methods that integrate variational methods with adaptive median filters have been proposed [44,45,46]. These two-phase approaches strongly rely on precise detection of noisy pixels. A unified framework of joint detection of noisy pixels and reduction of noise components has been suggested in [47,48,49]. Moreover, the combination of data-fidelity terms has been considered. Specifically, a linear combination of the L1 and L2 data-fidelity terms was considered [50,51]. The removal of mixed Poisson and Gaussian (MPG) noise has also been extensively studied. Some early works are based on noise parameter estimation [52,53], while others are mainly based on transform-domain procedures [54,55,56,57,58], such as variance-stabilizing transformation [59] or the Haar wavelet transform. The MAP approaches [60,61] lead to practical difficulties since the log-likelihood function involves an infinite summation. The combination of data-fidelity terms has also been considered for MPG removal [61,62,63,64]. In [65,66], the authors proposed new TV-based denoising models under mixed salt-and-pepper and Gaussian noise or MPG noise, by utilizing a data discrepancy which characterizes the mixed noise as an infimal convolution of two noise distributions. These data-fidelity terms provided better denoising performance than a combination of different data-fidelity terms corresponding to the noise types. In [67], new operator-splitting algorithms were suggested for solving the MPG model in [65]. On the other hand, there are only a few works dealing with mixed Cauchy and Gaussian noise, and they address only 1D signals [14,15,68,69]. In this work, we introduce a new model for restoring images under mixed Cauchy and Gaussian noise, following the idea of the works [65,66].

    To the best of our knowledge, there is no unified model for simultaneously removing both Cauchy and Gaussian noise from images, so our main contribution is to propose a novel model for denoising images with the mixed noise. In spite of the nonconvexity of the proposed model, the existence of a solution is proved. We also present an efficient iterative optimization algorithm. The rest of the paper is organized as follows. Section 2 recalls some variational models for restoring images with Cauchy noise. In Section 3, we propose a minimization problem for image denoising under mixed noise, along with an optimization algorithm for solving the proposed model. Section 4 presents experimental results with comparisons to other existing models, and Section 5 concludes our work with some comments.

    Assuming that the Cauchy noise follows a zero-centered Cauchy law (δ=0), Sciacchitano et al. [29] derived a TV based model for the removal of Cauchy noise:

    min_{u ∈ BV(Ω)} λ ∫_Ω log(γ² + (f − u)²) dx + ∫_Ω |Du|, (2.1)

    where BV(Ω) is the subspace of functions u ∈ L¹(Ω) such that the following BV semi-norm is finite [18]:

    ∫_Ω |Du| := sup{ ∫_Ω u div(φ) dx : φ ∈ C¹_c(Ω, ℝ²), ‖φ‖_∞ ≤ 1 }, (2.2)

    where the vector measure Du represents the distributional or weak gradient of u, and ‖·‖_∞ is the essential supremum norm. This is also called the total variation of u, denoted by TV(u). If u ∈ W^{1,1}(Ω), then ∫_Ω |Du| = ∫_Ω |∇u| dx. λ > 0 is a tuning parameter that determines the smoothness of the restored image u. Despite the nonconvexity of the model (2.1), the existence and uniqueness of a minimizer was proved under certain conditions. Mei et al. [30] also showed the efficiency of the model (2.1) combined with the alternating direction method of multipliers (ADMM) [35].

    Recently, Yang et al. [31] proposed a convex TV model for restoring images degraded by α-stable noise (0 < α < 2), including Cauchy noise (α = 1):

    min_{u ∈ BV(Ω)} λ( ∫_Ω log(γ + |f − u|) dx + (μ/2)‖u − g‖₂² ) + ∫_Ω |Du|, (2.3)

    where μ > 0 is a parameter, and g is the pre-denoised image obtained by applying a median filter to f. The existence and uniqueness of a minimizer for the model (2.3) was shown, and the model was efficiently solved by employing the primal-dual algorithm [70]. The numerical results showed that the model (2.3) provided better denoising performance than the existing models [20,29], especially when the noise has more impulsive properties (i.e., α ∈ (0,1.5)).

    In this section, we introduce a new image denoising model in the presence of the mixture of Cauchy noise and Gaussian noise.

    In general, both Cauchy noise and Gaussian noise are additive, so we assume that both noises occur simultaneously and independently in the entire domain. Thus, we consider a noisy image f ∈ L²(Ω) given by

    f=u+w+v, (3.1)

    where w is the Cauchy noise following the Cauchy distribution C(0,γ), and v is the Gaussian noise following the Gaussian distribution N(0,σ2). Since w and v occur independently, the order of w and v does not matter.

    To eliminate both the Gaussian noise v and the Cauchy noise w = f − u − v from the data f, we follow the idea in [65], which suggested infimal convolution-type data-fidelity terms for the removal of mixed salt-and-pepper–Gaussian noise or mixed Poisson–Gaussian noise. We now define an infimal convolution-type data-fidelity term for removing mixed Cauchy and Gaussian noise as

    Φ(u, f) := inf_{v ∈ L²(Ω)} { λ₁Φ₁(v) + λ₂Φ₂(u, f − v) }, (3.2)

    where Φ₁ and Φ₂ are the data-fidelity terms for the Gaussian and Cauchy noise components, respectively, defined as

    Φ₁(v) = ‖v‖₂²,  Φ₂(u, f − v) = ∫_Ω log(γ² + (f − u − v)²) dx, (3.3)

    where Φ₂ is derived from the data-fidelity term in (2.1), which is nonconvex. λ₁ and λ₂ are positive parameters that balance the smoothing effect of the regularization as well as the fit with respect to the intensity of each single noise distribution in f.

    By integrating the data-fidelity term in (3.2) with the TV regularization, we propose the following minimization problem for restoring images with mixed Cauchy and Gaussian noise:

    min_{u ∈ X, v ∈ L²(Ω)} { E(u, v) = λ₁Φ₁(v) + λ₂Φ₂(u, f − v) + ∫_Ω |Du| + (μ/2)‖u − g‖₂² }, (3.4)

    where X = BV(Ω) ∩ L²(Ω), g is the pre-denoised image obtained by applying a median filter to the data f, and μ > 0 is a parameter. Median filtering does not always bring adequate denoising results, but we add the last quadratic term mainly for the proof of the existence of a minimizer. Indeed, in practice we set the value of μ very small so that the pre-denoised image g barely impacts the denoising results. We note that the proposed model can be extended by utilizing other regularization terms instead of TV, such as higher-order regularization [71,72], nonlocal TV [73], dictionary learning [74,75], or the tight-frame approach [76]. This work mainly focuses on introducing a new data-fidelity term for the mixed Cauchy-Gaussian noise model, so the adoption of new regularization terms is beyond the scope of our work.

    The energy functional E in (3.4) is nonconvex without specific assumptions on the parameters, but it is still possible to show the existence of a minimizer for the minimization problem (3.4), which we prove in the following theorem.

    Theorem 1. Let f ∈ L²(Ω) and g ∈ L²(Ω). Then, the minimization problem (3.4) has at least one solution (u, v) ∈ X × L²(Ω) with X = BV(Ω) ∩ L²(Ω).

    Proof. Since Φ₂ attains its minimum value 2|Ω| log γ when u + v = f, the functional E in (3.4) is bounded from below. Then we can choose a minimizing sequence (u_n, v_n) in X × L²(Ω) such that E(u_n, v_n) ≤ C for a constant C > 0. So all terms in E(u_n, v_n) are bounded, i.e.,

    ‖v_n‖₂² ≤ C,  ∫_Ω log(γ² + (f − u_n − v_n)²) dx ≤ C,  ‖u_n − g‖₂² ≤ C,  ∫_Ω |Du_n| ≤ C. (3.5)

    Hence we can extract a subsequence {v_n} (still denoted in the same way) weakly converging to v in L²(Ω) with v_n → v a.e. in Ω. Then we have

    ‖v‖₂² ≤ liminf_{n→∞} ‖v_n‖₂². (3.6)

    Since g ∈ L²(Ω), {u_n} is bounded in L²(Ω) by (3.5) and thus bounded in L¹(Ω). Besides, {∫_Ω |Du_n|} is bounded, so {u_n} is bounded in BV(Ω). Hence, there is a subsequence {u_n} (still denoted in the same way) and u in BV(Ω) such that (i) u_n → u strongly in L¹(Ω), (ii) u_n ⇀ u weakly in L²(Ω), (iii) u_n → u a.e. in Ω, and (iv)

    ∫_Ω |Du| ≤ liminf_{n→∞} ∫_Ω |Du_n|. (3.7)

    Lastly, from Fatou's Lemma, we can finally attain that

    E(u, v) ≤ liminf_{n→∞} E(u_n, v_n), (3.8)

    which indicates that (u, v) is a minimizer of E.

    Remark. The functional E in (3.4) is convex under certain conditions. Specifically, it can be easily proven that E(·, v) is strictly convex if λ₂ ≤ 4γ²μ, while E(u, ·) is strictly convex if λ₂ ≤ 8γ²λ₁. Furthermore, if λ₂ < (1/2) min{8γ²λ₁, 4γ²μ}, then E(u, v) is strictly convex (for the proof, see Appendix A), so it has a unique minimizer. However, in practice we do not impose this condition on the parameter λ₂, for the following reason: we want to reduce the influence of the pre-denoised image g, so we set the value of μ to be very small. Under the condition above this forces a small value of λ₂, but a small λ₂ is not appropriate for the removal of Cauchy noise, especially when the level of Cauchy noise is high. Therefore, without the above constraint on λ₂, the functional E is nonconvex, and hence in practice we solve a nonconvex minimization problem.
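The thresholds 4γ²μ and 8γ²λ₁ can be traced to a pointwise bound on the second derivative of the logarithmic term; a sketch of the standard computation:

```latex
\frac{d^2}{dt^2}\,\lambda_2\log(\gamma^2+t^2)
  = \lambda_2\,\frac{2(\gamma^2-t^2)}{(\gamma^2+t^2)^2}
  \;\ge\; -\frac{\lambda_2}{4\gamma^2},
```

with the minimum attained at t² = 3γ². Hence the quadratic terms (μ/2)‖u − g‖₂² and λ₁‖v‖₂², whose second derivatives are μ and 2λ₁, dominate this concavity precisely when λ₂ ≤ 4γ²μ and λ₂ ≤ 8γ²λ₁, respectively.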

    To solve the proposed model (3.4), we consider a discretized image domain Ω = {(i, j) : i = 1, 2, ..., M, j = 1, 2, ..., N}, and let u_s be the pixel value of an image u at location s ∈ Ω. Then we numerically compute the solution pair of the following minimization problem:

    min_{u,v} { λ₁‖v‖₂² + λ₂G(u, v) + ‖∇u‖₁ + (μ/2)‖u − g‖₂² }  with  G(u, v) = ⟨log(γ² + (f − u − v)²), 1⟩, (3.9)

    where ⟨·, ·⟩ is the inner product, ‖·‖₂² = ⟨·, ·⟩, and ‖∇u‖₁ is the discrete version of the isotropic TV norm:

    ‖∇u‖₁ = Σ_s √( (∂_{x₁}u)_s² + (∂_{x₂}u)_s² ),

    with ∇u = [∂_{x₁}u, ∂_{x₂}u]^T, where ∂_{x₁}u and ∂_{x₂}u denote the finite difference operators that approximate the partial derivatives of the image u along the x₁-axis and x₂-axis, respectively.

    To solve the nonconvex problem (3.9), we first adopt the alternating minimization algorithm (AMA). The AMA minimizes a function of two variables by keeping one variable fixed while minimizing over the other, and iterating this process. This approach has performed well in practice, even when the objective function is nonconvex. The AMA applied to (3.9) yields the following iterative algorithm:

    u^{k+1} ∈ argmin_u { λ₂G(u, v^k) + ‖∇u‖₁ + (μ/2)‖u − g‖₂² },  v^{k+1} ∈ argmin_v { λ₁‖v‖₂² + λ₂G(u^{k+1}, v) }. (3.10)
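As a minimal illustration of the alternating scheme (on a hypothetical smooth quadratic objective, not the model (3.9) itself), each subproblem below is minimized exactly in closed form:

```python
# Alternating minimization of E(x, y) = (x - 1)^2 + (y + 2)^2 + 0.5*(x - y)^2.
# Each step fixes one variable and minimizes E exactly over the other.
def ama(iters=50):
    x, y = 0.0, 0.0
    for _ in range(iters):
        x = (2.0 + y) / 3.0   # solves dE/dx = 2(x - 1) + (x - y) = 0
        y = (x - 4.0) / 3.0   # solves dE/dy = 2(y + 2) - (x - y) = 0
    return x, y
```

The iterates converge to the joint minimizer (x, y) = (0.25, −1.25), which is the simultaneous solution of the two optimality conditions.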

    In the subsequent paragraphs, we solve the two subproblems in (3.10).

    First, to solve the u-subproblem in (3.10), we adopt the alternating direction method of multipliers (ADMM) [35] applied to nonconvex minimization problems with linear constraints. By introducing an auxiliary variable z, we can rewrite the u-subproblem in (3.10) as the following equivalent constrained problem:

    min_{u,z} { λ₂G(z, v^k) + ‖∇u‖₁ + (μ/2)‖u − g‖₂² },  subject to:  z = u. (3.11)

    Then the augmented Lagrangian function (ALF) corresponding to (3.11) is given by

    L_τ(u, z, p) = λ₂G(z, v^k) + ‖∇u‖₁ + (μ/2)‖u − g‖₂² − ⟨p, z − u⟩ + (τ/2)‖z − u‖₂², (3.12)

    where p ∈ ℝ^{M×N} is the Lagrangian multiplier, and τ > 0 is a penalty parameter.

    The ADMM applied to (3.11) leads to the following iterative algorithm:

    u^{ℓ+1} ∈ argmin_u L_τ(u, z^ℓ, p^ℓ),  z^{ℓ+1} ∈ argmin_z L_τ(u^{ℓ+1}, z, p^ℓ),  p^{ℓ+1} = p^ℓ + τ(u^{ℓ+1} − z^{ℓ+1}). (3.13)

    Following Theorem 4.2 in [30], we can prove the convergence of the ADMM in (3.13) as follows:

    Theorem 2. If τ > 2λ₂/γ² − μ, then the sequence {(u^ℓ, z^ℓ, p^ℓ)} generated by the algorithm (3.13) converges globally to a point (u*, z*, p*), which is a stationary point of L_τ.

    The proof of this theorem is omitted since it is similar to the proof given in [30]. The stationary point (u*, z*, p*) satisfies the Karush–Kuhn–Tucker (KKT) conditions of problem (3.11). However, the minimization problem (3.11) is nonconvex, so the KKT conditions are only necessary optimality conditions for (3.11). Hence, we cannot assure that (u*, z*) is an optimal solution of (3.11).

    In the following paragraphs, we solve the two subproblems in (3.13).

    u-subproblem in (3.13). First, the u-subproblem in (3.13) is convex, so it can be efficiently solved by various convex optimization algorithms [70,77,78,79,80]. Among them, we again utilize the ADMM [79,80]. Specifically, to handle the nondifferentiable L1 term, we introduce an auxiliary variable d to replace ∇u, and then arrive at the following constrained problem:

    min_{u,d} ‖d‖₁ + (μ/2)‖u − g‖₂² − ⟨p^ℓ, z^ℓ − u⟩ + (τ/2)‖z^ℓ − u‖₂²,  subject to:  d = ∇u. (3.14)

    Analogously, the ALF for the problem (3.14) is

    L_η(u, d, q) = ‖d‖₁ + (μ/2)‖u − g‖₂² − ⟨p^ℓ, z^ℓ − u⟩ + (τ/2)‖z^ℓ − u‖₂² − ⟨q, d − ∇u⟩ + (η/2)‖d − ∇u‖₂², (3.15)

    where q ∈ (ℝ^{M×N})² is the vector of Lagrangian multipliers, and η > 0 is a penalty parameter. Then the ADMM algorithm applied to (3.14) results in the following iterative algorithm:

    u^{m+1} ∈ argmin_u L_η(u, d^m, q^m),  d^{m+1} ∈ argmin_d L_η(u^{m+1}, d, q^m),  q^{m+1} = q^m + η(∇u^{m+1} − d^{m+1}). (3.16)
    Algorithm 1 Solving the proposed model (3.4).
    1: Input: choose the parameters λ₁, λ₂, μ, τ, η > 0 and the maximum iteration numbers N_u, N_uu, N_z.
    2: Initialization: set u⁰ = max(min(f, 255), 0), v⁰ = 0, z⁰ = u⁰, p⁰ = 0, q⁰ = 0.
    3: repeat
    4:   Compute u^{k+1} by iterating for ℓ = 0, 1, 2, ..., N_u:
    5:     compute u^{ℓ+1} by iterating for m = 0, 1, 2, ..., N_uu:
    6:       u^{m+1} by solving (3.17) using the DCT,
    7:       d^{m+1} = shrink(∇u^{m+1} + q^m/η, 1/η),
    8:       q^{m+1} = q^m + η(∇u^{m+1} − d^{m+1}).
    9:     compute z^{ℓ+1} by iterating for n = 0, 1, 2, ..., N_z:
    10:      z^{n+1} = z^n − F′(z^n)/F″(z^n),
    11:    update p^{ℓ+1} = p^ℓ + τ(u^{ℓ+1} − z^{ℓ+1}).
    12:   Compute v^{k+1} by solving Eq (3.25) using Cardano's formula.
    13: until a stopping condition is satisfied.
    14: Output: restored image u.


    The u-subproblem in (3.16) is a least squares problem, so the solution u^{m+1} in (3.16) can be obtained by solving the following normal equation:

    (μ + τ + η∇^T∇)u = μg + τ(z^ℓ − p^ℓ/τ) + η∇^T(d^m − q^m/η), (3.17)

    where ∇^T = −div, with the discrete divergence operator defined by div(w₁, w₂) = ∂_{x₁}w₁ + ∂_{x₂}w₂. In the discrete setting, ∇^T∇ is equal to −Δ, where Δ is the discrete Laplacian operator, which can be regarded as convolution with the kernel [0 1 0; 1 −4 1; 0 1 0]. Then, since Δ is a symmetric convolution operator, μ + τ + η∇^T∇ can be diagonalized by the two-dimensional discrete cosine transform (DCT2) under the symmetric boundary condition [81]. Thus we solve Eq (3.17) using the DCT2, denoted by F, under the symmetric boundary condition, and obtain an explicit formula for u^{m+1}:

    u^{m+1} = F⁻¹( F(μg + τ(z^ℓ − p^ℓ/τ) + η∇^T(d^m − q^m/η)) / (μ + τ + ηF(∇^T∇)) ), (3.18)

    where F⁻¹ denotes the inverse DCT2.
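The DCT-based solve of (3.17) can be sketched as follows, assuming SciPy's `dctn`/`idctn` and the symmetric (Neumann) boundary condition; the names `mu`, `tau`, `eta` mirror μ, τ, η, and `rhs` stands for the right-hand side of (3.17):

```python
import numpy as np
from scipy.fft import dctn, idctn

def solve_normal_eq(rhs, mu, tau, eta):
    # Solve (mu + tau + eta * grad^T grad) u = rhs via the 2-D DCT.
    # Under the symmetric boundary condition, grad^T grad = -Laplacian is
    # diagonalized by the DCT-II, with eigenvalues
    # (2 - 2cos(pi*i/M)) + (2 - 2cos(pi*j/N)).
    M, N = rhs.shape
    i = np.arange(M)[:, None]
    j = np.arange(N)[None, :]
    lam = (2 - 2 * np.cos(np.pi * i / M)) + (2 - 2 * np.cos(np.pi * j / N))
    rhs_hat = dctn(rhs, type=2, norm='ortho')
    return idctn(rhs_hat / (mu + tau + eta * lam), type=2, norm='ortho')
```

Each u-update then costs two DCTs and a pointwise division, i.e. O(MN log(MN)) per iteration.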

    Lastly, the solution dm+1 in (3.16) can be explicitly obtained as

    d^{m+1} = shrink(∇u^{m+1} + q^m/η, 1/η), (3.19)

    where shrink is the soft thresholding operator defined as

    shrink(t, ξ)_s = (t_s/|t_s|) max(|t_s| − ξ, 0), (3.20)

    where |t_s| = √( t_{1,s}² + t_{2,s}² ) with t = [t₁, t₂]^T.
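The shrink operator (3.20) acts on the vector magnitude at each pixel; a small sketch (the array layout (2, M, N) for [t₁, t₂] is our convention):

```python
import numpy as np

def shrink(t, xi):
    # Isotropic soft-thresholding (3.20): scale each pixel's 2-vector
    # (t1, t2) by max(|t| - xi, 0) / |t|, with |t| = sqrt(t1^2 + t2^2).
    mag = np.sqrt(t[0]**2 + t[1]**2)
    scale = np.maximum(mag - xi, 0.0) / np.where(mag > 0, mag, 1.0)
    return t * scale
```

Pixels whose gradient magnitude is below ξ are set to zero; larger ones are shortened by ξ while keeping their direction.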

    z-subproblem in (3.13). Next we solve the z-subproblem in (3.13), which can be rewritten as

    min_z λ₂⟨log(γ² + (z + v^k − f)²), 1⟩ + (τ/2)‖z − u^{ℓ+1} − p^ℓ/τ‖₂². (3.21)

    The first-order optimality condition for z^{ℓ+1} is given by

    F′(z) = 2λ₂(z + v^k − f)/(γ² + (z + v^k − f)²) + τ(z − u^{ℓ+1} − p^ℓ/τ) = 0, (3.22)

    where F(z) = λ₂ log(γ² + (z + v^k − f)²) + (τ/2)(z − u^{ℓ+1} − p^ℓ/τ)². This normal Eq (3.22) can be efficiently solved using Newton's method as follows:

    z^{n+1} = z^n − F′(z^n)/F″(z^n). (3.23)

    The z-subproblem (3.21) is strictly convex if λ₂ ≤ 4γ²τ. This condition is satisfied in practice, so Eq (3.22) has a unique real root. Hence, with a good initial guess, Newton's method converges within a few iterations. Meanwhile, Eq (3.22) can be rewritten as a cubic equation by multiplying through by the denominator, and this cubic equation can be solved explicitly using Cardano's formula [82]. In our simulations, we utilize Newton's method due to its efficiency: despite giving the same denoising results, the total computational time with Newton's method is shorter than with Cardano's formula.
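A sketch of the pointwise Newton iteration (3.23) for the root of (3.22); here `t` stands for v^k − f, `c` for u^{ℓ+1} + p^ℓ/τ, and the warm start at `c` is our choice:

```python
import numpy as np

def newton_z(t, c, lam2, gam, tau, iters=20):
    # Newton's method on F'(z) = 2*lam2*(z+t)/(gam^2+(z+t)^2) + tau*(z-c) = 0.
    z = c.copy()  # minimizer of the quadratic part as initial guess
    for _ in range(iters):
        r = z + t
        den = gam**2 + r**2
        Fp = 2 * lam2 * r / den + tau * (z - c)            # F'(z)
        Fpp = 2 * lam2 * (gam**2 - r**2) / den**2 + tau    # F''(z)
        z = z - Fp / Fpp
    return z
```

When λ₂ ≤ 4γ²τ, F″ ≥ τ − λ₂/(4γ²) > 0 everywhere, so the update is well defined and converges to the unique root.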

    Lastly, we solve the v-subproblem in the AMA (3.10):

    min_v λ₁‖v‖₂² + λ₂⟨log(γ² + (v + u^{k+1} − f)²), 1⟩. (3.24)

    The necessary optimality condition for vk+1 is

    λ₁v + λ₂(v + u^{k+1} − f)/(γ² + (v + u^{k+1} − f)²) = 0. (3.25)

    The problem (3.24) is convex if λ₂ ≤ 8γ²λ₁. In practice, this condition is not satisfied due to the choice of small values for λ₁. Thus, the problem (3.24) is not convex, so we utilize Cardano's formula to solve Eq (3.25). Specifically, Eq (3.25) can be written as the following cubic equation:

    av³ + bv² + cv + d = 0, (3.26)

    with a = λ₁, b = 2λ₁(u^{k+1} − f), c = λ₁(γ² + (u^{k+1} − f)²) + λ₂, and d = λ₂(u^{k+1} − f). Cardano's formula for the real roots of a cubic polynomial is given in the following proposition [82].

    Proposition 1. For the cubic equation with real coefficients

    ax3+bx2+cx+d=0,a0, (3.27)

    define Q = (3ac − b²)/(9a²), R = (9abc − 27a²d − 2b³)/(54a³), and the discriminant D = Q³ + R². If D > 0, the cubic equation (3.27) has only one real root, which is given by

    x = ∛(R + √D) + ∛(R − √D) − b/(3a). (3.28)

    Otherwise, if D ≤ 0, Eq (3.27) has three real roots (possibly equal), which are given by

    x = 2√(−Q) cos((θ + 2kπ)/3) − b/(3a),  k = 0, 1, 2, (3.29)

    where θ = cos⁻¹( R/√(−Q)³ ).

    If the Eq (3.25) has three real roots, we choose one which yields the minimum value of the objective function in (3.24).
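Proposition 1 translates directly into code; a sketch (the function name and the list-of-roots return convention are ours):

```python
import numpy as np

def cardano_real_roots(a, b, c, d):
    # Real roots of a*x^3 + b*x^2 + c*x + d = 0 (a != 0), following Proposition 1.
    Q = (3*a*c - b**2) / (9*a**2)
    R = (9*a*b*c - 27*a**2*d - 2*b**3) / (54*a**3)
    D = Q**3 + R**2  # discriminant
    if D > 0:  # one real root, Eq (3.28)
        return [np.cbrt(R + np.sqrt(D)) + np.cbrt(R - np.sqrt(D)) - b/(3*a)]
    theta = np.arccos(R / np.sqrt(-Q**3))  # three real roots, Eq (3.29)
    return [2*np.sqrt(-Q)*np.cos((theta + 2*k*np.pi)/3) - b/(3*a)
            for k in range(3)]
```

For the v-subproblem, one then evaluates the objective (3.24) at each real root and keeps the minimizer, as stated above.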

    Consequently, the whole algorithm for solving the problem (3.9) is given in Algorithm 1.

    This section presents the experimental results of the proposed model (3.4) and comparisons to other existing models. We compare our model with two Cauchy denoising models, the nonconvex TV model (Cauchy-TV) [29,30] and Yang et al.'s model [31] (Yang's), given in (2.1) and (2.3), respectively. Furthermore, due to the impulsive characteristics of Cauchy noise, we also present the denoising results of median filtering (MF) [83] and the L1-TV model [19]. For solving the L1-TV and Cauchy-TV models, we employ the convex and nonconvex ADMM algorithms, respectively.

    The test images are given in Figure 2, and the sizes of the images are 256×256 or 481×321. The range of intensity values in the original images is assumed to be [0, 255]. The Cauchy noise w is generated using the following property: if X and Y are two independent Gaussian random variables with mean 0 and variance 1, then the ratio X/Y follows the standard Cauchy distribution C(0,1) [84,85]. Thus w = γ n₁/n₂, where n₁ and n₂ independently follow the standard normal distribution N(0,1). In the experiments, we consider four mixed noise cases: (γ, σ) = (10, 20), (10, 30), (15, 10), (15, 20). In Yang's model (2.3), the parameter γ is set to the exact Cauchy noise level, as in our model.
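The noise synthesis just described can be sketched as follows (assuming NumPy; the function name is ours):

```python
import numpy as np

def mixed_noise(shape, gam, sigma, seed=None):
    # n = w + v as in (3.1): Cauchy C(0, gam) plus Gaussian N(0, sigma^2).
    rng = np.random.default_rng(seed)
    n1 = rng.standard_normal(shape)
    n2 = rng.standard_normal(shape)
    w = gam * n1 / n2                       # ratio of two N(0,1) is C(0,1)
    v = sigma * rng.standard_normal(shape)  # additive white Gaussian noise
    return w + v
```

For example, f = u + mixed_noise(u.shape, gam=10, sigma=20) reproduces the first test case.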

    Figure 2.  Original clean images. Top to bottom (left to right): Barbara, Bird, Boat, Building (481×321), Cameraman, Castle (481×321), Lake, Lena, Parrot, Peppers, Pirate, Policemen (481×321).

    The quality of restored images is measured by the Peak-Signal-to-Noise-Ratio (PSNR) value as

    PSNR(u*, u) = 10 log₁₀( 255²MN / ‖u* − u‖₂² ), (4.1)

    where u* and u are the restored and original images, respectively. We also compute the structural similarity index measure (SSIM) [86], which is a perception-based measure that uses information about the structure of the objects in the visual scene. Specifically, we compute the mean SSIM index value between two images u* and u using

    SSIM(x, y) = ( (2μ_xμ_y + c₁)(2σ_xy + c₂) ) / ( (μ_x² + μ_y² + c₁)(σ_x² + σ_y² + c₂) ), (4.2)

    where x and y are spatial patches extracted from u* and u respectively, μ_x and σ_x² represent the average and variance of x respectively (and similarly for y), σ_xy is the covariance of x and y, and c₁ and c₂ are constants for stability. For noisy data f, the intensity values of some noisy pixels are much larger (or smaller) than 255 (or 0), and they dominate the PSNR value of f, so the PSNR of f cannot be properly computed. Thus, we compute the PSNR value of the cropped image max(min(f, 255), 0) instead of f, as given in Tables 1–4. Moreover, we use this cropped image as the initial condition for u, following the experiments in [30].
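The PSNR of (4.1) in code (a direct transcription; the function name is ours):

```python
import numpy as np

def psnr(u_restored, u_orig):
    # Eq (4.1): 10*log10(255^2 * M * N / ||u_restored - u_orig||_2^2),
    # equivalently 10*log10(255^2 / MSE) for an M x N image.
    mse = np.mean((np.asarray(u_restored, float) - np.asarray(u_orig, float))**2)
    return 10.0 * np.log10(255.0**2 / mse)
```

A uniform error of one gray level, for instance, gives 20·log₁₀(255) ≈ 48.13 dB.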

    Table 1.  Denoising results with mixed Cauchy-Gaussian noise when (γ,σ)=(10,20).
    Model u0 (cropped f) Median filter Cauchy-TV [30] L1-TV [19] Yang et al. [31] Proposed
    Image PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM
    barbara 15.41 0.2557 22.34 0.5333 23.77 0.6410 23.92 0.6488 23.90 0.6573 24.33 0.6741
    bird 15.53 0.1354 28.53 0.7056 29.89 0.8610 30.02 0.8693 30.18 0.8736 30.49 0.8786
    boat 15.41 0.2693 22.41 0.5136 23.67 0.6057 23.79 0.6118 23.94 0.6193 24.39 0.6373
    building 15.45 0.2461 22.75 0.5817 23.77 0.6984 23.80 0.7068 24.10 0.7179 24.52 0.7378
    cameraman 15.53 0.2181 22.85 0.5831 24.68 0.7377 24.75 0.7395 24.87 0.7506 25.50 0.7756
    castle 15.41 0.1820 23.36 0.5876 25.00 0.7442 25.11 0.7502 25.14 0.7560 25.84 0.7777
    lake 15.33 0.3048 22.10 0.5793 23.28 0.6725 23.29 0.6772 23.31 0.6764 24.13 0.7150
    lena 15.39 0.2183 24.95 0.6421 25.91 0.7211 26.01 0.7240 26.26 0.7368 26.45 0.7494
    parrot 15.39 0.2453 23.12 0.6435 24.91 0.7375 24.94 0.7375 25.21 0.7583 25.51 0.7811
    peppers 15.40 0.2055 25.26 0.7033 26.15 0.7825 25.86 0.7790 26.36 0.7981 26.92 0.8084
    pirate 15.43 0.2889 23.03 0.4892 23.83 0.5483 24.00 0.5634 24.03 0.5613 24.31 0.5690
    policemen 15.40 0.2839 20.00 0.4963 21.61 0.6541 21.82 0.6712 21.82 0.6762 22.47 0.7079
    Average 15.42 0.2377 23.29 0.5882 24.71 0.7003 24.77 0.7065 24.92 0.7151 25.41 0.7343
    * Note: Bold values indicate the best denoising performance.

    Table 2.  Denoising results with mixed Cauchy-Gaussian noise when (γ,σ)=(10,30).
    Model u0 (cropped f) Median filter Cauchy-TV [30] L1-TV [19] Yang et al. [31] Proposed
    Image PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM
    barbara 14.55 0.2193 21.53 0.4620 22.88 0.5732 23.19 0.5965 23.30 0.6100 23.61 0.6238
    bird 14.62 0.1143 26.76 0.6129 28.16 0.8319 28.67 0.8459 28.73 0.8544 29.12 0.8471
    boat 14.46 0.2293 21.98 0.4618 22.86 0.5510 23.21 0.5739 23.32 0.5779 23.63 0.5972
    building 14.61 0.2145 22.22 0.5175 22.90 0.6487 23.21 0.6697 23.42 0.6803 23.45 0.6874
    cameraman 14.62 0.1908 22.22 0.5097 23.70 0.7040 24.12 0.7033 24.32 0.7208 24.66 0.7465
    castle 14.58 0.1603 22.71 0.5027 24.20 0.7069 24.56 0.7158 24.58 0.7280 24.97 0.7399
    lake 14.63 0.2707 21.59 0.5313 22.49 0.6202 22.86 0.6461 22.91 0.6519 23.35 0.6752
    lena 14.53 0.1889 24.09 0.5762 24.82 0.6754 25.12 0.6909 25.36 0.7066 25.67 0.7100
    parrot 14.67 0.2231 22.48 0.5799 23.71 0.7106 24.12 0.7172 24.24 0.7434 24.55 0.7482
    peppers 14.55 0.1783 24.14 0.6316 24.72 0.7373 24.77 0.7448 25.16 0.7667 25.61 0.7655
    pirate 14.59 0.2489 22.42 0.4526 23.09 0.5029 23.31 0.5159 23.33 0.5062 23.61 0.5283
    policemen 14.65 0.2576 19.72 0.4315 20.88 0.6118 21.28 0.6386 21.32 0.6448 21.45 0.6564
    Average 14.58 0.2080 22.65 0.5224 23.70 0.6562 24.03 0.6715 24.16 0.6825 24.47 0.6938
    * Note: Bold values indicate the best denoising performance.

    Table 3.  Denoising results with mixed Cauchy-Gaussian noise when (γ,σ)=(15,10).
    Model u0 (cropped f) Median filter Cauchy-TV [30] L1-TV [19] Yang et al. [31] Proposed
    Image PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM
    barbara 14.61 0.2278 22.38 0.5471 24.06 0.6666 23.86 0.6517 23.95 0.6627 24.13 0.6687
    bird 14.44 0.1150 28.98 0.7423 30.44 0.8736 30.17 0.8668 30.58 0.8792 30.72 0.8800
    boat 14.49 0.2331 22.45 0.5289 24.31 0.6476 23.99 0.6318 23.89 0.6213 24.42 0.6486
    building 14.51 0.2171 22.83 0.6055 24.27 0.7328 23.76 0.7120 23.99 0.7182 24.31 0.7386
    cameraman 14.54 0.1914 23.05 0.6201 25.06 0.7684 24.76 0.7519 24.82 0.7647 25.28 0.7761
    castle 14.47 0.1605 23.41 0.6129 25.58 0.7719 25.28 0.7504 25.13 0.7601 25.83 0.7810
    lake 14.58 0.2785 22.14 0.6005 23.86 0.7082 23.51 0.6937 23.60 0.6965 24.12 0.7204
    lena 14.53 0.1952 25.16 0.6629 26.35 0.7471 26.06 0.7339 26.22 0.7417 26.52 0.7518
    parrot 14.48 0.2236 23.18 0.6741 25.34 0.7788 24.82 0.7674 25.18 0.7771 25.57 0.7870
    peppers 14.46 0.1789 25.30 0.7296 26.77 0.7998 26.13 0.7763 26.36 0.8029 26.97 0.8068
    pirate 14.48 0.2497 23.10 0.4990 24.23 0.5749 24.08 0.5715 23.95 0.5508 24.36 0.5780
    policemen 14.47 0.2560 20.03 0.5213 21.88 0.6906 21.58 0.6757 21.55 0.6695 22.05 0.7044
    Average 14.50 0.2105 23.50 0.6120 25.18 0.7300 24.83 0.7152 24.93 0.7203 25.35 0.7367
    * Note: Bold values indicate the best denoising performance.

    Table 4.  Denoising results with mixed Cauchy-Gaussian noise when (γ,σ)=(15,20).
    Model u0 (cropped f) Median filter Cauchy-TV [30] L1-TV [19] Yang et al. [31] Proposed
    Image PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM
    barbara 14.04 0.2053 21.63 0.4855 23.50 0.6235 23.42 0.6159 23.54 0.6304 23.64 0.6315
    bird 14.16 0.1067 27.49 0.6569 29.16 0.8538 29.11 0.8532 29.25 0.8592 29.49 0.8652
    boat 14.11 0.2205 22.21 0.4869 23.49 0.5958 23.43 0.5946 23.49 0.5984 23.89 0.6119
    building 14.11 0.2024 22.41 0.5462 23.39 0.6877 23.36 0.6834 23.63 0.6959 23.97 0.7126
    cameraman 14.09 0.1784 22.44 0.5448 24.35 0.7404 24.10 0.7343 24.31 0.7368 24.76 0.7583
    castle 14.11 0.1498 22.94 0.5392 24.84 0.7416 24.71 0.7310 24.77 0.7385 25.17 0.7561
    lake 14.18 0.2580 21.82 0.5581 23.18 0.6693 23.10 0.6644 23.17 0.6675 23.48 0.6836
    lena 14.07 0.1771 24.47 0.6083 25.51 0.7159 25.36 0.7099 25.64 0.7199 25.75 0.7255
    parrot 14.10 0.2087 22.71 0.6146 24.50 0.7413 24.24 0.7350 24.52 0.7543 24.78 0.7643
    peppers 14.07 0.1682 24.55 0.6633 25.69 0.7672 25.06 0.7652 25.51 0.7818 25.99 0.7862
    pirate 14.03 0.2259 22.59 0.4673 23.60 0.5405 23.56 0.5421 23.58 0.5337 23.83 0.5436
    policemen 14.12 0.2446 19.84 0.4621 21.52 0.6560 21.36 0.6497 21.44 0.6565 22.07 0.6860
    Average 14.09 0.1954 22.92 0.5527 24.39 0.6944 24.23 0.6898 24.40 0.6973 24.74 0.7104
    * Note: Bold values indicate the best denoising performance.


    The stopping criterion for our model is given by

    \frac{\|g^{iter+1} - g^{iter}\|_2}{\|g^{iter+1}\|_2} < tol, \quad \text{or} \quad iter > MaxIter, \qquad (4.3)

    where g = u or z, tol is a given tolerance, and MaxIter is the maximum number of iterations. For our model, we set tol = 10^{-4}. For the L1-TV and Yang's models, we use the same stopping condition as ours, while for the Cauchy-TV model we use the stopping condition given in [30]. For our model, the maximum number of Newton iterations for z is fixed as N_z = 5, and we fix N_u = 10 and N_{uu} = 5.
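In code, criterion (4.3) amounts to a relative-change test on the current iterate. A minimal NumPy sketch; the `max_iter=500` default below is illustrative and not a value stated in the paper:

```python
import numpy as np

def stop_criterion(g_new, g_old, it, tol=1e-4, max_iter=500):
    """Relative-change stopping test of Eq. (4.3), applied to u or z.

    Stops when ||g^{k+1} - g^k||_2 / ||g^{k+1}||_2 < tol, or when the
    iteration count exceeds max_iter.
    """
    denom = max(np.linalg.norm(g_new), 1e-12)  # guard against division by zero
    rel_change = np.linalg.norm(g_new - g_old) / denom
    return rel_change < tol or it > max_iter
```

The same test is applied independently to the u- and z-iterates in the alternating scheme.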

    The parameters of our model are selected as follows: The parameter μ is fixed as 10^7, and the penalty parameters τ and η in the ADMM algorithms are fixed as 1. The parameters λ1 and λ2 mainly influence the quality of the restored images, and they are tuned to achieve the best denoising results. Specifically, the parameter λ1 is set to 0.05 when (γ,σ)=(15,10), and to 0.02 for the other noise cases. The values of the parameter λ2 are given in all figures.

    First, Figure 3 compares noisy images and signals degraded by Cauchy noise with γ=10, Gaussian noise with σ=20, and mixed Cauchy-Gaussian noise with (γ,σ)=(10,20), respectively. The cross-sectional lines in the right column show that the vertical scale for the signal corrupted by Cauchy noise ranges from -200 to 900, which reflects the impulsive nature of the Cauchy noise. Moreover, although the signal distorted by the mixed noise is mainly influenced by the Cauchy component, it can be seen that the Gaussian noise further distorts the signal.

    Figure 3.  Noisy images (left) and their cross-sectional lines (right), degraded by (top row) Cauchy noise with γ=10, (middle) Gaussian noise with σ=20, (bottom) mixed Cauchy-Gaussian noise with (γ,σ)=(10,20). Blue line: Original clean signal, Red line: Noisy signal.
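The degraded inputs in Figure 3 can be simulated by adding independent Cauchy and Gaussian components to a clean image. A minimal NumPy sketch, assuming intensities on a [0, 255] scale, with `gamma` the Cauchy scale parameter and `sigma` the Gaussian standard deviation:

```python
import numpy as np

def add_mixed_noise(u, gamma, sigma, seed=0):
    """Degrade a clean image u with additive Cauchy (scale gamma) plus
    Gaussian (standard deviation sigma) noise, as in Figure 3."""
    rng = np.random.default_rng(seed)
    cauchy = gamma * rng.standard_cauchy(u.shape)  # heavy-tailed, impulsive
    gauss = rng.normal(0.0, sigma, u.shape)        # light-tailed, everywhere
    return u + cauchy + gauss
```

The heavy tails of the Cauchy component produce the large spikes visible in the cross-sectional plots, while the Gaussian component perturbs every pixel by a moderate amount.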

    In Figure 4, we compare the denoising results of all models when the noise level is (γ,σ)=(10,20). The data images are given in Figure 5. First, the MF removes both noises to some extent, but its restored images are noisier and blurrier than those of the TV-based models. Among the TV models, the L1-TV model provides denoising results similar to the Cauchy-TV model. This is because the L1 data-fidelity term is suitable for eliminating both impulsive and Gaussian noise. On the other hand, Yang's model yields slightly better or comparable denoising results relative to the L1-TV model, which likewise demonstrates its capability to handle both noises. In fact, Yang's model attains higher average PSNR and SSIM values than the L1-TV model in all noise cases, as shown in Tables 1-4. Compared with these existing models, our model generates cleaner homogeneous regions, such as the sky areas in Boat and Cameraman, while better preserving textures and details, such as the textured region in Barbara, the ropes and iron pillars in Boat, and the face and tripod in Cameraman. This can be seen more clearly in the zoomed images in Figure 6. Moreover, these observations are consistent with the highest PSNR and SSIM values, attained by our model as shown in Table 1. Overall, these examples justify the effectiveness of our data-fidelity term for removing mixed Cauchy and Gaussian noise.

    Figure 4.  Denoising results of our model and other models when (γ,σ)=(10,20). (a) median filter of size 4×4 (top row) or 5×5 (middle-bottom rows). PSNR values (left to right): (top row) 22.34/23.77/23.92/23.90/24.33, (middle) 22.41/23.67/23.79/23.94/24.39, (bottom) 22.85/24.68/24.75/24.87/25.50. Parameter λ2 of the proposed model (top to bottom): 24, 23, 23.
    Figure 5.  Noisy data f. 1st-3rd columns: Mixed Cauchy-Gaussian noise with (γ,σ)=(10,20), 4th-6th columns: Mixed Cauchy-Gaussian noise with (γ,σ)=(15,10).
    Figure 6.  Zoomed denoised images from Figure 4.
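For reference, the MF baseline compared above is an ordinary 2D median filter; in practice one would call `scipy.ndimage.median_filter`, but a minimal pure-NumPy sketch makes the operation explicit. The `size` argument corresponds to the 4×4 or 5×5 windows reported in Figure 4:

```python
import numpy as np

def median_filter2d(f, size=5):
    """Median filter with edge replication: effective against the
    impulsive Cauchy component, but blurs edges and leaves most of the
    Gaussian noise."""
    r = size // 2
    padded = np.pad(f, r, mode='edge')
    out = np.empty_like(f, dtype=float)
    H, W = f.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out
```

A single large outlier inside a window is discarded by the median, which is why the MF copes with Cauchy spikes far better than linear smoothing would.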

    In Figure 7, we present denoising results with a higher Cauchy noise level and a smaller Gaussian noise level, i.e., (γ,σ)=(15,10). Unlike the case (γ,σ)=(10,20), the Cauchy-TV model furnishes better denoising results than the L1-TV and Yang's models. This implies that, as the Cauchy noise level increases while the Gaussian noise level decreases, the performance of the Cauchy-TV model improves substantially, unlike the other models. In particular, Yang's model retains some Cauchy noise in the restored images, which is more obvious in the zoomed images in Figure 8. This problem might be mitigated by decreasing the regularization parameter λ or increasing the size of the median filter applied to g, but both lead to much smoother restored images with considerably lower PSNR values. Thus, we choose the given images as the best restored images of Yang's model. Despite the improved performance of the Cauchy-TV model, our model provides better-preserved details and cleaner smooth regions than the other models, as seen in the window parts in Castle and the sky areas in Policemen; this also leads to the highest PSNR values and shows the outstanding performance of our model. Lastly, Figure 9 presents the final Gaussian components, v, corresponding to the final restored images, u, given in Figure 7. This validates that our model properly separates the Gaussian and Cauchy noise components.

    Figure 7.  Denoising results of our model and other models when (γ,σ)=(15,10). (a) median filter of size 5×5. PSNR values (left to right): (top row) 23.41/25.58/25.28/25.13/25.83, (middle) 25.30/26.77/26.13/26.36/26.97, (bottom) 20.03/21.88/21.58/21.55/22.05. Parameter λ2 of the proposed model (top to bottom): 29, 30, 30.
    Figure 8.  Zoomed denoised images from Figure 7.
    Figure 9.  Final Gaussian noise components v corresponding to the denoised images u of the proposed model in Figure 7 when (γ,σ)=(15,10).

    Figure 10 compares the denoising results when (γ,σ)=(10,20), (10,30), (15,20). First, it can be observed that the Cauchy-TV model does not perform well in the presence of high-level Gaussian noise, although it properly removes the Cauchy noise. Moreover, Yang's model handles both noises to a certain degree, but it keeps some Cauchy noise under a high level of Cauchy noise, as shown in the denoised image for (γ,σ)=(15,20). The L1-TV model performs moderately in removing both noises, but its performance does not exceed that of Yang's model with respect to the PSNR and SSIM values. In contrast, our model eliminates both noises well in all noise cases, sufficiently denoising homogeneous regions while preserving fine structures and edges better than the other models.

    Figure 10.  Comparison of denoising results with different noise levels, (γ,σ)=(10,20) (2nd row), (γ,σ)=(10,30) (3rd row), (γ,σ)=(15,20) (4th row). Parameter λ2 of the proposed model (top to bottom): 21, 20, 29.

    In Figure 11, we present zoomed denoised images when (γ,σ)=(10,30) and (γ,σ)=(15,20), respectively. The zoomed data images are given in the first row. When (γ,σ)=(10,30), the denoised images of our model and Yang's model seem visually indistinguishable, but our model yields cleaner homogeneous regions, such as the sky areas in both examples. On the other hand, Yang's model fails to adequately remove the Cauchy noise, whose trace can be detected in some pixels of the Pirate image. In contrast, our model sufficiently denoises smooth regions while better keeping delicate features than the other models, leading to the highest PSNR values. This can be observed in the eye area in Parrot and the face region in Pirate. Hence, our model attains more satisfactory denoising results even when the level of Cauchy or Gaussian noise is high.

    Figure 11.  Zoomed denoised images using our model and other models when (γ,σ)=(10,30) (2nd-3rd rows), (γ,σ)=(15,20) (4th-5th rows). Parameter λ2 of the proposed model (top to bottom): 20, 22, 31, 33.

    In Tables 1-4, we report the PSNR and SSIM values of the restored images of all models. The proposed model yields the highest PSNR and SSIM values in almost all cases. Overall, our model leads to the best denoising results with regard to these image quality assessments, which again confirms its superior performance over the existing models.
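The PSNR values reported in Tables 1-4 follow the standard definition; a minimal sketch assuming an 8-bit peak value of 255 (SSIM can be computed analogously with, e.g., `skimage.metrics.structural_similarity`):

```python
import numpy as np

def psnr(u, u_ref, peak=255.0):
    """Peak signal-to-noise ratio in dB between a restored image u and
    the clean reference u_ref; peak=255 assumes an 8-bit intensity range."""
    mse = np.mean((u.astype(float) - u_ref.astype(float)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```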

    Figure 12 presents the plots of the energy values E(uk,vk) in (3.4) and the PSNR values of uk versus the outer iteration number k. In all cases, as the outer iteration proceeds, the energy values decrease and the PSNR values increase. Moreover, in Figure 13, we present the plots of the relative errors of u and v. These relative errors generally decrease as the iteration proceeds, despite slight fluctuations in the relative errors of v. All of these illustrate the convergence behavior of the proposed iterative algorithm, even though convergence is not theoretically proven.

    Figure 12.  Plots of (left) energy values E(uk,vk) and (right) PSNR values of uk via the outer iteration k, when (left) (γ,σ)=(15,10), (right) (γ,σ)=(15,20).
    Figure 13.  Plots of relative errors of uk (top) and vk (bottom) versus the outer iteration k. Top: \ln(\|u^{k+1}-u^k\|_2/\|u^{k+1}\|_2), bottom: \ln(\|v^{k+1}-v^k\|_2/\|v^{k+1}\|_2).

    In Figures 14 and 15, we compare the denoising results of the proposed model with different values of the parameter λ2 in (3.2). In Figure 14, we present the denoising results when (γ,σ)=(10,20) and (10,30), while in Figure 15, we present the denoising results when (γ,σ)=(10,20) and (15,20). First, it can be observed that as λ2 increases, the restored image u includes more details, but it also tends to retain noise. Thus, we select as the best restored image a sufficiently denoised one whose PSNR and SSIM values are as high as possible. The optimal value of λ2 may differ between images, as shown in the first rows of both figures, because different images have different structures and characteristics. Moreover, the optimal value of λ2 depends on the noise level, since λ2 is a regularization parameter that must be adjusted to obtain an adequately denoised image. Specifically, the optimal value of λ2 tends to decrease as the Gaussian noise level σ increases, while it tends to increase as the Cauchy noise level γ increases, as seen in the second rows of both figures. However, we note that the optimal values of λ2 are restricted to a few values: for all the test images except Bird, the optimal λ2 is chosen from {21,22,23,24} when (γ,σ)=(10,20), {20,21,22,23} when (γ,σ)=(10,30), and {29,30,31,32,33,34} when (γ,σ)=(15,20), while λ1 is fixed at 0.02. Furthermore, the restored images obtained with values from the same set do not change significantly, and their PSNR and SSIM values change only slightly, as shown in these examples. Hence, these examples show that the proposed model is not sensitive to the choice of λ2.

    Figure 14.  Denoising results with different λ2 in the proposed model, when (γ,σ)=(10,20) (top row), (γ,σ)=(10,30) (bottom row). PSNR/SSIM values are presented. λ2^{best} represents the optimal value of λ2.
    Figure 15.  Denoising results with different λ2 in the proposed model, when (γ,σ)=(10,20) (top row), (γ,σ)=(15,20) (bottom row). PSNR/SSIM values are presented. λ2^{best} represents the optimal value of λ2.

    Figure 16 presents the denoising results of the proposed model with two different values of λ1 when (γ,σ)=(15,10). Recall that for the other noise cases, (γ,σ)=(10,20), (10,30) and (15,20), λ1 is fixed at 0.02. However, when (γ,σ)=(15,10), the restored images obtained with λ1=0.02 tend to retain some Cauchy noise even when the parameter λ2 is adjusted, as seen in the top row. This is because the Gaussian noise level is smaller than in the other cases, which indicates that the parameter λ1 is influenced by the Gaussian noise level. By increasing λ1 to 0.05, we obtain satisfactory restored images, as shown in the bottom row. We note that when (γ,σ)=(15,10), the optimal λ2 is chosen from {27,28,29,30,31} for all the test images. Thus, although the parameter λ1 depends on the noise level, two values of λ1 suffice to attain satisfactory denoising results. This implies that the choice of λ1 is not delicate in the proposed model.

    Figure 16.  Denoising results with two different values of λ1 in the proposed model, when (γ,σ)=(15,10). Top: λ1=0.02, Bottom: λ1=0.05. PSNR/SSIM values are presented. λ2^{best} represents the optimal value of λ2.

    In Figure 17, we present some denoising results of the proposed model with different values of the parameter γ in (3.2). Throughout the experiments, for Yang's model and our model, the parameter γ is set to the exact Cauchy noise level, denoted here by γ*. So we present here the denoising results when γ differs from the accurate Cauchy noise level. Two values of γ are roughly chosen: one smaller than γ* and one larger than γ*. For the Lena image, we test γ = 5, 15 when γ* = 10, whereas for the Cameraman image, we test γ = 10, 20 when γ* = 15. It can be observed that, even with values of γ that differ from the precise noise level γ*, we attain denoising results similar to those obtained with γ = γ*, especially when γ > γ*. This indicates that although γ = γ* generates the best denoising results, the parameter γ does not significantly affect them. Estimating γ in the presence of mixed Cauchy-Gaussian noise is itself an effortful task, so we leave this problem as future work.

    Figure 17.  Denoising results with different γ in the proposed model, when (γ,σ)=(10,20) (top row), (γ,σ)=(15,10) (bottom row). PSNR values are presented. Parameter λ2 (left to right): (top) 9, 20, 31, (bottom) 22, 30, 38.

    Lastly, Table 5 reports the computational cost of the variational models. Our model is the slowest among the compared models; thus, there is a tradeoff between computing time and restoration performance. Despite the efficiency of our optimization algorithm, reducing the computational time of our model remains an open issue.

    Table 5.  Computational time (in seconds) when (γ,σ)=(10,20).
    Model/Image Cauchy-TV [30] L1-TV [19] Yang et al. [31] Proposed
    barbara 4.9 1.5 1.3 6.9
    bird 8.6 3.2 2.0 10.8
    boat 4.9 1.7 1.4 7.0
    building 25.7 10.5 6.4 44.2
    cameraman 4.6 2.4 1.6 7.2
    castle 22.6 11.3 5.7 35.9
    lake 4.9 1.8 1.7 6.5
    lena 4.2 1.9 1.6 6.0
    parrot 4.6 2.5 1.5 6.5
    peppers 4.1 1.7 1.5 5.6
    pirate 4.5 1.8 1.6 6.6
    policemen 27.9 11.6 5.7 51.9


    In this paper, we introduced a novel image denoising model for mixed Cauchy and Gaussian noise. The model is composed of a nonconvex data-fidelity term, expressed as an infimal convolution of two data-fidelity terms associated with the two noise distributions, and total variation regularization. This new data-fidelity term enables the separation of the Cauchy and Gaussian noise components, facilitating simultaneous removal of both noises. Total variation regularization assists in sufficiently eliminating the mixed noise in homogeneous regions, while keeping structural edges and fine features. Despite the nonconvexity of the model, we proved the existence of a minimizer. To solve the proposed model, we utilized an alternating minimization approach and the alternating direction method of multipliers, which yields an iterative algorithm whose convergence was shown experimentally. Numerical results demonstrated the effectiveness of the proposed model compared with other existing models, regarding both visual quality and image quality assessments. However, a theoretical convergence analysis of the proposed algorithm remains open. Blind denoising with unknown Cauchy noise level γ is another demanding problem to be investigated in the future.

    The author was supported by the Hankuk University of Foreign Studies Research Fund and the National Research Foundation of Korea (2021R1F1A1048111).

    The author declares no conflict of interest in this paper.

    For each fixed x ∈ Ω, we define the function h: ℝ × ℝ → ℝ as

    h(s,t) = \lambda_1 t^2 + \lambda_2 \log\big(\gamma^2 + (s+t-f(x))^2\big) + \frac{\mu}{2}(s-g(x))^2.

    The first-order and second-order partial derivatives of h are obtained as

    \frac{\partial h}{\partial s} = \frac{2\lambda_2 (s+t-f(x))}{\gamma^2+(s+t-f(x))^2} + \mu(s-g(x)), \qquad \frac{\partial h}{\partial t} = \frac{2\lambda_2 (s+t-f(x))}{\gamma^2+(s+t-f(x))^2} + 2\lambda_1 t, \qquad \frac{\partial^2 h}{\partial s^2} = C + \mu, \qquad \frac{\partial^2 h}{\partial t^2} = C + 2\lambda_1, \qquad \frac{\partial^2 h}{\partial s\,\partial t} = C, \qquad (A.1)

    where C is defined as

    C = \frac{2\lambda_2\big(\gamma^2 - (s+t-f(x))^2\big)}{\big(\gamma^2 + (s+t-f(x))^2\big)^2}.

    It can be easily proven that if 4γ²μ ≥ λ2, then ∂²h/∂s²(s,t) ≥ 0 for all (s,t), so h is convex with respect to the variable s. Then, ∂h/∂s(·,t) = 0 is equivalent to a cubic equation that has either one or three real roots; by the convexity of h(·,t), it has only one minimizer, so h(·,t) is strictly convex if 4γ²μ ≥ λ2. Similarly, if 8γ²λ1 ≥ λ2, then ∂²h/∂t²(s,t) ≥ 0, so h is strictly convex with respect to t. Moreover, for the convexity of h on ℝ×ℝ, the Hessian matrix of h needs to be positive semi-definite. That is, in addition to the aforementioned constraints, the determinant of the Hessian matrix of h should be nonnegative:

    \frac{\partial^2 h}{\partial s^2}\,\frac{\partial^2 h}{\partial t^2} - \Big(\frac{\partial^2 h}{\partial s\,\partial t}\Big)^2 = C(\mu + 2\lambda_1) + 2\mu\lambda_1 \ge 0.

    This inequality holds if λ2(μ + 2λ1) ≤ 8γ²μλ1, which is satisfied when λ2 ≤ (1/2)min{8γ²λ1, 4γ²μ}. Thus, if λ2 ≤ (1/2)min{8γ²λ1, 4γ²μ}, then the Hessian matrix of h is positive semi-definite, so h is convex. In addition, h is strictly convex if λ2 < (1/2)min{8γ²λ1, 4γ²μ}.
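The sufficient condition above can be sanity-checked numerically: every second derivative of h depends on (s,t) only through w = s+t-f(x), so it suffices to sample w and test the diagonal entries and the determinant of the Hessian. A sketch with illustrative parameter values (not those used in the experiments):

```python
import numpy as np

def hessian_psd_check(lam1, lam2, mu, gamma, span=50.0, n=4001):
    """Sample-based check that the Hessian of h(s,t) is positive
    semi-definite, using the closed forms of (A.1): h_ss = C + mu,
    h_tt = C + 2*lam1, and det = C*(mu + 2*lam1) + 2*mu*lam1,
    where C depends only on w = s + t - f(x)."""
    w = np.linspace(-span, span, n)
    C = 2.0 * lam2 * (gamma**2 - w**2) / (gamma**2 + w**2) ** 2
    hss = C + mu                                     # d^2 h / ds^2
    htt = C + 2.0 * lam1                             # d^2 h / dt^2
    det = C * (mu + 2.0 * lam1) + 2.0 * mu * lam1    # hss*htt - C^2
    tol = -1e-12  # tolerate floating-point round-off
    return bool((hss >= tol).all() and (htt >= tol).all() and (det >= tol).all())
```

For example, with γ = 10, λ1 = 0.05, μ = 1, the bound (1/2)min{8γ²λ1, 4γ²μ} = 20 is respected by λ2 = 20 (the check passes) and badly violated by λ2 = 100 (the check fails), in line with the derivation.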



    [1] M. Shinde, S. Gupta, Signal detection in the presence of atmospheric noise in tropics, IEEE Trans. Commun., 22 (1974), 1055–1063. https://doi.org/10.1109/TCOM.1974.1092336 doi: 10.1109/TCOM.1974.1092336
    [2] M. A. Chitre, J. R. Potter, S. H. Ong, Optimal and near optimal signal detection in snapping shrimp dominated ambient noise, IEEE J. Oceanic Eng., 31 (2006), 497–503. https://doi.org/10.1109/JOE.2006.875272 doi: 10.1109/JOE.2006.875272
    [3] S. Banerjee, M. Agrawal, Underwater acoustic communication in the presence of heavy-tailed impulsive noise with bi-parameter cauchy-gaussian mixture model, 2013 Ocean Electronics (SYMPOL), 2013, 1–7. https://doi.org/10.1109/SYMPOL.2013.6701903
    [4] G. A. Tsihrintzis, P. Tsakalides, C. L. Nikias, Signal detection in severely heavy-tailed radar clutter, Conference Record of The Twenty-Ninth Asilomar Conference on Signals, Systems and Computers, 1995,865–869. https://doi.org/10.1109/ACSSC.1995.540823
    [5] E. E. Kuruoglu, W. J. Fitzgerald, P. J. W. Rayner, Near optimal detection of signals in impulsive noise modeled with asymmetric alpha-stable distribution, IEEE Commun. Lett., 2 (1998), 282–284. https://doi.org/10.1109/4234.725224 doi: 10.1109/4234.725224
    [6] H. El Ghannudi, L. Clavier, N. Azzaoui, F. Septier, P. A. Rolland, α-stable interference modeling and cauchy receiver for an ir-uwb ad hoc network, IEEE Trans. Commun., 58 (2010), 1748–1757. https://doi.org/10.1109/TCOMM.2010.06.090074 doi: 10.1109/TCOMM.2010.06.090074
    [7] M. Zimmermann, K. Dostert, Analysis and modeling of impulsive noise in broad-band powerline communications, IEEE Trans. Electromagn. Compat., 44 (2002), 249–258. https://doi.org/10.1109/15.990732 doi: 10.1109/15.990732
    [8] P. M. Reeves, A non-gaussian turbulence simulation, Air Force Flight Dynamics Laboratory, 1969.
    [9] A. Achim, P. Tsakalides, A. Bezerianos, Sar image denoising via bayesian wavelet shrinkage based on heavy-tailed modeling, IEEE Trans. Geosci. Remote Sens., 41 (2003), 1773–1784. https://doi.org/10.1109/TGRS.2003.813488 doi: 10.1109/TGRS.2003.813488
    [10] Y. Peng, J. Chen, X. Xu, F. Pu, Sar images statistical modeling and classification based on the mixture of alpha-stable distributions, Remote Sens., 5 (2013), 2145–2163. https://doi.org/10.3390/rs5052145 doi: 10.3390/rs5052145
    [11] C. L. Nikias, M. Shao, Signal processing with alpha-stable distributions and applications, Hoboken, NJ, USA: Wiley, 1995.
    [12] S. A. Kassam, Signal detection in non-gaussian noise, New York, USA: Springer, 2012.
    [13] S. R. Krishna Vadali, P. Ray, S. Mula, P. K. Varshney, Linear detection of a weak signal in additive cauchy noise, IEEE Trans. Commun., 65 (2017), 1061–1076. https://doi.org/10.1109/TCOMM.2016.2647599 doi: 10.1109/TCOMM.2016.2647599
    [14] J. Ilow, D. Hatzinakos, Detection in alpha-stable noise environments based on prediction, Int. J. Adapt. Control Signal Proc., 11 (1997), 555–568.
    [15] D. Herranz, E. E. Kuruoglu, L. Toffolatti, An α-stable approach to the study of the p(d) distribution of unresolved point sources in cmb sky maps, Astron. Astrophys., 424 (2004), 1081–1096. https://doi.org/10.1051/0004-6361:20035858 doi: 10.1051/0004-6361:20035858
    [16] W. Feller, An introduction to probability theory and its applications, Vol. 2, 2 Eds., New York: John Wiley & Sons Inc., 1991.
    [17] N. L. Johnson, S. Kotz, N. Balakrishnan, Continuous univariate distributions, Vol. 1, 2 Eds., New York: Wiley, 1994.
    [18] L. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithm, Phys. D, 60 (1992), 259–268. https://doi.org/10.1016/0167-2789(92)90242-F doi: 10.1016/0167-2789(92)90242-F
    [19] M. A. Nikolova, A variational approach to remove outliers and impulse noise, J. Math. Imaging Vis., 20 (2004), 99–120. https://doi.org/10.1023/B:JMIV.0000011326.88682.e5 doi: 10.1023/B:JMIV.0000011326.88682.e5
    [20] R. H. Chan, Y. Dong, M. Hintermuller, An efficient two-phase l1-tv method for restoring blurred images with impulse noise, IEEE Trans. Image Process., 19 (2010), 1731–1739. https://doi.org/10.1109/TIP.2010.2045148 doi: 10.1109/TIP.2010.2045148
    [21] J. F. Cai, R. Chan, M. Nikolova, Fast two-phase image deblurring under impulse noise, J. Math. Imaging Vis., 36 (2010), 46–53. https://doi.org/10.1007/s10851-009-0169-7 doi: 10.1007/s10851-009-0169-7
    [22] G. Aubert, J. F. Aujol, A variational approach to removing multiplicative noise, SIAM J. Appl. Math., 68 (2008), 925–946. https://doi.org/10.1137/060671814 doi: 10.1137/060671814
    [23] J. Shi, S. Osher, A nonlinear inverse scale space method for a convex multiplicative noise model, SIAM J. Imaging Sci., 1 (2008), 294–321. https://doi.org/10.1137/070689954 doi: 10.1137/070689954
    [24] Y. Dong, T. Zeng, A convex variational model for restoring blurred images with multiplicative noise, SIAM J. Imaging Sci., 6 (2013), 1598–1625. https://doi.org/10.1137/120870621 doi: 10.1137/120870621
    [25] J. Lu, L. Shen, C. Xu, Y. Xu, Multiplicative noise removal in imaging: An exp-model and its fixed-point proximity algorithm, Appl. Comput. Harmon. Anal., 41 (2016), 518–539. https://doi.org/10.1016/j.acha.2015.10.003 doi: 10.1016/j.acha.2015.10.003
    [26] T. Le, R. Chartrand, T. Asaki, A variational approach to reconstructing images corrupted by poisson noise, J. Math. Imaging Vis., 27 (2007), 257–263. https://doi.org/10.1007/s10851-007-0652-y doi: 10.1007/s10851-007-0652-y
    [27] P. Getreuer, M. Tong, L. A. Vese, A variational model for the restoration of mr images corrupted by blur and rician noise, In: Advances in visual computing, Lecture Notes in Computer Science, Berlin, Heidelberg: Springer, 2011. https://doi.org/10.1007/978-3-642-24028-7_63
    [28] L. Chen, T. Zeng, A convex variational model for restoring blurred images with large rician noise, J. Math. Imaging Vis., 53 (2015), 92–111. https://doi.org/10.1007/s10851-014-0551-y doi: 10.1007/s10851-014-0551-y
    [29] F. Sciacchitano, Y. Dong, T. Zeng, Variational approach for restoring blurred images with cauchy noise, SIAM J. Imag. Sci., 8 (2015), 1894–1922. https://doi.org/10.1137/140997816 doi: 10.1137/140997816
    [30] J. J. Mei, Y. Dong, T. Z. Huang, W. Yin, Cauchy noise removal by nonconvex admm with convergence guarantees, J. Sci. Comput., 74 (2018), 743–766. https://doi.org/10.1007/s10915-017-0460-5 doi: 10.1007/s10915-017-0460-5
    [31] Z. Yang, Z. Yang, G. Gui, A convex constraint variational method for restoring blurred images in the presence of alpha-stable noises, Sensors, 18 (2018). https://doi.org/10.3390/s18041175
    [32] Y. Chang, S. R. Kadaba, P. C. Doerschuk, S. B. Gelfand, Image restoration using recursive markov random field models driven by cauchy distributed noise, IEEE Signal Process. Lett., 8 (2001), 65–66. https://doi.org/10.1109/97.905941 doi: 10.1109/97.905941
    [33] A. Achim, E. Kuruoǧlu, Image denoising using bivariate α-stable distributions in the complex wavelet domain, IEEE Signal Process. Lett., 12 (2005), 17–20. https://doi.org/10.1109/LSP.2004.839692 doi: 10.1109/LSP.2004.839692
    [34] A. Loza, D. Bull, N. Canagarajah, A. Achim, Non-gaussian model-based fusion of noisy images in the wavelet domain, Comput. Vis. Image Und., 114 (2010), 54–65.
    [35] Y. Wang, W. Yin, J. Zeng, Global convergence of ADMM in nonconvex nonsmooth optimization, J. Sci. Comput., 78 (2019), 29–63. https://doi.org/10.1007/s10915-018-0757-z doi: 10.1007/s10915-018-0757-z
    [36] J. Yang, Y. Zhang, W. Yin, An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise, SIAM J. Sci. Computing, 31 (2009), 2842–2865. https://doi.org/10.1137/080732894 doi: 10.1137/080732894
    [37] M. Ding, T. Z. Huang, S. Wang, J. J. Mei, X. L. Zhao, Total variation with overlapping group sparsity for deblurring images under cauchy noise, Appl. Math. Comput., 341 (2019), 128–147. https://doi.org/10.1016/j.amc.2018.08.014 doi: 10.1016/j.amc.2018.08.014
    [38] J. H. Yang, X. L. Zhao, J. J. Mei, S. Wang, T. H. Ma, T. Z. Huang, Total variation and high-order total variation adaptive model for restoring blurred images with cauchy noise, Comput. Math. Appl., 77 (2019), 1255–1272. https://doi.org/10.1016/j.camwa.2018.11.003 doi: 10.1016/j.camwa.2018.11.003
    [39] G. Kim, J. Cho, M. Kang, Cauchy noise removal by weighted nuclear norm minimization, J. Sci. Comput., 83 (2020), 1–21. https://doi.org/10.1007/s10915-020-01203-2 doi: 10.1007/s10915-020-01203-2
    [40] S. Lee, M. Kang, Group sparse representation for restoring blurred images with cauchy noise, J. Sci. Comput., 83 (2020), 1–27. https://doi.org/10.1007/s10915-020-01227-8 doi: 10.1007/s10915-020-01227-8
    [41] M. Jung, M. Kang, Image restoration under cauchy noise with sparse representation prior and total generalized variation, J. Comput. Math., 39 (2021), 81–107. https://doi.org/10.4208/jcm.1907-m2018-0234 doi: 10.4208/jcm.1907-m2018-0234
    [42] L. Bai, A new approach for cauchy noise removal, AIMS Math., 6 (2021), 10296–10312. https://doi.org/10.3934/math.2021596 doi: 10.3934/math.2021596
    [43] X. Ai, G. Ni, T. Zeng, Nonconvex regularization for blurred images with cauchy noise, Inverse Probl. Imag., 16 (2022), 625–646. https://doi.org/10.3934/ipi.2021065 doi: 10.3934/ipi.2021065
    [44] J. F. Cai, R. H. Chan, M. Nikolova, Two-phase approach for deblurring images corrupted by impulse plus gaussian noise, Inverse Probl. Imag., 2 (2008), 187–204. https://doi.org/10.3934/ipi.2008.2.187 doi: 10.3934/ipi.2008.2.187
    [45] Y. Xiao, T. Y. Zeng, J. Yu, M. K. Ng, Restoration of images corrupted by mixed gaussian-impulse noise via l1-l0 minimization, Pattern Recogn., 44 (2010), 1708–1720. https://doi.org/10.1016/j.patcog.2011.02.002 doi: 10.1016/j.patcog.2011.02.002
    [46] R. Rojas, P. Rodríguez, B. Wohlberg, Mixed gaussian-impulse noise image restoration via total variation, 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2012, 1077–1080. https://doi.org/10.1109/ICASSP.2012.6288073
    [47] B. Dong, H. Ji, J. Li, Z. W. Shen, Y. H. Xu, Wavelet frame based blind image inpainting, Appl. Comput. Harmon. Anal., 32 (2011), 268–279. https://doi.org/10.1016/j.acha.2011.06.001 doi: 10.1016/j.acha.2011.06.001
    [48] J. Liu, X. C. Tai, H. Y. Huang, Z. D. Huan, A weighted dictionary learning models for denoising images corrupted by mixed noise, IEEE Trans. Image Process., 22 (2013), 1108–1120. https://doi.org/10.1109/TIP.2012.2227766 doi: 10.1109/TIP.2012.2227766
    [49] M. Yan, Restoration of images corrupted by impulse noise and mixed gaussian impulse noise using blind inpainting, SIAM J. Imaging Sci., 6 (2013), 1227–1245. https://doi.org/10.1137/12087178X doi: 10.1137/12087178X
    [50] M. Hintermüller, A. Langer, Subspace correction methods for a class of nonsmooth and nonadditive convex variational problems with mixed L1/L2 data-fidelity in image processing, SIAM J. Imaging Sci., 6 (2013), 2134–2173. https://doi.org/10.1137/120894130 doi: 10.1137/120894130
    [51] A. Langer, Automated parameter selection in the L1-L2-TV model for removing Gaussian plus impulse noise, Inverse Probl., 33 (2017). https://doi.org/10.1088/1361-6420/33/7/074002
    [52] A. Foi, M. Trimeche, V. Katkovnik, K. Egiazarian, Practical poissonian-gaussian noise modeling and fitting for single-image raw-data, IEEE Trans. Image Process., 17 (2008), 1737–1754. https://doi.org/10.1109/TIP.2008.2001399 doi: 10.1109/TIP.2008.2001399
    [53] A. Jezierska, C. Chaux, J. Pesquet, H. Talbot, An EM approach for Poisson-Gaussian noise modeling, 2011 19th European Signal Processing Conference, 2011, 2244–2248.
    [54] F. Murtagh, J. L. Starck, A. Bijaoui, Image restoration with noise suppression using a multiresolution support, Astron. Astrophys. Suppl. Ser., 112 (1995), 179–189.
    [55] B. Begovic, V. Stankovic, L. Stankovic, Contrast enhancement and denoising of poisson and gaussian mixture noise for solar images, 2011 18th IEEE International Conference on Image Processing, 2011,185–188. https://doi.org/10.1109/ICIP.2011.6115829
    [56] F. Luisier, T. Blu, M. Unser, Image denoising in mixed Poisson-Gaussian noise, IEEE Trans. Image Process., 20 (2011), 696–708. https://doi.org/10.1109/TIP.2010.2073477 doi: 10.1109/TIP.2010.2073477
    [57] M. Makitalo, A. Foi, Optimal inversion of the generalized anscombe transformation for Poisson-Gaussian noise, IEEE Trans. Image Process., 22 (2013), 91–103. https://doi.org/10.1109/TIP.2012.2202675 doi: 10.1109/TIP.2012.2202675
    [58] Y. Marnissi, Y. Zheng, J. Pesquet, Fast variational bayesian signal recovery in the presence of Poisson-Gaussian noise, 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2016, 3964–3968. https://doi.org/10.1109/ICASSP.2016.7472421
    [59] F. J. Anscombe, The transformation of poisson, binomial and negative-binomial data, Biometrika, 35 (1948), 246–254. https://doi.org/10.1093/biomet/35.3-4.246 doi: 10.1093/biomet/35.3-4.246
    [60] F. Benvenuto, A. La Camera, C. Theys, A. Ferrari, H. Lantéri, M. Bertero, The study of an iterative method for the reconstruction of images corrupted by poisson and gaussian noise, Inverse Probl., 24 (2008), 035016.
    [61] E. Chouzenoux, A. Jezierska, J. C. Pesquet, H. Talbot, A convex approach for image restoration with exact Poisson-Gaussian likelihood, SIAM J. Imaging Sci., 8 (2015), 17–30. https://doi.org/10.1137/15M1014395 doi: 10.1137/15M1014395
    [62] J. C. De los Reyes, C. B. Schönlieb, Image denoising: Learning the noise model via nonsmooth pde-constrained optimization, Inverse Probl. Imaging, 7 (2013), 1183–1214. https://doi.org/10.3934/ipi.2013.7.1183 doi: 10.3934/ipi.2013.7.1183
    [63] L. Calatroni, C. Chung, J. C. De Los Reyes, C. B. Schönlieb, T. Valkonen, Bilevel approaches for learning of variational imaging models, In: Variational methods: In imaging and geometric control, Berlin, Boston: De Gruyter, 2017. https://doi.org/10.1515/9783110430394-008
    [64] D. N. H. Thanh, S. D. Dvoenko, A method of total variation to remove the mixed poisson-gaussian noise, Pattern Recognit. Image Anal., 26 (2016), 285–293. https://doi.org/10.1134/S1054661816020231 doi: 10.1134/S1054661816020231
    [65] L. Calatroni, J. C. De Los Reyes, C. B. Schönlieb, Infimal convolution of data discrepancies for mixed noise removal, SIAM J. Imaging Sci., 10 (2017), 1196–1233. https://doi.org/10.1137/16M1101684 doi: 10.1137/16M1101684
    [66] L. Calatroni, K. Papafitsoros, Analysis and automatic parameter selection of a variational model for mixed gaussian and salt-and-pepper noise removal, Inverse Probl., 35 (2019), 114001.
    [67] J. Zhang, Y. Duan, Y. Lu, M. K. Ng, H. Chang, Bilinear constraint based ADMM for mixed Poisson-Gaussian noise removal, Inverse Probl. Imag., 15 (2021), 339–366. https://doi.org/10.3934/ipi.2020071
    [68] Y. Chen, E. E. Kuruoglu, H. C. So, L. T. Huang, W. Q. Wang, Density parameter estimation for additive Cauchy-Gaussian mixture, 2014 IEEE Workshop on Statistical Signal Processing (SSP), 2014, 197–200. https://doi.org/10.1109/SSP.2014.6884609
    [69] Y. Chen, E. E. Kuruoglu, H. C. So, Optimum linear regression in additive Cauchy-Gaussian noise, Signal Process., 106 (2015), 312–318. https://doi.org/10.1016/j.sigpro.2014.07.028
    [70] A. Chambolle, T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, J. Math. Imaging Vis., 40 (2011), 120–145. https://doi.org/10.1007/s10851-010-0251-1
    [71] F. Li, C. Shen, C. Shen, J. Fan, Image restoration combining a total variational filter and a fourth-order filter, J. Vis. Commun. Image Represent., 18 (2007), 322–330. https://doi.org/10.1016/j.jvcir.2007.04.005
    [72] K. Bredies, K. Kunisch, T. Pock, Total generalized variation, SIAM J. Imaging Sci., 3 (2010), 492–526. https://doi.org/10.1137/090769521
    [73] G. Gilboa, S. Osher, Nonlocal operators with applications to image processing, SIAM J. Multiscale Model. Simul., 7 (2009), 1005–1028. https://doi.org/10.1137/070698592
    [74] M. Aharon, M. Elad, A. Bruckstein, K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation, IEEE Trans. Signal Process., 54 (2006), 4311–4322. https://doi.org/10.1109/TSP.2006.881199
    [75] M. Elad, M. Aharon, Image denoising via sparse and redundant representations over learned dictionaries, IEEE Trans. Image Process., 15 (2006), 3736–3745. https://doi.org/10.1109/TIP.2006.881969
    [76] Y. R. Li, L. Shen, D. Q. Dai, B. W. Suter, Framelet algorithms for de-blurring images corrupted by impulse plus Gaussian noise, IEEE Trans. Image Process., 20 (2011), 1822–1837. https://doi.org/10.1109/TIP.2010.2103950
    [77] A. Chambolle, An algorithm for total variation minimization and applications, J. Math. Imaging Vis., 20 (2004), 89–97. https://doi.org/10.1023/B:JMIV.0000011325.36760.1e
    [78] T. Goldstein, S. Osher, The split Bregman method for L1-regularized problems, SIAM J. Imaging Sci., 2 (2009), 323–343. https://doi.org/10.1137/080725891
    [79] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, Found. Trends Mach. Learn., 3 (2010), 1–122. https://doi.org/10.1561/2200000016
    [80] C. Chen, M. K. Ng, X. L. Zhao, Alternating direction method of multipliers for nonlinear image restoration problems, IEEE Trans. Image Process., 24 (2015), 33–43. https://doi.org/10.1109/TIP.2014.2369953
    [81] M. K. Ng, R. H. Chan, W. C. Tang, A fast algorithm for deblurring models with Neumann boundary conditions, SIAM J. Sci. Comput., 21 (1999), 851–866. https://doi.org/10.1137/S1064827598341384
    [82] N. Jacobson, Basic algebra, New York: Freeman, 1974.
    [83] B. R. Frieden, A new restoring algorithm for the preferential enhancement of edge gradients, J. Opt. Soc. Am., 66 (1976), 280–283. https://doi.org/10.1364/JOSA.66.000280
    [84] J. P. Nolan, Numerical calculation of stable densities and distribution functions, Commun. Stat. Stoch. Models, 13 (1997), 759–774. https://doi.org/10.1080/15326349708807450
    [85] N. Balakrishnan, V. B. Nevzorov, A primer on statistical distributions, New York: John Wiley & Sons, 2003. https://doi.org/10.1002/0471722227
    [86] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, Image quality assessment: From error visibility to structural similarity, IEEE Trans. Image Process., 13 (2004), 600–612. https://doi.org/10.1109/TIP.2003.819861
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)