
Inspired by the ROF model and the L1/TV image denoising model, we propose a combined model to remove Gaussian noise and salt-and-pepper noise simultaneously. The model combines an L1-data fidelity term, an L2-data fidelity term, and a fractional-order total variation regularization term, and is termed the L1L2/TVα model. We use the proximity algorithm to solve the proposed model; through this method, the non-differentiable term is handled via the fixed-point equations of the proximity operator. Numerical experiments show that, implemented with the proximity algorithm, the proposed model effectively removes Gaussian noise and salt-and-pepper noise. Varying the fractional order α from 0.8 to 1.9 in increments of 0.1, we observed that different images correspond to different optimal values of α.
Citation: Donghong Zhao, Ruiying Huang, Li Feng. Proximity algorithms for the L1L2/TVα image denoising model[J]. AIMS Mathematics, 2024, 9(6): 16643-16665. doi: 10.3934/math.2024807
Over the course of the development of digital image processing, a variety of processing techniques have emerged, including the wavelet transform, partial differential equations (PDEs), and stochastic models. In image processing, the edges of an image are its most important visual feature. In 1992, Rudin et al. proposed the well-known total variation (TV) model [1], now commonly called the ROF model. The ROF model balances edge preservation and noise removal because it exploits the inherent regularity of the image. The ROF model is as follows:
$$\min_u \int_\Omega \frac{\lambda}{2}\|u-u_0\|_2^2 + \|u\|_{TV}\, d\Omega, \tag{1}$$
where Ω⊂Rn is an open bounded set, n≥2 [2], u0(x,y) denotes the noisy image and u(x,y) denotes the desired clean image. λ is a positive real number, and ‖u‖TV denotes the TV of u(x,y), defined as ‖∇u‖1. The ROF model has played an important role in image denoising, deblurring and inpainting. However, the solution of the ROF model is a piecewise constant function, so it easily produces a blocky effect in flat regions. To reduce the block effect, scholars have proposed a fourth-order PDE [3] and the LLT model [4], which can effectively remove noise while reducing the blocky effect. The LLT model is as follows:
$$\min_u \int_\Omega \frac{\lambda}{2}\|u-u_0\|_2^2 + \|\Delta u\|_1\, d\Omega, \tag{2}$$
where $\Delta u = (\partial_x^2 u, \partial_y^2 u)$ and $\|\Delta u\|_1 = |\partial_x^2 u| + |\partial_y^2 u|$. The disadvantage of the LLT model is that it oversmooths edge regions. To address this problem, an adaptive fourth-order PDE has been proposed [5]. Both the ROF model and the LLT model use the L2-data fidelity term. The type of noise that corrupts the image typically dictates the choice of data fidelity term. In general, images are affected by different types of noise. If the image is affected only by a mixture of Gaussian noise and Poisson noise, the noise can be converted into additive Gaussian noise. This is probably why most of the literature is devoted to removing Gaussian noise. The L2-data fidelity term is suitable for removing additive Gaussian noise, but it is nearly ineffective for other noise types. The L1-data fidelity term can effectively remove noise that is not additive Gaussian, such as Laplacian noise and impulse noise [6,7]. The L1/TV model is as follows:
$$\min_u \int_\Omega \frac{\lambda}{2}\|u-u_0\|_1 + \|u\|_{TV}\, d\Omega. \tag{3}$$
The L1/TV model has some unique features: it does not destroy the geometric structures or morphological invariance of the images under processing [8,9]. Therefore, the L1/TV image denoising model is widely used in practical applications, such as face recognition [10], shape denoising [11] and image texture decomposition [12]. In fact, images are generally not corrupted by only one type of noise. The mixture of Gaussian and salt-and-pepper noise is considered in this paper. In particular, salt-and-pepper noise is a simple type of impulse noise [13]. A combined L1-L2-data fidelity term was introduced and proved to be suitable for the removal of mixtures of Gaussian and impulse noise in [14]. The L1L2/TV model is as follows:
$$\min_u \int_\Omega \lambda\|u-u_0\|_2^2 + \mu\|u-u_0\|_1 + \|u\|_{TV}\, d\Omega, \tag{4}$$
where λ, μ≥0. The L1L2/TV model (4) generalizes (1) and (3): setting λ=0 in (4) yields the L1/TV model, and setting μ=0 yields the L2/TV model. The choice of parameters critically affects the quality of image restoration. Small values of λ and μ lead to an oversmoothed reconstruction, which eliminates detail along with the noise; in contrast, large values of λ and μ retain noise [15]. An improvement of the L1L2/TV model has been proposed in [16], where ‖Wu‖1 replaces the TV term. In [17], the authors used the second-order total generalized variation [18] as a regularization term and incorporated box constraints.
In this paper, the fractional-order TV regularization term is the focus. We propose a combined model with a fractional-order TV regularization term, an L1-data fidelity term, and an L2-data fidelity term, which we term the L1L2/TVα model. This model aims to remove mixtures of Gaussian noise and salt-and-pepper noise.
It is difficult to minimize the objective function because the fractional-order TV regularization term is non-differentiable, and numerous efforts have been devoted to addressing this issue. Methods for solving fractional-order TV models include the primal-dual algorithm [19], fractional-order Euler-Lagrange equations [20], the alternating projection algorithm for the fractional-order multi-scale variational model [21,22], and the majorization-minimization algorithm [23]. The split Bregman iterative algorithm [24] and the alternating direction method of multipliers [25] can also handle non-differentiable terms effectively. Recently, proximity algorithms [26–30] for solving the ROF model or the L1/TV image denoising model have attracted widespread attention in digital image processing. This approach represents the non-differentiable term ‖u‖TV as the composition of a convex function with a linear transformation. The problem of evaluating the proximity operator of the convex function can then be reformulated as solving a fixed-point equation, from which the proximity operator is obtained. The convergence of the fixed-point proximity algorithm has been proven [26]. The L1/TV model requires solving two fixed-point equations because of the non-differentiability of the L1-data fidelity term [28]. In this paper, the proximity algorithm is used to solve the L1L2/TVα model.
The structure of the paper is as follows. Section 1 introduces the prior works and our motivation. Section 2 proposes the L1L2/TVα model and proves the existence of its solution. The proximity algorithm is applied to solve the model and the convergence of the algorithm is proved. Section 3 presents several numerical experiments and shows the results. Finally, Section 4 concludes the paper.
This section first introduces two very important concepts of convex functions: the proximity operator and the subdifferential. The relationship between them will also be given.
Initially, we introduce some notation. We denote the m-dimensional Euclidean space by Rm. For x,y∈Rm, the standard inner product on Rm is $\langle x,y\rangle := \sum_{i=1}^m x_i y_i$, and the p-norm of a vector x∈Rm is $\|x\|_p := \left(\sum_{i=1}^m |x_i|^p\right)^{1/p}$. The proximity operator was introduced in [31]. We recall its definition as follows.
Definition 2.1. (Proximity operator): Let f be a proper lower semi-continuous convex function on Rm, where Rm is the m-dimensional Euclidean space. The proximity operator of f is defined for any x∈Rm by $\mathrm{prox}_f(x) = \arg\min_u \left\{\tfrac{1}{2}\|u-x\|_2^2 + f(u) : u \in \mathbb{R}^m\right\}$.
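For the 1-norm, the proximity operator of Definition 2.1 has the well-known componentwise closed form of soft thresholding. The following Python sketch (illustrative; the helper names are ours, not the paper's) checks the closed form against a direct grid minimization of the defining objective for scalar arguments:

```python
import math

def soft_threshold(x, tau):
    # Closed-form proximity operator of f(u) = tau*|u|, applied componentwise
    # for the 1-norm: shrink |x| by tau and keep the sign.
    return math.copysign(max(abs(x) - tau, 0.0), x)

def prox_by_search(x, f, lo=-10.0, hi=10.0, steps=100001):
    # Direct scalar minimization of (1/2)(u - x)^2 + f(u) over a dense grid,
    # mirroring Definition 2.1.
    best_u, best_val = lo, float("inf")
    for i in range(steps):
        u = lo + (hi - lo) * i / (steps - 1)
        val = 0.5 * (u - x) ** 2 + f(u)
        if val < best_val:
            best_u, best_val = u, val
    return best_u

tau = 1.5
for x in (-3.0, -0.7, 0.0, 2.2):
    assert abs(soft_threshold(x, tau) - prox_by_search(x, lambda u: tau * abs(u))) < 1e-3
```

The closed form is what makes the proximity operator practical: no inner optimization is needed at each iteration.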
Definition 2.2. (Subdifferential): Let f be a proper lower semi-continuous convex function on Rm. The subdifferential of f at x∈Rm is defined by $\partial f(x) := \{y \in \mathbb{R}^m : f(z) \ge f(x) + \langle y, z-x\rangle,\ \forall z \in \mathbb{R}^m\}$.
The following lemma describes the relationship between the proximity operator and the convex function subdifferential.
Lemma 2.1. (Proposition 2.6 in [27]): If f is a convex function on Rm and x∈Rm , then
$$y \in \partial f(x) \ \text{if and only if} \ x = \mathrm{prox}_f(x+y). \tag{5}$$
The proof of this lemma is given in [27]. Based on Lemma 2.1, we obtain that
$$y \in \partial f(x) \ \text{if and only if} \ y = (I - \mathrm{prox}_f)(x+y). \tag{6}$$
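The equivalences (5) and (6) can be checked numerically on the simplest convex example f(x)=|x|, whose proximity operator is soft thresholding with threshold 1 and whose subdifferential at 0 is the interval [−1,1] (an illustrative Python sketch; function names are ours):

```python
import math

def prox_abs(x):
    # Proximity operator of f(u) = |u|: soft thresholding with threshold 1.
    return math.copysign(max(abs(x) - 1.0, 0.0), x)

# Subdifferential of |.|: {sign(x)} for x != 0, and the interval [-1, 1] at x = 0.
# Check (5): x = prox_f(x + y), and (6): y = (I - prox_f)(x + y).
cases = [(2.0, 1.0), (-3.5, -1.0), (0.0, 0.4), (0.0, -1.0)]
for x, y in cases:
    assert abs(prox_abs(x + y) - x) < 1e-12
    assert abs((x + y) - prox_abs(x + y) - y) < 1e-12
```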
Recently, it was demonstrated in [13] that the L1L2/TV model is effective at removing mixtures of Gaussian and impulse noise. In this approach, an image is restored by solving the following minimization problem:
$$\min_p \int_\Omega \lambda\|p-p_0\|_2^2 + \mu\|p-p_0\|_1 + \|p\|_{TV}\, d\Omega, \tag{7}$$
where p0∈RN×N denotes the noisy image, N is a positive integer, p∈RN×N denotes the denoised image, and λ, μ are the parameters of the L2-data and L1-data fidelity terms, respectively. This model combines the two data fidelity terms, L1 and L2, and thus inherits the advantages of both norms. It is therefore effective at removing mixtures of Gaussian noise and salt-and-pepper noise.
However, we observe that the numerical solution produced by the L1L2/TV model exhibits a substantial block effect. Additionally, this model fails to completely remove salt-and-pepper noise. The fractional-order TV regularization term has been shown to effectively reduce the block effect. This section introduces a minimization-based denoising model, termed the L1L2/TVα model. The L1L2/TVα model includes three terms: an L2-data fidelity term for Gaussian noise, an L1-data fidelity term for salt-and-pepper noise, and a fractional-order TV regularization term that balances detail preservation and noise reduction. The model is as follows:
$$\min_p E(p) = \min_p \int_\Omega \left(\lambda\|p-p_0\|_2^2 + \mu\|p-p_0\|_1 + \|p\|_{TV^\alpha}\right) d\Omega, \tag{8}$$
where p0∈RN×N denotes the noisy image and p∈RN×N denotes the denoised image. ‖p‖TVα is the fractional-order TV of p of order α, defined as $\|p\|_{TV^\alpha} = \|\nabla^\alpha p\|_1$, where $\nabla^\alpha p = (\partial_x^\alpha p, \partial_y^\alpha p)$ and $\|\nabla^\alpha p\|_1 = |\partial_x^\alpha p| + |\partial_y^\alpha p|$. In particular, note the following:
● When λ=0, model (8) simplifies to the L1/TVα model.
● When μ=0, model (8) simplifies to the L2/TVα model.
These parameter settings of λ and μ demonstrate the flexibility of the L1L2/TVα model.
To prove the existence of a solution to the L1L2/TVα model, it is critical to prove the boundedness of the potential solution [33].
Lemma 2.2. (Boundedness): Let p0∈L2(Ω), where Ω⊂Rn (n≥2) is an open bounded set. Given infΩ p0 > 0, if the model has a solution p̂, then infΩ p0 ≤ p̂ ≤ supΩ p0.
Proof of Lemma 2.2. Let ω = infΩ p0 and ν = supΩ p0. For p > p0, the functions |p−p0| and (p−p0)² are monotonically increasing in p. Then,
$$\int_\Omega \|\inf(p,\nu)-p_0\|_1\, d\Omega \le \int_\Omega \|p-p_0\|_1\, d\Omega, \tag{9}$$
$$\int_\Omega \|\inf(p,\nu)-p_0\|_2^2\, d\Omega \le \int_\Omega \|p-p_0\|_2^2\, d\Omega, \tag{10}$$
where inf(p,ν) denotes the pointwise minimum of p and ν.
Moreover, by Lemma 2 in [34], we have TVα(inf(p,ν)) ≤ TVα(p). Thus, we have
$$E(\inf(p,\nu)) \le E(p), \tag{11}$$
with equality if and only if p ≤ ν.
Since p̂ is a minimizer of the optimization problem (8), equality holds when p = p̂, and hence p̂ ≤ ν. Similarly, E(sup(p,ω)) ≤ E(p), from which p̂ ≥ ω follows. In summary, infΩ p0 ≤ p̂ ≤ supΩ p0.
In what follows, we establish the existence of a solution of the optimization problem (8).
Lemma 2.3. (Existence): Let p0∈L2(Ω), where Ω⊂Rn (n≥2) is an open bounded set. Given infΩ p0 > 0, the optimization problem (8) has at least one solution in the space BVα(Ω).
Proof of Lemma 2.3. The space of functions of bounded fractional-order variation can be defined as $BV^\alpha(\Omega) = \{f \in L^1(\Omega) : TV^\alpha(f) < +\infty\}$, which is a Banach space under the norm $\|f\|_{BV^\alpha} = \|f\|_{L^1} + TV^\alpha(f)$.
Define ω = infΩ p0 and ν = supΩ p0. Because p = ν ∈ BVα(Ω), the feasible set is not empty [35]. Let {pn} ⊂ BVα(Ω) be a minimizing sequence for the optimization problem (8) with ω ≤ pn ≤ ν.
Because BVα(Ω) is a Banach space and Ω is bounded, it follows that
$$\|p_n\|_{L^1} = \int_\Omega |p_n|\, d\Omega < +\infty. \tag{12}$$
Moreover, because {pn} is a minimizing sequence, there exists a constant C > 0 such that E(pn) ≤ C. Because $\int_\Omega \|p-p_0\|_2^2 + \|p-p_0\|_1\, d\Omega$ is nonnegative, there is a constant C′ > 0 such that
$$TV^\alpha(p_n) \le C'. \tag{13}$$
Equations (12) and (13) show that {pn} is uniformly bounded in BVα(Ω). By the compact embedding of BVα(Ω) into L1(Ω), there exist a subsequence {pnj} of {pn} and a function p ∈ BVα(Ω) such that
$$p_{n_j} \to p \quad \text{in } L^1(\Omega).$$
Using the Lebesgue dominated convergence theorem, we obtain
$$\int_\Omega \|p-p_0\|_1\, d\Omega = \lim_{j\to\infty} \int_\Omega \|p_{n_j}-p_0\|_1\, d\Omega, \tag{14}$$
$$\int_\Omega \|p-p_0\|_2^2\, d\Omega = \lim_{j\to\infty} \int_\Omega \|p_{n_j}-p_0\|_2^2\, d\Omega. \tag{15}$$
By the lower semi-continuity of the functional, the following inequality holds:
$$E(p) \le \liminf_{n\to\infty} E(p_n). \tag{16}$$
Since {pn} is a minimizing sequence, p is a minimizer of the optimization problem (8).
Consider an image represented by a grid of N×N pixels. The discretization of the data term is given by
$$\int_\Omega \|p-p_0\|_2^2\, d\Omega \approx \sum_{i,j}\left(p_{i,j}-p_{0\,i,j}\right)^2, \qquad \int_\Omega \|p-p_0\|_1\, d\Omega \approx \sum_{i,j}\left|p_{i,j}-p_{0\,i,j}\right|,$$
where (i,j) denotes the pixel coordinates. For the fractional-order TV term, we obtain the following discretization:
$$\int_\Omega \|\nabla^\alpha p\|_1\, d\Omega \approx \sum_{i,j} \left|\nabla_x^\alpha p_{i,j}\right| + \left|\nabla_y^\alpha p_{i,j}\right|,$$
$$\nabla_x^\alpha p_{i,j} = \sum_{k=0}^{K-1} (-1)^k C_k^{(\alpha)} p_{i-k,j}, \qquad \nabla_y^\alpha p_{i,j} = \sum_{k=0}^{K-1} (-1)^k C_k^{(\alpha)} p_{i,j-k},$$
where $C_k^{(\alpha)} = \frac{\Gamma(\alpha+1)}{\Gamma(k+1)\Gamma(\alpha-k+1)}$ and Γ(x) is the gamma function (the sign $(-1)^k$ already appears explicitly in the sums above).
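The coefficients $C_k^{(\alpha)}$ are generalized binomial coefficients; in practice they can be computed with the stable recurrence $C_0 = 1$, $C_k = C_{k-1}(\alpha-k+1)/k$ rather than with gamma functions directly. A short Python sketch (the helper name is ours) cross-checks the two forms:

```python
import math

def gl_coeffs(alpha, K):
    # C_k^(alpha) = Gamma(alpha+1) / (Gamma(k+1) * Gamma(alpha-k+1)),
    # computed by the stable recurrence C_0 = 1, C_k = C_{k-1}*(alpha-k+1)/k.
    c = [1.0]
    for k in range(1, K):
        c.append(c[-1] * (alpha - k + 1) / k)
    return c

alpha, K = 1.5, 6
c = gl_coeffs(alpha, K)
# Cross-check against the gamma-function formula (alpha - k + 1 stays off the
# poles of Gamma for non-integer alpha).
for k in range(K):
    ref = math.gamma(alpha + 1) / (math.gamma(k + 1) * math.gamma(alpha - k + 1))
    assert abs(c[k] - ref) < 1e-9

# For alpha = 1, the signed coefficients (-1)^k C_k reduce to the ordinary
# backward difference stencil [1, -1, 0, 0, ...].
signed = [(-1) ** k * ck for k, ck in enumerate(gl_coeffs(1.0, 4))]
assert signed == [1.0, -1.0, 0.0, 0.0]
```

The last check illustrates why α=1 recovers the classical TV discretization.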
Considering that the proximity algorithm operates on vectors, we transform the image matrices p and p0 into vectors u and u0 by means of $p_{i,j} = u_{(j-1)N+i}$ and $p_{0\,i,j} = u_{0\,(j-1)N+i}$, for i,j = 1,2,…,N. We restate the minimization problem (8) as follows:
$$\arg\min_u \left\{\lambda\|u-u_0\|_2^2 + \mu\|u-u_0\|_1 + \|\nabla^\alpha u\|_1\right\}, \tag{17}$$
where $u, u_0 \in \mathbb{R}^m$ and $m = N^2$.
The proximity operator of ‖∇αu‖1 is not easy to compute directly. To overcome this difficulty, we treat ‖∇αu‖1 as the composition of a convex function with a fractional-order difference operator, $\|\nabla^\alpha u\|_1 = (\phi \circ B^\alpha)(u)$. Here $\phi: \mathbb{R}^{2m} \to \mathbb{R}$ is the norm ‖·‖1, Bα is a 2m×m matrix, and ∇αu is represented as Bαu. The (i,j) component of ∇αu can thus be represented as the multiplication of the vector u∈Rm by a matrix $B_n^\alpha \in \mathbb{R}^{2\times m}$ for n = 1,2,…,m:
$$B_n^\alpha u = \begin{cases} \left(\sum_{k=0}^{i-1} (-1)^k C_k^{(\alpha)} u_{n-k},\ \sum_{k=0}^{j-1} (-1)^k C_k^{(\alpha)} u_{n-Nk}\right)^T, & i>1,\ j>1,\\[4pt] \left(u_n,\ \sum_{k=0}^{j-1} (-1)^k C_k^{(\alpha)} u_{n-Nk}\right)^T, & i=1,\ j>1,\\[4pt] \left(\sum_{k=0}^{i-1} (-1)^k C_k^{(\alpha)} u_{n-k},\ u_n\right)^T, & i>1,\ j=1,\\[4pt] \left(u_n,\ u_n\right)^T, & i=1,\ j=1, \end{cases} \tag{18}$$
where $B^\alpha = [B_1^\alpha, B_2^\alpha, \ldots, B_m^\alpha]^T \in \mathbb{R}^{2m\times m}$ [29]. Therefore, we can state the minimization problem as follows:
$$\arg\min_u \left\{\lambda\|u-u_0\|_2^2 + \mu\|u-u_0\|_1 + (\phi\circ B^\alpha)(u)\right\}. \tag{19}$$
Define the convex function φ on Rm by
$$\varphi(u) = \lambda\|u-u_0\|_2^2 + \mu\|u-u_0\|_1. \tag{20}$$
Therefore, we can describe the above minimization problem as follows:
$$\arg\min_u \left\{\varphi(u) + (\phi\circ B^\alpha)(u)\right\}. \tag{21}$$
Proposition 2.1. Let ϕ be a proper convex function on R2m and let Bα be a 2m×m matrix. If u∈Rm is a solution of model (21), then for any positive numbers β1, β2 > 0 there exists a vector b∈R2m such that
$$u = \mathrm{prox}_{\frac{1}{\beta_1}\varphi}\left(u - \frac{\beta_2}{\beta_1}(B^\alpha)^T b\right), \tag{22}$$
$$b = \left(I - \mathrm{prox}_{\frac{1}{\beta_2}\phi}\right)(B^\alpha u + b). \tag{23}$$
Conversely, if b∈R2m and u∈Rm satisfy (22) and (23) for some positive β1, β2 > 0, then u is a solution of (21).
Proof. If u∈Rm is a solution of (21), then, by Fermat's theorem on convex analysis, it follows that
$$0 \in \partial\left(\varphi(u) + (\phi\circ B^\alpha)(u)\right).$$
By the chain rule,
$$\partial\left((\phi\circ B^\alpha)(u)\right) = (B^\alpha)^T \partial\phi(B^\alpha u),$$
then
$$0 \in \partial\varphi(u) + (B^\alpha)^T \partial\phi(B^\alpha u). \tag{24}$$
For any β1, β2 > 0, we choose two vectors $a \in \frac{1}{\beta_1}\partial\varphi(u)$ and $b \in \frac{1}{\beta_2}\partial\phi(B^\alpha u)$ such that
$$0 = \beta_1 a + \beta_2 (B^\alpha)^T b. \tag{25}$$
By (5) and $a \in \frac{1}{\beta_1}\partial\varphi(u)$, we have that
$$u = \mathrm{prox}_{\frac{1}{\beta_1}\varphi}(u+a). \tag{26}$$
Using (25), we conclude that $a = -\frac{\beta_2}{\beta_1}(B^\alpha)^T b$; substituting a into (26) yields (22). Applying (6) with $b \in \frac{1}{\beta_2}\partial\phi(B^\alpha u)$ yields (23). Conversely, if there exist β1, β2 > 0, b∈R2m, and u∈Rm satisfying (22) and (23), then by Lemma 2.1 we obtain $b \in \frac{1}{\beta_2}\partial\phi(B^\alpha u)$ and $-\frac{\beta_2}{\beta_1}(B^\alpha)^T b \in \frac{1}{\beta_1}\partial\varphi(u)$. It follows that
$$0 = \beta_1\left(-\frac{\beta_2}{\beta_1}(B^\alpha)^T b\right) + \beta_2 (B^\alpha)^T b \in \partial\varphi(u) + (B^\alpha)^T \partial\phi(B^\alpha u).$$
This implies that u∈Rm is a solution of (21).
According to Proposition 2.1, we can conclude the following corollary.
Corollary 2.1. Suppose that u0∈Rm is given, λ, μ are two positive numbers, Bα is a 2m×m matrix, φ is the function defined by (20), and ϕ is a differentiable convex function on R2m. If u∈Rm is a solution of (21), then for any β1 > 0,
$$u = \mathrm{prox}_{\frac{1}{\beta_1}\varphi}\left(u - \frac{1}{\beta_1}(B^\alpha)^T \nabla\phi(B^\alpha u)\right). \tag{27}$$
Conversely, if for some β1 > 0 there exists u∈Rm satisfying (27), then u∈Rm is a solution to (21).
Proof. By Proposition 2.1, a solution u∈Rm of (21) satisfies (22) and (23). If ϕ is differentiable, then ∂ϕ(u) = {∇ϕ(u)}, where ∇ϕ(u) is the gradient of ϕ at u. Therefore, (6) and (23) imply that $b = \frac{1}{\beta_2}\nabla\phi(B^\alpha u)$. Hence, (22) yields the fixed-point equation (27).
The fixed-point equation (27) can be viewed as an instance of the forward-backward splitting scheme [31]. Suppose that ∇ϕ is Lipschitz continuous with Lipschitz constant L, that is,
$$\|\nabla\phi(p) - \nabla\phi(q)\|_2 \le L\|p-q\|_2, \quad \forall p,q \in \mathbb{R}^{2m}, \tag{28}$$
and that β1 is chosen to satisfy
$$\frac{1}{\beta_1} < \frac{2}{L\|B^\alpha\|_2^2}. \tag{29}$$
It was proved in [33] that, for any initial point $u^0 \in \mathbb{R}^m$, the Picard iteration
$$u^{k+1} = \mathrm{prox}_{\frac{1}{\beta_1}\varphi}\left(u^k - \frac{1}{\beta_1}(B^\alpha)^T \nabla\phi(B^\alpha u^k)\right), \tag{30}$$
converges to a fixed point of (27), which is a minimizer of (21).
Let $Hu := u - \frac{1}{\beta_1}(B^\alpha)^T \nabla\phi(B^\alpha u)$ and $Qu := \left(\mathrm{prox}_{\frac{1}{\beta_1}\varphi} \circ H\right)u$. To prove that (30) converges, it suffices to prove that H and Q are non-expansive averaged operators. We recall the definitions of non-expansive operators [31].
Definition 2.3. (Non-expansive operator): An operator T on Rm is non-expansive if it satisfies, ∀x,y∈Rm: $\|Tx - Ty\|_2 \le \|x-y\|_2$.
Both proxf and I−proxf are non-expansive operators; see [31].
Definition 2.4. (Firmly non-expansive operator): An operator T on Rm is firmly non-expansive if it satisfies, ∀x,y∈Rm: $\|Tx - Ty\|_2^2 \le \langle x-y,\ Tx-Ty\rangle$.
Definition 2.5. (Non-expansive averaged operator): A non-expansive operator Q on Rm is averaged if there exists k∈(0,1) such that Q = kI + (1−k)P, where P is a non-expansive operator. If k = 1/2, then Q is firmly non-expansive.
Both proxf(x) and (I−proxf)(x) are firmly non-expansive operators; see [32].
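For a concrete check of Definition 2.4, the scalar soft-thresholding operator (the prox of a multiple of |·|) and its complement I−prox can be tested against the firm non-expansiveness inequality on random samples (an illustrative Python sketch):

```python
import math
import random

def soft(x, tau):
    # prox of tau*|.| (scalar soft thresholding)
    return math.copysign(max(abs(x) - tau, 0.0), x)

random.seed(0)
tau = 0.8
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    for T in (lambda z: soft(z, tau), lambda z: z - soft(z, tau)):
        d = T(x) - T(y)
        # Firm non-expansiveness: ||Tx - Ty||^2 <= <x - y, Tx - Ty>.
        assert d * d <= (x - y) * d + 1e-12
```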
Proposition 2.2. If ϕ is a differentiable convex function on R2m and Bα is a 2m×m matrix, then H is firmly non-expansive.
Proof. First, by the definition of the operator H , ∀x,y∈Rm , we have
$$Hx - Hy = x - y - \frac{1}{\beta_1}(B^\alpha)^T\left(\nabla\phi(B^\alpha x) - \nabla\phi(B^\alpha y)\right), \tag{31}$$
$$(I-H)x - (I-H)y = \frac{1}{\beta_1}(B^\alpha)^T\left(\nabla\phi(B^\alpha x) - \nabla\phi(B^\alpha y)\right). \tag{32}$$
We have
$$\|Hx-Hy\|_2^2 = \|x-y\|_2^2 - \frac{2}{\beta_1}\left\langle (B^\alpha)^T\left(\nabla\phi(B^\alpha x)-\nabla\phi(B^\alpha y)\right),\ x-y\right\rangle + \frac{1}{\beta_1^2}\left\|(B^\alpha)^T\left(\nabla\phi(B^\alpha x)-\nabla\phi(B^\alpha y)\right)\right\|_2^2, \tag{33}$$
$$\|(I-H)x-(I-H)y\|_2^2 = \frac{1}{\beta_1^2}\left\|(B^\alpha)^T\left(\nabla\phi(B^\alpha x)-\nabla\phi(B^\alpha y)\right)\right\|_2^2. \tag{34}$$
By the monotonicity of the gradient of a convex function, we have
$$\left\langle \nabla\phi(B^\alpha x) - \nabla\phi(B^\alpha y),\ B^\alpha x - B^\alpha y\right\rangle \ge 0. \tag{35}$$
Substituting (35) into (33), we have
$$\|Hx-Hy\|_2^2 \le \|x-y\|_2^2 + \frac{1}{\beta_1^2}\left\|(B^\alpha)^T\left(\nabla\phi(B^\alpha x)-\nabla\phi(B^\alpha y)\right)\right\|_2^2. \tag{36}$$
Combining (36) with (34), we have
$$\|Hx-Hy\|_2^2 \le \|x-y\|_2^2 + \|(I-H)x-(I-H)y\|_2^2. \tag{37}$$
We have
$$\|Hx-Hy\|_2^2 \le \langle x-y,\ Hx-Hy\rangle.$$
This completes the proof.
If H: Rm → Rm is firmly non-expansive, then H is a non-expansive 1/2-averaged operator (see Lemma 3.8 in [26]). Thus Q is a non-expansive averaged operator (see Lemma 3.7 in [26]).
We now prove the convergence of (30). To simplify (30) and find an equivalent iterative form, we make the substitution
$$u^k - \frac{1}{\beta_1}(B^\alpha)^T \nabla\phi(B^\alpha u^k) = v. \tag{38}$$
Let v∈Rm be a given vector and x∈Rm; we write the proximity operator of (1/β1)φ at v as follows:
$$\mathrm{prox}_{\frac{1}{\beta_1}\varphi}(v) = \arg\min_x\left\{\frac{1}{2}\|x-v\|_2^2 + \frac{\lambda}{\beta_1}\|x-u_0\|_2^2 + \frac{\mu}{\beta_1}\|x-u_0\|_1\right\}. \tag{39}$$
We have
$$\mathrm{prox}_{\frac{1}{\beta_1}\varphi}(v) = u_0 + \arg\min_x\left\{\frac{1}{2}\|x-v+u_0\|_2^2 + \frac{\lambda}{\beta_1}\|x\|_2^2 + \frac{\mu}{\beta_1}\|x\|_1\right\}. \tag{40}$$
Let g and f be two functions on Rm ; then, we have
$$g(x) = \frac{1}{2}\|x-v+u_0\|_2^2 + \frac{\lambda}{\beta_1}\|x\|_2^2, \tag{41}$$
$$f(x) = \frac{\mu}{\beta_1}\|x\|_1. \tag{42}$$
Because the function g is differentiable, it can be expanded by applying the Taylor formula at (v−u0)∈Rm:
$$g(x) = g(v-u_0) + \left\langle \nabla g(v-u_0),\ x-v+u_0\right\rangle + \frac{1}{2r}\|x-v+u_0\|_2^2, \tag{43}$$
where r is a positive constant; since g is quadratic, the choice r = β1/(β1+2λ), used in the experiments, makes this expansion exact.
Using (43), we can solve the following minimization problem:
$$\begin{aligned} \arg\min_x\{g(x)+f(x)\} &= \arg\min_x\left\{g(v-u_0) + \left\langle \nabla g(v-u_0),\ x-v+u_0\right\rangle + \frac{1}{2r}\|x-v+u_0\|_2^2 + f(x)\right\}\\ &= \arg\min_x\left\{\frac{1}{2r}\left\|x-v+u_0+r\nabla g(v-u_0)\right\|_2^2 + f(x)\right\}\\ &= \mathrm{prox}_{rf}\left(v-u_0-r\nabla g(v-u_0)\right). \end{aligned} \tag{44}$$
From (41), we get
$$\nabla g(x) = (x-v+u_0) + \frac{2\lambda}{\beta_1}x = \left(1+\frac{2\lambda}{\beta_1}\right)x - v + u_0. \tag{45}$$
Using (45), we obtain
$$\nabla g(v-u_0) = \frac{2\lambda}{\beta_1}(v-u_0). \tag{46}$$
Therefore, substituting (42), (44) and (46) into (40), we conclude that
$$\mathrm{prox}_{\frac{1}{\beta_1}\varphi}(v) = u_0 + \mathrm{prox}_{\frac{r\mu}{\beta_1}\|\cdot\|_1}\left(\frac{\beta_1-2\lambda r}{\beta_1}(v-u_0)\right). \tag{47}$$
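Formula (47) can be verified numerically in the scalar case: with r = β1/(β1+2λ), the value used in the experiments, the closed form agrees with a direct minimization of the objective in (39). An illustrative Python sketch (helper names are ours):

```python
import math

def soft(x, tau):
    # prox of tau*|.| (scalar soft thresholding)
    return math.copysign(max(abs(x) - tau, 0.0), x)

def prox_phi_closed(v, u0, lam, mu, beta1):
    # Closed form (47) with r = beta1/(beta1 + 2*lam), which makes the
    # quadratic expansion (43) exact.
    r = beta1 / (beta1 + 2.0 * lam)
    return u0 + soft((beta1 - 2.0 * lam * r) / beta1 * (v - u0), r * mu / beta1)

def prox_phi_grid(v, u0, lam, mu, beta1, lo=-10.0, hi=10.0, steps=200001):
    # Direct scalar minimization of the objective in (39):
    # (1/2)(x - v)^2 + (lam/beta1)(x - u0)^2 + (mu/beta1)|x - u0|.
    best_x, best_val = lo, float("inf")
    for i in range(steps):
        x = lo + (hi - lo) * i / (steps - 1)
        val = 0.5 * (x - v) ** 2 + lam / beta1 * (x - u0) ** 2 + mu / beta1 * abs(x - u0)
        if val < best_val:
            best_x, best_val = x, val
    return best_x

lam, mu, beta1, u0 = 0.07, 2.3, 6.0, 1.0
for v in (-4.0, 0.3, 1.0, 5.5):
    assert abs(prox_phi_closed(v, u0, lam, mu, beta1) - prox_phi_grid(v, u0, lam, mu, beta1)) < 1e-3
```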
Combining (38) and $u^{k+1} = \mathrm{prox}_{\frac{1}{\beta_1}\varphi}(v)$ with (47), we obtain
$$u^{k+1} = u_0 + \mathrm{prox}_{\frac{r\mu}{\beta_1}\|\cdot\|_1}\left(\frac{\beta_1-2\lambda r}{\beta_1}\left(u^k - \frac{1}{\beta_1}(B^\alpha)^T\nabla\phi(B^\alpha u^k) - u_0\right)\right). \tag{48}$$
Substituting $b^k = \frac{1}{\beta_2}\nabla\phi(B^\alpha u^k)$ into (48) shows that (49) and (50) are equivalent to the iteration (30):
$$u^{k+1} = u_0 + \mathrm{prox}_{\frac{r\mu}{\beta_1}\|\cdot\|_1}\left(\frac{\beta_1-2\lambda r}{\beta_1}\left(u^k - \frac{\beta_2}{\beta_1}(B^\alpha)^T b^k - u_0\right)\right), \tag{49}$$
$$b^{k+1} = \left(I - \mathrm{prox}_{\frac{1}{\beta_2}\phi}\right)\left(B^\alpha u^{k+1} + b^k\right). \tag{50}$$
Hence, based on the iterative equations (49) and (50), we propose the following algorithm.
Algorithm
1. Input: noisy image u0∈Rm; choose λ ≥ 0, μ ≥ 0, β1 > 0, β2 > 0.
2. Initialization: $u^0 = u_0$, $b^0 = 0$.
3. For k∈N, update u and b as follows:
$$u^{k+1} \leftarrow u_0 + \mathrm{prox}_{\frac{r\mu}{\beta_1}\|\cdot\|_1}\left(\frac{\beta_1-2\lambda r}{\beta_1}\left(u^k - \frac{\beta_2}{\beta_1}(B^\alpha)^T b^k - u_0\right)\right),$$
$$b^{k+1} \leftarrow \left(I - \mathrm{prox}_{\frac{1}{\beta_2}\phi}\right)\left(B^\alpha u^{k+1} + b^k\right).$$
4. Stop if the preset stopping criterion is met; otherwise, return to step 3.
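As an illustration of the algorithm's structure, the following Python sketch implements the two updates (49) and (50) for a simplified one-dimensional signal, using a 1D analogue of the fractional difference matrix (18) built from the generalized binomial coefficients. The parameter values follow the experiments, but the operator, signal, and helper names are our own simplifications, not the paper's 2D implementation:

```python
import math

def gl_coeffs(alpha, K):
    # Generalized binomial coefficients C_k^(alpha) via the recurrence
    # C_0 = 1, C_k = C_{k-1} * (alpha - k + 1) / k.
    c = [1.0]
    for k in range(1, K):
        c.append(c[-1] * (alpha - k + 1) / k)
    return c

def build_B(alpha, N):
    # 1D analogue of (18): row n applies the fractional backward difference
    # sum_k (-1)^k C_k^(alpha) u_{n-k}; row 0 is left as the identity (boundary).
    C = gl_coeffs(alpha, N)
    B = [[0.0] * N for _ in range(N)]
    B[0][0] = 1.0
    for n in range(1, N):
        for k in range(n + 1):
            B[n][n - k] = (-1) ** k * C[k]
    return B

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matvec_T(A, x):
    return [sum(A[i][j] * x[i] for i in range(len(A))) for j in range(len(A[0]))]

def soft(x, tau):
    # prox of tau*|.| (componentwise soft thresholding)
    return math.copysign(max(abs(x) - tau, 0.0), x)

def denoise(u0, alpha, lam, mu, beta1, beta2, iters=200):
    # Updates (49)-(50): the u-step uses the shifted soft threshold (47);
    # the b-step is (I - prox_{(1/beta2)||.||_1}), i.e. clipping to [-1/beta2, 1/beta2].
    N = len(u0)
    B = build_B(alpha, N)
    r = beta1 / (beta1 + 2.0 * lam)  # makes the expansion (43) exact
    scale, tau, clip = (beta1 - 2.0 * lam * r) / beta1, r * mu / beta1, 1.0 / beta2
    u, b = list(u0), [0.0] * N
    for _ in range(iters):
        Btb = matvec_T(B, b)
        u = [u0i + soft(scale * (ui - beta2 / beta1 * g - u0i), tau)
             for u0i, ui, g in zip(u0, u, Btb)]
        b = [max(-clip, min(clip, v + bi)) for v, bi in zip(matvec(B, u), b)]
    return u, b

# Toy 1D "signal with impulses" (illustrative only).
noisy = [1.0, 1.1, 0.9, 1.05, 4.0, 1.0, 3.0, 3.1, 2.9, 3.0, 0.0, 3.05]
u, b = denoise(noisy, alpha=1.5, lam=0.07, mu=2.3, beta1=6.0, beta2=128.0)
assert all(math.isfinite(x) for x in u)
assert all(abs(x) <= 1.0 / 128.0 + 1e-9 for x in b)
```

The dual variable b stays exactly within [−1/β2, 1/β2] by construction of the clipping step, which the final assertion checks.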
This section describes several image denoising experiments conducted to demonstrate the behavior of the proposed algorithm. The peak signal-to-noise ratio (PSNR) is currently the most widely used measure for objectively evaluating image quality, and it is consistent with human subjective perception. A larger PSNR indicates better quality of the recovered image. It is defined as follows:
$$\mathrm{PSNR} = 10\log_{10}\frac{255^2 N^2}{\|u^*-u\|_2^2}\ (\mathrm{dB}), \tag{51}$$
where u∗ is the original image and u is the denoised image. In all experiments, the iterations were stopped when the following criterion was satisfied:
$$\frac{\|u^k - u^{k+1}\|}{\|u^{k+1}\|} \le 0.001. \tag{52}$$
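The quality measure (51) and the stopping rule (52) translate directly into code; a minimal Python sketch (our own helper names, with the image stored as a flat list so that the pixel count plays the role of N²):

```python
import math

def psnr(u_star, u, peak=255.0):
    # PSNR per (51), with the image stored as a flat list of N*N pixel values.
    err = sum((a - b) ** 2 for a, b in zip(u_star, u))
    return 10.0 * math.log10(peak ** 2 * len(u) / err)

def stop(u_prev, u_next, tol=1e-3):
    # Stopping rule (52): relative change between successive iterates.
    num = math.sqrt(sum((a - b) ** 2 for a, b in zip(u_prev, u_next)))
    den = math.sqrt(sum(b ** 2 for b in u_next))
    return num / den <= tol

# If every pixel is off by exactly one gray level, ||u* - u||_2^2 equals the
# pixel count, so PSNR = 20 log10(255) ≈ 48.13 dB regardless of image size.
img = [100.0] * 16
assert abs(psnr(img, [x + 1.0 for x in img]) - 20.0 * math.log10(255.0)) < 1e-9
```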
In this study, images of size 256×256 pixels were used for the numerical experiments, with r = β1/(β1+2λ). We used the L1L2/TVα model to remove Gaussian noise, salt-and-pepper noise, and mixed noise. The original images of the experiments are shown in Figure 1. The different noise regimes yielded different results, as shown in Figure 2. Salt-and-pepper noise sets a pixel to the minimal or maximal value of the image intensity range, whereas Gaussian noise may extend this range. We therefore added salt-and-pepper noise to the original image after the Gaussian noise.
This study comprised four groups of experiments. The first experiment restored images corrupted by Gaussian noise of level σ=20. The second restored images corrupted by salt-and-pepper noise of level s=0.03. The third restored images corrupted by mixed noise. The fourth explored the convergence of our proposed fractional-order TV denoising algorithm.
We began by investigating the effects of different parameters on the experimental results. Inspired by [27], we consistently chose β1=6 and β2=128. We determined the most suitable values of λ and μ by trial and error. When Gaussian noise with σ=20 was added to the image 'Lena', we found that λ=0.07, μ=0 performed better. When salt-and-pepper noise with s=0.03 was added to the image 'Square', we found that λ=0, μ=3 performed better. We verified that these selected parameters were effective for other images at the same noise levels. Additionally, we increased α from 0.8 to 1.9 in increments of 0.1.
In the first experiment, Gaussian noise was added to the 'Lena' image at different levels. We chose λ=0.07, μ=0 to process the noisy images. Table 1 shows the PSNR values, while Figure 3 shows the experimental results. In addition, Gaussian noise with σ=20 was added to the other images; Table 2 shows the PSNR values. The results of the first experiment demonstrate that α affects the denoising results, and the best result often does not occur at α=1. Therefore, the fractional-order TV model can be applied to improve the denoising performance of the TV model.
α | σ=15 | σ=20 | σ=25 | σ=30 |
--- | --- | --- | --- | --- |
0.8 | 29.7892 | 27.4593 | 25.0759 | 23.0262 |
0.9 | 30.2569 | 27.8891 | 25.4172 | 23.3055 |
1 | 30.5238 | 28.2065 | 25.7044 | 23.5671 |
1.1 | 30.5492 | 28.3853 | 25.9207 | 23.7875 |
1.2 | 30.5086 | 28.5046 | 25.1172 | 24.0012 |
1.3 | 30.4731 | 28.6112 | 26.3034 | 24.2273 |
1.4 | 30.4378 | 28.7046 | 26.4887 | 24.4474 |
1.5 | 30.3991 | 28.7886 | 26.6701 | 24.6726 |
1.6 | 30.3599 | 28.8649 | 26.8453 | 24.8976 |
1.7 | 30.3170 | 28.9319 | 27.0052 | 25.1174 |
1.8 | 30.2558 | 28.9798 | 27.1521 | 25.3290 |
1.9 | 30.1917 | 29.0035 | 27.2755 | 25.5298 |
α | Man | Pepper | Square |
--- | --- | --- | --- |
0.8 | 27.5666 | 27.3701 | 29.7428 |
0.9 | 27.9925 | 27.8178 | 30.4608 |
1 | 28.2736 | 28.1412 | 31.8894 |
1.1 | 28.3151 | 28.1936 | 31.1472 |
1.2 | 28.2914 | 28.3959 | 31.3280 |
1.3 | 28.2809 | 28.4796 | 31.4950 |
1.4 | 28.2808 | 28.5583 | 31.6557 |
1.5 | 28.2824 | 28.6456 | 31.8110 |
1.6 | 28.2666 | 28.7134 | 31.9377 |
1.7 | 28.2452 | 28.7684 | 32.0184 |
1.8 | 28.2056 | 28.8044 | 32.0599 |
1.9 | 28.1466 | 29.8223 | 32.0439 |
In the second experiment, we chose λ=0, μ=4.8 to process the 'Square' image corrupted by salt-and-pepper noise at levels 0.01, 0.02, 0.03 and 0.05. Table 3 shows the PSNR values, while Figure 4 shows the experimental results. In addition, salt-and-pepper noise with s=0.03 was applied to the other images; Table 4 shows the PSNR values. The experimental results indicate that a larger α yields a better result.
α | s=0.01 | s=0.02 | s=0.03 | s=0.05 |
--- | --- | --- | --- | --- |
0.8 | 26.6361 | 23.5071 | 21.5444 | 19.3476 |
0.9 | 26.8099 | 23.6805 | 21.7061 | 19.5088 |
1 | 26.9692 | 23.8357 | 21.8529 | 19.6589 |
1.1 | 27.184 | 24.0518 | 22.0587 | 19.8782 |
1.2 | 27.3262 | 24.1908 | 22.1921 | 20.0358 |
1.3 | 28.6720 | 25.4874 | 23.4443 | 21.2008 |
1.4 | 30.3872 | 27.1640 | 25.0587 | 22.7285 |
1.5 | 32.3797 | 29.0880 | 26.8780 | 24.4160 |
1.6 | 34.7346 | 31.2814 | 26.8750 | 26.2035 |
1.7 | 37.4620 | 33.7378 | 30.9774 | 27.7989 |
1.8 | 40.4214 | 36.6253 | 33.2592 | 30.0331 |
1.9 | 42.8820 | 39.2325 | 35.0649 | 31.0890 |
α | Lena | Man | Pepper |
--- | --- | --- | --- |
0.8 | 22.4379 | 22.3227 | 22.5437 |
0.9 | 22.7182 | 22.5793 | 22.8359 |
1 | 22.9806 | 22.8058 | 23.1135 |
1.1 | 23.2306 | 23.0675 | 23.3906 |
1.2 | 23.4360 | 23.2628 | 23.6167 |
1.3 | 24.7680 | 24.4272 | 24.8107 |
1.4 | 26.8144 | 26.1801 | 26.6391 |
1.5 | 29.1850 | 28.1721 | 28.6084 |
1.6 | 31.5879 | 30.1103 | 30.4547 |
1.7 | 33.4039 | 31.3361 | 31.7732 |
1.8 | 34.7360 | 31.8994 | 32.7549 |
1.9 | 35.4855 | 32.2108 | 32.4564 |
In the third experiment, we added Gaussian noise with σ=20 and salt-and-pepper noise with s=0.03 to four images and explored the performance of the algorithm. We chose λ=0.009, μ=2.3. Table 5 shows the PSNR values, while Figure 5 shows the experimental results. Figure 6 shows the original image, the noisy image, and the denoised images for different values of α (from 0.8 to 1.9); the third and fourth rows show the corresponding contour maps. The data in Table 5 indicate that a larger α yields better denoising performance. Consequently, the fractional-order TV model outperformed the traditional TV model under mixed noise.
α | Lena | Man | Pepper | Square |
--- | --- | --- | --- | --- |
0.8 | 23.0276 | 22.7010 | 23.0907 | 22.5688 |
0.9 | 23.5519 | 23.128 | 23.5879 | 22.9819 |
1 | 24.0592 | 23.5512 | 24.0694 | 23.4123 |
1.1 | 24.6520 | 24.0576 | 24.6195 | 24.0678 |
1.2 | 25.3215 | 24.6112 | 25.2445 | 24.8745 |
1.3 | 25.8844 | 25.0583 | 25.7745 | 25.6832 |
1.4 | 26.3067 | 25.3817 | 26.1841 | 26.4623 |
1.5 | 26.6177 | 25.5930 | 26.4867 | 27.1617 |
1.6 | 26.8404 | 25.7059 | 26.7017 | 27.7913 |
1.7 | 27.0049 | 25.7356 | 26.8421 | 28.2917 |
1.8 | 27.1119 | 25.7179 | 26.9169 | 28.6416 |
1.9 | 27.1679 | 25.6679 | 26.9147 | 28.8466 |
Based on the PSNR values and denoised images from the first three experiments, we can see that the fractional-order TV model effectively reduces the block effect and performs better than the TV model.
The fourth experiment focused on the convergence of the algorithm. We took α=1.8 as an example from the range 0.8 to 1.9 and recorded the PSNR value at each iteration. Figure 6 shows the experimental results. The blue line shows the results for the noisy image with σ=15, s=0.01; the red line for σ=20, s=0.03; and the yellow line for σ=20, s=0.05. From Figure 6, it can be seen that our proposed fractional-order TV denoising algorithm converges.
Furthermore, we show that our proposed model performs well on the task of removing mixed noise. For this purpose, we added Gaussian noise with σ=20 and salt-and-pepper noise with s=0.03 to the 'Lena' image and chose α=1.9. Figure 7 shows the 60th and 100th rows of the 'Lena' image from a one-dimensional perspective; the original, noisy and denoised images are represented by black, pink and blue lines, respectively. The blue and black solid lines nearly coincide, indicating that our proposed model achieves good denoising performance. Figure 8 shows that the histogram of the noisy image differs completely from that of the original image, while the histogram of the denoised image is similar to that of the original. We also selected a small region of the 'Lena' image, marked with a red rectangle; the experimental results can be seen in Figure 9.
In this paper, we developed a fractional-order TV (L1L2/TVα) model to remove mixtures of Gaussian noise and salt-and-pepper noise by incorporating an L1-data fidelity term and an L2-data fidelity term into the model. The existence of a solution of this model has been proved. We solved the proposed model by using the proximity algorithm, which handles the non-differentiability of the fractional-order TV regularization term, and the convergence of the algorithm has been proved. The numerical experiments revealed the following: (1) The L1L2/TVα model can effectively reduce the block effect and achieves better denoising performance than the L1L2/TV model. (2) The L1L2/TVα model effectively removes mixtures of Gaussian noise and salt-and-pepper noise through the proximity algorithm. (3) In the L1L2/TVα model, α should range from 0.8 to 1.9; different images have different optimal values of α.
Donghong Zhao: Conceptualization, funding acquisition, supervision, methodology; Ruiying Huang: Writing-original draft, writing-review & editing, software, formal analysis; Li Feng: Writing-original draft, software.
The authors declare that they have not used artificial intelligence tools in the creation of this article.
This research was funded by the Graduate Online Course Research Project of USTB (2024ZXB002), the National Natural Science Foundation of China (Grant No. 12371481), and the Youth Teaching Talents Training Program of USTB (2016JXGGRC-002).
All authors declare no conflict of interest.
| α | σ=15 | σ=20 | σ=25 | σ=30 |
| --- | --- | --- | --- | --- |
| 0.8 | 29.7892 | 27.4593 | 25.0759 | 23.0262 |
| 0.9 | 30.2569 | 27.8891 | 25.4172 | 23.3055 |
| 1.0 | 30.5238 | 28.2065 | 25.7044 | 23.5671 |
| 1.1 | 30.5492 | 28.3853 | 25.9207 | 23.7875 |
| 1.2 | 30.5086 | 28.5046 | 25.1172 | 24.0012 |
| 1.3 | 30.4731 | 28.6112 | 26.3034 | 24.2273 |
| 1.4 | 30.4378 | 28.7046 | 26.4887 | 24.4474 |
| 1.5 | 30.3991 | 28.7886 | 26.6701 | 24.6726 |
| 1.6 | 30.3599 | 28.8649 | 26.8453 | 24.8976 |
| 1.7 | 30.3170 | 28.9319 | 27.0052 | 25.1174 |
| 1.8 | 30.2558 | 28.9798 | 27.1521 | 25.3290 |
| 1.9 | 30.1917 | 29.0035 | 27.2755 | 25.5298 |
| α | Man | Pepper | Square |
| --- | --- | --- | --- |
| 0.8 | 27.5666 | 27.3701 | 29.7428 |
| 0.9 | 27.9925 | 27.8178 | 30.4608 |
| 1.0 | 28.2736 | 28.1412 | 31.8894 |
| 1.1 | 28.3151 | 28.1936 | 31.1472 |
| 1.2 | 28.2914 | 28.3959 | 31.3280 |
| 1.3 | 28.2809 | 28.4796 | 31.4950 |
| 1.4 | 28.2808 | 28.5583 | 31.6557 |
| 1.5 | 28.2824 | 28.6456 | 31.8110 |
| 1.6 | 28.2666 | 28.7134 | 31.9377 |
| 1.7 | 28.2452 | 28.7684 | 32.0184 |
| 1.8 | 28.2056 | 28.8044 | 32.0599 |
| 1.9 | 28.1466 | 29.8223 | 32.0439 |
| α | s=0.01 | s=0.02 | s=0.03 | s=0.05 |
| --- | --- | --- | --- | --- |
| 0.8 | 26.6361 | 23.5071 | 21.5444 | 19.3476 |
| 0.9 | 26.8099 | 23.6805 | 21.7061 | 19.5088 |
| 1.0 | 26.9692 | 23.8357 | 21.8529 | 19.6589 |
| 1.1 | 27.1840 | 24.0518 | 22.0587 | 19.8782 |
| 1.2 | 27.3262 | 24.1908 | 22.1921 | 20.0358 |
| 1.3 | 28.6720 | 25.4874 | 23.4443 | 21.2008 |
| 1.4 | 30.3872 | 27.1640 | 25.0587 | 22.7285 |
| 1.5 | 32.3797 | 29.0880 | 26.8780 | 24.4160 |
| 1.6 | 34.7346 | 31.2814 | 26.8750 | 26.2035 |
| 1.7 | 37.4620 | 33.7378 | 30.9774 | 27.7989 |
| 1.8 | 40.4214 | 36.6253 | 33.2592 | 30.0331 |
| 1.9 | 42.8820 | 39.2325 | 35.0649 | 31.0890 |
| α | Lena | Man | Pepper |
| --- | --- | --- | --- |
| 0.8 | 22.4379 | 22.3227 | 22.5437 |
| 0.9 | 22.7182 | 22.5793 | 22.8359 |
| 1.0 | 22.9806 | 22.8058 | 23.1135 |
| 1.1 | 23.2306 | 23.0675 | 23.3906 |
| 1.2 | 23.4360 | 23.2628 | 23.6167 |
| 1.3 | 24.7680 | 24.4272 | 24.8107 |
| 1.4 | 26.8144 | 26.1801 | 26.6391 |
| 1.5 | 29.1850 | 28.1721 | 28.6084 |
| 1.6 | 31.5879 | 30.1103 | 30.4547 |
| 1.7 | 33.4039 | 31.3361 | 31.7732 |
| 1.8 | 34.7360 | 31.8994 | 32.7549 |
| 1.9 | 35.4855 | 32.2108 | 32.4564 |
| α | Lena | Man | Pepper | Square |
| --- | --- | --- | --- | --- |
| 0.8 | 23.0276 | 22.7010 | 23.0907 | 22.5688 |
| 0.9 | 23.5519 | 23.1280 | 23.5879 | 22.9819 |
| 1.0 | 24.0592 | 23.5512 | 24.0694 | 23.4123 |
| 1.1 | 24.6520 | 24.0576 | 24.6195 | 24.0678 |
| 1.2 | 25.3215 | 24.6112 | 25.2445 | 24.8745 |
| 1.3 | 25.8844 | 25.0583 | 25.7745 | 25.6832 |
| 1.4 | 26.3067 | 25.3817 | 26.1841 | 26.4623 |
| 1.5 | 26.6177 | 25.5930 | 26.4867 | 27.1617 |
| 1.6 | 26.8404 | 25.7059 | 26.7017 | 27.7913 |
| 1.7 | 27.0049 | 25.7356 | 26.8421 | 28.2917 |
| 1.8 | 27.1119 | 25.7179 | 26.9169 | 28.6416 |
| 1.9 | 27.1679 | 25.6679 | 26.9147 | 28.8466 |