
The total variation (TV) regularizer is widely used in image processing. In this paper, we propose a new nonconvex total variation regularization method based on the generalized Fischer-Burmeister function for image restoration. Since the model is nonconvex and nonsmooth, specific difference-of-convex algorithms (DCA) are presented, in which the subproblems are minimized by the alternating direction method of multipliers (ADMM). The algorithms have a low computational complexity in each iteration. Experimental results on image denoising and magnetic resonance imaging demonstrate that the proposed models produce better results than state-of-the-art methods.
Citation: Benxin Zhang, Xiaolong Wang, Yi Li, Zhibin Zhu. A new difference of anisotropic and isotropic total variation regularization method for image restoration[J]. Mathematical Biosciences and Engineering, 2023, 20(8): 14777-14792. doi: 10.3934/mbe.2023661
The total variation (TV) [1] regularization method is widely used in image processing tasks such as image reconstruction and denoising. Generally, there are two types: isotropic and anisotropic. TV has a strong edge-preserving ability and handles piecewise-constant images well. However, images restored by TV often suffer from staircasing artifacts. Therefore, to suppress these artifacts and improve image quality, many TV variants have been studied [2,3,4,5,6].
TV consists of the L1 norm composed with the gradient operator. From a statistical viewpoint, the L1 norm may yield biased estimates for large coefficients, where it loses the oracle property [7]. Instead of the L1 norm, the L0 norm is the natural choice to ensure sparseness. However, L0 minimization is a combinatorial optimization problem and NP-hard. Several penalty functions that combine the benefits of the L1 and L0 norms while circumventing their limitations have therefore been suggested, such as the smoothly clipped absolute deviation [8], capped-L1 [9], transformed-L1 [10], and L1−L2 [11]. Numerous algorithms employing convex norms on gradients have also been proposed, including the alternating minimization algorithm [12] and the augmented Lagrangian multiplier algorithm [13]. The alternating direction method of multipliers (ADMM) [14] and the primal-dual algorithm [15] are two benchmark methods. In fact, the primal-dual hybrid gradient method introduced by Chambolle and Pock in [15] is equivalent to a preconditioned ADMM. Additionally, the widely used split Bregman method [16] can be viewed as an instance of ADMM with two variables. There are also many algorithms for solving nonconvex models. Since such variational models are inherently nonconvex, obtaining a globally optimal solution is difficult. Some approaches, such as the proximal alternating linearized minimization method [17] and the difference-of-convex algorithm (DCA) [18], typically converge to local minimizers. DCA was first introduced in [19] for solving DC programming. Because of its simplicity and efficiency, it has been widely used and extensively studied.
DC programming and DCA are the natural extension of modern convex analysis to nonconvex analysis, broad enough to cover most real-world nonconvex optimization problems. During the last decades, these theoretical and algorithmic tools have been greatly enriched. He et al. [20] proposed a unified Douglas-Rachford algorithm for generalized DC programming. Niu et al. [21] developed a refined inertial DC algorithm for DC programming. Aragon-Artacho et al. [22,23] investigated line-search ideas in DC programs with linear constraints. Wen et al. [24] used the Nesterov acceleration technique for a special class of DC programs. Moreover, DC programming and DCA have always been popular for nonconvex models in image processing, such as image restoration, image reconstruction, and compressive sensing. In [25,26], Lou et al. proved that L1−L2 is closer to L0 than L1 and developed a weighted difference of anisotropic and isotropic TV as a regularizer for image denoising, deblurring, segmentation, and magnetic resonance imaging (MRI). Its solution was obtained by DCA and the split Bregman method. In [27], Li et al. introduced a multiplicative noise removal model whose objective function can be written as the difference of two convex functions; DCA with a primal-dual hybrid gradient method was used to solve the nonconvex program. Sun et al. [28] studied the DCA for the Log-norm TV image restoration problem, with ADMM used to solve the resulting subproblem. In [29], a DCA with adaptive proximal parameters was proposed for impulsive noise removal. DCA is a simple descent algorithm; interested readers are encouraged to consult the state-of-the-art developments surveyed in [30].
The contributions of this study are as follows. First, motivated by the observation that L1−L2 can be viewed as a Fischer-Burmeister (FB) function, we propose a new nonconvex TV regularization term based on a generalized FB function. Second, two efficient DCAs are designed for solving the resulting nonconvex optimization problems. The proposed DCA requires solving a TV-type subproblem, which can be handled efficiently by ADMM. Last, experiments on image denoising and MRI reconstruction show that the proposed methods improve on the classical TV model and compare favorably with state-of-the-art methods. In particular, for MRI, the new model can successfully reconstruct the Shepp-Logan phantom using merely 7 radial lines.
The rest of this paper is organized as follows. We show our new models in Section 2. Section 3 presents the DCA with ADMM for solving the new nonconvex models. Section 4 shows the effectiveness of the new models and algorithms through the presentation of experimental results. The conclusion is provided in the last section.
The problem of restoring an image corrupted by additive noise is modeled as:
f=Au+e, | (2.1) |
where f∈Rm is the observed noisy data, A∈Rm×n is a linear operator, u∈Rn is the original image, and e∈Rm is Gaussian noise. To simplify the presentation, we represent both the original and observed images as vectors.
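As a concrete illustration of the degradation model (2.1), the following NumPy sketch builds a small synthetic image, applies A = I (the denoising case considered later), and adds Gaussian noise. The image size and noise level are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8x8 piecewise-constant image, stacked into a vector u in R^n.
u = np.zeros((8, 8))
u[2:6, 2:6] = 1.0
u = u.ravel()

# Denoising case: the linear operator A is the identity.
A = np.eye(u.size)

# Gaussian noise e with an illustrative standard deviation of 0.1.
e = 0.1 * rng.standard_normal(u.size)

# Observed data f = A u + e, as in Eq (2.1).
f = A @ u + e
```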
Our aim is to restore the original image u from the noisy data f. Image restoration is a typical inverse problem, and it is impossible to obtain u from f by direct inversion of (2.1). A classic approach to this ill-posed problem is variational regularization, which stabilizes the solutions. For Gaussian noise removal, the famous TV model reads:
minu‖u‖TV,s.t.Au=f, |
where ‖⋅‖ represents the Euclidean norm. As discussed above, TV can be classified into two categories: isotropic and anisotropic:
‖Du‖2,1=‖√(|Dxu|2+|Dyu|2)‖1,‖Du‖1=‖Dxu‖1+‖Dyu‖1,
where D is the gradient operator, Dx and Dy are the horizontal and vertical derivative operators, respectively, and ‖⋅‖1 means the L1 norm.
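The two TV variants above can be sketched in a few lines of NumPy, using forward differences with periodic boundary conditions (the discretization assumed by the FFT-based solver later on); the function names are ours.

```python
import numpy as np

def grad(u):
    """Forward differences with periodic boundary: (Dx u, Dy u)."""
    dx = np.roll(u, -1, axis=1) - u
    dy = np.roll(u, -1, axis=0) - u
    return dx, dy

def tv_aniso(u):
    """Anisotropic TV: ||Dx u||_1 + ||Dy u||_1."""
    dx, dy = grad(u)
    return np.abs(dx).sum() + np.abs(dy).sum()

def tv_iso(u):
    """Isotropic TV: sum over pixels of sqrt(|Dx u|^2 + |Dy u|^2)."""
    dx, dy = grad(u)
    return np.sqrt(dx**2 + dy**2).sum()
```

For any image, tv_iso(u) ≤ tv_aniso(u) ≤ √2 · tv_iso(u), and both vanish on constant images.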
Inspired by L1−L2 minimization in compressed sensing [11], Lou et al. [26] proposed the well-known L1−γL2 regularization function, a coupling of anisotropic and isotropic TV:
J(u)=‖Du‖1−γ‖Du‖2,1=‖Dxu‖1+‖Dyu‖1−γ‖√(|Dxu|2+|Dyu|2)‖1,
where γ is an element of the interval (0, 1]. In [31], the authors found that J(u) can be viewed as the application of the FB function to the gradient in TV (FBTV), where the scalar FB function is defined as
ϕ(v1,v2)=v1+v2−γ√v21+v22,γ∈(0,1]. |
FBTV considers the non-sparse gradient vectors along the boundaries in the image. The effectiveness of FBTV was demonstrated to surpass that of either anisotropic or isotropic TV.
Motivated by the facts that J(u) can be described by an FB function and that L1−L2 is closer to L0 than L1, we propose a new difference regularization method using the following generalized FB (GFB) function:
ψ(v1,v2)=v1+v2−√(1−θ)(v21+v22)+θ(v1−v2)2,θ∈[0,1). |
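A minimal numerical sketch of the two scalar functions is useful for checking the limiting cases: θ = 0 recovers the FB function with γ = 1, and ψ vanishes whenever one argument is zero, so axis-aligned gradients incur no extra penalty.

```python
import numpy as np

def fb(v1, v2, gamma=1.0):
    """Scalar Fischer-Burmeister function phi(v1, v2)."""
    return v1 + v2 - gamma * np.sqrt(v1**2 + v2**2)

def gfb(v1, v2, theta=0.1):
    """Generalized FB function psi(v1, v2), theta in [0, 1)."""
    return v1 + v2 - np.sqrt((1 - theta) * (v1**2 + v2**2)
                             + theta * (v1 - v2)**2)
```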
Now, we present the nonconvex variational regularization models, named GFBTV-C, as
minu‖Dxu‖1+‖Dyu‖1−‖√((1−θ)(|Dxu|2+|Dyu|2)+θ(|Dxu|−|Dyu|)2)‖1,s.t.Au=f. | (2.2) |
and GFBTV as
minu‖Dxu‖1+‖Dyu‖1−‖√((1−θ)(|Dxu|2+|Dyu|2)+θ(|Dxu|−|Dyu|)2)‖1+α2‖Au−f‖2, | (2.3) |
where α>0 is the regularization parameter. Observe that ψ(v1,v2) takes into account gradient directions other than the horizontal and vertical ones. This allows it to retain the ability of anisotropic TV to detect sharp edges while reducing blocky artifacts. To compare the different metrics, we plot the level curves corresponding to L0, L1, L1−0.5L2, L1−L2, and GFB with θ = 0.1 and 0.01 in Figure 1. From the figure, one can observe that the contour lines of GFB bend further inward and lie closer to those of L0. This illustrates that GFB can enhance sparsity, with θ acting as a parameter controlling the extent.
Though ‖√(1−θ)(|Dxu|2+|Dyu|2)+θ(|Dxu|−|Dyu|)2‖1 is not convex, DCA can also be used to easily solve the GFBTV-C (2.2) and GFBTV (2.3) models. In the following, we apply the DCA with ADMM to handle the two nonsmooth and nonconvex models.
First, in order to apply the DCA, we decompose (2.2) as follows:
minuΓ(u)=Φ(u)−Ψ(u),s.t.Au=f, | (3.1) |
where
{Φ(u)=‖Dxu‖1+‖Dyu‖1,Ψ(u)=‖√((1−θ)(|Dxu|2+|Dyu|2)+θ(|Dxu|−|Dyu|)2)‖1.
DCA optimizes (3.1) by linearizing Ψ(u):
uk+1=argminu{Φ(u)−(Ψ(uk)+⟨qk,D(u−uk)⟩),s.t.Au=f}, | (3.2) |
where
qk=(qkx,qky)=(Dxuk−θDyuk,Dyuk−θDxuk)/√((1−θ)(|Dxuk|2+|Dyuk|2)+θ(|Dxuk|−|Dyuk|)2).
If the denominator is zero, the corresponding entry of qk is set to zero. In the spirit of ADMM, the constrained convex subproblem (3.2) can then be rewritten as a sequence of unconstrained problems:
un+1=argminu{Φ(u)−(Ψ(uk)+⟨qk,D(u−uk)⟩)+μ2‖Au−zn‖2}, | (3.3) |
zn+1=zn+f−Aun+1. |
where z plays the role of a scaled Lagrange multiplier enforcing Au=f and μ>0 is the penalty parameter.
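The linearization point qk can be computed elementwise as in the following sketch, which follows the formula printed above and sets q to zero where the denominator vanishes; the function name and array-based interface are our own.

```python
import numpy as np

def q_update(dxu, dyu, theta=0.1):
    """Linearization point q^k = (q_x, q_y) of the DCA step (3.2).
    dxu, dyu are the arrays Dx u^k and Dy u^k."""
    denom = np.sqrt((1 - theta) * (dxu**2 + dyu**2)
                    + theta * (np.abs(dxu) - np.abs(dyu))**2)
    qx = np.zeros_like(dxu)
    qy = np.zeros_like(dyu)
    mask = denom > 0          # set q to zero where the denominator vanishes
    qx[mask] = (dxu[mask] - theta * dyu[mask]) / denom[mask]
    qy[mask] = (dyu[mask] - theta * dxu[mask]) / denom[mask]
    return qx, qy
```

For θ = 0 this reduces to the normalized gradient direction, consistent with the L1−L2 (FBTV) case.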
Subproblem (3.3) amounts to solving a TV-type problem, so ADMM is again used to find its solution. To utilize ADMM, two auxiliary variables, dx and dy, are introduced to rephrase (3.3) as a constrained problem:
minu,dx,dy‖dx‖1+‖dy‖1−(qkxdx+qkydy)+μ2‖Au−zn‖2,s.t.dx=Dxu,dy=Dyu. | (3.4) |
As per the classical optimization theory, the augmented Lagrangian function for Eq (3.4) can be expressed as follows:
L(u,dx,dy,bx,by)=‖dx‖1+‖dy‖1−(qkxdx+qkydy)+μ2‖Au−zn‖2+λ2‖dx−Dxu−bx‖2+λ2‖dy−Dyu−by‖2, | (3.5) |
where bx,by are the Lagrange multipliers and λ>0 is the penalty parameter.
Subsequently, we turn to solving the subproblems of (3.5). The u-subproblem reads:
uj+1=argminuL(u,djx,djy,bjx,bjy). | (3.6) |
Using the first-order optimality condition of Eq (3.6), we have
(μATA+λDTD)uj+1=λDTx(djx−bjx)+λDTy(djy−bjy)+μATzn. | (3.7) |
In the problem of reconstructing undersampled magnetic resonance imaging (MRI) using compressed sensing [32], the composite A=RF is formed by combining a sampling operator R and a Fourier transform operator F. When periodic boundary conditions are applied, both matrices DTD and ATA become block circulant with circulant blocks. As a result, the coefficient matrix μATA+λDTD can be diagonalized by a discrete Fourier transform. This means that Eq (3.7) can be efficiently solved by a fast Fourier transform (FFT):
uj+1=F−1((F(λDTx(djx−bjx))+F(λDTy(djy−bjy))+μF(A)∗⊙F(zn))/(μF(A)∗⊙F(A)+λF(D)∗⊙F(D))).
Here, F(⋅) and F−1(⋅) refer to the FFT and inverse FFT operations, respectively. The symbol "∗" represents a complex conjugation, while "⊙" denotes an elementwise multiplication.
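For the denoising case A = I, the FFT-based u-update can be sketched as follows: under periodic boundary conditions the coefficient matrix diagonalizes, so the linear system is solved with one forward and one inverse FFT. The helper names and the use of forward differences are our assumptions.

```python
import numpy as np

def grad_symbols(shape):
    """Squared Fourier symbols |F(Dx)|^2 + |F(Dy)|^2 for periodic forward differences."""
    kx = np.zeros(shape); kx[0, 0] = -1.0; kx[0, 1] = 1.0
    ky = np.zeros(shape); ky[0, 0] = -1.0; ky[1, 0] = 1.0
    return np.abs(np.fft.fft2(kx))**2 + np.abs(np.fft.fft2(ky))**2

def u_solve(rhs, mu, lam, shape):
    """Solve (mu*I + lam*D^T D) u = rhs by FFT (denoising case, A = I)."""
    denom = mu + lam * grad_symbols(shape)
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
```

Since the denominator is real and strictly positive for mu > 0, the solve is well defined; applying the operator to any u and then calling u_solve recovers u up to floating-point error.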
For the dx,dy-subproblem in (3.5), the solution can be updated via soft shrinkage:
dj+1x=shrink(Dxuj+1+bjx+qkx/λ,1/λ), | (3.8) |
dj+1y=shrink(Dyuj+1+bjy+qky/λ,1/λ), | (3.9) |
where
shrink(s,μ)=sign(s)⊙max{|s|−μ,0},
where sign denotes the signum function. Additionally, the Lagrange multipliers are updated as follows:
bj+1x=bjx+Dxuj+1−dj+1x, | (3.10) |
bj+1y=bjy+Dyuj+1−dj+1y. | (3.11) |
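The shrinkage operator and one sweep of the updates (3.8)-(3.11) can be sketched as below; the helper names are ours.

```python
import numpy as np

def shrink(s, tau):
    """Soft-thresholding: sign(s) * max(|s| - tau, 0)."""
    return np.sign(s) * np.maximum(np.abs(s) - tau, 0.0)

def update_d_b(dxu, dyu, bx, by, qx, qy, lam):
    """One sweep of (3.8)-(3.11), given the current gradients Dx u, Dy u."""
    dx_new = shrink(dxu + bx + qx / lam, 1.0 / lam)   # (3.8)
    dy_new = shrink(dyu + by + qy / lam, 1.0 / lam)   # (3.9)
    bx_new = bx + dxu - dx_new                        # (3.10)
    by_new = by + dyu - dy_new                        # (3.11)
    return dx_new, dy_new, bx_new, by_new
```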
Finally, we summarize the DCA with ADMM for solving the proposed model GFBTV-C in Algorithm 1.
Algorithm 1 DCA-ADMM for solving the model GFBTV-C (2.2). |
Set u0=q0x=q0y=0, z0=f, λ>0, MaxDCA, nmax, jmax |
For k=0,1,2,⋯, MaxDCA, b0x=b0y=0, |
For n=0,1,2,⋯,nmax |
For j=0,1,2,⋯,jmax |
Calculate uj+1 by (3.7), |
Calculate dj+1x by (3.8), |
Calculate dj+1y by (3.9), |
Calculate bj+1x by (3.10), |
Calculate bj+1y by (3.11). |
End For |
un+1=ujmax, |
zn+1=zn+f−Aun+1. |
End For |
uk+1=unmax, |
(qk+1x,qk+1y)=(Dxuk+1−θDyuk+1,Dyuk+1−θDxuk+1)/√((1−θ)(|Dxuk+1|2+|Dyuk+1|2)+θ(|Dxuk+1|−|Dyuk+1|)2). |
End For |
For the corresponding unconstrained problem (2.3), the DCA subproblem is expressed as
uk+1=argminu{Φ(u)−(Ψ(uk)+⟨qk,D(u−uk)⟩)+α2‖Au−f‖2}. |
Similar to (3.4), we have
minu,dx,dy‖dx‖1+‖dy‖1−(qkxdx+qkydy)+α2‖Au−f‖2,s.t.dx=Dxu,dy=Dyu. | (3.12) |
The augmented Lagrangian function for Eq (3.12) can be written as
L(u,dx,dy,bx,by)=‖dx‖1+‖dy‖1−(qkxdx+qkydy)+α2‖Au−f‖2+λ2‖dx−Dxu−bx‖2+λ2‖dy−Dyu−by‖2. | (3.13) |
The next step is to express the u-subproblem as
uj+1=argminuL(u,djx,djy,bjx,bjy). | (3.14) |
Then, using the first-order optimality condition of Eq (3.14), we get
(αATA+λDTD)uj+1=λDTx(djx−bjx)+λDTy(djy−bjy)+αATf. | (3.15) |
Moreover, if DTD and ATA have the circulant structure described above, Eq (3.15) can be solved via the FFT as follows:
uj+1=F−1((F(λDTx(djx−bjx))+F(λDTy(djy−bjy))+αF(A)∗⊙F(f))/(αF(A)∗⊙F(A)+λF(D)∗⊙F(D))). | (3.16) |
For the dx,dy-subproblem in (3.13), the solution can be updated via soft shrinkage:
dj+1x=shrink(Dxuj+1+bjx+qkx/λ,1/λ), | (3.17) |
dj+1y=shrink(Dyuj+1+bjy+qky/λ,1/λ), | (3.18) |
and the Lagrange multipliers are updated as follows:
bj+1x=bjx+Dxuj+1−dj+1x, | (3.19) |
bj+1y=bjy+Dyuj+1−dj+1y. | (3.20) |
Algorithm 2 presents the DCA-ADMM for solving the proposed model GFBTV. Since it tackles the unconstrained problem (2.3), Algorithm 2 is nearly identical to Algorithm 1, except that it omits the inner update of z required by the constrained model.
Algorithm 2 DCA-ADMM for solving the model GFBTV. |
Set u0=q0x=q0y=0, λ>0, MaxDCA, jmax, |
For k=0,1,2,⋯, MaxDCA, b0x=b0y=0, |
For j=0,1,2,⋯,jmax |
Calculate uj+1 by (3.15), |
Calculate dj+1x by (3.17), |
Calculate dj+1y by (3.18), |
Calculate bj+1x by (3.19), |
Calculate bj+1y by (3.20). |
End For |
uk+1=ujmax, |
(qk+1x,qk+1y)=(Dxuk+1−θDyuk+1,Dyuk+1−θDxuk+1)/√((1−θ)(|Dxuk+1|2+|Dyuk+1|2)+θ(|Dxuk+1|−|Dyuk+1|)2). |
End For |
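Putting the pieces together, here is a compact NumPy sketch of Algorithm 2 for the denoising case A = I. The parameter values (alpha, lam, the iteration counts) are illustrative choices, not the settings used in the experiments.

```python
import numpy as np

def dx(u):  return np.roll(u, -1, axis=1) - u    # forward differences, periodic
def dy(u):  return np.roll(u, -1, axis=0) - u
def dxT(v): return np.roll(v, 1, axis=1) - v     # adjoints of dx, dy
def dyT(v): return np.roll(v, 1, axis=0) - v

def shrink(s, tau):
    return np.sign(s) * np.maximum(np.abs(s) - tau, 0.0)

def gfbtv_denoise(f, alpha=15.0, lam=1.0, theta=0.01, n_dca=2, n_admm=60):
    """Sketch of the DCA-ADMM for the GFBTV model (2.3) with A = I."""
    m, n = f.shape
    kx = np.zeros((m, n)); kx[0, 0] = -1.0; kx[0, 1] = 1.0
    ky = np.zeros((m, n)); ky[0, 0] = -1.0; ky[1, 0] = 1.0
    denom = alpha + lam * (np.abs(np.fft.fft2(kx))**2 + np.abs(np.fft.fft2(ky))**2)

    u = f.copy()
    qx = np.zeros_like(f); qy = np.zeros_like(f)
    for _ in range(n_dca):                       # outer DCA iterations
        dxv = np.zeros_like(f); dyv = np.zeros_like(f)
        bx = np.zeros_like(f); by = np.zeros_like(f)
        for _ in range(n_admm):                  # inner ADMM iterations
            rhs = alpha * f + lam * (dxT(dxv - bx) + dyT(dyv - by))
            u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))  # u-update, A = I
            dxv = shrink(dx(u) + bx + qx / lam, 1.0 / lam)       # (3.17)
            dyv = shrink(dy(u) + by + qy / lam, 1.0 / lam)       # (3.18)
            bx += dx(u) - dxv                                    # (3.19)
            by += dy(u) - dyv                                    # (3.20)
        gx, gy = dx(u), dy(u)                    # q-update for the next DCA step
        g = np.sqrt((1 - theta) * (gx**2 + gy**2)
                    + theta * (np.abs(gx) - np.abs(gy))**2)
        safe = np.where(g > 0, g, 1.0)
        qx = np.where(g > 0, (gx - theta * gy) / safe, 0.0)
        qy = np.where(g > 0, (gy - theta * gx) / safe, 0.0)
    return u
```

On a small piecewise-constant test image with Gaussian noise, a couple of DCA iterations already reduce the error relative to the noisy input.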
We present experimental results for the model GFBTV-C (2.2) solved by Algorithm 1 for MRI, and for the model GFBTV (2.3) solved by Algorithm 2 for image denoising. From Figure 1, we find that θ = 0.01 and 0.1 approximate the L0 norm closely. Hence, we choose the control parameter θ = 0.01 for image denoising and θ = 0.1 for MRI. All experiments were executed on a desktop computer with a 2.9 GHz processor and 16 GB of RAM. The peak signal-to-noise ratio (PSNR) is used to evaluate the restoration quality, defined as
PSNR=20log10(255/√MSE) dB,
where MSE denotes the mean squared error between the recovered image uk and the original image uori. The structural similarity index (SSIM) [33] is also reported. Higher PSNR and SSIM values indicate better reconstruction quality.
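A PSNR implementation matching the definition above (peak value 255); the function name is ours.

```python
import numpy as np

def psnr(u, u_ref, peak=255.0):
    """Peak signal-to-noise ratio in dB: 20*log10(peak / sqrt(MSE))."""
    mse = np.mean((np.asarray(u, dtype=float) - np.asarray(u_ref, dtype=float))**2)
    return 20.0 * np.log10(peak / np.sqrt(mse))
```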
In this subsection, we consider image denoising with the nonconvex model GFBTV, in which the operator A is the identity matrix. We compare it with several well-known variational models and a deep learning-based approach, DnCNN [34]. The first variational model is the TV model, i.e., the classic ROF model [1]. The second is L1−γL2 with γ equal to 0.5 or 1, as discussed in [26]. The last is the transformed total variation (TTV) of [35], for which a DCA is formulated for the TTV minimization problem. The new model GFBTV is solved efficiently by Algorithm 2, where uk+1 is computed by (3.16) with A=I.
For this experiment, 100 images were randomly selected from the Pascal dataset as test images. The images were degraded by additive Gaussian noise with zero mean and standard deviation σ, with σ = 0.05 and 0.1. The penalty parameter μ was set to 5 for σ = 0.1 and to 15 for σ = 0.05. For the proposed model, we set θ=0.01. The remaining parameters use the default settings of [26,35] for a fair comparison. We initialize u0=f, and the maximum number of DCA iterations is 2.
The average comparison results are reported in Table 1, which lists the PSNR and SSIM values of the different algorithms for the two noise levels. Table 1 shows that DnCNN is superior to the gradient-based models in terms of average PSNR and SSIM. Although our new approach has slightly lower average values than DnCNN, it outperforms the TV-based models in SSIM at both noise levels. The denoising results of the six models are presented in Figures 2–5. As the four figures show, all of the methods effectively remove noise, and fine structures and details are easy to see in the zoomed regions. DnCNN attains the highest PSNR among the compared approaches. However, on the four test images, our proposed method exhibits higher SSIM values than both DnCNN and the other TV-based models, and preserves more details of the original images. Artifacts are also visible in the DnCNN, TV, and L1−L2 results. Hence, the experimental results show that the new model can effectively reduce blocky artifacts and obtain better visual quality.
Table 1. Average PSNR and SSIM of the compared methods on the Pascal test set.

| Image | σ | Metric | DnCNN | TV | L1−0.5L2 | L1−L2 | TTV | GFBTV |
|---|---|---|---|---|---|---|---|---|
| Pascal | 0.05 | PSNR | 34.39 | 31.98 | 33.18 | 32.88 | 33.01 | 33.48 |
| | | SSIM | 0.9048 | 0.8772 | 0.8953 | 0.8829 | 0.8930 | 0.8976 |
| | 0.10 | PSNR | 30.79 | 27.62 | 29.04 | 29.53 | 29.84 | 29.68 |
| | | SSIM | 0.8319 | 0.7995 | 0.8231 | 0.8283 | 0.8254 | 0.8290 |
In this subsection, we consider the problem of MR reconstruction. Compressive sensing based magnetic resonance imaging is widely used in imaging inverse problems. The sparse reconstruction model recovers images from under-sampled k-space data, which effectively addresses the data-acquisition efficiency problem in MRI. Consequently, we adopt the constrained formulation GFBTV-C (2.2) and use Algorithm 1 to solve it. We again compare with TV, L1−γL2 with γ = 0.5 or 1 [26], and TTV [35]. In this test, θ is set to 0.1. The parameters of the proposed and competing methods are left at their default settings in [26,35].
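For the CS-MRI setting A = RF, the sampling operator and its adjoint can be sketched as follows. The Cartesian mask construction and the function names are illustrative, not the exact masks used in the experiments.

```python
import numpy as np

def cartesian_mask(shape, n_lines, rng):
    """Hypothetical Cartesian undersampling mask: keep n_lines random k-space rows."""
    m, n = shape
    mask = np.zeros(shape, dtype=bool)
    rows = rng.choice(m, size=n_lines, replace=False)
    mask[rows, :] = True
    return mask

def A_forward(u, mask):
    """A = R F: orthonormal Fourier transform followed by sampling."""
    return np.fft.fft2(u, norm="ortho")[mask]

def A_adjoint(y, mask, shape):
    """A^T: zero-fill the sampled entries, then inverse Fourier transform."""
    k = np.zeros(shape, dtype=complex)
    k[mask] = y
    return np.fft.ifft2(k, norm="ortho")
```

With the orthonormal FFT, A and A_adjoint satisfy the adjoint identity exactly, which is what the ADMM updates rely on.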
First, we consider MRI reconstruction of a Shepp-Logan phantom from seven and eight radial projections. The reconstructions by the various models are shown in Figure 6. Among these methods, only GFBTV-C attains precise recovery from 7 lines, and the result is excellent. With 8 lines, GFBTV-C, TTV, and L1−0.5L2 achieve nearly perfect reconstruction, while the TV and L1−L2 reconstructions contain some artifacts.
Second, we use two brain images as test images: Brain-1 and Brain-2, with dimensions of 256 × 256, taken from [36,37]. We then perform undersampled reconstruction under different sampling patterns. The k-space data for Brain-1 were randomly sampled using a Cartesian mask with 87 lines. For Brain-2, a variable-density mask with a sampling rate of 30% was used to undersample the k-space data. The brain images reconstructed by the five models, together with the residue images (differences between the reconstructed and original images), are shown in Figures 7 and 8. All methods achieve good recovery, and visually there is no significant difference. The residue images, however, show that the proposed method obtains visually better reconstructions; the TV model leaves the largest residuals, while the TTV, L1−0.5L2, and L1−L2 methods have similar effects on the image details. Table 2 presents the numerical performance of the five compared models. From the table, we see that the TV model needs less CPU time (in seconds), while TTV, L1−0.5L2, L1−L2, and our GFBTV-C have similar time costs. Meanwhile, GFBTV-C provides improved reconstruction quality over TV, L1−0.5L2, and L1−L2, as it attains the highest PSNR and SSIM values: the PSNR values of the new method are improved by at least 0.82 dB, and the SSIM values by around 0.02. As the sampling rate becomes smaller, the improvement of the proposed model becomes more significant.
Table 2. PSNR, SSIM, and CPU time (in seconds) of the compared models for MRI reconstruction.

| Image | Mask | Metric | TV | L1−0.5L2 | L1−L2 | TTV | GFBTV-C |
|---|---|---|---|---|---|---|---|
| Shepp-Logan | 7 lines | PSNR | 18.44 | 21.45 | 20.66 | 20.25 | 42.17 |
| | | SSIM | 0.4541 | 0.6624 | 0.5615 | 0.5643 | 0.9937 |
| | | Time | 29.21 | 34.45 | 34.56 | 33.81 | 34.12 |
| | 8 lines | PSNR | 24.22 | 48.82 | 37.01 | 53.83 | 63.19 |
| | | SSIM | 0.7148 | 0.9971 | 0.9702 | 0.9965 | 0.9995 |
| | | Time | 28.87 | 33.90 | 34.46 | 33.03 | 34.87 |
| Brain-1 | Cartesian | PSNR | 32.61 | 35.73 | 35.37 | 31.59 | 36.65 |
| | | SSIM | 0.9564 | 0.9454 | 0.9465 | 0.9425 | 0.9741 |
| | | Time | 29.89 | 34.20 | 33.34 | 34.51 | 33.95 |
| Brain-2 | Random | PSNR | 30.49 | 31.57 | 30.99 | 30.09 | 32.42 |
| | | SSIM | 0.8614 | 0.8661 | 0.8122 | 0.8728 | 0.8922 |
| | | Time | 29.56 | 34.96 | 34.17 | 33.18 | 33.87 |
This paper introduces a novel nonconvex TV term based on the GFB function for restoring images corrupted by Gaussian noise, and uses the DCA to solve the resulting optimization problem. Our experiments on MRI and image denoising demonstrate that the proposed models achieve better solutions than both convex models and other nonconvex models.
We also observe that the new regularization function has an improved ability to encourage sparsity compared with the L1 and L1−γL2 norms, and approximates the L0 norm more closely; this property leads to better solutions. Furthermore, the proposed model succeeds in reconstructing the Shepp-Logan phantom from merely 7 radial lines. Hence, the superiority of our approach is most apparent when the sampling rate is low or the available measurements are limited. The model in this paper is mainly developed for image denoising and MRI; in the future, we intend to extend the method to computed tomography, parallel MRI, electrical impedance tomography, and so on. It is also essential to highlight that the convergence of our algorithm remains an open challenge and will be addressed in future work.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This work was supported in part by the National Natural Science Foundation of China under Grant 11901137 and Grant 61967004, in part by the Guangxi Key Laboratory of Automatic Detecting Technology and Instruments under Grant YQ22108, and in part by the Innovation Project of GUET Graduate Education under Grant 2022YCXS161.
The authors declare no conflict of interest.
[1] L. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms, Phys. D, 60 (1992), 259–268. https://doi.org/10.1016/0167-2789(92)90242-F
[2] K. Bredies, K. Kunisch, T. Pock, Total generalized variation, SIAM J. Imaging Sci., 3 (2010), 492–526. https://doi.org/10.1137/090769521
[3] L. Condat, Discrete total variation: New definition and minimization, SIAM J. Imaging Sci., 10 (2017), 1258–1290. https://doi.org/10.1137/16M1075247
[4] Z. Jia, M. K. Ng, W. Wang, Color image restoration by saturation-value total variation, SIAM J. Imaging Sci., 12 (2019), 972–1000. https://doi.org/10.1137/16M1075247
[5] S. Pan, Q. Dai, H. Chen, Global optimality analysis and solution of the total variation signal denoising model, Math. Biosci. Eng., 20 (2023), 6932–6946. https://doi.org/10.3934/mbe.2023299
[6] D. Xiao, J. Li, R. Zhao, S. Qi, Y. Kang, Iterative CT reconstruction based on ADMM using shearlet sparse regularization, Math. Biosci. Eng., 19 (2022), 11840–11853. https://doi.org/10.3934/mbe.2022552
[7] R. Tibshirani, Regression shrinkage and selection via the lasso, J. R. Stat. Soc. B, 58 (1996), 267–288. https://doi.org/10.1111/j.2517-6161.1996.tb02080.x
[8] J. Fan, H. Peng, Nonconcave penalized likelihood with a diverging number of parameters, Ann. Stat., 32 (2004), 928–961. https://doi.org/10.1214/009053604000000256
[9] T. Zhang, Analysis of multi-stage convex relaxation for sparse regularization, J. Mach. Learn. Res., 11 (2010), 1081–1107.
[10] S. Zhang, J. Xin, Minimization of transformed L1 penalty: theory, difference of convex function algorithm, and robust application in compressed sensing, Math. Program., 169 (2018), 307–336. https://doi.org/10.1007/s10107-018-1236-x
[11] Y. Lou, P. Yin, Q. He, J. Xin, Computing sparse representation in a highly coherent dictionary based on difference of L1 and L2, J. Sci. Comput., 64 (2015), 178–196. https://doi.org/10.1007/s10915-014-9930-1
[12] Y. Wang, J. Yang, W. Yin, Y. Zhang, A new alternating minimization algorithm for total variation image reconstruction, SIAM J. Imaging Sci., 1 (2008), 248–272. https://doi.org/10.1137/080724265
[13] M. Fortin, R. Glowinski, On decomposition-coordination methods using an augmented Lagrangian, Stud. Math. Appl., 15 (1983), 97–146. https://doi.org/10.1016/S0168-2024(08)70028-6
[14] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, Found. Trends Mach. Learn., 3 (2011), 1–122. https://doi.org/10.1561/2200000016
[15] A. Chambolle, T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, J. Math. Imaging Vision, 40 (2011), 120–145. https://doi.org/10.1007/s10851-010-0251-1
[16] T. Goldstein, S. Osher, The split Bregman method for L1-regularized problems, SIAM J. Imaging Sci., 2 (2009), 323–343. https://doi.org/10.1137/080725891
[17] J. Bolte, S. Sabach, M. Teboulle, Proximal alternating linearized minimization for nonconvex and nonsmooth problems, Math. Program., 146 (2014), 459–494. https://doi.org/10.1007/s10107-013-0701-9
[18] H. L. Thi, T. P. Dinh, DC programming and DCA: thirty years of developments, Math. Program., 169 (2018), 5–68. https://doi.org/10.1007/s10107-018-1235-y
[19] P. D. Tao, Algorithms for solving a class of nonconvex optimization problems, methods of subgradients, in North-Holland Mathematics Studies, 129 (1986), 249–271. https://doi.org/10.1016/S0304-0208(08)72402-2
[20] C. Chuang, H. He, Z. Zhang, A unified Douglas-Rachford algorithm for generalized DC programming, J. Global Optim., 82 (2022), 331–349. https://doi.org/10.1007/s10898-021-01079-y
[21] Y. You, Y. Niu, A refined inertial DC algorithm for DC programming, Optim. Eng., 24 (2023), 65–91. https://doi.org/10.1007/s11081-022-09716-5
[22] F. J. Aragon-Artacho, R. Campoy, P. T. Vuong, The boosted DC algorithm for linearly constrained DC programming, Set-Valued Var. Anal., 30 (2022), 1265–1289. https://doi.org/10.1007/s11228-022-00656-x
[23] F. J. Aragon-Artacho, M. T. Fleming, P. T. Vuong, Accelerating the DC algorithm for smooth functions, Math. Program., 169 (2018), 95–118. https://doi.org/10.1007/s10107-017-1180-1
[24] B. Wen, X. Chen, T. K. Pong, A proximal difference-of-convex algorithm with extrapolation, Comput. Optim. Appl., 69 (2018), 297–324. https://doi.org/10.1007/s10589-017-9954-1
[25] K. Bui, F. Park, Y. Lou, J. Xin, A weighted difference of anisotropic and isotropic total variation for relaxed Mumford-Shah color and multiphase image segmentation, SIAM J. Imaging Sci., 14 (2021), 1078–1113. https://doi.org/10.1137/20M1337041
[26] Y. Lou, T. Zeng, S. Osher, J. Xin, A weighted difference of anisotropic and isotropic total variation model for image processing, SIAM J. Imaging Sci., 8 (2015), 1798–1823. https://doi.org/10.1137/14098435X
[27] Z. Li, Y. Lou, T. Zeng, Variational multiplicative noise removal by DC programming, J. Sci. Comput., 68 (2016), 1200–1216. https://doi.org/10.1007/s10915-016-0175-z
[28] Y. Sun, H. Chen, J. Tao, L. Lei, Computed tomography image reconstruction from few views via Log-norm total variation minimization, Digit. Signal Process., 88 (2019), 172–181. https://doi.org/10.1016/j.dsp.2019.02.009
[29] B. Zhang, G. Zhu, Z. Zhu, A TV-log nonconvex approach for image deblurring with impulsive noise, Signal Process., 174 (2020), 107631. https://doi.org/10.1016/j.sigpro.2020.107631
[30] H. L. Thi, T. P. Dinh, Open issues and recent advances in DC programming and DCA, J. Global Optim., 2023. https://doi.org/10.1007/s10898-023-01272-1
[31] T. Wu, Y. Zhao, Z. Mao, L. Shi, Z. Li, T. Zeng, Image segmentation via Fischer-Burmeister total variation and thresholding, Adv. Appl. Math. Mech., 14 (2022), 960–988. https://doi.org/10.4208/aamm.OA-2021-0126
[32] M. Lustig, D. Donoho, J. Pauly, Sparse MRI: The application of compressed sensing for rapid MR imaging, Magn. Reson. Med., 58 (2007), 1182–1195. https://doi.org/10.1002/mrm.21391
[33] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, Image quality assessment: from error visibility to structural similarity, IEEE Trans. Image Process., 13 (2004), 600–612. https://doi.org/10.1109/TIP.2003.819861
[34] K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising, IEEE Trans. Image Process., 26 (2017), 3142–3155. https://doi.org/10.1109/TIP.2017.2662206
[35] L. Huo, W. Chen, H. Ge, M. K. Ng, Stable image reconstruction using transformed total variation minimization, SIAM J. Imaging Sci., 15 (2022), 1104–1139. https://doi.org/10.1137/120868281
[36] Y. Liu, Z. Zhan, J. Cai, D. Guo, Z. Chen, X. Qu, Projected iterative soft-thresholding algorithm for tight frames in compressed sensing magnetic resonance imaging, IEEE Trans. Med. Imaging, 35 (2016), 2130–2140. https://doi.org/10.1109/TMI.2016.2550080
[37] W. Wang, D. Cao, X. Li, N. Cao, Compressively sampled magnetic resonance imaging reconstruction based on split Bregman iteration with general non-uniform threshold shrinkage, Magn. Reson. Imaging, 85 (2022), 297–307. https://doi.org/10.1016/j.mri.2021.10.015
![]() |
Denoising results, reported as PSNR / SSIM for each method:

| Image | σ | DnCNN | TV | L1−0.5L2 | L1−L2 | TTV | GFBTV |
|---|---|---|---|---|---|---|---|
| Pascal | 0.05 | 34.39 / 0.9048 | 31.98 / 0.8772 | 33.18 / 0.8953 | 32.88 / 0.8829 | 33.01 / 0.8930 | 33.48 / 0.8976 |
| Pascal | 0.1 | 30.79 / 0.8319 | 27.62 / 0.7995 | 29.04 / 0.8231 | 29.53 / 0.8283 | 29.84 / 0.8254 | 29.68 / 0.8290 |

Reconstruction results for different sampling masks, reported as PSNR / SSIM / Time for each method:

| Image | Mask | TV | L1−0.5L2 | L1−L2 | TTV | GFBTV-C |
|---|---|---|---|---|---|---|
| Shepp-Logan | 7 lines | 18.44 / 0.4541 / 29.21 | 21.45 / 0.6624 / 34.45 | 20.66 / 0.5615 / 34.56 | 20.25 / 0.5643 / 33.81 | 42.17 / 0.9937 / 34.12 |
| Shepp-Logan | 8 lines | 24.22 / 0.7148 / 28.87 | 48.82 / 0.9971 / 33.90 | 37.01 / 0.9702 / 34.46 | 53.83 / 0.9965 / 33.03 | 63.19 / 0.9995 / 34.87 |
| Brain-1 | Cartesian | 32.61 / 0.9564 / 29.89 | 35.73 / 0.9454 / 34.20 | 35.37 / 0.9465 / 33.34 | 31.59 / 0.9425 / 34.51 | 36.65 / 0.9741 / 33.95 |
| Brain-2 | Random | 30.49 / 0.8614 / 29.56 | 31.57 / 0.8661 / 34.96 | 30.99 / 0.8122 / 34.17 | 30.09 / 0.8728 / 33.18 | 32.42 / 0.8922 / 33.87 |
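The PSNR figures in the tables above follow the standard definition 10·log10(MAX²/MSE). The paper does not include its evaluation code, so the following is only a minimal sketch of that metric, assuming intensities normalized to [0, 1] (peak = 1); the function name `psnr` is illustrative, not from the paper:

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and an estimate.

    Both inputs are float arrays with intensities in [0, peak].
    """
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    mse = np.mean((reference - estimate) ** 2)  # mean squared error
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Sanity check: a uniform error of 0.01 on a [0,1] image gives MSE = 1e-4,
# i.e. 10 * log10(1 / 1e-4) = 40 dB.
ref = np.zeros((64, 64))
est = ref + 0.01
print(round(psnr(ref, est), 2))  # → 40.0
```

SSIM, the other metric reported, is the structural similarity index of Wang et al. [33]; in practice it is usually computed with an existing implementation rather than reimplemented by hand.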