[1] Xiaoxue Zhao, Zhuchun Li. Synchronization of a Kuramoto-like model for power grids with frustration. Networks and Heterogeneous Media, 2020, 15(3): 543-553. doi: 10.3934/nhm.2020030
[2] Tingting Zhu. Synchronization of the generalized Kuramoto model with time delay and frustration. Networks and Heterogeneous Media, 2023, 18(4): 1772-1798. doi: 10.3934/nhm.2023077
[3] Seung-Yeal Ha, Yongduck Kim, Zhuchun Li. Asymptotic synchronous behavior of Kuramoto type models with frustrations. Networks and Heterogeneous Media, 2014, 9(1): 33-64. doi: 10.3934/nhm.2014.9.33
[4] Tingting Zhu. Emergence of synchronization in Kuramoto model with frustration under general network topology. Networks and Heterogeneous Media, 2022, 17(2): 255-291. doi: 10.3934/nhm.2022005
[5] Seung-Yeal Ha, Jaeseung Lee, Zhuchun Li. Emergence of local synchronization in an ensemble of heterogeneous Kuramoto oscillators. Networks and Heterogeneous Media, 2017, 12(1): 1-24. doi: 10.3934/nhm.2017001
[6] Seung-Yeal Ha, Jeongho Kim, Jinyeong Park, Xiongtao Zhang. Uniform stability and mean-field limit for the augmented Kuramoto model. Networks and Heterogeneous Media, 2018, 13(2): 297-322. doi: 10.3934/nhm.2018013
[7] Seung-Yeal Ha, Hansol Park, Yinglong Zhang. Nonlinear stability of stationary solutions to the Kuramoto-Sakaguchi equation with frustration. Networks and Heterogeneous Media, 2020, 15(3): 427-461. doi: 10.3934/nhm.2020026
[8] Seung-Yeal Ha, Se Eun Noh, Jinyeong Park. Practical synchronization of generalized Kuramoto systems with an intrinsic dynamics. Networks and Heterogeneous Media, 2015, 10(4): 787-807. doi: 10.3934/nhm.2015.10.787
[9] Vladimir Jaćimović, Aladin Crnkić. The general non-Abelian Kuramoto model on the 3-sphere. Networks and Heterogeneous Media, 2020, 15(1): 111-124. doi: 10.3934/nhm.2020005
[10] Young-Pil Choi, Seung-Yeal Ha, Seok-Bae Yun. Global existence and asymptotic behavior of measure valued solutions to the kinetic Kuramoto-Daido model with inertia. Networks and Heterogeneous Media, 2013, 8(4): 943-968. doi: 10.3934/nhm.2013.8.943
Synchronization in complex networks has been a focus of interest for researchers from different disciplines [1,2,4,8,15]. In this paper, we investigate synchronous phenomena in an ensemble of Kuramoto-like oscillators, which is regarded as a model for power grids. In [9], a mathematical model for the power grid is given by
$$P_i^{\mathrm{source}} = I\ddot{\theta}_i\dot{\theta}_i + K_D(\dot{\theta}_i)^2 - \sum_{l=1}^{N} a_{il}\sin(\theta_l-\theta_i), \quad i=1,2,\dots,N, \tag{1}$$
where $P_i^{\mathrm{source}}$ is the power source at the $i$-th node, $I$ is the moment of inertia, $K_D$ is the damping (friction) coefficient, and $a_{il}$ is the coupling strength between the $i$-th and the $l$-th nodes.
By neglecting the inertia term, denoting $\omega_i := P_i^{\mathrm{source}}/K_D$ and taking the uniform coupling $a_{il} = \frac{KK_D}{N}$, system (1) reduces to
$$(\dot{\theta}_i)^2 = \omega_i + \frac{K}{N}\sum_{l=1}^{N}\sin(\theta_l-\theta_i), \quad \dot{\theta}_i>0,\quad i=1,2,\dots,N. \tag{2}$$
Here, the setting $\dot{\theta}_i>0$ reflects the fact that each rotor rotates in a fixed direction.
If we further take into account the interaction frustration $\alpha$ in the coupling function, the model becomes
$$(\dot{\theta}_i)^2 = \omega_i + \frac{K}{N}\sum_{l=1}^{N}\sin(\theta_l-\theta_i+\alpha), \quad \dot{\theta}_i>0,\quad i=1,2,\dots,N. \tag{3}$$
We will find a trapping region such that any nonstationary state located in this region will evolve to a synchronous state.
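To make the dynamics concrete, the following minimal numerical sketch (not part of the original analysis) integrates the positive branch of (3), $\dot{\theta}_i=\sqrt{\omega_i+\frac{K}{N}\sum_{l=1}^{N}\sin(\theta_l-\theta_i+\alpha)}$, with a forward Euler scheme; the parameter values and the helper name `simulate` are illustrative assumptions.

```python
# Minimal sketch (assumed parameters, not from the paper): forward-Euler integration of
# the Kuramoto-like model (3) via theta_i' = sqrt(omega_i + (K/N) * sum_l sin(theta_l - theta_i + alpha)).
import numpy as np

def simulate(theta0, omega, K, alpha, dt=1e-3, steps=20000):
    """Integrate (3) using the positive branch of the square root."""
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        # pairwise phase differences: diff[i, l] = theta_l - theta_i
        diff = theta[None, :] - theta[:, None]
        rhs = omega + (K / len(theta)) * np.sin(diff + alpha).sum(axis=1)
        theta = theta + dt * np.sqrt(rhs)   # rhs stays positive as long as min(omega) > K
    return theta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, K, alpha = 10, 0.5, 0.1
    omega = rng.uniform(2.0, 2.2, size=N)      # natural frequencies with min(omega) > K
    theta0 = rng.uniform(0.0, 1.0, size=N)     # initial phases confined in a short arc
    theta_T = simulate(theta0, omega, K, alpha)
    print("final phase diameter D(theta):", theta_T.max() - theta_T.min())
```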
The contributions of this paper are twofold: First, for identical oscillators without frustration, we show that initial phase configurations located in a half circle converge to complete phase and frequency synchronization. This extends the analytical results in [5], in which the initial phase configuration for synchronization needs to be confined in a quarter of the circle. Second, we consider nonidentical oscillators with frustration and present a framework leading to the boundedness of the phase diameter and to complete frequency synchronization. To the best of our knowledge, this is the first result on the synchronization of (3) with nonidentical oscillators and frustration.
The rest of this paper is organized as follows. In Section 2, we recall the definitions for synchronization and summarize our main results. In Section 3, we give synchronization analysis and prove the main results. Finally, Section 4 is devoted to a concluding summary.
Notations. We use the following simplified notations throughout this paper:
$$\begin{gathered}
\nu_i:=\dot{\theta}_i,\ i=1,2,\dots,N,\qquad \omega:=(\omega_1,\omega_2,\dots,\omega_N),\qquad \bar{\omega}:=\max_{1\le i\le N}\omega_i,\qquad \underline{\omega}:=\min_{1\le i\le N}\omega_i,\qquad D(\omega):=\bar{\omega}-\underline{\omega},\\
\theta_M:=\max_{1\le i\le N}\theta_i,\qquad \theta_m:=\min_{1\le i\le N}\theta_i,\qquad D(\theta):=\theta_M-\theta_m,\qquad \nu_M:=\max_{1\le i\le N}\nu_i,\qquad \nu_m:=\min_{1\le i\le N}\nu_i,\qquad D(\nu):=\nu_M-\nu_m,\\
\theta_{\nu_M}\in\{\theta_j\,|\,\nu_j=\nu_M\},\qquad \theta_{\nu_m}\in\{\theta_j\,|\,\nu_j=\nu_m\}.
\end{gathered}$$
In this paper, we consider the system
$$(\dot{\theta}_i)^2 = \omega_i + \frac{K}{N}\sum_{l=1}^{N}\sin(\theta_l-\theta_i+\alpha), \quad \dot{\theta}_i>0,\quad \alpha\in\Big(-\frac{\pi}{4},\frac{\pi}{4}\Big),\quad \theta_i(0)=\theta_i^0,\quad i=1,2,\dots,N. \tag{4}$$
Next we introduce the concepts of complete synchronization and conclude this introductory section with the main result of this paper.
Definition 2.1. Let $\theta:=(\theta_1,\theta_2,\dots,\theta_N)$ be a solution to system (4). Then
1. it exhibits asymptotically complete phase synchronization if
$$\lim_{t\to\infty}\big(\theta_i(t)-\theta_j(t)\big)=0,\quad\forall\, i\ne j;$$
2. it exhibits asymptotically complete frequency synchronization if
$$\lim_{t\to\infty}\big(\dot{\theta}_i(t)-\dot{\theta}_j(t)\big)=0,\quad\forall\, i\ne j.$$
For identical oscillators without frustration, we have the following result.
Theorem 2.2. Let $\theta:=(\theta_1,\theta_2,\dots,\theta_N)$ be a solution to system (4) with identical natural frequencies $\omega_i=\omega_0$ and zero frustration $\alpha=0$. If the initial configuration satisfies
$$\theta^0\in\mathcal{A}:=\{\theta\in[0,2\pi)^N: D(\theta)<\pi\},$$
then there exist positive constants $\lambda_1$, $\lambda_2$ and a time $t_0\ge0$ such that
$$D(\theta(t))\le D(\theta^0)e^{-\lambda_1 t},\quad t\ge0, \tag{5}$$
and
$$D(\nu(t))\le D(\nu(t_0))e^{-\lambda_2(t-t_0)},\quad t\ge t_0. \tag{6}$$
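The following minimal sketch (illustrative, not from the paper) checks the decay estimate (5) numerically for identical oscillators with zero frustration; the decay rate used for the comparison, $\lambda_1=\frac{K\cos(D^\infty/2)}{\pi\sqrt{\omega_0+K}}$, is one admissible choice read off from the proof (see (10) below), and all numerical values are assumptions.

```python
# Minimal sketch (assumed values, not from the paper): compare D(theta(t)) with the
# exponential bound of Theorem 2.2 for identical oscillators (omega_i = omega_0, alpha = 0).
import numpy as np

N, K, omega0, dt, steps = 20, 1.0, 4.0, 1e-3, 30000
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 3.0, size=N)      # D(theta^0) < pi, i.e. inside a half circle
D0 = theta.max() - theta.min()
Dinf = 0.5 * (D0 + np.pi)                  # any constant with D(theta^0) < Dinf < pi
lam1 = K * np.cos(Dinf / 2) / (np.pi * np.sqrt(omega0 + K))   # rate read off from (10)

for n in range(steps + 1):
    t = n * dt
    if n % 10000 == 0:
        D = theta.max() - theta.min()
        print(f"t={t:6.2f}  D(theta)={D:.3e}  bound={D0 * np.exp(-lam1 * t):.3e}")
    diff = theta[None, :] - theta[:, None]             # diff[i, l] = theta_l - theta_i
    theta = theta + dt * np.sqrt(omega0 + (K / N) * np.sin(diff).sum(axis=1))
```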
Next we introduce the main result for nonidentical oscillators with frustration. For this case, we define the critical coupling strength
$$K_c:=\frac{D(\omega)\sqrt{2\bar{\omega}}}{1-\sqrt{2\bar{\omega}}\sin|\alpha|}>0.$$
For suitable parameters (see Remark 1), we denote by $D_1^\infty$ and $D_*^\infty$ the two angles determined by
$$\sin D_1^\infty=\sin D_*^\infty:=\frac{\sqrt{\bar{\omega}+K}\,\big(D(\omega)+K\sin|\alpha|\big)}{K\sqrt{\underline{\omega}-K}},\qquad 0<D_1^\infty<\frac{\pi}{2}<D_*^\infty<\pi.$$
Theorem 2.3. Let $\theta:=(\theta_1,\theta_2,\dots,\theta_N)$ be a solution to system (4), and suppose that the parameters satisfy $K>K_c$ and $\bar{\omega}+K\le 2\bar{\omega}(\underline{\omega}-K)$. If the initial configuration satisfies
$$\theta^0\in\mathcal{B}:=\{\theta\in[0,2\pi)^N\,|\,D(\theta)<D_*^\infty-|\alpha|\},$$
then for any sufficiently small $\varepsilon>0$ there exist positive constants $\lambda_3$ and $T$ such that
$$D(\nu(t))\le D(\nu(T))e^{-\lambda_3(t-T)},\quad t\ge T. \tag{7}$$
Remark 1. If the parametric conditions in Theorem 2.3 are fulfilled, the reference angles $D_1^\infty$ and $D_*^\infty$ are well defined. Indeed, the assumption $K>K_c$ means that
$$\frac{D(\omega)\sqrt{2\bar{\omega}}}{1-\sqrt{2\bar{\omega}}\sin|\alpha|}<K,\qquad 1-\sqrt{2\bar{\omega}}\sin|\alpha|>0.$$
This implies
$$\frac{\sqrt{2\bar{\omega}}\,\big(D(\omega)+K\sin|\alpha|\big)}{K}<1.$$
Then, by the assumption $\bar{\omega}+K\le 2\bar{\omega}(\underline{\omega}-K)$, i.e., $\sqrt{\bar{\omega}+K}\le\sqrt{2\bar{\omega}}\sqrt{\underline{\omega}-K}$, we obtain
$$\sin D_1^\infty=\sin D_*^\infty:=\frac{\sqrt{\bar{\omega}+K}\,\big(D(\omega)+K\sin|\alpha|\big)}{K\sqrt{\underline{\omega}-K}}\le\frac{\sqrt{2\bar{\omega}}\,\big(D(\omega)+K\sin|\alpha|\big)}{K}<1.$$
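As a quick sanity check of these quantities, the following sketch (with assumed parameter values, not taken from the paper) verifies $K>K_c$ and $\sin D_*^\infty<1$ numerically and recovers the two reference angles.

```python
# Minimal sketch (assumed values, not from the paper): check K > K_c and sin(D_*^inf) < 1,
# then recover D_1^inf in (0, pi/2) and D_*^inf in (pi/2, pi).
import numpy as np

w_max, w_min, alpha, K = 2.0, 1.9, 0.05, 0.6     # assumed parameters with w_min > K
D_omega = w_max - w_min

K_c = D_omega * np.sqrt(2 * w_max) / (1 - np.sqrt(2 * w_max) * np.sin(abs(alpha)))
s = np.sqrt(w_max + K) * (D_omega + K * np.sin(abs(alpha))) / (K * np.sqrt(w_min - K))

print("K_c =", K_c, " K > K_c:", K > K_c)
print("sin(D_*^inf) =", s, " (must be < 1)")
D1 = np.arcsin(s)          # D_1^inf, the solution in (0, pi/2)
Dstar = np.pi - D1         # D_*^inf, the other solution of sin(D) = s in (pi/2, pi)
print("D_1^inf =", D1, " D_*^inf =", Dstar)
```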
Remark 2. In order to make sense of $\sqrt{\underline{\omega}-K}$ in the definition of $D_*^\infty$ and to keep $\dot{\theta}_i>0$ in (4), the coupling strength is implicitly required to satisfy $K<\underline{\omega}$ throughout this part.
In this subsection we consider the system (4) with identical natural frequencies and zero frustration:
$$(\dot{\theta}_i)^2=\omega_0+\frac{K}{N}\sum_{l=1}^{N}\sin(\theta_l-\theta_i),\quad\dot{\theta}_i>0,\quad i=1,2,\dots,N. \tag{8}$$
To obtain the complete synchronization, we need to derive a trapping region. We start with two elementary estimates for the transient frequencies.
Lemma 3.1. Suppose $\theta$ is a solution to system (8). Then for any indices $i,j$,
$$(\dot{\theta}_i-\dot{\theta}_j)(\dot{\theta}_i+\dot{\theta}_j)=\frac{2K}{N}\sum_{l=1}^{N}\cos\Big(\theta_l-\frac{\theta_i+\theta_j}{2}\Big)\sin\frac{\theta_j-\theta_i}{2}.$$
Proof. It is immediately obtained from (8) together with the identity $\sin a-\sin b=2\cos\frac{a+b}{2}\sin\frac{a-b}{2}$.
Lemma 3.2. Suppose $\theta$ is a solution to system (8). Then for each $i$,
$$\dot{\theta}_i\le\sqrt{\omega_0+K}.$$
Proof. It follows from (8) and $\sin(\theta_l-\theta_i)\le1$ that
$$(\dot{\theta}_i)^2=\omega_0+\frac{K}{N}\sum_{l=1}^{N}\sin(\theta_l-\theta_i)\le\omega_0+K.$$
Next we give an estimate for the trapping region and prove Theorem 2.2. For this aim, we will use the time derivative of $D(\theta(t))^2$ and $D(\nu(t))$.
Lemma 3.3. Let $\theta$ be a solution to system (8) and let $D^\infty$ be any constant with $D(\theta^0)<D^\infty<\pi$. Then $D(\theta(t))<D^\infty$ for all $t\ge0$.
Proof. For $D^\infty$ as above, we define the set
$$\mathcal{T}:=\{T\in[0,+\infty)\,|\,D(\theta(t))<D^\infty,\ \forall\, t\in[0,T)\}.$$
Since $D(\theta^0)<D^\infty$ and $D(\theta(t))$ is continuous, there exists $\eta>0$ such that
$$D(\theta(t))<D^\infty,\quad t\in[0,\eta).$$
Therefore, the set $\mathcal{T}$ is nonempty. Let $T^*:=\sup\mathcal{T}$. We claim that
$$T^*=\infty. \tag{9}$$
Suppose to the contrary that $T^*<\infty$. Then
$$D(\theta(t))<D^\infty,\ t\in[0,T^*),\qquad D(\theta(T^*))=D^\infty.$$
We use Lemma 3.1 and Lemma 3.2 to obtain
$$\begin{aligned}
\frac{1}{2}\frac{d}{dt}D(\theta(t))^2&=D(\theta(t))\frac{d}{dt}D(\theta(t))=(\theta_M-\theta_m)(\dot{\theta}_M-\dot{\theta}_m)\\
&=(\theta_M-\theta_m)\frac{1}{\dot{\theta}_M+\dot{\theta}_m}\cdot\frac{2K}{N}\sum_{l=1}^{N}\cos\Big(\theta_l-\frac{\theta_M+\theta_m}{2}\Big)\sin\Big(\frac{\theta_m-\theta_M}{2}\Big)\\
&\le(\theta_M-\theta_m)\frac{1}{\dot{\theta}_M+\dot{\theta}_m}\cdot\frac{2K}{N}\sum_{l=1}^{N}\cos\frac{D^\infty}{2}\sin\Big(\frac{\theta_m-\theta_M}{2}\Big)\\
&\le(\theta_M-\theta_m)\frac{1}{\sqrt{\omega_0+K}}\cdot\frac{K}{N}\sum_{l=1}^{N}\cos\frac{D^\infty}{2}\sin\Big(\frac{\theta_m-\theta_M}{2}\Big)\\
&=-\frac{2K\cos\frac{D^\infty}{2}}{\sqrt{\omega_0+K}}\cdot\frac{D(\theta)}{2}\sin\frac{D(\theta)}{2}\\
&\le-\frac{K\cos\frac{D^\infty}{2}}{\pi\sqrt{\omega_0+K}}D(\theta)^2,\quad t\in[0,T^*).
\end{aligned}$$
Here we used the relations
$$-\frac{D^\infty}{2}<-\frac{D(\theta)}{2}\le\frac{\theta_l-\theta_M}{2}\le0\le\frac{\theta_l-\theta_m}{2}\le\frac{D(\theta)}{2}<\frac{D^\infty}{2}$$
and
$$x\sin x\ge\frac{2}{\pi}x^2,\quad x\in\Big[-\frac{\pi}{2},\frac{\pi}{2}\Big].$$
Therefore, we have
$$\frac{d}{dt}D(\theta)\le-\frac{K\cos\frac{D^\infty}{2}}{\pi\sqrt{\omega_0+K}}D(\theta),\quad t\in[0,T^*), \tag{10}$$
which implies that
$$D(\theta(T^*))\le D(\theta^0)e^{-\frac{K\cos\frac{D^\infty}{2}}{\pi\sqrt{\omega_0+K}}T^*}<D(\theta^0)<D^\infty.$$
This is contradictory to $D(\theta(T^*))=D^\infty$. Therefore $T^*=\infty$, i.e., $D(\theta(t))<D^\infty$ for all $t\ge0$.
Now we can give a proof for Theorem 2.2.
Proof of Theorem 2.2. Since $D(\theta^0)<\pi$, we can choose $D^\infty$ with $D(\theta^0)<D^\infty<\pi$. According to Lemma 3.3, we substitute this $D^\infty$ into (10), which then holds for all $t\ge0$, and obtain (5) with $\lambda_1=\frac{K\cos\frac{D^\infty}{2}}{\pi\sqrt{\omega_0+K}}$.
On the other hand, by (5) there exist $t_0\ge0$ and $\delta\in\big(0,\frac{\pi}{2}\big)$ such that $D(\theta(t))\le\delta<\frac{\pi}{2}$ for all $t\ge t_0$. We differentiate (8) with respect to $t$ to find
$$\dot{\nu}_i=\frac{K}{2N\nu_i}\sum_{l=1}^{N}\cos(\theta_l-\theta_i)(\nu_l-\nu_i).$$
Using Lemma 3.2, we now consider the temporal evolution of $D(\nu(t))$:
$$\begin{aligned}
\frac{d}{dt}D(\nu)&=\dot{\nu}_M-\dot{\nu}_m\\
&=\frac{K}{2N\nu_M}\sum_{l=1}^{N}\cos(\theta_l-\theta_{\nu_M})(\nu_l-\nu_M)-\frac{K}{2N\nu_m}\sum_{l=1}^{N}\cos(\theta_l-\theta_{\nu_m})(\nu_l-\nu_m)\\
&\le\frac{K\cos\delta}{2N\nu_M}\sum_{l=1}^{N}(\nu_l-\nu_M)-\frac{K\cos\delta}{2N\nu_m}\sum_{l=1}^{N}(\nu_l-\nu_m)\\
&\le\frac{K\cos\delta}{2N\sqrt{\omega_0+K}}\sum_{l=1}^{N}(\nu_l-\nu_M)-\frac{K\cos\delta}{2N\sqrt{\omega_0+K}}\sum_{l=1}^{N}(\nu_l-\nu_m)\\
&=\frac{K\cos\delta}{2N\sqrt{\omega_0+K}}\sum_{l=1}^{N}\big(\nu_l-\nu_M-\nu_l+\nu_m\big)\\
&=-\frac{K\cos\delta}{2\sqrt{\omega_0+K}}D(\nu),\quad t\ge t_0.
\end{aligned}$$
This implies that
$$D(\nu(t))\le D(\nu(t_0))e^{-\frac{K\cos\delta}{2\sqrt{\omega_0+K}}(t-t_0)},\quad t\ge t_0,$$
which proves (6) with $\lambda_2=\frac{K\cos\delta}{2\sqrt{\omega_0+K}}$.
Remark 3. Theorem 2.2 shows that, as long as the initial phases are confined inside an arc with geodesic length strictly less than $\pi$, complete phase and frequency synchronization occurs exponentially fast.
In this subsection, we prove the main result for nonidentical oscillators with frustration.
Lemma 3.4. Let $\theta$ be a solution to system (4). Then for any indices $i,j$,
$$(\dot{\theta}_i-\dot{\theta}_j)(\dot{\theta}_i+\dot{\theta}_j)\le D(\omega)+\frac{K}{N}\sum_{l=1}^{N}\big[\sin(\theta_l-\theta_i+\alpha)-\sin(\theta_l-\theta_j+\alpha)\big].$$
Proof. By (4), the bound $\omega_i-\omega_j\le D(\omega)$ and the identity
$$(\dot{\theta}_i-\dot{\theta}_j)(\dot{\theta}_i+\dot{\theta}_j)=(\dot{\theta}_i)^2-(\dot{\theta}_j)^2,$$
the result is immediately obtained.
Lemma 3.5. Let $\theta$ be a solution to system (4) with $0<K<\underline{\omega}$. Then
$$\dot{\theta}_i\in\big[\sqrt{\underline{\omega}-K},\sqrt{\bar{\omega}+K}\,\big],\quad\forall\, i=1,2,\dots,N.$$
Proof. From (4), we have
$$\underline{\omega}-K\le(\dot{\theta}_i)^2\le\bar{\omega}+K,\quad\forall\, i=1,2,\dots,N,$$
and also because $\dot{\theta}_i>0$, the desired estimate follows.
Lemma 3.6. Let $\theta$ be a solution to system (4) satisfying the assumptions of Theorem 2.3. Then $D(\theta(t))\le D_*^\infty-|\alpha|$ for all $t\ge0$.
Proof. We define the set
$$\mathcal{T}:=\{T\in[0,+\infty)\,|\,D(\theta(t))<D_*^\infty-|\alpha|,\ \forall\, t\in[0,T)\},\qquad T^*:=\sup\mathcal{T}.$$
Since $D(\theta^0)<D_*^\infty-|\alpha|$ and $D(\theta(t))$ is continuous, the set $\mathcal{T}$ is nonempty and $T^*$ is well defined. We claim that
$$T^*=\infty.$$
Suppose to the contrary that $T^*<\infty$. Then
$$D(\theta(t))<D_*^\infty-|\alpha|,\ t\in[0,T^*),\qquad D(\theta(T^*))=D_*^\infty-|\alpha|.$$
We use Lemma 3.4 to obtain
$$\frac{1}{2}\frac{d}{dt}D(\theta)^2=D(\theta)\frac{d}{dt}D(\theta)=D(\theta)(\dot{\theta}_M-\dot{\theta}_m)\le D(\theta)\frac{1}{\dot{\theta}_M+\dot{\theta}_m}\underbrace{\Big[D(\omega)+\frac{K}{N}\sum_{l=1}^{N}\big(\sin(\theta_l-\theta_M+\alpha)-\sin(\theta_l-\theta_m+\alpha)\big)\Big]}_{=:\,\mathcal{I}}.$$
For the term $\mathcal{I}$, we have
$$\mathcal{I}=D(\omega)+\frac{K\cos\alpha}{N}\sum_{l=1}^{N}\big[\sin(\theta_l-\theta_M)-\sin(\theta_l-\theta_m)\big]+\frac{K\sin\alpha}{N}\sum_{l=1}^{N}\big[\cos(\theta_l-\theta_M)-\cos(\theta_l-\theta_m)\big].$$
We now consider two cases according to the sign of $\alpha$.
(1) Let $\alpha\in\big[0,\frac{\pi}{4}\big)$. Then
$$\begin{aligned}
\mathcal{I}&\le D(\omega)+\frac{K\cos\alpha\sin D(\theta)}{ND(\theta)}\sum_{l=1}^{N}\big[(\theta_l-\theta_M)-(\theta_l-\theta_m)\big]+\frac{K\sin\alpha}{N}\sum_{l=1}^{N}\big[1-\cos D(\theta)\big]\\
&=D(\omega)-K\big[\sin(D(\theta)+\alpha)-\sin\alpha\big]=D(\omega)-K\big[\sin(D(\theta)+|\alpha|)-\sin|\alpha|\big].
\end{aligned}$$
(2) Let $\alpha\in\big(-\frac{\pi}{4},0\big)$. Then
$$\begin{aligned}
\mathcal{I}&\le D(\omega)+\frac{K\cos\alpha\sin D(\theta)}{ND(\theta)}\sum_{l=1}^{N}\big[(\theta_l-\theta_M)-(\theta_l-\theta_m)\big]+\frac{K\sin\alpha}{N}\sum_{l=1}^{N}\big[\cos D(\theta)-1\big]\\
&=D(\omega)-K\big[\sin(D(\theta)-\alpha)+\sin\alpha\big]=D(\omega)-K\big[\sin(D(\theta)+|\alpha|)-\sin|\alpha|\big].
\end{aligned}$$
Here we used the relations
$$\frac{\sin(\theta_l-\theta_M)}{\theta_l-\theta_M},\ \frac{\sin(\theta_l-\theta_m)}{\theta_l-\theta_m}\ge\frac{\sin D(\theta)}{D(\theta)},$$
and
$$\cos D(\theta)\le\cos(\theta_l-\theta_M),\ \cos(\theta_l-\theta_m)\le1,\quad l=1,2,\dots,N.$$
Since $0\le D(\theta(t))+|\alpha|<D_*^\infty<\pi$ on $[0,T^*)$ and $\sin x\ge\frac{\sin D_*^\infty}{D_*^\infty}x$ for $x\in[0,D_*^\infty]$, in both cases we obtain
$$\mathcal{I}\le D(\omega)-K\big[\sin(D(\theta)+|\alpha|)-\sin|\alpha|\big] \tag{11}$$
$$\ \ \le D(\omega)+K\sin|\alpha|-\frac{K\sin D_*^\infty}{D_*^\infty}\big(D(\theta)+|\alpha|\big). \tag{12}$$
By (12) and Lemma 3.5 we have
$$\begin{aligned}
\frac{1}{2}\frac{d}{dt}D(\theta)^2&\le D(\theta)\frac{1}{\dot{\theta}_M+\dot{\theta}_m}\Big(D(\omega)+K\sin|\alpha|-\frac{K\sin D_*^\infty}{D_*^\infty}\big(D(\theta)+|\alpha|\big)\Big)\\
&=\frac{D(\omega)+K\sin|\alpha|}{\dot{\theta}_M+\dot{\theta}_m}D(\theta)-\frac{K\sin D_*^\infty}{D_*^\infty(\dot{\theta}_M+\dot{\theta}_m)}D(\theta)\big(D(\theta)+|\alpha|\big)\\
&\le\frac{D(\omega)+K\sin|\alpha|}{2\sqrt{\underline{\omega}-K}}D(\theta)-\frac{K\sin D_*^\infty}{2D_*^\infty\sqrt{\bar{\omega}+K}}D(\theta)\big(D(\theta)+|\alpha|\big),\quad t\in[0,T^*).
\end{aligned}$$
Then we obtain
$$\frac{d}{dt}D(\theta)\le\frac{D(\omega)+K\sin|\alpha|}{2\sqrt{\underline{\omega}-K}}-\frac{K\sin D_*^\infty}{2D_*^\infty\sqrt{\bar{\omega}+K}}\big(D(\theta)+|\alpha|\big),\quad t\in[0,T^*),$$
i.e.,
$$\frac{d}{dt}\big(D(\theta)+|\alpha|\big)\le\frac{D(\omega)+K\sin|\alpha|}{2\sqrt{\underline{\omega}-K}}-\frac{K\sin D_*^\infty}{2D_*^\infty\sqrt{\bar{\omega}+K}}\big(D(\theta)+|\alpha|\big)=\frac{K\sin D_*^\infty}{2\sqrt{\bar{\omega}+K}}-\frac{K\sin D_*^\infty}{2D_*^\infty\sqrt{\bar{\omega}+K}}\big(D(\theta)+|\alpha|\big),\quad t\in[0,T^*).$$
Here we used the definition of $D_*^\infty$. Solving this differential inequality (Gronwall's lemma), we obtain
$$D(\theta(t))+|\alpha|\le D_*^\infty+\big(D(\theta^0)+|\alpha|-D_*^\infty\big)e^{-\frac{K\sin D_*^\infty}{2D_*^\infty\sqrt{\bar{\omega}+K}}t},\quad t\in[0,T^*).$$
Thus
$$D(\theta(t))\le\big(D(\theta^0)+|\alpha|-D_*^\infty\big)e^{-\frac{K\sin D_*^\infty}{2D_*^\infty\sqrt{\bar{\omega}+K}}t}+D_*^\infty-|\alpha|,\quad t\in[0,T^*).$$
Letting $t\to T^{*-}$ and using $D(\theta^0)+|\alpha|-D_*^\infty<0$, we obtain
$$D(\theta(T^*))\le\big(D(\theta^0)+|\alpha|-D_*^\infty\big)e^{-\frac{K\sin D_*^\infty}{2D_*^\infty\sqrt{\bar{\omega}+K}}T^*}+D_*^\infty-|\alpha|<D_*^\infty-|\alpha|,$$
which is contradictory to $D(\theta(T^*))=D_*^\infty-|\alpha|$. Therefore
$$T^*=\infty.$$
That is,
$$D(\theta(t))\le D_*^\infty-|\alpha|,\quad\forall\, t\ge0.$$
Lemma 3.7. Let $\theta$ be a solution to system (4) satisfying the assumptions of Theorem 2.3. Then
$$\frac{d}{dt}D(\theta(t))\le\frac{D(\omega)+K\sin|\alpha|}{2\sqrt{\underline{\omega}-K}}-\frac{K}{2\sqrt{\bar{\omega}+K}}\sin\big(D(\theta)+|\alpha|\big),\quad t\ge0.$$
Proof. It follows from (11), Lemma 3.5 and Lemma 3.6 that
$$\begin{aligned}
\frac{1}{2}\frac{d}{dt}D(\theta)^2&=D(\theta)\frac{d}{dt}D(\theta)\le D(\theta)\frac{1}{\dot{\theta}_M+\dot{\theta}_m}\Big[D(\omega)-K\big(\sin(D(\theta)+|\alpha|)-\sin|\alpha|\big)\Big]\\
&=\frac{D(\omega)+K\sin|\alpha|}{\dot{\theta}_M+\dot{\theta}_m}D(\theta)-\frac{K\sin(D(\theta)+|\alpha|)}{\dot{\theta}_M+\dot{\theta}_m}D(\theta)\\
&\le\frac{D(\omega)+K\sin|\alpha|}{2\sqrt{\underline{\omega}-K}}D(\theta)-\frac{K\sin(D(\theta)+|\alpha|)}{2\sqrt{\bar{\omega}+K}}D(\theta),\quad t\ge0.
\end{aligned}$$
The proof is completed.
Lemma 3.8. Let $\theta$ be a solution to system (4) satisfying the assumptions of Theorem 2.3. Then for any small $\varepsilon>0$, there exists $T>0$ such that
$$D(\theta(t))<D_1^\infty-|\alpha|+\varepsilon,\quad t\ge T.$$
Proof. Consider the ordinary differential equation:
$$\dot{y}=\frac{D(\omega)+K\sin|\alpha|}{2\sqrt{\underline{\omega}-K}}-\frac{K}{2\sqrt{\bar{\omega}+K}}\sin y,\qquad y(0)=y_0\in[0,D_*^\infty). \tag{13}$$
It is easy to find that $y^*=D_1^\infty$ is a locally stable equilibrium of (13) and that every solution with initial datum $y_0\in[0,D_*^\infty)$ converges to $y^*$ as $t\to\infty$. Hence, for any small $\varepsilon>0$ there exists $T>0$ such that
$$|y(t)-y^*|<\varepsilon,\quad t\ge T.$$
In particular, by Lemma 3.7 and the comparison principle applied with $y_0=D(\theta^0)+|\alpha|\in[0,D_*^\infty)$, we have $D(\theta(t))+|\alpha|\le y(t)$ and therefore
$$D(\theta(t))+|\alpha|<D_1^\infty+\varepsilon,\quad t\ge T,$$
which is the desired result.
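A minimal numerical sketch of the scalar equation (13) (illustrative, not part of the original proof): for the assumed parameter values below, every solution started in $[0,D_*^\infty)$ is seen to approach the stable equilibrium $y^*=D_1^\infty$.

```python
# Minimal sketch (assumed values, not from the paper): integrate the scalar ODE (13) and
# observe convergence of y(t) to y* = D_1^inf for several initial data y0 in [0, D_*^inf).
import numpy as np

w_max, w_min, alpha, K = 2.0, 1.9, 0.05, 0.6     # same assumed parameters as above
D_omega = w_max - w_min
c1 = (D_omega + K * np.sin(abs(alpha))) / (2 * np.sqrt(w_min - K))
c2 = K / (2 * np.sqrt(w_max + K))
s = c1 / c2                                      # equals sin(D_1^inf) = sin(D_*^inf)
D1, Dstar = np.arcsin(s), np.pi - np.arcsin(s)

dt, T = 1e-3, 200.0
for y0 in (0.0, 1.0, Dstar - 0.05):
    y = y0
    for _ in range(int(T / dt)):
        y += dt * (c1 - c2 * np.sin(y))          # right-hand side of (13)
    print(f"y0={y0:5.2f} -> y(T)={y:.4f}   (D_1^inf={D1:.4f})")
```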
Remark 4. Since $\sqrt{\bar{\omega}+K}\ge\sqrt{\underline{\omega}-K}$, the definition of $D_1^\infty$ gives
$$\sin D_1^\infty\ge\frac{D(\omega)}{K}+\sin|\alpha|>\sin|\alpha|;$$
we have $D_1^\infty>|\alpha|$, since both $D_1^\infty$ and $|\alpha|$ lie in $\big[0,\frac{\pi}{2}\big)$, so the bound $D_1^\infty-|\alpha|+\varepsilon$ in Lemma 3.8 is positive.
Proof of Theorem 2.3. It follows from Lemma 3.8 that for any small $\varepsilon>0$ with $D_1^\infty+\varepsilon<\frac{\pi}{2}$, there exists $T>0$ such that
$$\sup_{t\ge T}D(\theta(t))<D_1^\infty-|\alpha|+\varepsilon<\frac{\pi}{2}.$$
We differentiate the equation (4) to find
$$\dot{\nu}_i=\frac{K}{2N\nu_i}\sum_{l=1}^{N}\cos(\theta_l-\theta_i+\alpha)(\nu_l-\nu_i),\qquad\nu_i>0.$$
We now consider the temporal evolution of $D(\nu(t))$ for $t\ge T$:
$$\begin{aligned}
\frac{d}{dt}D(\nu)&=\dot{\nu}_M-\dot{\nu}_m\\
&=\frac{K}{2N\nu_M}\sum_{l=1}^{N}\cos(\theta_l-\theta_{\nu_M}+\alpha)(\nu_l-\nu_M)-\frac{K}{2N\nu_m}\sum_{l=1}^{N}\cos(\theta_l-\theta_{\nu_m}+\alpha)(\nu_l-\nu_m)\\
&\le\frac{K}{2N\nu_M}\sum_{l=1}^{N}\cos(D_1^\infty+\varepsilon)(\nu_l-\nu_M)-\frac{K}{2N\nu_m}\sum_{l=1}^{N}\cos(D_1^\infty+\varepsilon)(\nu_l-\nu_m)\\
&\le\frac{K\cos(D_1^\infty+\varepsilon)}{2N\sqrt{\bar{\omega}+K}}\sum_{l=1}^{N}\big(\nu_l-\nu_M-\nu_l+\nu_m\big)\\
&=-\frac{K\cos(D_1^\infty+\varepsilon)}{2\sqrt{\bar{\omega}+K}}D(\nu),\quad t\ge T,
\end{aligned}$$
where we used
$$\cos(\theta_l-\theta_{\nu_M}+\alpha),\ \cos(\theta_l-\theta_{\nu_m}+\alpha)\ge\cos(D_1^\infty+\varepsilon),\qquad\text{and}\qquad\nu_M,\ \nu_m\le\sqrt{\bar{\omega}+K}.$$
Thus we obtain
$$D(\nu(t))\le D(\nu(T))e^{-\frac{K\cos(D_1^\infty+\varepsilon)}{2\sqrt{\bar{\omega}+K}}(t-T)},\quad t\ge T,$$
which proves (7) with $\lambda_3=\frac{K\cos(D_1^\infty+\varepsilon)}{2\sqrt{\bar{\omega}+K}}$.
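The following minimal sketch (illustrative, not from the paper) simulates system (4) with nonidentical frequencies and a small frustration and monitors the frequency diameter $D(\nu)$, which is expected to decay as in (7); all numerical values are assumptions.

```python
# Minimal sketch (assumed values, not from the paper): simulate system (4) with nonidentical
# natural frequencies and frustration, and watch the frequency diameter D(nu) decay.
import numpy as np

N, K, alpha, dt, steps = 20, 0.6, 0.05, 1e-3, 40000
rng = np.random.default_rng(2)
omega = rng.uniform(1.9, 2.0, size=N)           # D(omega) <= 0.1 and min(omega) > K
theta = rng.uniform(0.0, 1.0, size=N)           # small initial phase diameter

for n in range(steps):
    diff = theta[None, :] - theta[:, None]      # diff[i, l] = theta_l - theta_i
    nu = np.sqrt(omega + (K / N) * np.sin(diff + alpha).sum(axis=1))
    theta = theta + dt * nu
    if n % 10000 == 0:
        print(f"t={n*dt:5.1f}  D(theta)={theta.max()-theta.min():.4f}  D(nu)={nu.max()-nu.min():.3e}")
```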
In this paper, we presented synchronization estimates for a Kuramoto-like model for power grids. We showed that, for identical oscillators with zero frustration, complete phase and frequency synchronization occurs exponentially fast provided the initial phases are confined inside an arc with geodesic length strictly less than $\pi$. For nonidentical oscillators with frustration, we established the boundedness of the phase diameter and exponentially fast complete frequency synchronization under suitable conditions on the coupling strength and the initial configuration.
We would like to thank the anonymous referee for his/her comments which helped us to improve this paper.
1. Sha Xu, Xiaoyue Huang, Hua Zhang, 2024, Synchronization of a Kuramoto-like Model with Time Delay and Phase Shift, 978-9-8875-8158-1, 5299, 10.23919/CCC63176.2024.10662837
2. Sun-Ho Choi, Hyowon Seo, Inertial power balance system with nonlinear time-derivatives and periodic natural frequencies, 2024, 129, 10075704, 107695, 10.1016/j.cnsns.2023.107695