Research article

DAFNet: A dual attention-guided fuzzy network for cardiac MRI segmentation

  • † These two authors contributed equally.
  • Received: 05 December 2023 Revised: 07 February 2024 Accepted: 26 February 2024 Published: 01 March 2024
  • MSC : 68T07, 60A86

  • Background 

    In clinical diagnostics, magnetic resonance imaging (MRI) plays a crucial role in the recognition of cardiac regions, serving as a pivotal tool to assist physicians in diagnosing cardiac diseases. Despite the notable success of convolutional neural networks (CNNs) in cardiac MRI segmentation, it remains challenging for existing CNN-based methods to handle fuzzy information in cardiac MRI. Therefore, we proposed a novel network architecture named DAFNet to comprehensively address these challenges.

    Methods 

    We first designed a fuzzy convolutional module, which improved the feature extraction performance of the network by utilizing fuzzy information that is easily ignored in medical images, while retaining the advantages of the attention mechanism. Then, a multi-scale feature refinement structure was designed in the decoder to address the poor performance of existing decoder structures in producing the final segmentation mask; this structure further improved the network by aggregating segmentation results from multi-scale feature maps. Additionally, we introduced dynamic convolution, which further increased the pixel-level segmentation accuracy of the network.

    Results 

    The effectiveness of DAFNet was extensively validated on three datasets. The results demonstrated that the proposed method achieved DSC metrics of 0.942 and 0.885, and HD metrics of 2.50 mm and 3.79 mm, on the first and second datasets, respectively. The recognition accuracy of the left ventricular end-diastolic diameter on the third dataset was 98.42%.

    Conclusion 

    Compared with existing CNN-based methods, DAFNet achieved state-of-the-art segmentation performance and demonstrated its effectiveness in clinical diagnosis.

    Citation: Yuxin Luo, Yu Fang, Guofei Zeng, Yibin Lu, Li Du, Lisha Nie, Pu-Yeh Wu, Dechuan Zhang, Longling Fan. DAFNet: A dual attention-guided fuzzy network for cardiac MRI segmentation[J]. AIMS Mathematics, 2024, 9(4): 8814-8833. doi: 10.3934/math.2024429




    Throughout this paper, let $p$ be an odd prime and $q=p^n$ for some positive integer $n$. Codebooks (also known as signal sets) with low coherence are typically used to distinguish the signals of different users in code division multiple access (CDMA) systems. An $(N,K)$ codebook $\mathcal{C}$ is a finite set $\{c_0,c_1,\ldots,c_{N-1}\}$, where each codeword $c_i$, $0\le i\le N-1$, is a unit-norm $1\times K$ complex vector over an alphabet $A$. The maximum inner-product correlation $I_{\max}(\mathcal{C})$ of $\mathcal{C}$ is defined by

    $I_{\max}(\mathcal{C})=\max_{0\le i\ne j\le N-1}\left|c_ic_j^H\right|,$

    where $c_j^H$ denotes the conjugate transpose of $c_j$. The maximal cross-correlation amplitude $I_{\max}(\mathcal{C})$ is an important performance index of $\mathcal{C}$: codebooks with small $I_{\max}(\mathcal{C})$ approximately optimize many performance metrics, such as outage probability and average signal-to-noise ratio. For a fixed $K$, researchers are therefore interested in designing a codebook $\mathcal{C}$ with $N$ as large as possible and $I_{\max}(\mathcal{C})$ as small as possible simultaneously. Unfortunately, there is a fundamental trade-off among the parameters $N$, $K$, and $I_{\max}(\mathcal{C})$.

    Lemma 1. ([1]) For any $(N,K)$ codebook $\mathcal{C}$ with $N\ge K$,

    $I_{\max}(\mathcal{C})\ge\sqrt{\dfrac{N-K}{(N-1)K}}. \quad (1.1)$

    The bound in (1.1) is called the Welch bound of $\mathcal{C}$ and is denoted by $I_w(\mathcal{C})$. If the codebook $\mathcal{C}$ achieves $I_w(\mathcal{C})$, then $\mathcal{C}$ is said to be optimal with respect to the Welch bound. However, constructing codebooks achieving the Welch bound is extremely difficult. Hence, many researchers have focused on constructing asymptotically optimal codebooks, i.e., codebooks for which $I_{\max}(\mathcal{C})$ asymptotically meets the Welch bound $I_w(\mathcal{C})$ for sufficiently large $N$ [2,3,4,5,6,7].
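    As a concrete illustration (a sketch in Python, not part of the paper), the Welch bound (1.1) and the maximum inner-product correlation of a given codebook can be evaluated numerically as follows; the function and variable names are ours.

        # Welch bound (1.1) and I_max of an (N, K) codebook given as rows of a complex matrix.
        import numpy as np

        def welch_bound(N: int, K: int) -> float:
            """Right-hand side of (1.1); requires N >= K."""
            return np.sqrt((N - K) / ((N - 1) * K))

        def i_max(codebook: np.ndarray) -> float:
            """codebook: (N, K) complex array whose rows are unit-norm codewords."""
            gram = codebook @ codebook.conj().T   # all inner products c_i c_j^H
            np.fill_diagonal(gram, 0)             # ignore the diagonal terms i = j
            return float(np.abs(gram).max())

        # Example: an orthonormal basis of C^K attains I_max = 0, and the bound is 0 when N = K.
        K = 4
        print(i_max(np.eye(K, dtype=complex)), welch_bound(K, K))   # 0.0 0.0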

    The objective of this paper is to construct a class of complex codebooks and investigate their maximum inner-product correlation. Results show that these constructed complex codebooks are nearly optimal with respect to the Welch bound, i.e., the ratio of their maximal cross-correlation amplitude to the Welch bound approaches 1. These codebooks may have applications in strongly regular graphs [8], combinatorial designs [9,10], and compressed sensing [11,12].

    This paper is organized as follows. In Section 2, we review some essential mathematical concepts regarding characters and Gauss sums over finite fields. In Section 3, we present a class of asymptotically optimal codebooks constructed from trace functions and multiplicative characters over finite fields. Finally, we conclude the paper in Section 4.

    In this section, we review some essential mathematical concepts regarding characters and Gauss sums over finite fields. These concepts will play significant roles in proving the main results of this paper.

    Let $n$ be a positive integer and $p$ an odd prime. Denote the finite field with $p^n$ elements by $\mathbb{F}_{p^n}$. The trace function $\mathrm{Tr}_n$ from $\mathbb{F}_{p^n}$ to $\mathbb{F}_p$ is defined by

    $\mathrm{Tr}_n(x)=\sum_{i=0}^{n-1}x^{p^i}.$

    Let $\zeta_p$ denote a primitive $p$-th root of unity in the complex numbers and $\mathrm{Tr}_n$ the trace function from $\mathbb{F}_{p^n}$ to $\mathbb{F}_p$. For $x\in\mathbb{F}_{p^n}$, it can be checked that $\chi_n$ given by $\chi_n(x)=\zeta_p^{\mathrm{Tr}_n(x)}$ is an additive character of $\mathbb{F}_{p^n}$, and $\chi_n$ is called the canonical additive character of $\mathbb{F}_{p^n}$. For $a\in\mathbb{F}_{p^n}$, every additive character of $\mathbb{F}_{p^n}$ can be obtained as $\mu_a(x)=\chi_n(ax)$, $x\in\mathbb{F}_{p^n}$. The orthogonality relation of $\mu_a$ is given by

    $\sum_{x\in\mathbb{F}_{p^n}}\mu_a(x)=\begin{cases}p^n, & \text{if } a=0,\\ 0, & \text{otherwise}.\end{cases} \quad (2.1)$

    Let $q=p^n$ and let $\alpha$ be a primitive element of $\mathbb{F}_q$. Then all multiplicative characters of $\mathbb{F}_q$ are given by $\varphi_j(\alpha^i)=\zeta_{q-1}^{ij}$, where $\zeta_{q-1}$ denotes a primitive $(q-1)$-th root of unity and $0\le i,j\le q-2$. The quadratic character of $\mathbb{F}_q$ is the character $\varphi_{(q-1)/2}$, which will be denoted by $\eta_n$ in the sequel, and $\eta_n$ is extended by setting $\eta_n(0)=0$. For $\varphi_j$, the orthogonality relation is given by

    $\sum_{x\in\mathbb{F}_q^*}\varphi_j(x)=\begin{cases}q-1, & \text{if } j=0,\\ 0, & \text{otherwise}.\end{cases}$

    The Gauss sum $G(\eta_n)$ over $\mathbb{F}_{p^n}$ is defined by

    $G(\eta_n)=\sum_{x\in\mathbb{F}_{p^n}^*}\eta_n(x)\chi_n(x).$

    The explicit value of $G(\eta_n)$ is given in the following lemma.

    Lemma 2 ([13], Theorem 5.15). With the symbols and notation above, we have

    $G(\eta_n)=(-1)^{n-1}\left(\sqrt{-1}\right)^{\frac{(p-1)^2n}{4}}q^{\frac{1}{2}}.$
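    For instance, in the prime-field case $n=1$ the lemma reduces to the classical values $G(\eta_1)=\sqrt{p}$ for $p\equiv1\pmod 4$ and $G(\eta_1)=\sqrt{-1}\,\sqrt{p}$ for $p\equiv3\pmod 4$, which is easy to confirm numerically (a Python sketch of ours, not part of the paper):

        # Numerical check of the n = 1 case of Lemma 2 over the prime field F_p.
        import cmath, math

        def legendre(x: int, p: int) -> int:
            """Quadratic character eta_1 on F_p (0 at x = 0)."""
            x %= p
            if x == 0:
                return 0
            return 1 if pow(x, (p - 1) // 2, p) == 1 else -1

        def gauss_sum(p: int) -> complex:
            zeta = cmath.exp(2j * cmath.pi / p)
            return sum(legendre(x, p) * zeta ** x for x in range(1, p))

        for p in (3, 5, 7, 11, 13):
            expected = math.sqrt(p) * (1 if p % 4 == 1 else 1j)
            print(p, gauss_sum(p), expected)   # agreement up to floating-point error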

    The following results on exponential sums will play an important role in proving the main results of this paper.

    Lemma 3 ([13], p.195). With the symbols and notation above, we have

    $\eta_1(x)=\frac{1}{p}\sum_{a\in\mathbb{F}_p^*}G(\eta_1)\eta_1(a)\chi_1(ax),$

    where $\eta_1$ denotes the quadratic character and $\chi_1$ the canonical additive character of $\mathbb{F}_p$.

    Lemma 4 ([13], Theorem 5.33). If $f(x)=a_2x^2+a_1x+a_0\in\mathbb{F}_{p^n}[x]$ with $a_2\ne0$, then

    $\sum_{x\in\mathbb{F}_{p^n}}\zeta_p^{\mathrm{Tr}_n(f(x))}=\eta_n(a_2)G(\eta_n)\zeta_p^{\mathrm{Tr}_n\left(a_0-a_1^2(4a_2)^{-1}\right)}.$
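    The $n=1$ case of Lemma 4 can be checked directly in a few lines (an illustrative Python sketch; the chosen values of $p$, $a_2$, $a_1$, $a_0$ are arbitrary):

        # Check Lemma 4 for n = 1: sum over F_p of zeta^(a2 x^2 + a1 x + a0).
        import cmath

        def legendre(x, p):
            x %= p
            return 0 if x == 0 else (1 if pow(x, (p - 1) // 2, p) == 1 else -1)

        def quad_gauss_sum(p, zeta):
            return sum(legendre(x, p) * zeta ** x for x in range(1, p))

        p, a2, a1, a0 = 7, 3, 5, 2
        zeta = cmath.exp(2j * cmath.pi / p)

        lhs = sum(zeta ** ((a2 * x * x + a1 * x + a0) % p) for x in range(p))
        shift = (a0 - a1 * a1 * pow(4 * a2, -1, p)) % p   # a0 - a1^2 (4 a2)^{-1} in F_p
        rhs = legendre(a2, p) * quad_gauss_sum(p, zeta) * zeta ** shift
        print(abs(lhs - rhs) < 1e-9)                      # True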

    Lemma 5 ([14], Theorem 2). Let $n=2m$ be an even integer and $z\in\mathbb{F}_p^*$, then

    $\sum_{x\in\mathbb{F}_{p^n}}\zeta_p^{z\mathrm{Tr}_n(x^{p^m+1})}=-p^m.$

    Lemma 6 ([15]). Let $n=2m$ be an even integer, $a\in\mathbb{F}_{p^m}^*$, and $b\in\mathbb{F}_{p^n}$, then

    $\sum_{x\in\mathbb{F}_{p^n}}\zeta_p^{\mathrm{Tr}_m(ax^{p^m+1})+\mathrm{Tr}_n(bx)}=-p^m\zeta_p^{-\mathrm{Tr}_m\left(b^{p^m+1}a^{-1}\right)}.$
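    Both lemmas are easy to confirm by brute force in the smallest even-degree case. The following Python sketch (ours, not the paper's; it realizes $\mathbb{F}_9$ as $\mathbb{F}_3[\alpha]$ with $\alpha^2=-\alpha-2$) verifies them for $p=3$, $m=1$, $n=2$:

        # Brute-force check of Lemmas 5 and 6 over F_9 = F_3[alpha], alpha^2 = -alpha - 2.
        import cmath
        from itertools import product

        p = 3
        zeta = cmath.exp(2j * cmath.pi / p)

        def mul(a, b):
            return ((a[0]*b[0] - 2*a[1]*b[1]) % p, (a[0]*b[1] + a[1]*b[0] - a[1]*b[1]) % p)

        def power(a, e):
            r = (1, 0)
            for _ in range(e):
                r = mul(r, a)
            return r

        def tr2(a):                     # absolute trace Tr_2: F_9 -> F_3
            return (a[0] + power(a, p)[0]) % p

        field = [(u, v) for u, v in product(range(p), repeat=2)]

        # Lemma 5: sum over x in F_9 of zeta^(z * Tr_2(x^4)) = -3 for every z in F_3^*.
        for z in (1, 2):
            s = sum(zeta ** ((z * tr2(power(x, p + 1))) % p) for x in field)
            print(round(s.real, 9), round(s.imag, 9))   # -3.0 0.0

        # Lemma 6 with a = 1 in F_3^* and b = alpha in F_9; note Tr_1 is the identity on F_3
        # and b^{p^m+1} = b^4 already lies in F_3 (its alpha-component is zero).
        a, b = 1, (0, 1)
        lhs = sum(zeta ** ((a * power(x, p + 1)[0] + tr2(mul(b, x))) % p) for x in field)
        rhs = -p * zeta ** ((-power(b, p + 1)[0] * pow(a, -1, p)) % p)
        print(abs(lhs - rhs) < 1e-9)                     # True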

    Lemma 7 ([16], Lemma 3.12). If $A$ and $B$ are finite abelian groups, then there is an isomorphism

    $\widehat{A\times B}\cong\widehat{A}\times\widehat{B},$

    where $\widehat{A}$ consists of all characters of $A$.

    By this lemma, we know that

    $\widehat{\mathbb{F}_{p^n}^+\times\mathbb{F}_{p^n}^+}=\left\{\mu_{a,b}:a,b\in\mathbb{F}_{p^n}\right\},$

    where

    $\mu_{a,b}(x,y)=\zeta_p^{\mathrm{Tr}_n(ax+by)}$

    for $x,y\in\mathbb{F}_{p^n}$.

    In this section, we always suppose that $n=2m$ is an even integer and $p$ is an odd prime. The set $D$ is defined as follows:

    $D=\left\{(x,y)\in\mathbb{F}_{p^n}\times\mathbb{F}_{p^n}:\eta_1\left(\mathrm{Tr}_n(x^2+y^{p^m+1})\right)=1\right\},$

    where $\eta_1$ is the quadratic character of $\mathbb{F}_p$. A codebook $\mathcal{C}$ is constructed by

    $\mathcal{C}=\left\{c_{a,b}:a,b\in\mathbb{F}_{p^n}\right\}, \quad (3.1)$

    where $c_{a,b}=\frac{1}{\sqrt{|D|}}\left(\mu_{a,b}(x,y)\right)_{(x,y)\in D}$, $\mu_{a,b}(x,y)=\zeta_p^{\mathrm{Tr}_n(ax+by)}$ for $(x,y)\in D$, and $|D|$ denotes the cardinality of the set $D$.

    Lemma 8. With the symbols and notation as above, we have

    $|D|=\frac{p-1}{2}\left(p^{2n-1}-(-1)^{\frac{n(p-1)}{4}}p^{n-1}\right).$

    Proof. Let

    $A_1=\sum_{\substack{x,y\in\mathbb{F}_{p^n}\\ \mathrm{Tr}_n(x^2+y^{p^m+1})=0}}1, \qquad A_2=\sum_{\substack{x,y\in\mathbb{F}_{p^n}\\ \mathrm{Tr}_n(x^2+y^{p^m+1})\ne0}}\eta_1\left(\mathrm{Tr}_n(x^2+y^{p^m+1})\right).$

    Note that

    $\sum_{\substack{x,y\in\mathbb{F}_{p^n}\\ \mathrm{Tr}_n(x^2+y^{p^m+1})=0}}1+\sum_{\substack{x,y\in\mathbb{F}_{p^n}\\ \mathrm{Tr}_n(x^2+y^{p^m+1})\ne0}}1=p^{2n}.$

    Together with the definition of $D$, we have

    $|D|=\sum_{\substack{x,y\in\mathbb{F}_{p^n}\\ \mathrm{Tr}_n(x^2+y^{p^m+1})\ne0}}\frac{\eta_1\left(\mathrm{Tr}_n(x^2+y^{p^m+1})\right)+1}{2}=\frac{1}{2}\sum_{\substack{x,y\in\mathbb{F}_{p^n}\\ \mathrm{Tr}_n(x^2+y^{p^m+1})\ne0}}\eta_1\left(\mathrm{Tr}_n(x^2+y^{p^m+1})\right)+\frac{1}{2}\sum_{\substack{x,y\in\mathbb{F}_{p^n}\\ \mathrm{Tr}_n(x^2+y^{p^m+1})\ne0}}1=\frac{A_2}{2}+\frac{p^{2n}}{2}-\frac{1}{2}\sum_{\substack{x,y\in\mathbb{F}_{p^n}\\ \mathrm{Tr}_n(x^2+y^{p^m+1})=0}}1=\frac{1}{2}\left(p^{2n}-A_1+A_2\right). \quad (3.2)$

    By definition, we have

    $A_1=\frac{1}{p}\sum_{x,y\in\mathbb{F}_{p^n}}\sum_{z\in\mathbb{F}_p}\zeta_p^{z\mathrm{Tr}_n(x^2+y^{p^m+1})}=p^{2n-1}+\frac{1}{p}\sum_{z\in\mathbb{F}_p^*}\sum_{x,y\in\mathbb{F}_{p^n}}\zeta_p^{\mathrm{Tr}_n(zx^2)+\mathrm{Tr}_n(zy^{p^m+1})}=p^{2n-1}+(-1)^{\frac{n(p-1)}{4}}p^{n-1}(p-1), \quad (3.3)$

    where the last equality follows from Lemmas 2, 4, and 5. Note that $\eta_n(z)=1$ for $z\in\mathbb{F}_p^*$ if $n$ is even. By Lemma 3, we obtain

    $A_2=\frac{G(\eta_1)}{p}\sum_{a\in\mathbb{F}_p^*}\eta_1(a)\sum_{x\in\mathbb{F}_{p^n}}\zeta_p^{\mathrm{Tr}_n(ax^2)}\sum_{y\in\mathbb{F}_{p^n}}\zeta_p^{\mathrm{Tr}_n(ay^{p^m+1})}.$

    Using Lemmas 4 and 5, we get

    $A_2=-p^{m-1}G(\eta_1)G(\eta_n)\sum_{a\in\mathbb{F}_p^*}\eta_1(a)=0.$

    The desired conclusion follows from (3.2) and (3.3).

    Example 1. Let $p=5$ and $n=2$. A Magma computation gives $|D|=240$, which is consistent with Lemma 8.
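    The same count can be reproduced with a short brute-force script. The sketch below (ours, in Python rather than Magma) realizes $\mathbb{F}_{p^2}$ as $\mathbb{F}_p[\alpha]$ modulo $x^2+x+2$, which happens to be irreducible over both $\mathbb{F}_3$ and $\mathbb{F}_5$:

        # Brute-force computation of |D| for n = 2 (m = 1) and small odd primes p.
        from itertools import product

        def make_field(p):                 # F_{p^2} = F_p[alpha] with alpha^2 = -alpha - 2
            def mul(a, b):
                return ((a[0]*b[0] - 2*a[1]*b[1]) % p,
                        (a[0]*b[1] + a[1]*b[0] - a[1]*b[1]) % p)
            def power(a, e):
                r = (1, 0)
                for _ in range(e):
                    r = mul(r, a)
                return r
            def trace(a):                  # Tr_2(a) = a + a^p, an element of F_p
                return (a[0] + power(a, p)[0]) % p
            return mul, power, trace

        def legendre(x, p):
            x %= p
            return 0 if x == 0 else (1 if pow(x, (p - 1) // 2, p) == 1 else -1)

        def size_of_D(p):
            mul, power, trace = make_field(p)
            elems = list(product(range(p), repeat=2))
            count = 0
            for x, y in product(elems, repeat=2):
                x2 = mul(x, x)
                ynorm = power(y, p + 1)
                t = trace(((x2[0] + ynorm[0]) % p, (x2[1] + ynorm[1]) % p))
                if legendre(t, p) == 1:
                    count += 1
            return count

        print(size_of_D(5))   # 240, as in Example 1
        print(size_of_D(3))   # 30, matching Lemma 8 with p = 3, n = 2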

    Theorem 9. Let the symbols and notation be the same as before. Then the codebook $\mathcal{C}$ defined in (3.1) has parameters $\left(p^{2n},K\right)$ with

    $K=\frac{p-1}{2}\left(p^{2n-1}-(-1)^{\frac{n(p-1)}{4}}p^{n-1}\right),$

    and

    $I_{\max}(\mathcal{C})=\frac{(p+1)p^{n-1}}{2K}.$

    Proof. By the definition of the set $\mathcal{C}$ and Lemma 8, we deduce that $\mathcal{C}$ is a $\left(p^{2n},K\right)$ codebook. If $a,b\in\mathbb{F}_{p^n}$ and $(a,b)\ne(0,0)$, then we have

    $\sum_{x,y\in\mathbb{F}_{p^n}}\zeta_p^{\mathrm{Tr}_n(ax+by)}=\sum_{\substack{x,y\in\mathbb{F}_{p^n}\\ \mathrm{Tr}_n(x^2+y^{p^m+1})=0}}\zeta_p^{\mathrm{Tr}_n(ax+by)}+\sum_{\substack{x,y\in\mathbb{F}_{p^n}\\ \mathrm{Tr}_n(x^2+y^{p^m+1})\ne0}}\zeta_p^{\mathrm{Tr}_n(ax+by)}=0.$

    This implies that

    $\sum_{\substack{x,y\in\mathbb{F}_{p^n}\\ \mathrm{Tr}_n(x^2+y^{p^m+1})=0}}\zeta_p^{\mathrm{Tr}_n(ax+by)}=-\sum_{\substack{x,y\in\mathbb{F}_{p^n}\\ \mathrm{Tr}_n(x^2+y^{p^m+1})\ne0}}\zeta_p^{\mathrm{Tr}_n(ax+by)}.$

    For $a,b\in\mathbb{F}_{p^n}$ with $(a,b)\ne(0,0)$, we have that

    $\sum_{(x,y)\in D}\mu_{a,b}(x,y)=\sum_{\substack{x,y\in\mathbb{F}_{p^n}\\ \mathrm{Tr}_n(x^2+y^{p^m+1})\ne0}}\zeta_p^{\mathrm{Tr}_n(ax+by)}\cdot\frac{\eta_1\left(\mathrm{Tr}_n(x^2+y^{p^m+1})\right)+1}{2}=-\frac{1}{2}\sum_{\substack{x,y\in\mathbb{F}_{p^n}\\ \mathrm{Tr}_n(x^2+y^{p^m+1})=0}}\zeta_p^{\mathrm{Tr}_n(ax+by)}+\frac{1}{2}\sum_{\substack{x,y\in\mathbb{F}_{p^n}\\ \mathrm{Tr}_n(x^2+y^{p^m+1})\ne0}}\zeta_p^{\mathrm{Tr}_n(ax+by)}\eta_1\left(\mathrm{Tr}_n(x^2+y^{p^m+1})\right)=\frac{1}{2}\left(-B_1+B_2\right), \quad (3.4)$

    where

    $B_1=\sum_{\substack{x,y\in\mathbb{F}_{p^n}\\ \mathrm{Tr}_n(x^2+y^{p^m+1})=0}}\zeta_p^{\mathrm{Tr}_n(ax+by)}, \qquad B_2=\sum_{x,y\in\mathbb{F}_{p^n}}\zeta_p^{\mathrm{Tr}_n(ax+by)}\eta_1\left(\mathrm{Tr}_n(x^2+y^{p^m+1})\right).$

    By (2.1), we derive that

    $\sum_{z\in\mathbb{F}_p}\zeta_p^{z\mathrm{Tr}_n(x^2+y^{p^m+1})}=\begin{cases}p, & \text{if } \mathrm{Tr}_n(x^2+y^{p^m+1})=0,\\ 0, & \text{otherwise}.\end{cases}$

    Combining Lemmas 4 and 6, we get that

    $B_1=\frac{1}{p}\sum_{x,y\in\mathbb{F}_{p^n}}\sum_{z\in\mathbb{F}_p}\zeta_p^{\mathrm{Tr}_n(ax+by)}\zeta_p^{z\mathrm{Tr}_n(x^2+y^{p^m+1})}=\frac{1}{p}\sum_{z\in\mathbb{F}_p}\sum_{x,y\in\mathbb{F}_{p^n}}\zeta_p^{\mathrm{Tr}_n(zx^2+ax)}\zeta_p^{\mathrm{Tr}_n(zy^{p^m+1}+by)}=-p^{m-1}G(\eta_n)\sum_{z\in\mathbb{F}_p^*}\eta_n(z)\zeta_p^{\mathrm{Tr}_n(a^2+b^{p^m+1})z}=\begin{cases}-p^{m-1}(p-1)G(\eta_n), & \text{if } \mathrm{Tr}_n(a^2+b^{p^m+1})=0,\\ p^{m-1}G(\eta_n), & \text{if } \mathrm{Tr}_n(a^2+b^{p^m+1})\ne0, \end{cases} \quad (3.5)$

    where $G(\eta_n)$ is given in Lemma 2. By Lemma 3, we have that

    $B_2=\frac{G(\eta_1)}{p}\sum_{z\in\mathbb{F}_p^*}\eta_1(z)\sum_{x\in\mathbb{F}_{p^n}}\zeta_p^{\mathrm{Tr}_n(zx^2+ax)}\sum_{y\in\mathbb{F}_{p^n}}\zeta_p^{\mathrm{Tr}_n(zy^{p^m+1}+by)}.$

    Moreover, by Lemmas 4 and 6, we obtain

    $B_2=-p^{m-1}G(\eta_1)G(\eta_n)\sum_{z\in\mathbb{F}_p^*}\eta_1(z)\zeta_p^{\mathrm{Tr}_n(a^2+b^{p^m+1})z}=\begin{cases}0, & \text{if } \mathrm{Tr}_n(a^2+b^{p^m+1})=0,\\ -p^{m}\left(\frac{-1}{p}\right)G(\eta_n)\eta_1\left(\mathrm{Tr}_n(a^2+b^{p^m+1})\right), & \text{if } \mathrm{Tr}_n(a^2+b^{p^m+1})\ne0, \end{cases} \quad (3.6)$

    It follows from Lemma 2 and (3.4)–(3.6) that

    $\sum_{(x,y)\in D}\mu_{a,b}(x,y)\in\left\{\frac{1}{2}(p-1)p^{m-1}G(\eta_n),\ -\frac{1}{2}(p+1)p^{m-1}G(\eta_n)\right\}. \quad (3.7)$

    For any two distinct codewords $c_{z_1,z_2},c_{z_1',z_2'}\in\mathcal{C}$, i.e., $(z_1,z_2)\ne(z_1',z_2')$, it is easy to check that

    $\left|c_{z_1,z_2}c_{z_1',z_2'}^H\right|=\frac{1}{K}\left|\sum_{(x,y)\in D}\mu_{z_1-z_1',z_2-z_2'}(x,y)\right|. \quad (3.8)$

    Combining (3.7) and (3.8) with $|G(\eta_n)|=p^m$, we get that $I_{\max}(\mathcal{C})=\frac{(p+1)p^{n-1}}{2K}$.

    Example 2. Let $f(x)=x^2+x+2\in\mathbb{F}_3[x]$, which is irreducible over $\mathbb{F}_3$. Suppose that $p=3$, $n=2$, and $\alpha$ is a root of $f(x)$, so that $m=1$, $q=3^2$, and $\mathbb{F}_9=\mathbb{F}_3(\alpha)$. It can be verified that the set $D$ consists of the following 30 elements:

    $D=\left\{(x,y)\in\mathbb{F}_9\times\mathbb{F}_9:\mathrm{Tr}_2(x^2+y^4)=1\right\}=\{(1+2\alpha,0),(2+\alpha,0),(1,1),(1,2),(1,1+2\alpha),(1,2+\alpha),(2,1),(2,2),(2,1+2\alpha),(2,2+\alpha),(\alpha,\alpha),(\alpha,2\alpha),(\alpha,1+\alpha),(\alpha,2+2\alpha),(2\alpha,\alpha),(2\alpha,2\alpha),(2\alpha,1+\alpha),(2\alpha,2+2\alpha),(0,\alpha),(0,2\alpha),(0,1+\alpha),(0,2+2\alpha),(1+\alpha,\alpha),(1+\alpha,2\alpha),(1+\alpha,1+\alpha),(1+\alpha,2+2\alpha),(2+2\alpha,\alpha),(2+2\alpha,2\alpha),(2+2\alpha,1+\alpha),(2+2\alpha,2+2\alpha)\}.$

    The corresponding codebook $\mathcal{C}$ is given by

    $\mathcal{C}=\left\{\frac{1}{\sqrt{30}}\left(\zeta_3^{\mathrm{Tr}_2(ax+by)}\right)_{(x,y)\in D}:a,b\in\mathbb{F}_9\right\},$

    where $\zeta_3=e^{\frac{2\pi\sqrt{-1}}{3}}$ and $\mathrm{Tr}_2$ denotes the trace function from $\mathbb{F}_9$ to $\mathbb{F}_3$.
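    For this example the whole codebook is small enough to materialize and check by brute force. The self-contained Python sketch below (ours; it hard-codes $\mathbb{F}_9=\mathbb{F}_3[\alpha]$ with $\alpha^2=-\alpha-2$) builds $\mathcal{C}$ and confirms Theorem 9, which here predicts $K=30$ and $I_{\max}(\mathcal{C})=\frac{(p+1)p^{n-1}}{2K}=\frac{12}{60}=0.2$:

        # Build the codebook of Example 2 (p = 3, n = 2) and verify I_max against Theorem 9.
        import cmath
        from itertools import product

        p = 3
        zeta3 = cmath.exp(2j * cmath.pi / p)

        def mul(a, b):          # multiplication in F_9 = F_3[alpha], alpha^2 = -alpha - 2
            return ((a[0]*b[0] - 2*a[1]*b[1]) % p, (a[0]*b[1] + a[1]*b[0] - a[1]*b[1]) % p)

        def add(a, b):
            return ((a[0] + b[0]) % p, (a[1] + b[1]) % p)

        def power(a, e):
            r = (1, 0)
            for _ in range(e):
                r = mul(r, a)
            return r

        def tr2(a):             # Tr_2(a) = a + a^3, an element of F_3
            return (a[0] + power(a, p)[0]) % p

        elems = list(product(range(p), repeat=2))

        # Defining set D; for p = 3 the condition eta_1(Tr_2(x^2 + y^4)) = 1 reads Tr_2(x^2 + y^4) = 1.
        D = [(x, y) for x, y in product(elems, repeat=2)
             if tr2(add(mul(x, x), power(y, p + 1))) == 1]
        K = len(D)              # 30

        codebook = [[zeta3 ** tr2(add(mul(a, x), mul(b, y))) / K ** 0.5 for (x, y) in D]
                    for a, b in product(elems, repeat=2)]   # N = 3^4 = 81 codewords

        imax = max(abs(sum(u * v.conjugate() for u, v in zip(ci, cj)))
                   for i, ci in enumerate(codebook) for cj in codebook[:i])
        N = len(codebook)
        welch = ((N - K) / ((N - 1) * K)) ** 0.5
        print(K, round(imax, 6), round(welch, 6))   # 30 0.2 0.145774

    The computed $I_{\max}(\mathcal{C})=0.2$ matches the theorem, while the Welch bound for $(N,K)=(81,30)$ is about $0.1458$; this smallest instance is still some distance from the bound, and the ratio shrinks toward 1 as $p$ grows.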

    Corollary 10. The codebook $\mathcal{C}$ constructed in (3.1) is asymptotically optimal with respect to the Welch bound.

    Proof. The corresponding Welch bound is

    $I_w(\mathcal{C})=\sqrt{\dfrac{p^{2n-1}(p+1)+(-1)^{\frac{n(p-1)}{4}}p^{n-1}(p-1)}{(p^{2n}-1)(p-1)\left(p^{2n-1}-(-1)^{\frac{n(p-1)}{4}}p^{n-1}\right)}}.$

    We deduce that

    $\lim_{p\to+\infty}\frac{I_w(\mathcal{C})}{I_{\max}(\mathcal{C})}=\lim_{p\to+\infty}\sqrt{\frac{4K\left(p^{2n}-K\right)}{(p^{2n}-1)(p+1)^2p^{2n-2}}}=1,$

    which implies that $\mathcal{C}$ asymptotically meets the Welch bound.

    In Table 1, we list the parameters of some specific codebooks defined in (3.1). From this table, we see that $I_{\max}(\mathcal{C})$ is very close to $I_w(\mathcal{C})$ for sufficiently large $p$, which supports Theorem 9 and Corollary 10.

    Table 1. The parameters of the codebook $\mathcal{C}$ in (3.1) for $n=4$.
    p    N        K            $I_{\max}(\mathcal{C})$    $I_w(\mathcal{C})$        $I_{\max}(\mathcal{C})/I_w(\mathcal{C})$
    3    $3^8$    2160         1/40       $1.762\times10^{-2}$     1.4185
    7    $7^8$    2469600      1/1800     $4.811\times10^{-4}$     1.1548
    11   $11^8$   97429200     1/12200    $7.483\times10^{-5}$     1.0955
    13   $13^8$   376477920    1/24480    $3.782\times10^{-5}$     1.0801
    17   $17^8$   3282670080   1/74240    $1.2699\times10^{-5}$    1.0607

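    The entries of Table 1 follow directly from the closed-form expressions, so they can be regenerated with a few lines of arithmetic (an illustrative Python sketch, not part of the paper):

        # Regenerate Table 1 (n = 4) from Theorem 9 and the Welch bound (1.1).
        from fractions import Fraction
        from math import sqrt

        n = 4
        for p in (3, 7, 11, 13, 17):
            eps = (-1) ** (n * (p - 1) // 4)
            N = p ** (2 * n)
            K = (p - 1) * (p ** (2 * n - 1) - eps * p ** (n - 1)) // 2
            imax = Fraction((p + 1) * p ** (n - 1), 2 * K)   # exact I_max(C), e.g. 1/40 for p = 3
            iw = sqrt((N - K) / ((N - 1) * K))               # Welch bound
            print(p, N, K, imax, f"{iw:.4g}", round(float(imax) / iw, 4))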

    This paper presented a family of codebooks obtained by combining additive characters and multiplicative characters over finite fields. The results show that the constructed codebooks are asymptotically optimal in the sense that their maximum cross-correlation amplitude asymptotically achieves the Welch bound. As a comparison, the parameters of some known nearly optimal codebooks and of the constructed ones are listed in Table 2. From this table, we can conclude that the parameters of $\mathcal{C}$ are not covered by those in [2,3,4,5,6,17,18,19], which means the presented codebooks have new parameters.

    Table 2. The parameters of codebooks asymptotically meeting the Welch bound.
    Ref.        Parameters $(N,K)$        Constraints
    [2]        $\left((q-1)^{\ell}+M,\ M\right)$, $M=\frac{(q-1)^{\ell}+(-1)^{\ell+1}}{q}$        $q$ is a prime power, $\ell>2$
    [3]        $\left(2K+(-1)^{ln},\ K\right)$, $K=\frac{(q_1-1)^n\cdots(q_l-1)^n-(-1)^{ln}}{2}$        $1\le i\le l$, $s_i>1$, $q_i=2^{s_i}$, $l>1$ and $n>1$
    [4]        $\left((q^s-1)^m+q^{sm-1},\ q^{sm-1}\right)$        $s>1$, $m>1$, $q$ is a prime power
    [5]        $\left((p_{\min}+1)Q^2,\ Q^2\right)$        $Q>1$ is an integer, $p_{\min}$ is the smallest prime factor of $Q$
    [6]        $\left(p_{\min}N_1N_2,\ N_1N_2\right)$        $N_1\gg1$, $N_2=N_1+o(N_1)$, $p_{\min}$ is the smallest prime factor of $N_2$
    [17]       $\left(q_1q_2\cdots q_l,\ \frac{q_1q_2\cdots q_l-1}{2}\right)$        $1\le i\le l$, $q_i$ is a prime power, $q_i\equiv3\pmod 4$
    [18]       $\left(q,\ \frac{q+1}{2}\right)$        $q$ is a prime power
    [19]       $\left(q^3+q^2,\ q^2\right)$ or $\left(q^3+q^2-q,\ q^2-q\right)$        $q$ is a prime power
    Theorem 9  $\left(p^{2n},\ \frac{p-1}{2}\left(p^{2n-1}-(-1)^{\frac{n(p-1)}{4}}p^{n-1}\right)\right)$        $p$ is an odd prime, $n=2m$, $m$ is a positive integer


    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by the Innovation Project of Engineering Research Center of Integration and Application of Digital Learning Technology (No.1221003), Humanities and Social Sciences Youth Foundation of Ministry of Education of China (No. 22YJC870018), the Science and Technology Development Fund of Tianjin Education Commission for Higher Education (No. 2020KJ112, 2022KJ075, KYQD1817), the National Natural Science Foundation of China (Grant No. 12301670), the Natural Science Foundation of Tianjin (Grant No. 23JCQNJC00050), Haihe Lab. of Information Technology Application Innovation (No. 22HHXCJC00002), Fundamental Research Funds for the Central Universities, China (Grant No. ZY2301, BH2316), the Open Project of Tianjin Key Laboratory of Autonomous Intelligence Technology and Systems (No. TJKL-AITS-20241004, No. TJKL-AITS-20241006).

    The authors declare no conflicts of interest.



  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)