
Coronary artery centerline extraction in cardiac computed tomography angiography (CTA) is an effective non-invasive method for diagnosing and evaluating coronary artery disease (CAD). The traditional method of manual centerline extraction is time-consuming and tedious. In this study, we propose a deep learning algorithm that continuously extracts coronary artery centerlines from CTA images using a regression method. In the proposed method, a CNN module is trained to extract features from CTA images, and a branch classifier and a direction predictor are designed to predict the most probable direction and the lumen radius at a given centerline point. In addition, a new loss function is developed to associate the direction vector with the lumen radius. The whole process starts from a point manually placed at the coronary artery ostium and terminates when the vessel endpoint is reached. The network was trained on a training set of 12 CTA images and evaluated on a testing set of 6 CTA images. The extracted centerlines had an average overlap (OV) of 89.19%, overlap until first error (OF) of 82.30%, and overlap with the clinically relevant vessel (OT) of 91.42% with the manually annotated reference. Our proposed method can efficiently handle multi-branch structures and accurately detect distal coronary arteries, thereby providing potential help in assisting CAD diagnosis.
Citation: Xintong Wu, Yingyi Geng, Xinhong Wang, Jucheng Zhang, Ling Xia. Continuous extraction of coronary artery centerline from cardiac CTA images using a regression-based method[J]. Mathematical Biosciences and Engineering, 2023, 20(3): 4988-5003. doi: 10.3934/mbe.2023231
Coronary artery disease (CAD) is one of the leading causes of death worldwide [1]. Currently, the gold standard for evaluating CAD is coronary angiography, but it is limited by its invasiveness, operational complexity, and high cost [2]. The three-dimensional (3D) model reconstructed from cardiac computed tomography angiography (CTA) provides intuitive and accurate information on coronary artery geometry, which contributes crucially to the non-invasive diagnosis of CAD [3,4]. Several vessel visualization techniques are already used in clinical practice, such as curved planar reformation (CPR) [5]. CPR produces straightened coronary artery images, helping to plan intervention procedures and choose stent sizes [6,7]. The 3D reconstruction of the coronary artery typically involves coronary centerline extraction and vessel lumen segmentation. Coronary centerline extraction, as the first step, can facilitate branch detection and lumen radius identification, and it partly resists interference from nearby veins and myocardium [8]. This is in accordance with the observation of Shen et al., whose model used the centerline as prior knowledge and achieved higher vessel segmentation accuracy [9]. However, manual extraction of coronary centerlines is a challenging task because the coronary artery structure is complex and the annotation process is time-consuming.
In recent years, deep learning, especially U-Net and its variants, has been widely used in medical image segmentation tasks [10]. U-Net and its variants have demonstrated the ability to segment blood vessels and have achieved promising results in medical image analysis [11,12]. Convolutional neural networks (CNNs) have shown advantages in extracting high-level features of vascular structures [13]. Dorobanțiu et al. [14] adopted a 3D U-Net architecture to segment the coronary artery centerline with voxel precision, estimating whether each voxel was a centerline point. Rjiba et al. [15] employed a local vessel filter based on a binary classification network, successfully extracting centerlines by determining whether the center point of the current patch was located on the centerline. However, these coronary artery structure extraction tasks still face numerous challenges. First, the full CTA images are usually broken into small patches for training, and these patches carry a great deal of non-vascular information, making the process time-consuming and tedious. In addition, these methods require complex post-processing techniques to resolve discontinuities in the extracted vessels and preserve the completeness of the coronary centerline. Moreover, the centerline point coordinates obtained by a binary classification network are imprecise, as the center point of a patch differs from the true centerline point.
Consequently, path tracking methods, which extract the centerline as a continuous process, have brought considerable precision and efficiency to coronary artery segmentation. Wolterink et al. [16] proposed a CNN-based orientation classifier to determine the artery direction and subsequently extract the centerline. Specifically, points corresponding to different directions were distributed on a sphere, and the direction classifier detected the most likely point for the next direction. Building on their research, Mostafa et al. [17] developed a fully automated workflow. They employed a regression U-Net to identify the coronary ostia automatically and applied an improved CNNTracker to track each vessel until its endpoint was extracted. This method added a weight term to the loss function that computed the angle between the predicted vector and each sampling vector, significantly improving the performance of the direction classifier. Yang et al. [18] presented a discriminative coronary artery tracking method that alternately trains a tracker and a discriminator. Their discriminator provided a more robust learning-based stopping criterion, effectively distinguishing coronary arteries from other cardiac tissues. However, the classification approach used in these studies has relatively limited accuracy. The models must increase the number of sampling points to achieve higher accuracy, which inevitably causes large memory consumption when calculating training parameters. In addition, the extraction results for multiple branches and distal segments are not satisfactory.
In this work, we proposed a regression method that learns to track centerlines continuously from the coronary ostia. The regression method calculates the deviation between the actual direction and the predicted direction, learning the transformation from the network output to the centerline direction. First, the branch classifier in this model detects endpoints, bifurcation points, and single-vessel points. Then, the direction predictor predicts the coronary artery centerline direction and the lumen radius. Finally, we developed a new loss function that associates the coordinates of the centerline points with the lumen radius, achieving high-accuracy extraction.
The overall workflow of our proposed network is illustrated in Figure 1. First, the acquired medical images were cropped into computed tomography (CT) patches as the input of the network. Each CT patch was centered on Ci, a point located on the centerline. The CNN module then extracted image features, which were fed to the branch classifier and the path predictor. The branch classifier predicts the likelihood of a single-vessel point, an endpoint, or a bifurcation point. Meanwhile, the output of the path predictor consists of the direction vector (x, y, z), the vessel radius r, and the step length step. Subsequently, the coordinate of the next centerline point Ci+1 can be calculated by Eq (1). If Ci+1 is found, it is queued as the new current centerline point, and the process repeats until the coronary artery centerline endpoint is reached.
$$ C_{i+1} = C_i + \frac{(x, y, z)}{\left\| (x, y, z) \right\|_2} \times |\mathrm{step}| \qquad (1) $$
where Ci+1 is the next centerline point; Ci is the current centerline point; (x,y,z) is the direction vector; step is the spatial distance between Ci and Ci+1.
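As a minimal illustration of Eq (1), the following Python sketch (the function name and coordinate convention are ours, not taken from the authors' code) advances the tracker by one step along the predicted direction:

```python
import numpy as np

def next_centerline_point(c_i, direction, step):
    """Advance one tracking step along the predicted direction (Eq (1)).

    c_i       : current centerline point C_i, array-like of shape (3,)
    direction : raw (x, y, z) output of the path predictor (not necessarily unit length)
    step      : predicted spatial distance between C_i and C_{i+1}
    """
    d = np.asarray(direction, dtype=float)
    unit = d / np.linalg.norm(d)                     # L2-normalize the direction vector
    return np.asarray(c_i, dtype=float) + unit * abs(step)

# Example: advance 0.5 mm from (10, 20, 30) along a predicted direction
c_next = next_centerline_point([10.0, 20.0, 30.0], [0.2, -0.1, 0.9], 0.5)
```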
The feature extraction network was composed of two convolution blocks, namely DimConv and DownConv. DimConv first applied convolution, batch normalization, and a ReLU activation function in sequence. Placing batch normalization before the ReLU activation limited the values passed between layers to a certain range, which significantly mitigated the vanishing or exploding gradient problem during backpropagation. Afterwards, a block of dilated convolution, batch normalization, and ReLU activation was applied twice. Using dilated convolution instead of basic convolution provided a broader receptive field without increasing the number of parameters. In addition, the patch size was about three times the maximum coronary lumen diameter, so each patch contained enough neighborhood information to help locate the current vascular segment in space. DownConv was made up of convolution, batch normalization, and a ReLU activation function. It achieved down-sampling by using a convolution with a stride of 2, so that the output of the DownConv module was reduced by half to obtain a condensed feature representation.
The feature extraction network repeated the DimConv and DownConv operations three times. The feature map output by the previous layer was concatenated along the channel dimension, so that multi-level features extracted from the shallow and deep parts of the network could be fused. In this way, low-level feature information is not easily lost during down-sampling. DimConv maintains the spatial size of the input while only changing the number of channels. Therefore, the output of DimConv had the same size as the output of the previous DownConv, and their channels could be concatenated and then fed into DownConv for further extraction.
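A possible PyTorch realization of the DimConv/DownConv stages and the channel-concatenation fusion described above is sketched below. The kernel sizes, dilation rate, and intermediate channel counts (16, 32, 64) are our assumptions; only the overall structure and the final (64, 4, 4, 4) feature map follow the text.

```python
import torch
import torch.nn as nn

class DimConv(nn.Module):
    """Keeps the spatial size; changes the channel count.
    Conv+BN+ReLU followed by two dilated Conv+BN+ReLU blocks."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        layers = [nn.Conv3d(in_ch, out_ch, 3, padding=1),
                  nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True)]
        for _ in range(2):
            layers += [nn.Conv3d(out_ch, out_ch, 3, padding=dilation, dilation=dilation),
                       nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True)]
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)

class DownConv(nn.Module):
    """Halves the spatial size with a stride-2 convolution, then BN and ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(nn.Conv3d(in_ch, out_ch, 3, stride=2, padding=1),
                                   nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.block(x)

class FeatureExtractor(nn.Module):
    """Three DimConv/DownConv stages; each DimConv output is concatenated with its
    input (the previous DownConv output) before down-sampling, so shallow features
    are preserved alongside deep ones."""
    def __init__(self, channels=(16, 32, 64)):
        super().__init__()
        self.stages = nn.ModuleList()
        in_ch = 1
        for out_ch in channels:
            dim = DimConv(in_ch, out_ch)
            down = DownConv(in_ch + out_ch, out_ch)    # concatenated channels enter DownConv
            self.stages.append(nn.ModuleList([dim, down]))
            in_ch = out_ch

    def forward(self, x):                               # x: (B, 1, 32, 32, 32)
        for dim, down in self.stages:
            x = down(torch.cat([dim(x), x], dim=1))     # fuse shallow and deep features
        return x                                        # (B, 64, 4, 4, 4)
```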
A branch classifier adopting spatial attention was developed, which aggregates feature information and provides a termination criterion for the continuous tracking process. In this part, the dimension of the input feature map was (64, 4, 4, 4), where 64 is the number of channels and (4, 4, 4) is the size of the 3D feature map. First, we calculated the per-voxel mean and maximum values across all channels of the feature map, generating two channels. A convolution was then performed on these two channels, effectively realizing multi-level feature fusion. The output of this convolution had dimension (1, 4, 4, 4), and a fully connected layer produced the final classification. The advantage of spatial attention is that dimensionality can be reduced rapidly while integrating global features, so the number of input neurons of the fully connected layer stays close to the number of output classes. Each output channel of the branch classifier was treated as a class, indicating the probability of an endpoint, a bifurcation point, or a single-vessel point. Branch 0 and branch 2 indicated the endpoint and the bifurcation point, respectively. Branch 1 indicated that the current point lies on the path of a single vessel, with only one possible direction.
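A hedged sketch of this spatial-attention branch classifier follows; the 3 × 3 × 3 fusion kernel is assumed, while the mean/max channel pooling, single-channel attention map, and three-class fully connected head follow the description above.

```python
import torch
import torch.nn as nn

class BranchClassifier(nn.Module):
    """Spatial-attention branch classifier: per-voxel mean/max over channels ->
    convolution -> fully connected layer, producing logits for endpoint (class 0),
    single-vessel point (class 1) and bifurcation point (class 2)."""
    def __init__(self, spatial_size=4, n_classes=3):
        super().__init__()
        self.conv = nn.Conv3d(2, 1, kernel_size=3, padding=1)   # fuse the two pooled maps
        self.fc = nn.Linear(spatial_size ** 3, n_classes)       # 4*4*4 = 64 input neurons

    def forward(self, feat):                       # feat: (B, 64, 4, 4, 4)
        avg = feat.mean(dim=1, keepdim=True)       # per-voxel mean across channels -> (B, 1, 4, 4, 4)
        mx, _ = feat.max(dim=1, keepdim=True)      # per-voxel max across channels  -> (B, 1, 4, 4, 4)
        att = self.conv(torch.cat([avg, mx], dim=1))   # (B, 1, 4, 4, 4)
        return self.fc(att.flatten(1))             # (B, 3) class logits
```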
For path prediction, the feature maps were first fed into a global average pooling layer for information integration, producing an output of dimension (64, 1, 1, 1). This was passed through a ReLU activation layer to introduce nonlinearity. The result was then sent to a convolution with a 1 × 1 × 1 kernel, which allows the output dimensionality to be customized to match the number of parameters required by the regression loss. The number of output channels was 2 × 5, where the maximum number of branches is 2 and each branch contains 5 parameters: the direction vector (x, y, z), the vessel radius r, and the step length step. The path predictor can handle three branch conditions: no branch, one branch, and two branches. It is worth noting that the regression loss was only computed when the judgment of the branch classifier was valid.
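A possible realization of the path predictor is sketched below; apart from the global average pooling, ReLU, 1 × 1 × 1 convolution, and the 2 × 5 output layout stated above, the details are our assumptions.

```python
import torch.nn as nn

class PathPredictor(nn.Module):
    """Global average pooling -> ReLU -> 1x1x1 convolution producing 2 x 5 outputs:
    up to two branches, each with direction (x, y, z), radius r and step length."""
    def __init__(self, in_ch=64, max_branches=2, params_per_branch=5):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool3d(1)                        # (B, 64, 1, 1, 1)
        self.relu = nn.ReLU(inplace=True)
        self.head = nn.Conv3d(in_ch, max_branches * params_per_branch, kernel_size=1)

    def forward(self, feat):                                      # feat: (B, 64, 4, 4, 4)
        out = self.head(self.relu(self.gap(feat)))                # (B, 10, 1, 1, 1)
        return out.flatten(1).view(-1, 2, 5)                      # (B, 2, 5): [x, y, z, r, step] per branch
```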
The classification loss of the branch classifier used the cross entropy (CE) function, defined as follows:
$$ Loss_{CE} = -\frac{1}{N}\sum_{i}\sum_{c=1}^{M} y_{ic}\log(p_{ic}) \qquad (2) $$
where N is the number of samples; M is the number of classes; p_ic represents the probability that sample i belongs to class c; y_ic is 1 when the actual class of sample i is c, and 0 otherwise.
To accurately segment the coronary artery in CTA images, numerous regression losses have been proposed, including mean square error (MSE), mean absolute error (MAE), and optimizations such as the piecewise smooth L1 loss [19]. The core idea of vascular path regression is to precisely learn the spatial coordinates of the unit direction vector (x, y, z) in the Cartesian coordinate system. The regression loss can be defined as the transformation from the predicted direction vector to the ground truth. However, calculating the losses of the parameters x, y, z, r, and step separately makes it easy to ignore the correlation between them. This problem is particularly serious for centerline extraction in extremely tortuous coronary arteries, where it increases the difficulty of model convergence.
Currently, the maximal endovascular sphere model has made significant achievements in the field of lumen centerline extraction [20,21]. This approach assumes a sphere whose diameter adapts to enclose the vessel lumen and which moves in the direction of blood flow. When the sphere encounters a bifurcation, it splits into two spheres and continues tracking each branch in parallel. After the end of the current vessel is reached, the coronary centerline is formed by connecting the sphere centers. Based on the idea of the endovascular sphere model, we designed a new loss function, referred to as the spherical cross-volume method, for vascular path tracking. While learning the deviation of the output parameters x, y, z, r, and step, the proposed loss preserves the correlation among these parameters.
Figure 2 shows two spheres centered on O and Ot, respectively. The volume of the intersecting part of the two spheres is defined by the following formula:
$$ V_{\cap} = \int_{0}^{h} \pi\left(r_t^2 - (r_t\cos\beta + x)^2\right)\,dx + \int_{0}^{h_t} \pi\left(r^2 - (r\cos\alpha + x)^2\right)\,dx \qquad (3) $$
where r and rt are the radii of the two spheres; h and ht are the spherical cap heights, obtained by subtracting the distance from each sphere center to the intersecting plane from the corresponding radius; cosα and cosβ are obtained as follows:
$$ \cos\alpha = \frac{r^2 + d^2 - r_t^2}{2rd}, \qquad h_t = r - r\cos\alpha \qquad (4) $$
$$ \cos\beta = \frac{r_t^2 + d^2 - r^2}{2r_t d}, \qquad h = r_t - r_t\cos\beta \qquad (5) $$
where d is the distance between O and Ot. Combining the Newton–Leibniz formula with Eqs (3)–(5), V∩ can be simplified as follows:
$$ V_{\cap} = \pi h^2\left(r_t - \tfrac{1}{3}h\right) + \pi h_t^2\left(r - \tfrac{1}{3}h_t\right) \qquad (6) $$
We formulated the sphere dice loss function analogously to the Dice coefficient and extended it to 3D space. Dice loss is a typical tool used in image segmentation tasks; it counts the pixels in the segmented region and in the target region, and describes their similarity as the ratio of the number of pixels in the intersection to the total number of pixels. We applied the concept of Dice loss to geometry: correspondingly, the similarity of two spheres is illustrated as the ratio of the volume of their intersecting part to the total volume of the two spheres. The similarity value ranges from 0 to 1, where 0 and 1 denote non-intersecting and perfectly coincident spheres, respectively. When the two spheres partially intersect, their deviation can be obtained by subtracting the actual similarity from 1. Specifically, the regression loss is calculated by Eq (7):
$$ Loss_{SphereDice} = 1 - \frac{2V_{\cap}}{\frac{4}{3}\pi r^3 + \frac{4}{3}\pi r_t^3} = 1 - \frac{3}{2}\cdot\frac{h^2\left(r_t - \frac{1}{3}h\right) + h_t^2\left(r - \frac{1}{3}h_t\right)}{r^3 + r_t^3} \qquad (7) $$
It is acknowledged that a sphere can be constructed in 3D space from the coordinates (x, y, z) of its center and a radius r. Thereby, the maximal inscribed sphere at the current coronary lumen can be characterized by the annotated coordinates of the coronary centerline point and the actual vessel radius. As described above, we constructed two spheres: one centered on the current centerline point and the other centered on the predicted centerline point, using the parameters x, y, z, r, and step output by the path predictor. Subsequently, the sphere dice loss was calculated according to the positional relationship between the two spheres, which achieved outstanding performance in evaluating the deviation between the predicted direction and the actual direction.
The combined loss was the sum of the CE loss (Eq (2)) and the sphere dice loss (Eq (7)). Note that the diameter at the vessel end is small, so even a small positional deviation of the sphere easily results in a large loss there.
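To make Eqs (3)–(7) concrete, the sketch below computes the sphere dice loss for batched predictions in PyTorch. The handling of non-overlapping and fully enclosed spheres (the clamping and the two `torch.where` branches) is our addition, since Eq (7) only covers partial intersection; pairing the predicted sphere with the annotated sphere at the corresponding centerline point is our reading of the text. The combined training loss would then simply add the CE term of Eq (2) to this value.

```python
import math
import torch

def sphere_dice_loss(pred_center, pred_radius, gt_center, gt_radius, eps=1e-8):
    """Sphere dice loss (Eqs (3)-(7)): 1 - 2*V_intersect / (V_pred + V_gt).

    pred_center, gt_center : (B, 3) sphere centers (predicted vs. annotated point)
    pred_radius, gt_radius : (B,) predicted and annotated lumen radii
    """
    d = torch.norm(pred_center - gt_center, dim=1)                      # center distance
    r, rt = pred_radius, gt_radius

    # Spherical cap heights from Eqs (4) and (5): h_t on the r-sphere, h on the rt-sphere.
    h_t = torch.clamp(r - (r ** 2 + d ** 2 - rt ** 2) / (2 * d + eps), min=0.0)
    h   = torch.clamp(rt - (rt ** 2 + d ** 2 - r ** 2) / (2 * d + eps), min=0.0)

    # Intersection volume, Eq (6); forced to zero when the spheres do not overlap.
    v_cap = math.pi * h ** 2 * (rt - h / 3) + math.pi * h_t ** 2 * (r - h_t / 3)
    v_cap = torch.where(d >= r + rt, torch.zeros_like(v_cap), v_cap)
    # If one sphere lies fully inside the other, the intersection is the smaller sphere.
    v_min = (4.0 / 3.0) * math.pi * torch.minimum(r, rt) ** 3
    v_cap = torch.where(d <= (r - rt).abs(), v_min, v_cap)

    v_total = (4.0 / 3.0) * math.pi * (r ** 3 + rt ** 3)                # sum of the two volumes
    return (1.0 - 2.0 * v_cap / (v_total + eps)).mean()
```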
The dataset consisted of 18 CTA scans provided by The Second Affiliated Hospital of Zhejiang University School of Medicine; their resolution and voxel size are described in Table 1. We randomly picked 12 CTA scans for training and 6 CTA scans for testing. The requirement of written informed consent for participation was waived by the local ethics committee due to the retrospective nature of this study. The coronary centerlines of all images were manually annotated using Mimics 21.0 software (Materialise N.V., Belgium) and were corrected and confirmed by an experienced radiologist. The manually marked centerlines were then smoothed and the centerline points were resampled equidistantly. In addition, the CTA images were resampled to a uniform voxel spacing of 0.5 mm. Afterward, the CTA images were cropped into 32 × 32 × 32 patches centered at the centerline points. For data augmentation, we applied random offsets to the locations of the centerline points and over-sampled near bifurcations and vessel ends. As a result, 96,834 patches were extracted for the training set and 18,072 for the testing set; a code sketch of this preprocessing is given after Table 1.
CTA Data | Size (Voxels) | Resolution (mm) |
Train01 | (512,512,275) | (0.422, 0.422, 0.500) |
Train02 | (512,512,275) | (0.391, 0.391, 0.500) |
Train03 | (512,512,275) | (0.422, 0.422, 0.500) |
Train04 | (512,512,297) | (0.447, 0.447, 0.500) |
Train05 | (512,512,275) | (0.367, 0.367, 0.500) |
Train06 | (512,512,275) | (0.422, 0.422, 0.500) |
Train07 | (512,512,275) | (0.422, 0.422, 0.500) |
Train08 | (512,512,275) | (0.422, 0.422, 0.500) |
Train09 | (512,512,275) | (0.422, 0.422, 0.500) |
Train10 | (512,512,256) | (0.488, 0.488, 0.625) |
Train11 | (512,512,256) | (0.488, 0.488, 0.625) |
Train12 | (512,512,256) | (0.488, 0.488, 0.625) |
Test01 | (512,512,275) | (0.422, 0.422, 0.500) |
Test02 | (512,512,275) | (0.422, 0.422, 0.500) |
Test03 | (512,512,256) | (0.488, 0.488, 0.625) |
Test04 | (512,512,256) | (0.488, 0.488, 0.625) |
Test05 | (512,512,275) | (0.367, 0.367, 0.500) |
Test06 | (512,512,275) | (0.391, 0.391, 0.500) |
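The preprocessing and augmentation steps described before Table 1 could be sketched as follows. The interpolation order, the ±3-voxel offset range of the augmentation, and the axis ordering of the spacing are assumptions on our part; the 0.5 mm target spacing and the 32³ patch size follow the text.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_isotropic(volume, spacing, target=0.5):
    """Resample a CTA volume to uniform 0.5 mm voxel spacing (linear interpolation).
    `spacing` must list the voxel spacings in the same axis order as `volume`."""
    factors = [s / target for s in spacing]
    return zoom(volume, factors, order=1)

def extract_patch(volume, center_vox, size=32, max_offset=3):
    """Crop a 32^3 patch around a centerline point; the random offset of the center
    is the augmentation described above (off-centerline samples let the tracker
    learn to correct small drifts)."""
    c = np.asarray(center_vox) + np.random.randint(-max_offset, max_offset + 1, size=3)
    half = size // 2
    lo = np.clip(c - half, 0, np.array(volume.shape) - size)   # keep the patch inside the volume
    return volume[lo[0]:lo[0] + size, lo[1]:lo[1] + size, lo[2]:lo[2] + size]
```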
We conducted the experiments on an NVIDIA GeForce GTX 1660 GPU. The algorithm was implemented in Python using PyTorch, an open-source deep neural network library. The continuous centerline extraction started from coronary ostia. The angle between the predicted direction vector and the previous direction vector was computed to prevent the following tracking path from overlapping with the already extracted centerline.
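The backtracking check mentioned above, comparing the predicted direction with the previous one, might look like the sketch below; the 90° turning threshold is an assumed value, not reported in the paper.

```python
import numpy as np

def angle_deg(v1, v2):
    """Angle between two direction vectors in degrees."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def accept_direction(prev_dir, new_dir, max_turn_deg=90.0):
    """Reject a predicted direction that folds back onto the already-extracted path."""
    return prev_dir is None or angle_deg(prev_dir, new_dir) < max_turn_deg
```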
To verify the effectiveness of the loss function, we applied the smooth L1 loss function and the sphere dice loss function, respectively, to calculate the regression loss, as shown in Figure 3. The loss curve using the smooth L1 loss oscillates noticeably and is difficult to converge. This is probably because the smooth L1 loss calculates the deviations in the x, y, and z directions separately. As the coronary artery structure is complex, the deviations in the three directions are uncertain, resulting in poor performance. Additionally, coronary artery segments of different sizes have different sensitivities to deviation, which increases the difficulty of model convergence. Compared with the smooth L1 loss, the sphere dice loss combines the deviations in the three directions and relates them to the lumen radius, so it adapts effectively to the complex coronary artery structure. As expected, the loss curve with the sphere dice loss converges quickly and smoothly.
We evaluated the accuracy of centerline extraction by adopting the rules of the Rotterdam evaluation framework [22]. Four evaluation metrics were measured: overlap (OV), overlap until first error (OF), overlap with the clinically relevant vessel (OT), and average distance inside the vessel (AI), which are computed from the true positive points of the method (TPM) and of the reference (TPR). A TPR point is a point of the reference centerline whose annotated radius is no less than its distance to at least one connected point on the extracted centerline. A TPM point is a point of the extracted centerline that lies within the annotated radius of at least one connected point on the reference centerline. OV measures the completeness of the predicted coronary artery centerline. OF reflects the tracking ability of the model before it makes its first error. OT indicates the degree of overlap between the extracted centerline and the clinically relevant part of the centerline (radius ≥ 0.75 mm). AI is an accuracy index that measures the average distance between points on the predicted centerline and corresponding points on the reference centerline, for points lying within the radius of the reference centerline.
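For reference, a simplified OV computation from TPM/TPR labels could look like the following sketch; it ignores the point-connectivity requirement of the Rotterdam framework and is only meant to illustrate how the counts combine.

```python
import numpy as np

def overlap_ov(extracted, reference, ref_radius):
    """Simplified overlap (OV): true positives on both centerlines over all points.

    extracted  : (M, 3) extracted centerline points
    reference  : (N, 3) reference centerline points
    ref_radius : (N,)   annotated lumen radius at each reference point
    """
    extracted, reference = np.asarray(extracted), np.asarray(reference)
    # TPR: reference points lying within their own radius of some extracted point.
    d_ref = np.min(np.linalg.norm(reference[:, None, :] - extracted[None, :, :], axis=2), axis=1)
    tpr = d_ref <= ref_radius
    # TPM: extracted points lying within the radius of the nearest reference point.
    d_ext = np.linalg.norm(extracted[:, None, :] - reference[None, :, :], axis=2)
    nearest = np.argmin(d_ext, axis=1)
    tpm = d_ext[np.arange(len(extracted)), nearest] <= ref_radius[nearest]
    # OV = (TPM + TPR) / (TPM + TPR + FN + FP)
    return (tpr.sum() + tpm.sum()) / (len(reference) + len(extracted))
```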
Table 2 shows the overlap and accuracy results for centerline extraction on all testing datasets. For each dataset, the OV, OF, OT, and AI results are listed. In terms of overlap, our method obtained an average OV of 89.19%, an average OF of 82.30%, and an average OT of 91.42%. In terms of accuracy, our method obtained an average AI of 0.32 mm. We also found that errors mainly occurred at bifurcations or in sharply curved coronary arteries.
Dataset | OV (%) | OF (%) | OT (%) | AI (mm) |
Test01 | 91.87 | 87.41 | 90.82 | 0.32 |
Test02 | 94.53 | 90.47 | 95.18 | 0.26 |
Test03 | 88.64 | 89.58 | 94.55 | 0.28 |
Test04 | 82.67 | 63.45 | 86.84 | 0.40 |
Test05 | 85.04 | 73.72 | 87.93 | 0.36 |
Test06 | 92.36 | 89.18 | 93.21 | 0.29 |
Average | 89.19 | 82.30 | 91.42 | 0.32 |
To analyze the advantages of our proposed method, we conducted ablation experiments under three conditions: (i) removing the spatial attention module and replacing it with a basic convolutional layer; (ii) removing the global average pooling layer in the path predictor; (iii) removing both the spatial attention module and the global average pooling layer in the path predictor. The average OV, OF, OT, and AI results are listed in Table 3. Significant decreases in performance were observed in all ablation settings compared with the proposed network, which shows that the feature integration designed for each output task is effective.
Methods | Av.OV (%) | Av.OF (%) | Av.OT (%) | AI (mm) |
proposed network | 89.19 | 82.30 | 91.42 | 0.32 |
-Spatial Attention | 85.83 | 72.67 | 90.55 | 0.36 |
-Avg Pool | 88.26 | 75.41 | 89.97 | 0.39 |
-Spatial Attention & Avg Pool | 84.45 | 70.86 | 89.23 | 0.38 |
According to the guidelines of the American Heart Association (AHA) [23], coronary arteries with a diameter ≥ 1.5 mm can be divided into 17 segments. Figure 4 shows the sensitivity of the proposed method for each of these 17 segments in the testing datasets. The testing sensitivities for the right coronary artery (RCA), left anterior descending artery (LAD), left circumflex artery (LCX), and right and left posterior descending arteries were 83.3–100%, while that for the second obtuse marginal branch was 66.7%.
Table 4 compares our proposed method with the CNNTracker approach [16]. We reproduced the CNNTracker approach on our dataset, and its performance was not ideal, possibly because of the small size of our dataset. Nevertheless, the results show that our method improves the sensitivity of detecting distal coronary artery segments, especially marginal branches, the left posterior descending branch, and other small vessels. Consequently, our proposed method has advantages in extracting more complete coronary artery structures.
Segment | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 |
CNNTracker | 1 | 1 | 1 | 0.9 | 1 | 1 | 1 | 0.9 | 0.9 | 0.8 | 1 | 0.8 | 0.9 | 0.6 | 0.7 | 0.7 | - |
proposed method | 1 | 1 | 1 | 0.8 | 1 | 1 | 1 | 0.8 | 1 ↑ | 0.8 | 1 | 1 ↑ | 1 ↑ | 0.6 | 0.8 ↑ | 1 ↑ | 0.8 |
Beyond the 17 reference segments, our method can also detect acute marginal branches and conus artery branches, clearly reflecting its strength. As shown in Figure 5, the green and red coronary arteries both start from the coronary ostium and are tracked to the end of the blood vessel. The computed coronary artery centerlines were clearly longer than those obtained by manual annotation alone.
Figure 6 provides a visual comparison between the results of manual annotation (Figure 6(a)) and our approach (Figure 6(b)), which verifies the effectiveness of our approach in predicting distal vessel branches without annotations. Additionally, our approach was able to successfully extract coronary artery centerlines in cases with intravascular stents (Figure 6(c)) or coronary artery calcifications (Figure 6(d)).
In this paper, we have presented a new method for robustly and continuously tracking coronary arteries using a regression-based network. The proposed network first uses a CNN module to extract CTA image features, which are then fed to the branch classifier and the path predictor, where the direction of the coronary centerline and the lumen radius are identified. Previous centerline extraction methods based on direction classification have certain limitations [16,17,18]. If there are too few sampling directions, the direction deviation accumulates during continuous tracking, and tracking stops when a large deviation occurs at the distal coronary artery; this makes centerline extraction of small vessels difficult. Conversely, dense sampling directions tend to cause an imbalance between positive and negative samples in direction classification, resulting in extraction failure. Moreover, the direction classification method can only predict a single direction, so additional seed points must be trained to deal with multi-branch problems. Due to insufficient learning of effective information at bifurcations, some extracted centerlines of small coronary vessels cannot cross the bifurcation and instead track back to the coronary artery ostia. In our proposed network, by contrast, the branch classifier handles multi-branch problems and provides a termination criterion for continuous extraction. Moreover, the path predictor learns the transformation from the network output to the actual centerline direction based on the regression method, significantly improving the accuracy of direction prediction. Further, the proposed loss function comprehensively considers the relationship between the coordinates and the lumen radius, which adapts to tortuous coronary structures and enhances the ability to track small coronary vessels.
We conducted tests using 6 CTA images to validate the reliability of the proposed network. The testing results demonstrated that the suggested method can extract the whole coronary artery tree, with a high performance of OV 89.19%, OF 82.30%, and OT 91.42%. While the manually annotated centerlines cannot reach the most distal points of the coronary arteries due to the presence of nearby myocardium and adjacent coronary veins, our method allows more complete tracking of coronary artery structures. There is evidence that distal coronary arteries and small vessels also play an important role in the coronary circulation and can lead to myocardial ischemia [24]. Therefore, extracting distal coronary arteries and small vessels could provide more comprehensive information for clinical diagnosis or intervention. However, our method still requires a starting point, whose coordinate must be set manually at the coronary artery ostium. In future work, interactive operations, such as manually adding starting points for missing branches, can be considered when the tracker drifts from the centerline or the extraction is incomplete.
The hyper-parameters directly affect computational complexity and method performance and should be tuned for effectiveness. During the hyper-parameter exploration, we trained the CNN models for 200 epochs with an initial learning rate of 2e-5 and a decay rate of 0.8 every 50 epochs. The model was trained using the Adam optimizer with a batch size of 32, with which the model achieved better performance according to the accuracy metrics (OV, OF, OT) and the visualization results. After experimenting with various batch sizes, we found that the training curve oscillated when the batch size was too small, which may be due to the individual coronary artery structures and variable vessel sizes. A larger batch size made gradient descent too slow, possibly because averaging the loss over large amounts of data reduced the learning efficiency. Consequently, a batch of 32 patches was found suitable, as it could reflect conditions such as different numbers of branches, various coronary artery sizes, and the presence or absence of deviation.
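A minimal training-loop sketch matching the reported hyper-parameters is given below; `model`, `train_loader`, and `combined_loss` are placeholders for the network, the data pipeline, and the CE + sphere dice loss described earlier, not names from the authors' code.

```python
import torch

def train(model, train_loader, combined_loss, epochs=200):
    """Training loop matching the reported settings: Adam, initial learning rate 2e-5,
    decayed by a factor of 0.8 every 50 epochs, 200 epochs; the batch size of 32 is
    assumed to be set in the DataLoader."""
    optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.8)
    for _ in range(epochs):
        for patches, targets in train_loader:          # batches of CTA patches and labels
            optimizer.zero_grad()
            loss = combined_loss(model(patches), targets)
            loss.backward()
            optimizer.step()
        scheduler.step()                               # decay the learning rate every 50 epochs
```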
Despite the promising results in extracting coronary centerlines, our proposed framework suffers from a bias caused by the inconsistency between the training and inference processes. Specifically, at training time, the input to the path prediction network is a CT patch and the output is the set of parameters (x, y, z, r, and step) of the predicted centerline point; the training process learns features and optimizes the model from the image data. At inference, however, the trained model is applied repeatedly to its own predictions until a termination criterion is fulfilled. Hence, as the tracked path grows, errors accumulate. To address this problem, this work uses data augmentation that exposes the network to off-centerline points, allowing the tracker to correct itself. However, considering that the coronary artery has delicate tubular structures and complex tree shapes, continuous path tracking remains challenging because the contextual information of the individual patches is independent. In this regard, some recent studies have incorporated the centerline topological structure. For example, Gao et al. [25] constructed a hybrid model of CNNTracker and a graph convolutional network (GCN), with the advantage of combining coronary centerline extraction and lumen segmentation; this approach achieved state-of-the-art performance in capturing the geometry of the RCA, LAD, and LCX. Jeon et al. [26] presented a deep recursive Bayesian tracking method for extracting the main coronary artery centerlines, creatively detecting branching sites by adopting a particle filter; their method performed more effectively when extracting coronary arteries with sharp curvature. However, extraction combined with topological structure is complicated and needs large amounts of training data. In the future, with more powerful computers and large-scale data, the topological structure can be considered to improve the accuracy of coronary centerline extraction.
Furthermore, our proposed method still has some limitations. First, all cases in this work had nonsignificant coronary stenosis, with a fractional flow reserve (FFR) greater than 0.80 [27,28]. It is difficult to extract coronary centerlines in the presence of severe epicardial stenosis (diameter stenosis [DS] ≥ 70%) [29]; hence, a multi-scale model would be an interesting exploration in the future. Second, a major limitation is the low number of cases included in this study. Generally, the CTA examination is more suitable for early screening of CAD and for the prognosis of interventional treatment. A large number of patients with nonsignificant coronary stenosis have not received sufficient attention and complete examination in clinical practice; therefore, it is difficult to collect large amounts of data in a short period. In addition, to more accurately identify small coronary vessels, we used 256-slice spiral CT images in this study, whose high cost also increased the difficulty of data collection. In future studies, large-scale experiments can be considered to improve the accuracy of coronary centerline extraction and to optimize our method for severe coronary artery stenosis. However, a large training set requires time-consuming manual annotation. In this case, semi-supervised methods have recently been used in medical image analysis, which can effectively address the insufficient-data problem and have achieved significant performance [30,31,32,33]. Accordingly, our approach can be further investigated by combining it with semi-supervised methods.
In summary, we proposed a regression-based tracking method for coronary artery centerline extraction. Our approach can effectively deal with multi-branch problems and extract more distal coronary arteries. Future work can combine multi-scale models or semi-supervised methods for improvement, powerfully assisting CAD diagnosis.
This work is supported by the Natural Science Foundation of China (NSFC) under grant number 62171408, and the Key Research and Development Program of Zhejiang Province (2020C03060, 2020C03016, 2022C03111).
The authors declare there is no conflict of interest.
[1] | World Health Organization, The top 10 causes of death, 2020. Available from: https://www.who.int/en/news-room/fact-sheets/detail/the-top-10-causes-of-death. |
[2] | F. Cademartiri, L. La Grutta, A. Palumbo, P. Malagutti, F. Pugliese, W. B. Meijboom, et al., Non-invasive visualization of coronary atherosclerosis: state-of-art, J. Cardiovasc. Med., 8 (2007), 129–137. https://doi.org/10.2459/01.JCM.0000260820.40145.a8 |
[3] | G. Mowatt, E. Cummins, N. Waugh, S. Walker, J. Cook, X. Jia, et al., Systematic review of the clinical effectiveness and cost-effectiveness of 64-slice or higher computed tomography angiography as an alternative to invasive coronary angiography in the investigation of coronary artery disease, Health Technol. Assess., 12 (2008), 3–143. https://doi.org/10.3310/hta12170 |
[4] | A. W. Leber, A. Becker, A. Knez, F. von Ziegler, M. Sirol, K. Nikolaou, et al., Accuracy of 64-slice computed tomography to classify and quantify plaque volumes in the proximal coronary system—A comparative study using intravascular ultrasound, J. Am. Coll. Cardiol., 47 (2006), 672–677. https://doi.org/10.1016/j.jacc.2005.10.058 |
[5] | A. Kanitsar, D. Fleischmann, R. Wegenkittl, P. Felkel, M. E. Gröller, CPR-curved planar reformation, in IEEE Visualization, 2002. VIS 2002, (2002), 37–44. https://doi.org/10.1109/VISUAL.2002.1183754 |
[6] | H. S. Hecht, Applications of multislice coronary computed tomographic angiography to percutaneous coronary intervention: How did we ever do without it, Catheterization Cardiovasc. Interventions, 71 (2008), 490–503. https://doi.org/10.1002/ccd.21427 |
[7] | M. F. Khan, S. Wesarg, J. Gurung, S. Dogan, A. Maataoui, B. Brehmer, et al., Facilitating coronary artery evaluation in MDCT using a 3D automatic vessel segmentation tool, Eur. Radiol., 16 (2006), 1789–1795. https://doi.org/10.1007/s00330-006-0159-8 |
[8] | J. S. Shinbane, S. S. Mao, M. J. Girsky, R. J. Oudiz, S. Carson, J. Child, et al., Computed tomographic angiography can define three-dimensional relationships between coronary veins and coronary arteries relevant to coronary venous procedures, Circulation, 110 (2004), 702. |
[9] | Y. Shen, Y. Gao, P. Zhang, W. Yu, S. Zhou, B. Li, et al., Research on CCTA coronary lumen segmentation based on Polar1DMLP model, CT Theory Appl. Res., 29 (2020), 631–642. https://doi.org/10.15953/j.1004-4140.2020.29.06.01 |
[10] | M. Umer, S. Sharma, P. Rattan, A survey of deep learning models for medical image analysis, in 2021 International Conference on Computing Sciences (ICCS), (2021), 65–69. https://doi.org/10.1109/iccs54944.2021.00021 |
[11] | W. Huang, L. Huang, Z. Lin, S. Huang, Y. Chi, J. Zhou, et al., Coronary artery segmentation by deep learning neural networks on computed tomographic coronary angiographic images, in 2018 40th Annual international conference of the IEEE engineering in medicine and biology society (EMBC), (2018), 608–611. https://doi.org/10.1109/embc.2018.8512328 |
[12] | F. Isensee, P. F. Jaeger, S. A. A. Kohl, J. Petersen, K. H. Maier-Hein, nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation, Nat. Methods, 18 (2021), 203–211. https://doi.org/10.1038/s41592-020-01008-z |
[13] | D. Jia, X. Zhuang, Learning-based algorithms for vessel tracking: A review, Comput. Med. Imaging Graphics, 89 (2021), 101840. https://doi.org/10.1016/j.compmedimag.2020.101840 |
[14] | A. Dorobanțiu, V. Ogrean, R. Brad, Coronary centerline extraction from CCTA using 3D-UNet, Future Int., 13 (2021). https://doi.org/10.3390/fi13040101 |
[15] | S. Rjiba, T. Urruty, P. Bourdon, C. Fernandez-Maloigne, R. Delepaule, L. P. Christiaens, et al., CenterlineNet: Automatic coronary artery centerline extraction for computed tomographic angiographic images using convolutional neural network architectures, in 2020 Tenth International Conference on Image Processing Theory, Tools and Applications (IPTA), (2020), 1–6. https://doi.org/10.1109/IPTA50016.2020.9286458 |
[16] | J. M. Wolterink, R. W. van Hamersvelt, M. A. Viergever, T. Leiner, I. Išgum, Coronary artery centerline extraction in cardiac CT angiography using a CNN-based orientation classifier, Med. Image Anal., 51 (2019), 46–60. https://doi.org/10.1016/j.media.2018.10.005 |
[17] | A. Mostafa, A. M. Ghanem, M. El-Shatoury, T. Basha, Improved centerline extraction in fully automated coronary ostium localization and centerline extraction framework using deep learning, in 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), (2021), 3846–3849. https://doi.org/10.1109/EMBC46164.2021.9629655 |
[18] | H. Yang, J. Chen, Y. Chi, X. Xie, X. Hua, Discriminative coronary artery tracking via 3D CNN in cardiac CT angiography, in International Conference on Medical Image Computing and Computer-assisted Intervention, 11765 (2019), 468–476. https://doi.org/10.1007/978-3-030-32245-8_52 |
[19] | R. Girshick, Fast R-CNN, in 2015 IEEE International Conference on Computer Vision, (2015), 1440–1448. https://doi.org/10.1109/iccv.2015.1697 |
[20] | C. Zhou, H. P. Chan, A. Chughtai, S. Patel, L. M. Hadjiiski, J. Wei, et al., Automated coronary artery tree extraction in coronary CT angiography using a multiscale enhancement and dynamic balloon tracking (MSCAR-DBT) method, Comput. Med. Imaging Graphics, 36 (2012), 1–10. https://doi.org/10.1016/j.compmedimag.2011.04.001 |
[21] | D. Han, H. Shim, B. Jeon, Y. Jang, Y. Hong, S. Jung, et al., Automatic coronary artery segmentation using active search for branches and seemingly disconnected vessel segments from coronary CT angiography, PLoS One, 11 (2016). https://doi.org/10.1371/journal.pone.0156837 |
[22] | M. Schaap, C. T. Metz, T. van Walsum, A. G. van der Giessen, A. C. Weustink, N. R. Mollet, et al., Standardized evaluation methodology and reference database for evaluating coronary artery centerline extraction algorithms, Med. Image Anal., 13 (2009), 701–714. https://doi.org/10.1016/j.media.2009.06.003 |
[23] | W. G. Austen, J. E. Edwards, R. L. Frye, G. G. Gensini, V. L. Gott, L. S. Griffith, et al., A reporting system on patients evaluated for coronary artery disease, Report of the Ad Hoc Committee for Grading of Coronary Artery Disease, Council on Cardiovascular Surgery, American Heart Association, Circulation, 51 (1975), 5–40. https://doi.org/10.1161/01.cir.51.4.5 |
[24] | Y. Geng, H. Liu, X. Wang, J. Zhang, Y. Gong, D. Zheng, et al., Effect of microcirculatory dysfunction on coronary hemodynamics: A pilot study based on computational fluid dynamics simulation, Comput. Biol. Med., 146 (2022), 105583. https://doi.org/10.1016/j.compbiomed.2022.105583 |
[25] | R. Gao, Z. Hou, J. Li, H. Han, B. Lu, S. K. Zhou, Joint coronary centerline extraction and lumen segmentation from CCTA using CNNTracker and vascular graph convolutional network, in 2021 IEEE 18th International Symposium on Biomedical Imaging, (2021), 1897–1901. https://doi.org/10.1109/isbi48211.2021.9433764 |
[26] | B. Jeon, Deep recursive bayesian tracking for fully automatic centerline extraction of coronary arteries in CT images, Sensors, 21 (2021), 6087. https://doi.org/10.3390/s21186087 |
[27] | P. A. L. Tonino, B. De Bruyne, N. H. J. Pijls, U. Siebert, F. Ikeno, M. van't Veer, et al., Fractional flow reserve versus angiography for guiding percutaneous coronary intervention, N. Engl. J. Med., 360 (2009), 213–224. https://doi.org/10.1056/NEJMoa0807611 |
[28] | N. H. J. Pijls, B. de Bruyne, K. Peels, P. H. van der Voort, H. J. R. M. Bonnier, J. Bartunek, et al., Measurement of fractional flow reserve to assess the functional severity of coronary-artery stenoses, N. Engl. J. Med., 334 (1996), 1703–1708. https://doi.org/10.1056/nejm199606273342604 |
[29] | J. Knuuti, W. Wijns, A. Saraste, D. Capodanno, E. Barbato, C. Funck-Brentano, et al., 2019 ESC guidelines for the diagnosis and management of chronic coronary syndromes: the task force for the diagnosis and management of chronic coronary syndromes of the European Society of Cardiology (ESC), Eur. Heart J., 41 (2020), 407–477. https://doi.org/10.1093/eurheartj/ehz425 |
[30] | L. Yu, S. Wang, X. Li, C. W. Fu, P. A. Heng, Uncertainty-aware self-ensembling model for semi-supervised 3D left atrium segmentation, in International Conference on Medical Image Computing and Computer-assisted Intervention, (2019), 605–613. https://doi.org/10.1007/978-3-030-32245-8_67 |
[31] | D. Nie, Y. Gao, L. Wang, D. Shen, ASDNet: Attention based semi-supervised deep networks for medical image segmentation, in International Conference on Medical Image Computing and Computer-assisted Intervention, (2018), 370–378. https://doi.org/10.1007/978-3-030-00937-3_43 |
[32] | A. Madani, J. R. Ong, A. Tibrewal, M. R. K. Mofrad, Deep echocardiography: data-efficient supervised and semi-supervised deep learning towards automated diagnosis of cardiac disease, NPJ Digital Med., 1 (2018), 59. https://doi.org/10.1038/s41746-018-0065-x |
[33] | V. Cheplygina, M. de Bruijne, J. P. W. Pluim, Not-so-supervised: A survey of semi-supervised, multi-instance, and transfer learning in medical image analysis, Med. Image Anal., 54 (2019), 280–296. https://doi.org/10.1016/j.media.2019.03.009 |