
With the increase of risk factors such as cesarean section and abortion, placenta accreta spectrum (PAS) disorder is occurring more frequently year by year, so prenatal prediction of PAS is of crucial practical significance. Magnetic resonance imaging (MRI) quality is not affected by fetal position, maternal size, amniotic fluid volume, etc., and MRI has gradually become an important means of prenatal PAS diagnosis. In clinical practice, T2-weighted imaging (T2WI) magnetic resonance (MR) images reflect the placental signal and T1-weighted imaging (T1WI) MR images reflect bleeding; both play a key role in the diagnosis of PAS. However, it is difficult for traditional MR image analysis methods to extract multi-sequence MR image features simultaneously and to weight them according to their importance for PAS prediction. To address this problem, we propose a dual-path neural network fused with a multi-head attention module to detect PAS. The model first uses a dual-path neural network to extract T2WI and T1WI MR image features separately and then combines these features. The multi-head attention module learns multiple attention weights to focus on different aspects of the placental image and generate highly discriminative final features. Experimental results on the dataset we constructed demonstrate that the proposed method outperforms state-of-the-art techniques in the prenatal diagnosis of PAS. Specifically, the trained model achieves 88.6% accuracy and an 89.9% F1-score on the independent validation set, a clear advantage over methods that use only a single MR sequence.
Citation: Jian Xu, Qian Shao, Ruo Chen, Rongrong Xuan, Haibing Mei, Yutao Wang. A dual-path neural network fusing dual-sequence magnetic resonance image features for detection of placenta accrete spectrum (PAS) disorder[J]. Mathematical Biosciences and Engineering, 2022, 19(6): 5564-5575. doi: 10.3934/mbe.2022260
Placenta accreta spectrum (PAS) disorder is defined as the abnormal invasion of trophoblast cells into the myometrium at varying depths of infiltration. It occurs mainly in patients with placenta previa or a previous cesarean section [1]. Common complications of PAS include catastrophic perinatal hemorrhage and injury to the bladder, rectum and urethra [2]. There are many high-risk factors for PAS, including a history of cesarean section, placenta previa, multiple miscarriages and curettages, a history of other uterine surgery such as myomectomy, and advanced maternal age [3,4]. With the increase of risk factors such as cesarean section and abortion, the incidence of PAS is rising year by year [5,6,7,8]. China has a relatively high cesarean section rate [9], and with the implementation of the three-child policy, late marriage and childbearing have increased, so the incidence of PAS in China can be expected to rise further. Therefore, prenatal prediction of PAS is of important practical significance.
Traditional MRI-based detection of PAS generally involves three consecutive steps: region-of-interest segmentation, image feature extraction and PAS detection [10]. A recent study showed that experienced radiologists performed significantly better than junior radiologists (90.9% sensitivity and 75% specificity for senior attending physicians versus 81.8% sensitivity and 61.8% specificity for junior attending physicians) [11]. To reduce the reliance on physicians' clinical experience and improve the diagnostic level of PAS, several scoring systems for the diagnosis of placental invasion have been proposed in recent years, but they have not been widely validated [12]. Some scholars [13,14] have used machine learning methods to detect PAS based on radiomics features and clinical factors (e.g., scarred uterus, history of cesarean section, history of miscarriage). A few studies [15] have used deep neural networks to learn powerful visual representations from MR images to predict PAS. Nevertheless, MRI-based detection of PAS remains a challenging task, for several reasons:
(1) Radiomics features are explicitly designed or handcrafted. Although the number of such features can reach tens of thousands, they are shallow, low-level image features. The heterogeneity within the placenta and the relationship between the placenta and adjacent tissues may therefore not be fully characterized, which limits the predictive potential of the model.
(2) In clinical practice, doctors usually diagnose PAS based on the placental signal reflected by T2WI MR images, supplemented by the bleeding reflected by T1WI MR images. It is difficult for traditional MR image analysis methods to extract features from multiple MR sequences simultaneously and to weight them according to their significance for PAS prediction. Figure 1 shows T2WI and T1WI MRI slices of a patient with PAS at the same location.
Figure 1(a) is a sagittal T2WI view showing PAS in the posterior-inferior part of the placenta (white arrow), with a low-signal band within the placenta (red arrow); Figure 1(b) is a sagittal T1WI view showing a mass of hyperintense hemorrhage within the placenta (white arrow).
To solve the above problems, we propose a model comprising a dual-path neural network and a multi-head attention module. The dual-path neural network extracts T2WI and T1WI MR image features. Specifically, low-intensity bands or small patches within the placenta on T2WI sequences may indicate PAS, and intraplacental hemorrhage, which usually appears on T1WI MR images as patchy, slightly hyperintense signal, may also suggest PAS. The multi-head attention module learns to assign different weights to the features of the different sequences and fuses them, so as to better measure the importance of each sequence's features.
The main contributions of this paper are: (1) a dual-path neural network designed to extract features from the T2WI and T1WI sequences of MR images; (2) a multi-head attention module that learns weights for different features to generate more discriminative final features. Experimental results on an independent validation set show that the detection accuracy of our method is superior to that of methods using only a single MR sequence. Comparative experiments also demonstrate the effectiveness of the proposed multi-head attention module.
In this section, we discuss the work most closely related to ours: detection of PAS based on MRI. MRI is less affected by intestinal gas and bone, offers high tissue resolution and supports multiplanar imaging at any angle, so it is especially recommended for cases with a posterior placenta, unclear ultrasound results and/or high clinical suspicion [16,17,18]. In practice, MR images provide important information for doctors to predict and diagnose the type of PAS, and many recent studies have reported promising performance. These algorithms typically use hand-designed or manually measured features to detect PAS. For example, Zheng et al. [19] diagnosed PAS using imaging features observed on MR images, such as whether there is placenta previa and whether the placenta is thickened. However, manually designing and measuring features is time-consuming and labor-intensive. To overcome this difficulty, several radiomics- or deep learning-based methods have been proposed to extract MRI features automatically. For example, Romeo et al. [13] used a machine learning algorithm to predict the type of PAS based on radiomics features, and Li et al. [15] used an auto-encoding network to extract MR image features for the same purpose. However, most current studies detect PAS from a single MR sequence, whereas in clinical practice doctors usually diagnose PAS from the placental signal on T2WI supplemented by the bleeding on T1WI. Therefore, we propose a dual-path neural network fused with a multi-head attention module: the dual-path network extracts T2WI and T1WI MR image features separately and combines them, and the multi-head attention module learns multiple attention weights that focus on different aspects of the placental image to generate discriminative final features.
This retrospective study was approved by the Ethics Committee of The Affiliated Hospital of Medical College of Ningbo University, and all patient data were de-identified to protect privacy. The MR images were collected from The Affiliated Hospital of Medical College of Ningbo University and Ningbo Women & Children's Hospital between January 2018 and May 2021.
All MRI examinations were performed by radiologists with more than 5 years of work experience on 1.5 Tesla units using 8- or 16-channel array sensitivity-encoded abdominal coils. The imaging equipment of The Affiliated Hospital of Medical College of Ningbo University is a GE Signa TwinSpeed 1.5T superconducting dual-gradient magnetic resonance scanner with an 8-channel body phased-array coil; that of Ningbo Women & Children's Hospital is a Philips Achieva Nova Dual 1.5T superconducting dual-gradient magnetic resonance scanner with a 16-channel body phased-array coil. In this study, we chose supine sagittal images of the conventional T2WI and T1WI sequences as the experimental data (lateral decubitus imaging is prone to wrap-around artifacts due to the bulge of the abdomen).
The inclusion criteria are as follows: (1) patients who underwent T2WI and T1WI MRI after 30 weeks of gestation; (2) patients with clear records of placental invasion or pathology after cesarean section; (3) good image quality. The exclusion criteria are as follows: (1) patients without T2WI or T1WI MRI data; (2) patients with mismatched numbers of T2WI and T1WI slices; (3) patients without clinical or surgical-pathological confirmation; (4) patients with severe image artifacts.
Based on the above criteria, we collected a total of 321 cases: 142 normal cases and 179 cases of PAS (accreta, increta and percreta). The degree of invasion in all patients was determined from surgical findings, intraoperative diagnosis and pathological examination. Table 1 shows the distribution of our dataset.
| | Normal | Accreta | Increta | Percreta |
| --- | --- | --- | --- | --- |
| Jan. 2018 ~ Dec. 2019 | 86 | 17 | 114 | 8 |
| Jan. 2020 ~ May 2021 | 56 | 14 | 22 | 4 |
| Total | 142 | 31 | 136 | 12 |
To extract features from dual-sequence MR images, we designed a dual-path neural network consisting of two independent backbone networks whose extracted features are finally combined. Taking ResNet-50 [20] as the backbone network as an example, the structure of the dual-path neural network is shown in Figure 2.
As shown in Figure 2, the dual-path neural network consists of two independent backbone networks. The ResNet-50 backbone comprises 5 convolution blocks; the last layer of each block outputs a feature map of a specific scale as the input to the next block, and each block's output feature map is half the size of the previous block's output. The final output of each backbone network is a 128-dimensional feature vector, and the outputs of the two backbones are concatenated to form the combined features.
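As a rough illustration, the following PyTorch sketch shows one way to build such a dual-path extractor. The torchvision ResNet-50 is used as the trunk; the 128-dimensional projection head, the replication of single-channel slices to three channels, and all names are our own assumptions, since the paper does not publish code.

```python
# A minimal sketch of the dual-path feature extractor described above.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class Backbone(nn.Module):
    """ResNet-50 trunk followed by a 128-d projection head (assumed head size)."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        base = resnet50(weights=None)
        self.trunk = nn.Sequential(*list(base.children())[:-1])  # keep up to global avg-pool
        self.proj = nn.Linear(base.fc.in_features, feat_dim)     # 2048 -> 128

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.trunk(x).flatten(1))

class DualPathNet(nn.Module):
    """Two independent backbones; their outputs are concatenated (Figure 2)."""
    def __init__(self):
        super().__init__()
        self.t2_path = Backbone()  # path for T2WI slices
        self.t1_path = Backbone()  # path for T1WI slices

    def forward(self, t2: torch.Tensor, t1: torch.Tensor) -> torch.Tensor:
        return torch.cat([self.t2_path(t2), self.t1_path(t1)], dim=1)

model = DualPathNet()
t2 = torch.randn(2, 3, 256, 256)  # grayscale slices replicated to 3 channels (assumption)
t1 = torch.randn(2, 3, 256, 256)
combined = model(t2, t1)          # shape (2, 256): the combined feature
```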
The dual-path neural network gives us combined features that fuse the two MR sequences. To assign different weights to different features and fuse them further, we propose a multi-head attention module. The output of an attention unit usually focuses on a specific part of the image, such as intraplacental heterogeneity in T2WI MR images or hemorrhage in T1WI MR images. A single attention unit can usually reflect only one aspect of an image. Since the MRI data contain multiple sequences, it may be more effective to let multiple attention units focus on different features in different sequences and jointly describe the images of an entire case. Therefore, to represent multiple aspects of an image, multiple attention units are needed; they all attend to the same input, but their parameters are independent of one another.
Based on these ideas, we propose a multi-head attention module that learns multiple sets of parameters to focus on different aspects of the final feature, as shown in Figure 3.
Specifically, multiple pairs of scalars are learned: within each pair, the first scalar scales the global feature linearly and the second acts as a bias term. This process can be expressed as:
$$V_i = \omega_i \cdot T + b_i \quad (1)$$
where $\omega_i$ and $b_i$ are the scalar parameter pair learned by the $i$th attention unit and $T$ is the combined feature. After obtaining the output feature $V_i$ of each attention unit, L2 normalization is applied to it; the features from all units are then concatenated and, after a nonlinear transformation, used as the output of the multi-head attention module. This process can be expressed as:
$$V = \mathrm{ReLU}(V_1 \oplus V_2 \oplus \cdots \oplus V_N) \quad (2)$$
where $V$ is the final feature, $\oplus$ denotes the concatenation operation, and the nonlinear transformation uses the ReLU activation function. In the experimental section, we compare the detection accuracy of attention modules with different numbers of heads.
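A minimal sketch of this module, directly implementing Eqs (1) and (2) under the assumption that the combined feature $T$ is the 256-dimensional vector from the dual-path network and that each head learns one scalar pair $(\omega_i, b_i)$; the final linear classifier is also an assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadAttentionModule(nn.Module):
    """Sketch of Eqs (1)-(2): N scalar pairs (ω_i, b_i), L2-normalized heads,
    concatenation, ReLU, then an assumed 2-class linear head."""
    def __init__(self, feat_dim: int = 256, num_heads: int = 8, num_classes: int = 2):
        super().__init__()
        self.omega = nn.Parameter(torch.ones(num_heads))   # ω_i, per-head scale
        self.bias = nn.Parameter(torch.zeros(num_heads))   # b_i, per-head bias
        self.classifier = nn.Linear(feat_dim * num_heads, num_classes)

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # Eq (1) plus per-head L2 normalization: V_i = normalize(ω_i * T + b_i)
        heads = [F.normalize(w * t + b, p=2, dim=1)
                 for w, b in zip(self.omega, self.bias)]
        v = F.relu(torch.cat(heads, dim=1))  # Eq (2): V = ReLU(V_1 ⊕ ... ⊕ V_N)
        return self.classifier(v)

module = MultiHeadAttentionModule()
logits = module(torch.randn(4, 256))  # 4 combined features -> (4, 2) logits
```

With 8 heads and a 256-dimensional input, the classifier in this sketch sees a 2048-dimensional concatenated feature.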
The dataset we collected includes 142 normal cases and 179 cases of PAS (31 accreta, 136 increta and 12 percreta). The T2WI and T1WI sequences of each case contain an equal number of slices (24–48). Both sequences scan the same region of the patient, so the slices of the two sequences can be put in one-to-one correspondence. We excluded 5 slices at the head and tail of both sequences in each case because these slices contain no uterine area. To expand the dataset, we treat each T2WI image and its corresponding T1WI image as a slice group [21,22].
Background occupies a large proportion of each MR image and strongly affects subsequent feature extraction and classification [23,24], so we crop the central 256 × 256 region of each image. We randomly split the dataset into a training set and an independent validation set at a 4:1 ratio. Table 2 shows the split, followed by a preprocessing sketch.
| Class | Train | Validation |
| --- | --- | --- |
| Normal | 1886 | 472 |
| PAS | 2451 | 613 |
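A small sketch of the preprocessing just described (head/tail trimming, center crop, slice-group pairing, 4:1 split). Whether the split is performed per slice group or per case is not stated, so the per-slice-group split below is an assumption, as are the array shapes:

```python
import numpy as np
from sklearn.model_selection import train_test_split

def center_crop(img: np.ndarray, size: int = 256) -> np.ndarray:
    """Crop the central size x size region of a 2-D slice."""
    h, w = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def make_slice_groups(t2_slices, t1_slices, trim: int = 5):
    """Drop the first/last `trim` slices (no uterine area), crop, and pair
    each T2WI slice with its corresponding T1WI slice as a slice group."""
    t2 = [center_crop(s) for s in t2_slices[trim:-trim]]
    t1 = [center_crop(s) for s in t1_slices[trim:-trim]]
    return list(zip(t2, t1))

# Toy usage with synthetic arrays standing in for one case's slices.
rng = np.random.default_rng(0)
t2_case = [rng.random((320, 320)) for _ in range(30)]
t1_case = [rng.random((320, 320)) for _ in range(30)]
groups = make_slice_groups(t2_case, t1_case)                           # 20 slice groups
train, val = train_test_split(groups, test_size=0.2, random_state=0)  # 4:1 split
```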
All models are implemented in PyTorch, and batch normalization [25] is used in all of them. All networks are trained on a single RTX 2080Ti GPU, with 50 training epochs each for the dual-path neural network and the multi-head attention module, using the Adam optimizer with a small learning rate of $10^{-4}$. The two backbone networks of the dual-path neural network are first trained separately on T2WI and T1WI sequence data; features are then extracted from the trained backbones and concatenated to obtain the combined features used to train the multi-head attention module.
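A generic stage-training loop consistent with the settings above (Adam, learning rate $10^{-4}$, 50 epochs); the cross-entropy loss, the data-loader interface, and the temporary classification head used when pre-training each backbone are assumptions:

```python
import torch
import torch.nn as nn

def train_stage(model: nn.Module, loader, epochs: int = 50,
                lr: float = 1e-4, device: str = "cuda") -> nn.Module:
    """Train one stage (a backbone or the attention module) with Adam."""
    model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()  # assumed loss; not stated in the paper
    for _ in range(epochs):
        for x, y in loader:          # (slice batch, label batch)
            opt.zero_grad()
            loss = loss_fn(model(x.to(device)), y.to(device))
            loss.backward()
            opt.step()
    return model

# Sketched usage: train each backbone with a temporary 2-class head on its own
# sequence, then freeze both, extract and concatenate features, and train the
# multi-head attention module with the same loop, e.g.:
# t2_model = train_stage(nn.Sequential(Backbone(), nn.Linear(128, 2)), t2_loader)
```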
We evaluate the performance of the classification methods using a confusion matrix. The true positive (TP), true negative (TN), false positive (FP) and false negative (FN) counts are obtained from the confusion matrix to calculate four performance evaluation metrics as follows:
$$\text{Accuracy} = \frac{TN + TP}{TN + TP + FN + FP} \quad (3)$$

$$\text{Precision} = \frac{TP}{TP + FP} \quad (4)$$

$$\text{Recall} = \frac{TP}{TP + FN} \quad (5)$$

$$\text{F1-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \quad (6)$$
Accuracy is the percentage of samples whose predicted labels match the ground truth; precision is the proportion of true positives among samples predicted positive; recall is the proportion of true positives among actual positives; and the F1-score is the harmonic mean of precision and recall. To avoid division by zero, the F1-score is recorded as 0 when TP is 0.
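These four metrics can be computed directly from the confusion matrix; a short sketch with the zero-TP guard described above:

```python
from sklearn.metrics import confusion_matrix

def evaluate(y_true, y_pred):
    """Compute accuracy, precision, recall and F1 per Eqs (3)-(6)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    accuracy = (tn + tp) / (tn + tp + fn + fp)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 0.0 if tp == 0 else 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

print(evaluate([0, 1, 1, 0, 1], [0, 1, 0, 0, 1]))  # (0.8, 1.0, 0.667, 0.8)
```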
To demonstrate that dual-sequence MR image features are effective for PAS detection, we use three machine learning methods to compare the detection performance of three feature sets on the independent validation set. The three methods are decision tree (DT), random forest (RF) and support vector machine (SVM); the three feature sets are features extracted from T2WI MR images, features extracted from T1WI MR images, and the combined features obtained by concatenating the two. Table 3 shows the comparison results, followed by an implementation sketch.
| Methods | Features | Accuracy | F1 score |
| --- | --- | --- | --- |
| Decision Tree | T2WI features | 0.857 | 0.870 |
| | T1WI features | 0.681 | 0.715 |
| | Combined features | 0.854 | 0.868 |
| Random Forest | T2WI features | 0.864 | 0.876 |
| | T1WI features | 0.709 | 0.733 |
| | Combined features | 0.862 | 0.874 |
| Support Vector Machine | T2WI features | 0.862 | 0.875 |
| | T1WI features | 0.700 | 0.737 |
| | Combined features | 0.869 | 0.882 |
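A sketch of this comparison using scikit-learn; the classifiers' hyper-parameters are not reported, so defaults are used, and the feature files are hypothetical placeholders for the 128-dimensional features extracted by each backbone:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical files holding the extracted features and labels.
X_t2_tr, X_t1_tr = np.load("t2_train.npy"), np.load("t1_train.npy")
X_t2_va, X_t1_va = np.load("t2_val.npy"), np.load("t1_val.npy")
y_tr, y_va = np.load("y_train.npy"), np.load("y_val.npy")

feature_sets = {
    "T2WI": (X_t2_tr, X_t2_va),
    "T1WI": (X_t1_tr, X_t1_va),
    # Combined features: concatenation of the T2WI and T1WI feature vectors.
    "Combined": (np.hstack([X_t2_tr, X_t1_tr]), np.hstack([X_t2_va, X_t1_va])),
}
for clf_name, clf in [("DT", DecisionTreeClassifier()),
                      ("RF", RandomForestClassifier()),
                      ("SVM", SVC())]:
    for feat_name, (Xtr, Xva) in feature_sets.items():
        pred = clf.fit(Xtr, y_tr).predict(Xva)
        print(clf_name, feat_name,
              round(accuracy_score(y_va, pred), 3), round(f1_score(y_va, pred), 3))
```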
As shown in Table 3, the SVM achieves the highest accuracy and F1 score on the combined features (0.869 and 0.882, respectively), exceeding its results on the T2WI features and T1WI features alone. With DT and RF, the accuracy and F1 score of the combined features are almost equal to those of the T2WI features, and the T1WI features perform worse than both. In summary, simply concatenating the features of the two sequences can improve PAS detection accuracy, but the improvement is limited. Using the SVM, we plotted receiver operating characteristic (ROC) curves for the T2WI features and the combined features and performed a significance test (DeLong's test) on the areas under the ROC curves (AUC). The results are shown in Table 4, followed by a note on reproducing such a test.
| | T2WI features vs. combined features |
| --- | --- |
| Difference between areas | 0.0236 |
| Standard error | 0.00362 |
| 95% confidence interval | 0.0165 to 0.0307 |
| z statistic | 6.519 |
| Significance level | P < 0.0001 |
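Table 4 was produced with DeLong's test, for which common Python libraries offer no standard routine. As a rough, hedged alternative (a swapped-in technique, not the test used in the paper), a paired bootstrap over validation samples yields a comparable confidence interval for the AUC difference. A sketch on synthetic scores:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)            # synthetic labels
s_t2 = y + rng.normal(0, 0.9, 500)     # synthetic T2WI-feature scores
s_comb = y + rng.normal(0, 0.8, 500)   # synthetic combined-feature scores

diffs = []
for _ in range(2000):
    idx = rng.integers(0, len(y), len(y))   # resample samples with replacement
    if len(np.unique(y[idx])) < 2:          # AUC needs both classes present
        continue
    diffs.append(roc_auc_score(y[idx], s_comb[idx]) - roc_auc_score(y[idx], s_t2[idx]))

lo, hi = np.percentile(diffs, [2.5, 97.5])  # 95% CI for the AUC difference
print(f"AUC difference 95% CI: [{lo:.4f}, {hi:.4f}]")
```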
As can be seen from Table 4, the P-value is less than 0.0001, indicating a significant difference between the AUCs of the two feature sets. To further improve model performance, we design a multi-head attention module that attends to different features in the different MR sequences and assigns them weights according to their importance, thereby improving detection accuracy.
To compare the detection performance of attention modules with different numbers of heads, we calculated their accuracy and F1 score on the independent validation set; Table 5 shows the results. A head count of 0 means that no attention mechanism is used: the model uses only a ReLU layer and a fully connected layer (output dimension 2, representing normal versus invasion) to detect PAS.
| Number of heads | Accuracy | F1 score |
| --- | --- | --- |
| 0 | 0.857 | 0.873 |
| 2 | 0.859 | 0.874 |
| 4 | 0.874 | 0.888 |
| 8 | 0.886 | 0.899 |
| 16 | 0.861 | 0.876 |
As can be seen from Table 5, using the attention module improves both detection accuracy and F1 score over the model without it. The highest accuracy and F1 score, 0.886 and 0.899 respectively, are achieved with 8 attention heads. With fewer than 8 heads, the model underfits due to insufficient parameters; with more than 8 heads, the model has too many parameters relative to the modest size of the training set and overfits, so accuracy drops. We therefore use an 8-head attention module in our experiments. To further compare the models with and without the attention module, we plot their ROC curves in Figure 4.
As shown in Figure 4, the model with the 8-head attention module clearly outperforms the model without the attention module. To compare the two models more directly, we calculated their AUCs; the results are shown in Table 6.
| Method | AUC |
| --- | --- |
| Without attention module | 0.930 |
| With 8-head attention module | 0.940 |
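A sketch of how such an ROC/AUC comparison can be produced with scikit-learn and matplotlib; the score arrays here are synthetic stand-ins for the two models' predicted probabilities:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y_val = rng.integers(0, 2, 200)              # synthetic labels
p_no_attn = y_val + rng.normal(0, 0.9, 200)  # stand-in scores, no attention
p_attn = y_val + rng.normal(0, 0.8, 200)     # stand-in scores, 8-head attention

for label, scores in [("Without attention module", p_no_attn),
                      ("8-head attention module", p_attn)]:
    fpr, tpr, _ = roc_curve(y_val, scores)
    plt.plot(fpr, tpr, label=f"{label} (AUC = {roc_auc_score(y_val, scores):.3f})")
plt.plot([0, 1], [0, 1], "k--")              # chance line
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```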
To evaluate the model objectively and eliminate the impact of data leakage on the experimental results, we selected from the validation set 40 samples that do not appear in the training set to form a new test set (20 normal, 20 PAS), and assessed the proposed model with the 8-head attention module in terms of accuracy and F1 score. The model also achieves satisfactory results on this test set, reaching an accuracy of 0.825 and an F1 score of 0.837.
This paper proposed a method to detect PAS by extracting and fusing dual-sequence placental MR image features with a dual-path neural network. The proposed model comprises a dual-path neural network and a multi-head attention module: the dual-path network extracts and fuses the features of the two MR sequences, and the multi-head attention module learns corresponding weights for the different features of the different sequences to generate more discriminative final features. Experimental results on an independent validation set demonstrate the effectiveness of each module, with clear advantages over methods using only a single MR sequence. This method may assist physicians in clinical diagnosis, help them with perinatal planning, and improve maternal outcomes.
This work was supported by the Key Talents of Ningbo City Health Technology under Grant 2020SWSQNGG-06 and Zhejiang Province Medicine and Health Project under Grant 2022KY1149.
The authors declare that there are no conflicts of interest.
[1] K. E. Fitzpatrick, S. Sellers, P. Spark, J. J. Kurinczuk, P. Brocklehurst, M. Knight, Incidence and risk factors for placenta accreta/increta/percreta in the UK: a national case-control study, PLoS One, 7 (2012), 1–6. https://doi.org/10.1371/journal.pone.0052893

[2] Y. Oyelese, J. C. Smulian, Placenta previa, placenta accreta, and vasa previa, Obstet. Gynecol., 107 (2006), 927–941. https://doi.org/10.1097/01.AOG.0000207559.15715.98

[3] A. Kilcoyne, A. S. Shenoy-Bhangle, D. J. Roberts, R. C. Sisodia, S. I. Lee, MRI of placenta accreta, placenta increta, and placenta percreta: pearls and pitfalls, Am. J. Roentgenol., 208 (2017), 214–221. https://doi.org/10.2214/AJR.16.16281

[4] T. Y. Khong, The pathology of placenta accreta, a worldwide epidemic, J. Clin. Pathol., 61 (2008), 1243–1246. https://doi.org/10.1136/jcp.2008.055202

[5] Z. S. Bowman, A. G. Eller, T. R. Bardsley, T. Greene, M. W. Varner, R. M. Silver, Risk factors for placenta accreta: a large prospective cohort, Am. J. Perinatol., 31 (2014), 799–804. https://doi.org/10.1055/s-0033-1361833

[6] G. Garmi, R. Salim, Epidemiology, etiology, diagnosis, and management of placenta accreta, Obstet. Gynecol. Int., 2012 (2012), 1–7. https://doi.org/10.1155/2012/873929

[7] Z. S. Bowman, T. A. Manuck, A. G. Eller, M. Simons, R. M. Silver, Risk factors for unscheduled delivery in patients with placenta accreta, Am. J. Obstet. Gynecol., 210 (2013), 241.e1–241.e6. https://doi.org/10.1016/j.ajog.2013.09.044

[8] E. Jauniaux, A. Bhide, Prenatal ultrasound diagnosis and outcome of placenta previa accreta after cesarean delivery: a systematic review and meta-analysis, Am. J. Obstet. Gynecol., 217 (2017), 27–36. https://doi.org/10.1016/j.ajog.2017.02.050

[9] P. Lumbiganon, M. Laopaiboon, A. M. Gülmezoglu, J. Souza, S. Taneepanichskul, P. Ruyan, et al., Method of delivery and pregnancy outcomes in Asia: the WHO global survey on maternal and perinatal health 2007-08, Lancet, 375 (2010), 490–499. https://doi.org/10.1016/S0140-6736(09)61870-5

[10] H. Sun, H. Qu, L. Chen, W. Wang, Y. Liao, L. Zou, et al., Identification of suspicious invasive placentation based on clinical MRI data using textural features and automated machine learning, Eur. Radiol., 29 (2019), 6152–6162. https://doi.org/10.1007/s00330-019-06372-9

[11] L. Alamo, A. Anaye, J. Rey, A. Denys, G. Bongartz, S. Terraz, et al., Detection of suspected placental invasion by MRI: do the results depend on observer's experience?, Eur. J. Radiol., 82 (2013), 51–57. https://doi.org/10.1016/j.ejrad.2012.08.022

[12] Y. Ueno, K. Kitajima, F. Kawakami, T. Maeda, Y. Suenaga, S. Takahashi, et al., Novel MRI finding for diagnosis of invasive placenta praevia: evaluation of findings for 65 patients using clinical and histopathological correlations, Eur. Radiol., 24 (2014), 881–888. https://doi.org/10.1007/s00330-013-3076-7

[13] V. Romeo, C. Ricciardi, R. Cuocolo, A. Stanzione, F. Verde, L. Sarno, et al., Machine learning analysis of MRI-derived texture features to predict placenta accreta spectrum in patients with placenta previa, Magn. Reson. Imaging, 64 (2019), 71–76. https://doi.org/10.1016/j.mri.2019.05.017

[14] Q. N. Do, M. A. Lewis, Y. Xin, A. J. Madhuranthakam, S. K. Happe, J. S. Dashe, et al., MRI of the placenta accreta spectrum (PAS) disorder: radiomics analysis correlates with surgical and pathological outcome, J. Magn. Reson. Imaging, 51 (2019), 936–946. https://doi.org/10.1002/jmri.26883

[15] R. R. Xuan, T. Li, Y. T. Wang, J. Xu, W. Jin, Prenatal prediction and typing of placenta invasion using MRI deep and radiomic features, BioMed. Eng. OnLine, 20 (2021), 1–18. https://doi.org/10.1186/s12938-021-00893-5

[16] M. R. Kocher, D. H. Sheafor, E. Bruner, C. Newman, J. F. M. Nino, Diagnosis of abnormally invasive posterior placentation: the role of MR imaging, Radiol. Case Rep., 12 (2017), 295–299. https://doi.org/10.1016/j.radcr.2017.01.014

[17] D. Pizzi, A. Tavoletta, R. Narciso, D. Mastrodicasa, S. Trebeschi, C. Celentano, et al., Prenatal planning of placenta previa: diagnostic accuracy of a novel MRI-based prediction model for placenta accreta spectrum (PAS) and clinical outcome, Abdom. Radiol., 44 (2019), 1873–1882. https://doi.org/10.1007/s00261-018-1882-8

[18] A. D. C. Malita, C. Saracin, C. Dan, R. Prejbeanu, The added value of using Fusion-DWI technique in day to day practice for appreciating placental invasion of the myometrium, in 2017 E-Health and Bioengineering Conference (EHB), (2017), 305–308. https://doi.org/10.1109/EHB.2017.7995422

[19] X. L. Zheng, J. M. Xu, M. J. Yang, MRI diagnosis and classification of placenta increta in the third trimester of pregnancy, Radiol. Pract., 30 (2015), 264–268. https://doi.org/10.4103/0971-3026.125592

[20] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2016), 770–778.

[21] B. D. de Vos, F. F. Berendsen, M. A. Viergever, H. Sokooti, M. Staring, I. Išgum, A deep learning framework for unsupervised affine and deformable image registration, Med. Image Anal., 52 (2019), 128–143. https://doi.org/10.1016/j.media.2018.11.010

[22] M. Hatt, C. Parmar, J. Y. Qi, I. E. Naqa, Machine (deep) learning methods for image processing and radiomics, IEEE Trans. Radiat. Plasma Med. Sci., 3 (2019), 104–108. https://doi.org/10.1109/TRPMS.2019.2899538

[23] M. Zhu, M. Yao, Y. He, B. Wu, Studies on high-resolution remote sensing sugarcane field extraction based on deep learning, in IOP Conference Series: Earth and Environmental Science, 237 (2019), 1–8. https://doi.org/10.1088/1755-1315/237/3/032046

[24] S. Guo, T. Li, K. Wang, C. Zhang, H. Kang, A lightweight neural network for hard exudate segmentation of fundus image, in International Conference on Artificial Neural Networks, 11729 (2019), 189–199. https://doi.org/10.1007/978-3-030-30508-6_16

[25] S. Ioffe, C. Szegedy, Batch normalization: accelerating deep network training by reducing internal covariate shift, in International Conference on Machine Learning, 37 (2015), 448–456.