Research article

PercolationDF: A percolation-based medical diagnosis framework


  • Goal: With the continuing shortage and unequal distribution of medical resources, our objective is to develop a general diagnosis framework that requires a smaller volume of electronic medical records (EMRs), alleviating the problem that the data volume demanded by prevailing models is too large for many medical institutions to afford. Methods: The proposed framework comprises network construction, network expansion and disease diagnosis methods. In the first two stages, the knowledge extracted from EMRs is used to build and expand an EMR-based medical knowledge network (EMKN) that models and represents the medical knowledge. Percolation theory is then adapted to perform diagnosis on the EMKN. Results: Under limited data, our framework outperforms naïve Bayes networks, neural networks and logistic regression, especially in the top-10 recall. Out of 207 test cases, 51.7% achieved 100% top-10 recall, 21% better than what was achieved in one of our previous studies. Conclusion: The experimental results show that the proposed framework may be useful for medical knowledge representation and diagnosis. The framework effectively alleviates the lack of data by inferring over the knowledge modeled in the EMKN. Significance: The proposed framework not only has applications in diagnosis but may also be extended to other domains to represent and model knowledge and to perform inference on that representation.

    Citation: Jingchi Jiang, Xuehui Yu, Yi Lin, Yi Guan. PercolationDF: A percolation-based medical diagnosis framework[J]. Mathematical Biosciences and Engineering, 2022, 19(6): 5832-5849. doi: 10.3934/mbe.2022273




    As a minimally invasive interventional procedure, percutaneous puncture has the advantages of minimal trauma, fast recovery and strong targeting [1], and it is widely used in tissue biopsies and targeted drug therapy. Among the various procedures, lumbar puncture is often used for the diagnosis of central nervous system inflammatory diseases, vascular diseases, spinal cord diseases, intracranial space-occupying diseases and nervous system diseases of unknown origin, as well as for spinal angiography. In addition, lumbar puncture is a common clinical method for drug injection and for decompression when central nervous system diseases cause elevated cerebrospinal fluid pressure [2,3].

    In traditional lumbar puncture surgery, doctors use medical imaging equipment and their clinical experience to determine the location of the needle puncture point on the skin. The main imaging modalities are computed tomography, magnetic resonance imaging and ultrasound imaging [4]. Compared with computed tomography and magnetic resonance imaging, ultrasound imaging has the advantages of low cost, no radiation and real-time image generation [5]. Therefore, it has become the preferred choice for doctors in lumbar puncture surgery. However, the time required to complete a scan and the quality of the result depend exclusively on the skill and experience of the doctor. In addition, ultrasound scanning requires the doctor to hold the probe in an awkward hand posture while maintaining a certain contact force; hence, long-term work may cause complications such as arthritis [6,7]. Using a powered mechanical arm to hold and move the probe can reduce this burden on doctors.

    During an operation, a robotic arm puncture system offers high precision, good stability and simple, efficient operation [8]. To address the clinical needs of lumbar puncture, the present study involved the design of a robot-assisted ultrasound scanning system for lumbar puncture, in which a computer-controlled robotic arm clamps the ultrasound probe and automatically scans the patient's lumbar spine.

    The proposed system is shown in Figure 1. It mainly included DP-50 portable ultrasound diagnostic equipment (Shenzhen Mindray Biomedical Electronics Company) for acquiring the two-dimensional ultrasound images; a six-degree-of-freedom robotic arm (UR3, Universal Robots A/S, Odense, Denmark) for controlling the movement of the ultrasound probe; a main control computer with an Intel Core i7-7700 processor to control the robot arm and process the images; a video capture card (Epiphan DVI2USB3.0) for acquiring the two-dimensional ultrasound images and transmitting them to the computer; and a GRIPKIT-E gripper (Weiss Robotic Company) for holding the ultrasonic probe.

    Figure 1.  Lumbar spine and visualization system hardware components.

    The system was divided into three main parts: control, acquisition and visual processing. The control part mainly involved the main control computer (including its software), the robot arm control box and the robot arm itself; the acquisition part mainly involved the portable ultrasonic scanning system and the video capture card; and the visualization processing was realized through software on the main control computer. During operation, the operator used the peripherals of the main control computer to interact with the control program interface and monitored the running state of the system in real time on the display, while the robot arm and its control box communicated with the computer through the Transmission Control Protocol/Internet Protocol (TCP/IP). Once the operator determined the starting point and end point of the scan, the control program automatically controlled the robot arm holding the ultrasound probe to scan the lumbar vertebrae and collect two-dimensional ultrasound images. The collected ultrasound images were transmitted to the main control computer through the data acquisition card, and the computer processed and saved them. After the scan was completed, a Visualization Toolkit (VTK) 3D image reconstruction program was employed to perform the 3D reconstruction of the scanned images. The reconstruction result can provide a reference and guidance, enabling the doctor to determine the puncture point.
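    Because the robot arm and its control box communicate with the main control computer over TCP/IP, a minimal illustration of that link is sketched below in Python (the original control software was written in C#). It assumes the UR controller's secondary script interface on TCP port 30002 and uses a made-up IP address and pose; it is not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): send a URScript motion command
# to the UR3 control box over its TCP/IP secondary interface.
# The IP address, pose and motion parameters below are assumptions.
import socket

ROBOT_IP = "192.168.1.10"   # assumed address of the UR3 control box
URSCRIPT_PORT = 30002       # UR secondary client interface (accepts URScript strings)

def send_urscript(command: str) -> None:
    """Open a TCP connection to the controller and send one URScript line."""
    with socket.create_connection((ROBOT_IP, URSCRIPT_PORT), timeout=5) as sock:
        sock.sendall((command + "\n").encode("utf-8"))

# Move the tool (ultrasound probe) to an assumed scan start pose
# p[x, y, z, rx, ry, rz] in metres/radians, with acceleration a and velocity v.
send_urscript("movel(p[0.30, -0.20, 0.25, 0.0, 3.14, 0.0], a=0.2, v=0.05)")
```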

    The control software was developed using the Microsoft C# programming language and mainly included five modules. The user interface module (LumbarPunctureSystem) was the main module of the system; its main functions were parameter setting, operation control, process control, image display and system status display. The surgery module (SurgeryModule) was used to create new cases, start and control the hardware (robot arm and ultrasound) and configure system settings. The robot control module (RobotModule) realized the motion control of the robot arm so that it could move independently along the axes of the base coordinate system and the tool coordinate system. The ultrasound imaging module (ImagingModule) was used to collect the ultrasound images, record information such as the position of the robot arm and the magnitude of the torque during acquisition and perform 3D reconstruction after the acquisition. The button control module (EnhancedGlassButton) was used to create the button controls for the interface.

    When the control software was launched, the program automatically connected to the robot arm and the acquisition card. After the connection was successfully established, the new case button in the operation control area was activated. The operator created a case and filled in the relevant information by clicking the new case button; after the case was successfully created, the tool button was activated.

    After the operator clicked the tool button, the program controlled the robot arm to grab the ultrasonic probe placed at a fixed position, activated the button for setting the scanning start point and displayed the robot arm motion control panel on screen. The operator used the control panel to move the robot arm to the starting point of the model scan and pressed the start button, which activated the scanning end button. Similarly, the probe could be moved to the end point, after which the end point button could be pressed. The program then launched the scan control interface, which displayed the corresponding information and allowed some settings to be modified. Finally, when the operator pressed the scan button, the program automatically controlled the robot arm to move along a specific path from the scan start point to the scan end point while storing images in real time. During the scanning process, the program acquired an ultrasound image every time the robot arm moved 0.5 mm, as shown in Figure 2. Throughout this process, the robotic arm ran in force mode to ensure that the probe remained in effective contact with the measured object: after the path was planned manually, the robotic arm moved along the lumbar spine direction and automatically adjusted its position in the vertical direction within the configured force limits, thereby acquiring good ultrasound images. It is worth noting that, before the scan started, information such as the contact force of the robot arm and the image acquisition interval was set on the scan control interface. In addition, once a new case was created, the program automatically activated the stop button in the operation control area; at any time during the scan, the program could be terminated by pressing this button to prevent accidents and ensure the safety of the scan.

    Figure 2.  Two-dimensional ultrasound image acquisition during robot arm scanning (the green strip is the probe force status bar).
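    As a rough sketch of the acquisition logic described above (one frame saved every 0.5 mm of probe travel), the following Python outline uses hypothetical helpers get_tool_position(), grab_frame() and save_image() in place of the real robot pose query, capture card interface and storage code.

```python
# Illustrative acquisition loop (a sketch, not the authors' C# implementation):
# save one ultrasound frame each time the probe has advanced 0.5 mm.
import numpy as np

STEP_MM = 0.5  # acquisition interval reported in the paper

def scan(start_mm, end_mm, get_tool_position, grab_frame, save_image):
    """Collect frames between start_mm and end_mm at STEP_MM intervals."""
    frames = []
    next_trigger = start_mm
    while True:
        pos = get_tool_position()      # progress along the scan direction, in mm (hypothetical helper)
        if pos >= end_mm:
            break
        if pos >= next_trigger:
            image = grab_frame()       # 2-D ultrasound frame as a numpy array (hypothetical helper)
            save_image(image, index=len(frames))
            frames.append(image)
            next_trigger += STEP_MM
    return np.stack(frames) if frames else np.empty((0,))
```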

    The lumbar spine model used in this study was an Enovo Medical Model (Shanghai, China), shown in Figure 3. Ultrasound propagates well in hydrogel material [9], which provides good coupling between the model and the probe. Therefore, the experimental phantom was prepared in an acrylic container (made in our laboratory): the spine model was placed inside and a mixture of hydrogel and pure water was poured in to model the lumbar region, with the liquid surface 1–2 cm above the model. The ultrasonic probe was a 75L38EA linear array with an ultrasonic frequency of 8.5 MHz, a scan width of 3.8 cm and a depth of 4.6 cm. According to clinical experience, lumbar puncture is usually performed at the 3rd to 4th lumbar vertebral space, and in children under 4 years old it is performed at the 4th to 5th spinous process space of the lumbar vertebrae. Therefore, the scanning range of this experiment was L3–L5. In the experiment, the main control program controlled the robot arm to clamp the ultrasonic probe and determine the scanning start and end points; the acquisition interval (0.5 mm) and force mode value (2 N) were set, and the two-dimensional ultrasound images were acquired. The obtained images were then processed for 3D reconstruction and visualization.

    Figure 3.  Lumbar spine model: A. Model from the Enovo Medical Model Company; B. Reprocessed lumbar spine model prepared for ultrasonic probe scanning.

    After the two-dimensional ultrasound tomographic images were successfully acquired, the 3D reconstruction button in the main control program was activated. The operator pressed this button to perform the VTK-based 3D ultrasound image reconstruction and display, which can provide a reference and guidance for determining the location of the puncture point and planning the surgical path.

    The VTK is an open-source visualization development toolkit developed by Kitware, and it is widely used in visualization, 3D graphics and other fields. The 3D visualization process generally involves medical image data reading, data preprocessing, visual mapping, reconstruction and interactive operation. The image data are a two-dimensional tomographic medical image sequence obtained by scanning a real object with an imaging device; the supported input formats included DICOM and BMP. Data preprocessing was carried out because image quality may degrade during output, transmission and conversion and because noise may be introduced; hence, the images needed to be filtered. Visual mapping converted the processed raw data into geometric primitives and attributes for drawing, including the display shape, color and other attribute settings. Common reconstruction methods include surface rendering and volume rendering. Surface rendering describes the 3D structure of organs or tissues by fitting the surface information of the structure from the stacked data; its key step is to set a threshold and extract an isosurface as needed [10]. Volume rendering directly synthesizes 3D images by resampling the volume data based on the principles of vision: each voxel is treated as a particle that can receive or emit light, light intensity and opacity are configured, and integration along the line of sight finally forms a projected image. Because surface rendering may discard detailed information and thus reduce realism, this study adopted the volume rendering method for 3D reconstruction. Interactive operation was performed on the 3D reconstruction results; by dragging with the mouse, the reconstructed volume could be rotated to any angle.
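    The emission/absorption model described above is usually implemented with front-to-back compositing along each viewing ray. As a general reference (standard volume-rendering theory rather than a formula given in the paper), each sample $i$ encountered along the ray updates the accumulated color $C$ and opacity $A$ as

    $$C \leftarrow C + (1 - A)\,\alpha_i c_i, \qquad A \leftarrow A + (1 - A)\,\alpha_i,$$

    where $c_i$ and $\alpha_i$ are the color and opacity assigned to the sample by the transfer functions; the ray can be terminated early once $A$ approaches 1.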

    The main volume rendering methods are ray casting, maximum intensity projection and texture mapping [11]. The ray-casting algorithm is the most widely used because of the high image quality it produces. The flowchart of the ray-casting algorithm is shown in Figure 4. The specific steps of the ray-casting method in the VTK were as follows. First, the bitmap (BMP) images were read through the vtkBMPReader class. Next, the vtkPiecewiseFunction class was used to define the opacity of each voxel by adding numerical points, and the vtkColorTransferFunction class was employed to implement the color transfer function that assigns color or gray values. Then, different volume rendering algorithms could be implemented by selecting different mapper classes in the rendering pipeline; the ray-casting algorithm used the vtkVolumeRayCastMapper class. Finally, the vtkVolume class was used to specify the scene lighting, perspective, focus and other information, and the vtkRenderer class was used to render the entities in the scene. The VTK also provides widget base classes; in this experiment, the vtkImagePlaneWidget class was used to create cutting plane objects, and the cross-sectional, coronal and sagittal planes were successfully displayed.

    Figure 4.  Flowchart of the ray-casting algorithm used in the 3D reconstruction processing.
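    The VTK pipeline enumerated above can be sketched as follows in Python. The class names vtkBMPReader, vtkPiecewiseFunction and vtkColorTransferFunction are the VTK classes mentioned in the text; current VTK releases replace the legacy vtkVolumeRayCastMapper with vtkFixedPointVolumeRayCastMapper (used here), and the file names, spacing and transfer-function values are assumptions rather than the authors' settings.

```python
# Minimal VTK volume-rendering sketch following the pipeline described above.
import vtk

# 1. Read the BMP slice series (assumed names slices/slice_000.bmp ... slice_135.bmp).
reader = vtk.vtkBMPReader()
reader.SetFilePrefix("slices/slice_")
reader.SetFilePattern("%s%03d.bmp")
reader.SetDataExtent(0, 392, 0, 472, 0, 135)   # 393 x 473 pixels, 136 slices
reader.SetDataSpacing(0.097, 0.097, 0.5)       # assumed in-plane spacing; 0.5 mm between slices
reader.SetAllow8BitBMP(True)                   # slices were saved as 8-bit grayscale BMP
reader.Update()

# 2. Opacity and color transfer functions (values are illustrative).
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(0, 0.0)
opacity.AddPoint(255, 0.8)
color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(0, 0.0, 0.0, 0.0)
color.AddRGBPoint(255, 1.0, 1.0, 1.0)
prop = vtk.vtkVolumeProperty()
prop.SetScalarOpacity(opacity)
prop.SetColor(color)
prop.ShadeOn()

# 3. Ray-casting volume mapper and volume prop.
mapper = vtk.vtkFixedPointVolumeRayCastMapper()
mapper.SetInputConnection(reader.GetOutputPort())
volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

# 4. Renderer, window and interactor (mouse rotation and zoom come for free).
renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()
```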

    In this study, the two-dimensional ultrasound images collected by the control program were saved as 8-bit BMP grayscale images. The reconstruction program was then employed to perform the 3D visualization, and the images were preprocessed using median filtering. Based on the two-dimensional images collected every 0.5 mm, the ultrasound acquisition depth and probe width were mapped to the pixel dimensions of the BMP images in order to calculate the spacing between data points, ensuring that the image scale and size were consistent with the real object. Interactive operation was also provided in the program: once the reconstruction result was displayed, the operator could place the mouse cursor in this area and hold down the left mouse button to rotate the 3D reconstruction to any angle for full observation, and a zoom function was realized by rolling the mouse wheel.
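    As a back-of-the-envelope check of the scaling step described above, the snippet below computes the pixel spacing implied by the reported probe geometry and the 393 × 473 pixel frames mentioned in the results, assuming the 393-pixel dimension lies across the probe face; it is purely illustrative.

```python
# Pixel spacing implied by the reported geometry (probe width 3.8 cm,
# imaging depth 4.6 cm, 393 x 473 pixel frames, one frame every 0.5 mm).
width_mm, depth_mm = 38.0, 46.0
cols, rows = 393, 473

lateral_spacing = width_mm / cols   # ~0.097 mm per pixel across the probe face
axial_spacing = depth_mm / rows     # ~0.097 mm per pixel in depth
slice_spacing = 0.5                 # mm between consecutive frames

print(lateral_spacing, axial_spacing, slice_spacing)
```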

    A total of 136 two-dimensional ultrasound images were collected in the lumbar spine model experiment, each 393 × 473 pixels. After preprocessing the original ultrasound images by contrast enhancement and denoising, the ray-casting algorithm was used to perform the 3D reconstruction of the lumbar spine model. The reconstruction results are shown in Figure 5: Figure 5B is the 3D reconstruction result of the volume-rendering ray-casting method, and Figure 5A, C and D are the cross-sectional, sagittal and coronal planes after reconstruction, respectively.

    Figure 5.  3D reconstruction images of the lumbar spine model after robot arm scanning and computer processing (L3, L4 and L5 are the lumbar spinous processes).

    Repeated experiments with different entry points were performed and recorded using the parameters above; the results are shown in Figure 6. The 3D images and corresponding sagittal slice images were labeled to show the reproducibility of the scan for this model.

    Figure 6.  3D images and corresponding sagittal plane slice images acquired for different entry points; 1a–5a are the 3D images, 1b–5b are the corresponding sagittal plane slice images and 1c–5c are the location diagrams of the entry points (red points) and scanning directions (green arrows).

    Because ultrasound propagates along straight lines, the information of the spinous processes perpendicular to the ultrasound linear array was lost, so the reconstructed structure was partly defective; however, the location of the spinous process gap could still be determined from the sagittal and coronal images, which can serve as a reference for subsequent puncture surgery. The repeated experiments with different entry points (Figure 6) show changes in the gray values and boundary sharpness of the anatomical structures, which might have been caused by the change of entry point and by deformation of the back of the model during scanning.

    In recent years, with the gradual maturation of robotic technology, puncture surgery systems assisted by a robotic arm have developed rapidly. Nelson et al. [12] developed an ultrasound image-guided breast biopsy system for the detection of early breast cancer; it was shown to have good repeatability, with less than 0.5% variation. Podder et al. [13] designed a multichannel robot system based on ultrasound guidance comprising multiple implantation needles, especially for the implantation of radioactive seeds in the prostate [14]; experiments revealed a delivery accuracy within 0.2 mm. Kojcev et al. [15] used a dual-arm setup to design an ultrasound-guided puncture assistance system that implemented closed-loop planning-imaging-needle insertion control to adjust and plan the puncture path in real time. Xu et al. [16] developed a minimally invasive robotic system for the ultrasound-guided microwave ablation of the liver; it could perform 3D reconstruction of the liver tumor before surgery, plan the puncture path and assist the doctors in guiding the puncture needle to the target point during an operation. These previous studies mainly relied on ultrasound guidance combined with planning of the body or probe location through computed tomography, optically assisted technology and other imaging methods, rather than automatic robotic-arm guidance with force feedback. Our study provides an easier way to perform ultrasound scanning, locate the target precisely and simplify a complicated operation. In this study, the scanned ultrasound images were processed and reconstructed in three dimensions. After experimental verification, the system was shown to be able to collect two-dimensional ultrasound images and reconstruct 3D lumbar spine images, thereby providing doctors with stereoscopic visualization guidance and laying the foundation for subsequent robot-assisted puncture surgeries.

    Regarding the use of this methodology in actual clinical practice, more studies need to be done in the future, such as automatically marking the entry point and trajectory as a reference for the doctor, adding modules for different types of conditions (e.g., cerebrospinal fluid examination in leukemia, scoliosis and larger body habitus) and redesigning the algorithms for robotic arm motion and puncture planning.

    The purpose of the present study was to design a 3D visualization-enabled, robot-assisted ultrasound automatic scanning system for the lumbar spine. The control program for the robotic arm was developed in the C# language, and a 3D reconstruction program for the ultrasound images was developed based on the VTK. The system controlled the robotic arm to automatically scan the lumbar spine model and collect ultrasound images through the program, and it subsequently performed the 3D reconstruction. The system operated normally and achieved the target functions. The experimental results for the lumbar spine model showed that the scanning system could effectively collect images of the model, perform the 3D reconstruction and visualization, realize a full-dimensional stereoscopic display and provide a reference basis for lumbar puncture surgery.

    This study was supported by grants from the CAMS Innovation Fund for Medical Science (Nos. 2021-I2M-1-042, 2021-I2M-1-015 and 2017-I2M-1-016).

    The authors declare that there is no conflict of interest.



    [1] M. L. Craig, C. A. Jackel, P. B. Gerrits, Selection of medical students and the maldistribution of the medical workforce in Queensland, Australia, Aust. J. Rural Health, 1 (1993), 17–21. https://doi.org/10.1111/j.1440-1584.1993.tb00075.x doi: 10.1111/j.1440-1584.1993.tb00075.x
    [2] J. A. Osheroff, J. M. Teich, B. Middleton, E. B Steen, A. Wright, D. E. Detmer, A roadmap for national action on clinical decision support, J. Am. Med. Inf. Assoc., 14 (2007), 141–145. https://doi.org/10.1197/jamia.M2334 doi: 10.1197/jamia.M2334
    [3] D. Demner-Fushman, W. W. Chapman, C. J. McDonald, What can natural language processing do for clinical decision support? J. Biomed. Inf., 42 (2009), 760–772. https://doi.org/10.1016/j.jbi.2009.08.007 doi: 10.1016/j.jbi.2009.08.007
    [4] A. N. Kho, J. A. Pacheco, P. L. Peissig, L. Rasmussen, K. M. Newton, N. Weston, et al., Electronic medical records for genetic research: results of the emerge consortium, Sci. Transl. Med., 3 (2011), 79re1. https://doi.org/10.1126/scitranslmed.3001807 doi: 10.1126/scitranslmed.3001807
    [5] R. C. Wasserman, Electronic medical records (EMRs), epidemiology, and epistemology: reflections on EMRs and future pediatric clinical research, Acad. Pediatr., 11 (2011), 280–287. https://doi.org/10.1016/j.acap.2011.02.007 doi: 10.1016/j.acap.2011.02.007
    [6] A. Rajkomar, J. Dean, I. Kohane, Machine learning in medicine, N. Engl. J. Med., 2019. https://doi.org/10.1056/NEJMra1814259 doi: 10.1056/NEJMra1814259
    [7] T. Ma, A. Zhang, AffinityNet: semi-supervised few-shot learning for disease type prediction, in Proceedings of the AAAI Conference on Artificial Intelligence, 33 (2019), 1069–1076. https://doi.org/10.1609/aaai.v33i01.33011069
    [8] Y. Wang, Q. Yao, J. T. Kwok, L. M. Ni, Generalizing from a few examples: A survey on few-shot learning, preprint, arXiv: 1904.05046.
    [9] M. E. J. Newman, The structure and function of complex networks, SIAM Rev., 45 (2003), 167–256. https://doi.org/10.1137/S003614450342480 doi: 10.1137/S003614450342480
    [10] A. L. Barabási, N. Gulbahce, J. Loscalzo, Network medicine: A network-based approach to human disease, Nat. Rev. Genet., 12 (2011), 56–68. https://doi.org/10.1038/nrg2918 doi: 10.1038/nrg2918
    [11] K. I. Goh, M. E. Cusick, D. Valle, B. Childs, M. Vidal, A. L. Barabási, The human disease network, Proc. Natl. Acad. Sci., 104 (2007), 8685–8690. https://doi.org/10.1073/pnas.0701361104 doi: 10.1073/pnas.0701361104
    [12] C. A. Hidalgo, N. Blumm, A. L. Barabási, N. A. Christakis, A dynamic network approach for the study of human phenotypes, PLoS Comput. Biol., 5 (2009), e1000353. https://doi.org/10.1371/journal.pcbi.1000353 doi: 10.1371/journal.pcbi.1000353
    [13] X. Z. Zhou, J. Menche, A. L. Barabási, A. Sharma, Human symptoms–disease network, Nat. Commun., 5 (2014), 4212. https://doi.org/10.1038/ncomms5212 doi: 10.1038/ncomms5212
    [14] C. Zhao, J. Jiang, Z. Xu, Y. Guan, A study of EMR-based medical knowledge network and its applications, Comput. Methods Programs Biomed., 143 (2017), 13–23. https://doi.org/10.1016/j.cmpb.2017.02.016 doi: 10.1016/j.cmpb.2017.02.016
    [15] R. Alizadehsani, J. Habibi, M. J. Hosseini, H. Mashayekhi, R. Boghrati, A. Ghandeharioun, et al., A data mining approach for diagnosis of coronary artery disease, Comput. Methods Programs Biomed., 111 (2013), 52–61. https://doi.org/10.1016/j.cmpb.2013.03.004 doi: 10.1016/j.cmpb.2013.03.004
    [16] H. H. Rau, C. Y. Hsu, Y. A. Lin, S. Atique, A. Fuad, L. M. Wei, et al., Development of a web-based liver cancer prediction model for type II diabetes patients by using an artificial neural network, Comput. Methods Programs Biomed., 125 (2016), 58–65. https://doi.org/10.1016/j.cmpb.2015.11.009 doi: 10.1016/j.cmpb.2015.11.009
    [17] E. Choi, M. T. Bahadori, L. Song, W. F. Stewart, J. Sun, Gram: Graph-based attention model for healthcare representation learning, preprint, arXiv: 1611.07012.
    [18] E. Choi, M. T. Bahadori, J. Sun, J. Kulas, A. Schuetz, W. F. Stewart, Retain: An interpretable predictive model for healthcare using reverse time attention mechanism, in Proceedings of the 30th International Conference on Neural Information Processing Systems, (2016), 3512–3520. Available from: https://dl.acm.org/doi/10.5555/3157382.3157490.
    [19] Z. C. Lipton, D. C. Kale, C. Elkan, R. Wetzell, Learning to diagnose with LSTM recurrent neural networks, preprint, arXiv: 1511.03677.
    [20] F. Ma, R. Chitta, J. Zhou, Q. You, T. Sun, J. Gao, Dipole: Diagnosis prediction in healthcare via attention-based bidirectional recurrent neural networks, in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, (2017), 1903–1911. https://doi.org/10.1145/3097983.3098088
    [21] E. Choi, C. Xiao, W. F. Stewart, J. Sun, Mime: Multilevel medical embedding of electronic health records for predictive healthcare, in Proceedings of the 32nd International Conference on Neural Information Processing Systems, (2018), 4547–4557. Available from: https://dl.acm.org/doi/abs/10.5555/3327345.3327366.
    [22] J. Jiang, X. Li, C. Zhao, Y. Guan, Q. Yu, Learning and inference in knowledge-based probabilistic model for medical diagnosis, Knowledge-Based Syst., 138 (2017), 58–68. https://doi.org/10.1016/j.knosys.2017.09.030 doi: 10.1016/j.knosys.2017.09.030
    [23] D. E. Heckerman, E. J. Horvitz, B. N. Nathwani, Toward normative expert systems: Part I the pathfinder project, Methods Inf. Med., 31 (1991), 90–105. https://doi.org/10.1055/s-0038-1634867 doi: 10.1055/s-0038-1634867
    [24] J. G. Klann, P. Szolovits, S. M. Downs, G. Schadow, Decision support from local data: creating adaptive order menus from past clinician behavior, J. Biomed. Inf., 48 (2014), 84–93. https://doi.org/10.1016/j.jbi.2013.12.005 doi: 10.1016/j.jbi.2013.12.005
    [25] M. J. Flores, A. E. Nicholson, A. Brunskill, K. B. Korb, S. Mascaro, Incorporating expert knowledge when learning bayesian network structure: a medical case study, Artif. Intell. Med., 53 (2011), 181–204. https://doi.org/10.1016/j.artmed.2011.08.004 doi: 10.1016/j.artmed.2011.08.004
    [26] D. M. Chickering, D. Heckerman, C. Meek, Large-sample learning of bayesian networks is np-hard, J. Mach. Learn. Res., 5 (2004), 1287–1330.
    [27] T. N. Kipf, M. Welling, Semi-supervised classification with graph convolutional networks, preprint, arXiv: 1609.02907.
    [28] C. Y. Wee, C. Liu, A. Lee, J. S. Poh, H. Ji, A. Qiu, et al., Cortical graph neural network for ad and mci diagnosis and transfer learning across populations, NeuroImage: Clin., 23 (2019), 101929. https://doi.org/10.1016/j.nicl.2019.101929 doi: 10.1016/j.nicl.2019.101929
    [29] R. C. Petersen, P. Aisen, L. A. Beckett, M. Donohue, A. Gamst, D. J. Harvey, et al., Alzheimer's disease neuroimaging initiative (adni): clinical characterization, Neurology, 74 (2010), 201–209. https://doi.org/10.1212/WNL.0b013e3181cb3e25 doi: 10.1212/WNL.0b013e3181cb3e25
    [30] D. Ahmedt-Aristizabal, M. A. Armin, S. Denman, C. Fookes, L. Perersson, Graph-based deep learning for medical diagnosis and analysis: past, present and future, Sensors, 21 (2021), 4758. https://doi.org/10.3390/s21144758 doi: 10.3390/s21144758
    [31] M. Bastian, S. Heymann, M. Jacomy, Gephi: An open source software for exploring and manipulating networks, in Proceedings of the International AAAI Conference on Web and Social Media, 3 (2009), 361–362. Available from: https://ojs.aaai.org/index.php/ICWSM/article/view/13937.
    [32] S. R. Broadbent, J. M. Hammersley, Percolation processes: I. Crystals and mazes, Math. Proc. Cambridge Philos. Soc., 53 (1957), 629–641. https://doi.org/10.1017/S0305004100032680 doi: 10.1017/S0305004100032680
    [33] J. M. Hammersley, Percolation processes: II. The connective constant, Math. Proc. Cambridge Philos. Soc., 53 (1957), 642–645. https://doi.org/10.1017/S0305004100032692 doi: 10.1017/S0305004100032692
    [34] G. Grimmett, Percolation, Springer, New York, 1989. https://doi.org/10.1007/978-1-4757-4208-4
    [35] E. Choi, M. T. Bahadori, A. Schuetz, W. F. Stewart, J. Sun, Doctor AI: predicting clinical events via recurrent neural networks, in Proceedings of the 1st machine learning for healthcare conference, 56 (2016), 301–318. Available from: http://proceedings.mlr.press/v56/Choi16.pdf.
    [36] 2010 i2b2/va challenge evaluation assertion annotation guidelines. Available from: https://www.i2b2.org/NLP/Relations/assets/Assertion%20Annotation%20Guideline.pdf.
    [37] 2010 i2b2/va challenge evaluation concept annotation guidelines. Available from: https://www.i2b2.org/NLP/Relations/assets/Concept%20Annotation%20Guideline.pdf.
    [38] J. Yang, Y. Guan, B. He, C. Qu, Q. Yu, Y. Liu, et al., Annotation scheme and corpus construction for named entities and entity relations on Chinese electronic medical records, J. Software, 27 (2016), 2725–2746. https://doi.org/10.13328/j.cnki.jos.004880 doi: 10.13328/j.cnki.jos.004880
    [39] B. He, B. Dong, Y. Guan, J. Yang, Z. Jiang, Q. Yu, et al., Building a comprehensive syntactic and semantic corpus of Chinese clinical texts, J. Biomed. Inf., 69 (2017), 203–217. https://doi.org/10.1016/j.jbi.2017.04.006 doi: 10.1016/j.jbi.2017.04.006
    [40] E. Choi, A. Schuetz, W. F. Stewart, J. Sun, Using recurrent neural network models for early detection of heart failure onset, J. Am. Med. Inf. Assoc., 24 (2017), 361–370. https://doi.org/10.1093/jamia/ocw112 doi: 10.1093/jamia/ocw112
    [41] P. Nguyen, T. Tran, N. Wickramasinghe, S. Venkatesh, Deepr: a convolutional net for medical records, IEEE J. Biomed. Health Inf., 21 (2017), 22–30. https://doi.org/10.1109/JBHI.2016.2633963 doi: 10.1109/JBHI.2016.2633963
    [42] C. Zhao, J. Jiang, Y. Guan, X. Guo, B. He, EMR-based medical knowledge representation and inference via Markov random fields and distributed representation learning, Artif. Intell. Med., 87 (2018), 49–59. https://doi.org/10.1016/j.artmed.2018.03.005 doi: 10.1016/j.artmed.2018.03.005
    [43] R. Miotto, L. Li, B. A. Kidd, J. T. Dudley, Deep Patient: An unsupervised representation to predict the future of patients from the electronic health records, Sci. Rep., 6 (2016), 26094. https://doi.org/10.1038/srep26094 doi: 10.1038/srep26094
    [44] C. Buckley, E. M. Voorhees, Retrieval evaluation with incomplete information, in Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, (2004), 25–32. https://doi.org/10.1145/1008992.1009000
    [45] C. Buckley, E. M. Voorhees, Evaluating evaluation measure stability, ACM SIGIR Forum, 51 (2017), 235–242. https://doi.org/10.1145/3130348.3130373 doi: 10.1145/3130348.3130373
    [46] M. D. Smucker, J. Allan, B. Carterette, A comparison of statistical significance tests for information retrieval evaluation, in Proceedings of the 16th ACM Conference on Conference on Information and Knowledge Management, (2007), 623–632. https://doi.org/10.1145/1321440.1321528
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)