Review

Exploring the challenges and opportunities of image processing and sensor fusion in autonomous vehicles: A comprehensive review

  • Autonomous vehicles are at the forefront of future transportation solutions, but their success hinges on reliable perception. This review paper surveys image processing and sensor fusion techniques vital for ensuring vehicle safety and efficiency. The paper focuses on object detection, recognition, tracking, and scene comprehension via computer vision and machine learning methodologies. In addition, the paper explores challenges within the field, such as robustness in adverse weather conditions, the demand for real-time processing, and the integration of complex sensor data. Furthermore, we examine localization techniques specific to autonomous vehicles. The results show that while substantial progress has been made in each subfield, there are persistent limitations. These include a shortage of comprehensive large-scale testing, the absence of diverse and robust datasets, and occasional inaccuracies in certain studies. These issues impede the seamless deployment of this technology in real-world scenarios. This comprehensive literature review contributes to a deeper understanding of the current state and future directions of image processing and sensor fusion in autonomous vehicles, aiding researchers and practitioners in advancing the development of reliable autonomous driving systems.

    Citation: Deven Nahata, Kareem Othman. Exploring the challenges and opportunities of image processing and sensor fusion in autonomous vehicles: A comprehensive review[J]. AIMS Electronics and Electrical Engineering, 2023, 7(4): 271-321. doi: 10.3934/electreng.2023016

    Coronary fractional flow reserve (FFR) represents the gold standard for the evaluation of myocardial ischemia [1,2,3,4]. However, it is measured through an invasive procedure requiring adenosine injection, which is a limitation for patients who are intolerant to adenosine [3,5]. Rapid advancements in computational fluid dynamics and medical imaging have made personalized hemodynamic simulation based on medical images possible. By combining the advantages of coronary computed tomography angiography (CTA) and FFR, a new non-invasive technique (FFRCT) was proposed to assess coronary artery stenosis from both anatomical and functional aspects [6,7,8,9]. The consistency of FFRCT and FFR was evaluated in the multi-center, early prospective DISCOVER-FLOW study (Diagnosis of Ischemia-Causing Stenoses Obtained Via Noninvasive Fractional Flow Reserve) [7,10,11], and the results showed good consistency.

    Taylor et al. laid the foundation for the numerical calculation of FFRCT in the HeartFlow NXT study [4,9,10,11,12,13]. Calculating FFR from coronary CTA images involves five basic steps: 1) based on the CTA images, an accurate, personalized anatomical model of the patient's epicardial coronary arteries is constructed; 2) under the assumption that there is no vascular stenosis, the total coronary flow and the flow of each branch in the resting state are estimated; 3) the coronary microcirculatory resistance at rest is calculated; 4) the change in the coronary microcirculatory resistance under maximum hyperemia is quantified; 5) the governing equations of coronary blood flow (the Navier-Stokes equations) are solved numerically to obtain the flow and pressure in the coronary arteries in the resting and hyperemic states, from which FFR is calculated. Combining anatomy, physiology and computational fluid dynamics thus makes it possible to calculate the blood flow and pressure in the coronary arteries in a hyperemic state. However, in the second step, the coronary branch flow cannot be measured in the clinic, so the subsequent calculations cannot proceed directly. To solve this problem, the most common approach is to evaluate the flow in bifurcated vessels under an assumed blood flow-diameter scaling law.

    The power-law scaling relationship has been assumed to be ubiquitous in biology and is relevant to the domains of medicine, nutrition and ecology [14,15,16]. Eighty years ago, Murray proposed a compromise between frictional and metabolic cost, expressed as a cost function. Murray's law predicts a universal, invariant exponent (3.0) for all vascular trees whose internal flows obey laminar conditions [15,17,18]. Unlike Murray's law, Zhou et al. showed that the exponent of the blood flow-diameter scaling law is not necessarily equal to 3.0, but depends on the ratio of the metabolic to viscous power dissipation of the tree of interest [19,20]. Subsequently, Lucian et al. proposed several power coefficient values (ranging from 2.1 to 3) based on the assumption that blood flow requires minimal energy, and they verified their suggestions with a bifurcated asymmetric vascular tree [21]; eventually, 2.7 was extrapolated as the best exponent of the blood flow-diameter scaling law. In addition, Huo et al. constructed the coronary artery tree without referencing empirical parameters; based on the relationship between volume and diameter predicted by the intraspecies scaling theory with exponent 3.0, they derived that the blood flow of a bifurcated vessel is proportional to the 7/3 power of the diameter [17,22,23]. These widely used blood flow-diameter scaling laws were obtained by least-squares fitting of data from various organs in a variety of animal experiments. However, it remains unclear which of these scaling laws is suitable for describing the blood flow distribution in human coronary bifurcations, and how each affects the numerical calculation of FFRCT [24,25,26].

    This study retrospectively calculated the microcirculatory resistance under the blood flow-diameter scaling laws with exponents 3, 2.7 and 7/3 and used it as the exit boundary condition in the FFRCT simulation. The accuracy of FFRCT was then evaluated against invasive FFR as the reference standard, from which the optimal blood flow-diameter scaling law for the FFRCT simulation was derived. This provides a theoretical basis for the blood flow distribution of coronary bifurcation vessels and is significant for modifying the 0D/3D coupling model to improve the accuracy of FFRCT.

    In this study, an open-loop 0D/3D geometric multi-scale model was used to numerically simulate FFRCT under different blood flow diameter scaling laws. The accuracy of the simulation results of the three methods was evaluated with the clinically measured FFR as the evaluation standard. The whole research process included four main parts: the collection of physiological parameters and structural parameters, Rm calculation, the 0D/3D multi-scale model setting and comparative analysis of the simulation results, as shown in Figure 1.

    Figure 1.  Flow chart of the research framework.

    This study was designed as a retrospective study and approved by the ethics committee of Peking University People's Hospital. All enrolled coronary artery disease (CAD) patients signed an informed consent form.

    The entire enrollment and exclusion process is shown in Figure 2. The 50 stable and unstable CAD patients initially considered met the local diagnostic criteria for CAD. Nine patients were excluded due to clinical instability, 1 due to arrhythmia, 3 due to poor CT image quality resulting in incomplete reconstruction of the coronary artery tree, 7 due to stenosis before the first bifurcation node (in 5 patients the stenosis was located in the left main stem, and in 2 in the proximal right coronary segment), 2 due to incomplete ultrasound results and 2 due to diabetes with suspected microcirculatory dysfunction. Finally, 26 patients were enrolled, among whom 19 had no myocardial ischemia and 7 had myocardial ischemia.

    Figure 2.  Enrollment and exclusion process of the study population.

    Before the invasive FFR measurement, multiple clinical examinations were required, such as brachial artery blood pressure measurement, CTA examination, ultrasound examination and biochemical examination. The FFR value was obtained following standard operating procedures. The patients were all in the supine position under a digital subtraction angiography machine (DSA, Philips, Netherlands), and adenosine triphosphate was injected at a uniform rate of 140~180 μg/kg/min to bring the coronary vessels to the maximum hyperemia state. The interval between the various examinations did not exceed 30 days.

    The information necessary for the calculation of Rm comprised the physiological parameters of the patient and the structural parameters of the coronary arteries. The physiological parameters mainly included the heart rate (HR), systolic blood pressure (SP), diastolic blood pressure (DP), left ventricular end-systolic volume (ESV) and left ventricular end-diastolic volume (EDV), which could be retrieved from the Hospital Information System (HIS). The structural information of the coronary artery included the vessel length (Lv) and cross-sectional area (Av), which were measured from the coronary artery tree reconstructed from the patient's CTA images.

    The reconstruction of the three-dimensional (3D) model was based on the patient's computed tomography (CT) images. In this study, all coronary CTA examinations were performed on CT scanners with ≥ 64 detector rows (Revolution, GE, USA) with a double-cylinder high-pressure syringe (STELLANT, MEDRAD Inc., USA). The CT images in DICOM format were imported into the Mimics Version 19.0 (Materialise, Inc., Belgium) software. Two professional technicians manually reconstructed the coronary artery tree and cross-checked each other's work. Coronary angiography was used as a reference, and coronary vessels with a diameter of more than 1 mm were preserved. If the two reconstructed models differed, a third professional technician performed the reconstruction. Finally, physicians with more than 10 years of clinical experience checked all reconstructed 3D models to ensure their accuracy.

    Each bifurcation of a blood vessel was represented as a node, and the distance between every two nodes was defined as the vessel length. Measuring along the centerline was more accurate, as it avoided the error introduced by measuring along the vessel surface. The cross-sectional area was measured in the region after each node where the vessel diameter did not change drastically, and it was necessary to keep the cross-section as perpendicular to the vessel centerline as possible [27,28], as shown in Figure 3.

    Figure 3.  Coronary artery 3D reconstruction. (A) 3D reconstruction based on the patient's CT images. The stenosis was located in the left anterior descending branch (LAD), and vessels with a diameter of 1 mm or more were preserved. (B) The length from the root of the coronary artery to the first bifurcation was denoted as L5, the length from the first bifurcation to the second bifurcation was denoted as L6 on LAD, and the length from the first bifurcation node to the end of the blood vessel was denoted as L11 on the left circumflex (LCX). (C) After the first bifurcation, we measured the cross-sectional area of the blood vessel perpendicular to the center line, avoiding the stenosis, and marked the areas as A6 on the LAD and A11 on the LCX.

    The numerical calculation of Rm was carried out in an ideal state, i.e., the blood vessel was not narrowed. Based on the pressure drop and the mean blood vessel flow, the equivalent resistance for each coronary artery tree could be calculated, as shown in Eq (11). However, the pressure drop and flow could not be directly obtained in the real coronary artery model, and the calculation needed to be started from the coronary artery root.

    Taking the model in Figure 4A as an example, the stenosis was located in the proximal part of the LAD, and the resistance of the arterioles and capillaries after the bifurcation was denoted R6m. The entire left coronary artery could be simplified as shown in Figure 4B. According to the principle of vascular modeling, a bifurcated vessel is equivalent to a parallel circuit [29], as shown in Figure 4C.

    Figure 4.  Rm calculation model. (A) There was a large number of small arteries and capillaries in the posterior segment of the blood vessel, which represented the main part of the Rm; (B) The simplified model of the left coronary artery. The resistance of each segment of the blood vessel was recorded as R5, R6 and R11. R6m and R11m represented the Rm at the back end of the blood vessel; (C) The resistance on the same blood vessel was a series circuit, and the bifurcated blood vessel was a parallel circuit.

    In this process, the pressure of the right atrium was assumed to be 0 mmHg. The cardiac output Qco could be obtained from the ultrasonic measurements of EDV and ESV; the coronary flow Qcor was assumed to be 4% of the cardiac output, and the left coronary flow Ql to be 60% of the coronary flow [2,9,10,30]. The pressure P at the coronary artery root was taken as the mean arterial pressure, as shown in Eqs (1)-(4).

    $Q_{co} = (EDV - ESV) \times HR$ (1)
    $Q_{cor} = Q_{co} \times 4\%$ (2)
    $Q_l = 60\% \times Q_{cor}$ (3)
    $P = (SP + 2DP)/3$ (4)
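    As a minimal illustration of Eqs (1)-(4), the Python sketch below computes these resting quantities from routine measurements; the function name and example values are hypothetical, not patient data from this study.

```python
# Hypothetical helper illustrating Eqs (1)-(4); not the authors' code.

def resting_hemodynamics(edv_ml, esv_ml, hr_bpm, sp_mmhg, dp_mmhg):
    """Return (Qco, Qcor, Ql, P): flows in ml/min, pressure in mmHg."""
    q_co = (edv_ml - esv_ml) * hr_bpm      # Eq (1): cardiac output
    q_cor = 0.04 * q_co                    # Eq (2): coronary flow = 4% of CO
    q_l = 0.60 * q_cor                     # Eq (3): left coronary share = 60%
    p_mean = (sp_mmhg + 2 * dp_mmhg) / 3   # Eq (4): mean arterial pressure
    return q_co, q_cor, q_l, p_mean

# Example with plausible (made-up) values:
print(resting_hemodynamics(edv_ml=120, esv_ml=50, hr_bpm=70, sp_mmhg=128, dp_mmhg=74))
```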

    The resistance of normal blood vessels was mainly caused by blood viscosity. The coronary vessels were divided according to the bifurcation nodes, and the resistance of the vessel between every two nodes, such as R5, R6 and R11, could be calculated according to Eq (5).

    $R_v = \dfrac{8\pi\eta L_v}{A_v^2}$ (5)

    where $L_v$ denoted the length of the vessel between two branch nodes, $A_v$ the cross-sectional area at the beginning of the vessel, and $\eta$ the blood viscosity, 0.0035 Pa·s.

    The blood flow was proportional to a power of the diameter, and the flow of bifurcated vessels was obtained by Eq (6), in which i denotes the index of the bifurcation node, i.e., the bifurcation level (i = 1, 2, 3, ...), and j the index of a daughter vessel at a bifurcation node (j = 1, 2, 3, ...). In this study, the Rm values for α equal to 3, 2.7 and 7/3 were calculated and recorded as 3Rm, 2.7Rm and 7/3Rm, respectively.

    $Q_{ij} = \dfrac{D_{ij}^{\alpha}}{D_{i1}^{\alpha} + D_{i2}^{\alpha} + \cdots + D_{ij}^{\alpha}} Q_l$ (6)
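    A small sketch of Eq (6) makes the role of the exponent concrete: the parent flow is divided among the daughter branches in proportion to D^alpha. The diameters and flow below are illustrative values, not measurements from this study.

```python
# Sketch of Eq (6): split the parent flow among daughter vessels in
# proportion to D^alpha. All numbers here are illustrative.

def branch_flows(parent_flow, diameters_mm, alpha):
    total = sum(d ** alpha for d in diameters_mm)
    return [parent_flow * d ** alpha / total for d in diameters_mm]

q_l = 67.2  # ml/min, hypothetical left coronary flow
for alpha in (3.0, 2.7, 7 / 3):
    print(round(alpha, 3), branch_flows(q_l, diameters_mm=[3.2, 2.4], alpha=alpha))
# A larger exponent sends a larger share of the flow to the wider daughter.
```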

    By subtracting the cumulative pressure drops along the vessels from the pressure at the coronary artery root, the pressure at the entrance of the coronary microcirculation could be obtained in the LAD. Rm was then calculated by Eq (7).

    $R_m = \dfrac{P - \Delta p_1 - \Delta p_2 - \cdots - \Delta p_i}{0.024\,Q_{co}} \left( \dfrac{D_{11}^{\alpha} + D_{12}^{\alpha} + \cdots + D_{1j}^{\alpha}}{D_{1j}^{\alpha}} \cdot \dfrac{D_{21}^{\alpha} + D_{22}^{\alpha} + \cdots + D_{2j}^{\alpha}}{D_{2j}^{\alpha}} \cdots \dfrac{D_{i1}^{\alpha} + D_{i2}^{\alpha} + \cdots + D_{ij}^{\alpha}}{D_{ij}^{\alpha}} \right)$ (7)
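    To show how Eqs (5)-(7) combine, the sketch below walks from the coronary root to one terminal branch, accumulating Poiseuille-type pressure drops (Eq (5)) and power-law flow splits (Eq (6)), and then applies Eq (7) with Pv = 0. The path description and all values are assumptions for illustration; consistent SI units are used throughout.

```python
import math

ETA = 0.0035  # blood viscosity, Pa*s

def segment_resistance(length_m, area_m2):
    # Eq (5): R_v = 8*pi*eta*L_v / A_v^2
    return 8 * math.pi * ETA * length_m / area_m2 ** 2

def microcirculatory_resistance(p_root_pa, q_l_m3s, path, alpha):
    """path: per level, (segment length, segment area, followed-daughter
    diameter, sibling diameters at the downstream bifurcation)."""
    p, q = p_root_pa, q_l_m3s
    for length, area, d_own, d_sibs in path:
        p -= q * segment_resistance(length, area)          # pressure drop
        total = d_own ** alpha + sum(d ** alpha for d in d_sibs)
        q *= d_own ** alpha / total                        # Eq (6) flow split
    return p / q                                           # Eq (7), with P_v = 0

# Hypothetical two-level path (lengths in m, areas in m^2, diameters in mm):
path = [
    (0.01, 7e-6, 3.0, [2.2]),   # left main -> split into LAD / LCX daughters
    (0.03, 5e-6, 2.1, [1.6]),   # LAD segment -> next bifurcation
]
rm = microcirculatory_resistance(p_root_pa=12300, q_l_m3s=1.1e-6, path=path, alpha=7 / 3)
print(rm / 1.333e8)  # convert Pa*s/m^3 to mmHg/(ml/s)
```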

    A widely recognized open-loop 0D/3D geometric multi-scale model was used for the FFRCT simulation, as shown in Figure 5.

    Figure 5.  Open-loop 0D/3D geometric multi-scale coupling model.

    In this study, all models were meshed using the tetrahedral meshing method in the ANSYS Workbench 14.5 software, and the simulation parameters were set in the ANSYS CFX 14.5 software. Blood was modeled as an adiabatic, isotropic, incompressible Newtonian fluid with a density of 1050 kg/m3 and a viscosity of 0.0035 Pa·s. The blood vessel wall was defined as a rigid, no-slip wall.

    In the geometric multi-scale model, the zero-dimensional (0D) part (the lumped parameter model) was mainly composed of three modules: the heart module (inlet boundary conditions), the systemic circulation module (systemic circulation exit boundary conditions) and the coronary artery module (coronary exit boundary conditions). In each of these modules, resistances (R) were used to simulate flow resistance, capacitances (C) vascular compliance, and inductances (L) blood inertia. In the heart module, the heart valves were simulated by diodes and the left ventricle by the variable capacitance C(t). In addition, a left ventricular module (LV) was added at the distal end of the coronary module to simulate the effect of myocardial contraction on coronary blood flow.

    The pressure-volume relationship was used in the ventricular module to describe the change in the variable capacitance in a cardiac cycle [27,31], as shown in Eq (8).

    $E(t) = \dfrac{P(t)}{V(t) - V_0}$ (8)

    where E(t) denoted the time-varying elastance (mmHg/ml), which was the reciprocal of the variable capacitance C; V(t) and P(t) represented the time-varying ventricular volume (ml) and pressure (mmHg), respectively; and V0 (ml) denoted the reference volume. Mathematically, Eq (9) was used to describe E(t):

    $E(t) = (E_{max} - E_{min}) E_n(t_n) + E_{min}$ (9)

    En(tn) represented the normalized time-varying elasticity, expressed in Eq (10):

    $E_n(t_n) = 1.55 \left[ \dfrac{(t_n/0.7)^{1.9}}{1 + (t_n/0.7)^{1.9}} \right] \left[ \dfrac{1}{1 + (t_n/1.17)^{21.9}} \right]$ (10)

    where $t_n = t/T_{max}$, $T_{max} = 0.1 + 0.15 t_c$, and $t_c$ denoted the cardiac cycle (s).
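    A compact sketch of Eqs (9)-(10) follows, assuming this double-Hill form of the normalized elastance; parameter values such as Emax and Emin would be tuned per patient and are not specified in the text.

```python
# Sketch of the time-varying elastance of Eqs (9)-(10); illustrative only.

def normalized_elastance(tn):
    rise = (tn / 0.7) ** 1.9 / (1 + (tn / 0.7) ** 1.9)   # systolic upstroke
    fall = 1 / (1 + (tn / 1.17) ** 21.9)                 # diastolic relaxation
    return 1.55 * rise * fall                            # Eq (10)

def elastance(t, tc, e_max, e_min):
    t_max = 0.1 + 0.15 * tc        # peak-elastance time, s
    tn = (t % tc) / t_max          # normalized time
    return (e_max - e_min) * normalized_elastance(tn) + e_min  # Eq (9), mmHg/ml
```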

    In the coronary artery module, the coronary exit boundary condition was Rm, mainly consisting of the coronary arterial resistance Ra, the coronary arterial microcirculatory resistance Ra-m and the coronary venous microcirculatory resistance Ra-v. According to the literature, their proportions were 0.32, 0.52 and 0.16, respectively. When the patient transitioned from the resting state to the hyperemic state, Rm became 0.24 times its resting value [10,27].
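    The outlet construction described above can be sketched in a few lines; the proportions 0.32/0.52/0.16 and the hyperemic factor 0.24 are taken from the text, while the function name is hypothetical.

```python
# Sketch: partition a resting Rm into the three coronary outlet resistances
# and apply the hyperemic scaling (0.24x) described in the text.

def coronary_outlet(rm_rest):
    rm_hyper = 0.24 * rm_rest   # maximal hyperemia
    r_a = 0.32 * rm_hyper       # coronary arterial resistance
    r_a_m = 0.52 * rm_hyper     # arterial microcirculation resistance
    r_a_v = 0.16 * rm_hyper     # venous microcirculation resistance
    return r_a, r_a_m, r_a_v
```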

    The vasculature upstream and downstream of the 3D model was represented by lumped parameter models solved with the explicit Euler method. The two models were coupled through interface conditions (continuity of pressure and flow). To realize the 0D/3D coupling, secondary software development was performed: a user-defined subroutine written in the FORTRAN language.
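    The study's FORTRAN subroutine is not reproduced here, but a minimal sketch of an explicit Euler update for a single RC outlet compartment illustrates the 0D side of the coupling; the compartment layout and names are assumptions.

```python
# Hypothetical explicit Euler step for one RC outlet compartment:
# C * dP/dt = q_in - (P - P_distal) / R_out

def euler_step(p, q_in, r_out, c, p_distal, dt):
    q_out = (p - p_distal) / r_out      # flow leaving through the resistor
    return p + dt * (q_in - q_out) / c  # explicit (forward) Euler update
```

    At every time step, the 3D solver passes the outlet flow q_in to the 0D model, which returns the updated pressure p as the 3D boundary condition, so that pressure and flow remain continuous across the interface.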

    In order to verify the influence of the power law scaling of the blood flow diameter on the simulation results, we repeated the FFRCT simulation with 3Rm, 2.7Rm and 7/3Rm as the exit boundary conditions without changing any other settings, the simulation program or FFRCT measurement position. The results were recorded as 3FFRCT, 2.7FFRCT and 7/3FFRCT, respectively.

    Statistical analysis was performed using the SPSS Version 23.0 (IBM, New York, USA) and MedCalc Version 15 (MedCalc Software, Ostend, Belgium) software. In the demographic analysis, continuous data were represented by the mean ± standard deviation and the correlation between continuous variables and FFR was expressed by ANOVA. Qualitative data were represented by the statistical frequency and percentage, and the chi-square test was used to express the correlation between qualitative variables and FFR. In the correlation analysis, a p-value of less than 0.05 indicated statistical significance.

    The simulated FFRCT and the clinically measured FFR were continuous values, and the Bland-Altman diagram was chosen to analyze the consistency of the two methods. If the difference between the two measurement results was clinically acceptable, within the 95% limits of agreement (95% LoA), then a good agreement could be considered between the measurement results of the two methods. The smaller the 95% LoA, the better the consistency. The area under the curve (AUC) value of the receiver operating characteristic (ROC) was used as the criterion to evaluate the diagnostic performance. The closer the AUC to 1, the better the diagnostic performance.
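    Both analyses can be reproduced with standard tools, as in the sketch below; the arrays are hypothetical placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

ffr = np.array([0.92, 0.76, 0.85, 0.71, 0.88])     # placeholder measured FFR
ffr_ct = np.array([0.94, 0.78, 0.83, 0.69, 0.90])  # placeholder simulated FFRCT

# Bland-Altman: bias and 95% limits of agreement
diff = ffr_ct - ffr
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

# ROC: FFR <= 0.80 defines ischemia; lower FFRCT means "more positive"
ischemic = (ffr <= 0.80).astype(int)
auc = roc_auc_score(ischemic, -ffr_ct)
print(bias, loa, auc)
```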

    As shown in Table 1, there was a clear correlation between the Rm values calculated under the three powers and the clinically measured FFR. The average values of 3Rm, 2.7Rm and 7/3Rm were 146.02 ± 80.73, 149.13 ± 80.94 and 154.17 ± 83.59 mmHg/ml/s, with p-values of 0.004, 0.005 and 0.010, respectively. In contrast, the p-values of the correlations between gender, age, HR, SP, DP, EDV, ESV and invasive FFR were all greater than 0.05, indicating that these correlations were statistically insignificant.

    Table 1.  Patient characteristics.
    Variables | Value | P value
    Gender (male/female, percentage) | 20/6, 76.9% | 0.455
    Age (mean ± SD, years) | 60.73 ± 8.79 | 0.573
    Heart rate (mean ± SD, beats/min) | 66.86 ± 13.43 | 0.285
    Systolic blood pressure (mean ± SD, mmHg) | 127.57 ± 25.64 | 0.275
    Diastolic blood pressure (mean ± SD, mmHg) | 73.67 ± 16.24 | 0.293
    Left ventricular end-systolic volume (ESV) (mean ± SD, ml) | 81.79 ± 50.96 | 0.449
    Left ventricular end-diastolic volume (EDV) (mean ± SD, ml) | 60.45 ± 43.72 | 0.895
    Calculated Rm under Q ∝ D^3 (3Rm) (mean ± SD, mmHg/ml/s) | 146.02 ± 80.73 | 0.004
    Calculated Rm under Q ∝ D^2.7 (2.7Rm) (mean ± SD, mmHg/ml/s) | 149.13 ± 80.94 | 0.005
    Calculated Rm under Q ∝ D^7/3 (7/3Rm) (mean ± SD, mmHg/ml/s) | 154.17 ± 83.60 | 0.010


    The average values of 3FFRCT, 2.7FFRCT and 7/3FFRCT were 0.827 ± 0.172, 0.825 ± 0.169 and 0.824 ± 0.167, respectively, slightly higher than the clinically measured FFR (0.814 ± 0.151). Figure 6 shows the scatter plots of FFRCT against FFR under the three power laws. The Pearson correlation coefficients of 3FFRCT, 2.7FFRCT and 7/3FFRCT with FFR were 0.95, 0.95 and 0.96, respectively (all p < 0.001), indicating a good correlation between the simulation results and the clinical measurements. In particular, 7/3FFRCT correlated slightly better than the other two.

    Figure 6.  Scatter plots of FFRCT versus FFR under the three blood flow-diameter power laws.

    Figure 7 shows the Bland-Altman diagrams. The average difference between each of 3FFRCT, 2.7FFRCT and 7/3FFRCT and FFR was 0.01, and the 95% LoA were -0.11~0.12, -0.09~0.11 and -0.08~0.11 (n = 26), respectively. Evidently, FFRCT was slightly overestimated compared with the clinically measured FFR, but the 7/3FFRCT − FFR differences were the most concentrated among the three blood flow-diameter power laws.

    Figure 7.  Bland-Altman diagram between three kinds of blood flow-diameter power-law simulation FFRCT and clinically measured FFR.

    The diagnostic performance of FFRCT was evaluated using ROC curve analysis. Figure 8 shows the ROC curves of FFRCT using a clinically measured FFR cut-off of ≤ 0.80. For 3FFRCT, 2.7FFRCT and 7/3FFRCT, the AUC values were 0.944 (95% CI: 0.777-0.996), 0.962 (95% CI: 0.805-0.999) and 0.962 (95% CI: 0.805-0.999), respectively.

    Figure 8.  Receiver-operating characteristic (ROC) curve of FFRCT using the cut-off invasive FFR of ≤ 0.8 on a per-vessel basis. (A) The blue line represents the ROC curve of 3FFRCT; (B) The green line represents the ROC curve of 2.7FFRCT; (C) The orange line represents the ROC curve of 7/3FFRCT.

    Correspondingly, the standard errors were 0.0536, 0.0400 and 0.0400, respectively, as shown in Table 2. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and accuracy were also important indicators of diagnostic performance, as shown in Table 2.

    Table 2.  The diagnostic metrics of the three numerical simulation results.
    Diagnostic metrics | 3FFRCT | 2.7FFRCT | 7/3FFRCT
    AUC (95% CI) | 0.944 (0.777-0.996) | 0.962 (0.805-0.999) | 0.962 (0.805-0.999)
    SE | 0.0536 | 0.0400 | 0.0400
    Sensitivity | 94.7% | 100% | 100%
    Specificity | 85.7% | 85.7% | 85.7%
    PPV | 85.7% | 85.7% | 85.7%
    NPV | 94.7% | 100% | 100%
    Accuracy | 92.3% | 96.15% | 96.15%
    Threshold | 0.816 | 0.787 | 0.791


    2.7FFRCT and 7/3FFRCT had the same sensitivity, specificity, PPV, NPV and accuracy in judging myocardial ischemia, all higher than the corresponding metrics of 3FFRCT. The corresponding thresholds for judging myocardial ischemia were 0.816, 0.787 and 0.791, respectively, with the diagnostic threshold of 7/3FFRCT being the closest to 0.8. Based on these data, FFRCT had the best diagnostic performance when the blood flow-diameter power-law exponent was 7/3.

    The blood flow-diameter power-law scaling combines structure and function and represents the supply-demand relationship between the blood supply and the myocardium. The value of the power-law exponent affects the distribution of blood flow among the vessels and, in turn, the setting of the CFD exit boundary condition Rm, which is of special significance for the accuracy of the simulation results.

    A significant correlation between Rm and the clinically measured FFR can be observed in Table 1, which was closely related to the calculation method and process of Rm. The calculation method was explained by the theoretical simplified model, as shown in Figure 9.

    Figure 9.  A simplified model explaining the calculation method.

    Blood flows from the left ventricle into the blood vessels and finally circulates to the right atrium through the capillaries. Since the pressure of the right atrium (Pv) is much smaller than the pressure of the left ventricle (Pa), this work assumed Pv to be 0 mmHg. Vascular resistance (Rv) is mainly caused by blood viscosity, from which the blood pressure drop (Δp) can be obtained. The calculation of Rm then follows from Ohm's law, as shown in Eq (11).

    $R_m = \dfrac{P_a - \Delta p}{Q}$ (11)

    Rm represented the resistance produced by all the blood vessels downstream of the stenosis. In Eq (11), Δp denoted the pressure drop along a vessel, which is related to the length and cross-sectional area of the corresponding vessel, and Q represented the mean flow, which is affected by the vessel diameter. Throughout the calculation, Rm involved not only the patient's personalized physiological parameters but also the patient's coronary artery structure. Therefore, the calculated Rm represented a predictive value combining the patient's structural and functional evaluation.

    The prediction threshold of 3FFRCT (0.816) was greater than 0.8, while the prediction thresholds of 2.7FFRCT (0.787) and 7/3FFRCT (0.791) were less than 0.8. This was related to the data distribution. The ROC curve analysis yields the Youden index, as shown in Eq (12).

    $\text{Youden index} = \text{Sensitivity} + \text{Specificity} - 1$ (12)

    The value corresponding to the largest Youden index was taken as the prediction threshold; in other words, the threshold was the value with the best combination of sensitivity and specificity. The predicted threshold separates ischemic from non-ischemic results in the way most consistent with the FFR judgment. According to the distribution of the simulation results, the ROC curve analysis automatically selected the point with the highest sensitivity and specificity to obtain the predicted threshold. Since the distributions of the FFRCT results differed, the prediction thresholds also differed; however, the closer the threshold was to 0.8, the closer the distribution of FFRCT was to that of FFR.
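    In code, the threshold selection described above amounts to maximizing tpr − fpr along the ROC curve, as in this sketch with placeholder data:

```python
import numpy as np
from sklearn.metrics import roc_curve

ffr = np.array([0.92, 0.76, 0.85, 0.71, 0.88, 0.79])     # placeholders
ffr_ct = np.array([0.94, 0.78, 0.83, 0.69, 0.90, 0.81])

ischemic = (ffr <= 0.80).astype(int)
fpr, tpr, thr = roc_curve(ischemic, -ffr_ct)  # negate: low FFRCT = positive
youden = tpr - fpr                            # sensitivity + specificity - 1
best_threshold = -thr[youden.argmax()]        # back on the FFRCT scale
print(best_threshold)
```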

    Figure 7 shows that the average difference between 2.7FFRCT, 7/3FFRCT and clinical FFR was 0.01, indicating that the simulation results overestimated FFR; correspondingly, Table 2 shows that the prediction thresholds of 2.7FFRCT and 7/3FFRCT were lower than 0.8. This apparent inconsistency is related to the data processing. In the Bland-Altman analysis, FFR was treated as a continuous variable. In contrast, the thresholds of the three methods were obtained by ROC curve analysis, with FFRCT as the test variable and FFR (cut-off 0.8) as the state variable; there, FFR was a categorical variable, with FFR ≥ 0.8 defined as non-ischemic and FFR < 0.8 as ischemic. Thus, for non-ischemic data, whether FFRCT was 0.8 or 1 made almost no difference to the ROC curve.

    The allometric scaling law is fundamental to understanding the morphological and hemodynamic characteristics of biological vascular trees and is of great significance to the construction of vascular models. In this work, we found that taking the blood flow proportional to the 7/3 power of the diameter is more suitable for the FFRCT simulation, yielding higher accuracy than the FFRCT calculated with the powers 3 and 2.7. This may be related to the geometry of the coronary artery.

    The influence of the coronary artery structure on the blood flow-diameter scaling law was mainly reflected in the relative diameters of the daughter branches of the same bifurcation. Assuming that the stenosis was on vessel j of level i and that there were two vessels at the bifurcation, the blood flow of the vessel could be calculated as in Eq (6). The ratio Dij/Di(j+1) represents the relative size of the vessel in which the stenosis was located, denoted by the letter A. The formula can then be transformed into Eq (13).

    $Q_{ij} = \left( 1 - \dfrac{1}{A^{\alpha} + 1} \right) Q_l$ (13)

    When A was greater than 1, the flow fraction increased as the power increased, and when A was less than 1, it decreased as the power increased; only when A was exactly 1 did the power have no effect on the flow. This sensitivity leads to a greater dispersion of the simulated FFRCT when the power is 3. Over the past 80 years, many papers on Murray's law and the verification of its exponent have been published; these studies show that applying the exponent 3.0 produces significant dispersion across coronary trees whose internal flow obeys laminar conditions [14-20]. This may be why 7/3FFRCT performs better than the other two methods.
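    The asymmetry argument can be checked numerically with Eq (13); the diameter ratios below are illustrative:

```python
# Flow fraction of the followed daughter branch as a function of the
# exponent alpha, per Eq (13); A = D_ij / D_i(j+1) (illustrative values).

def flow_fraction(a_ratio, alpha):
    return 1 - 1 / (a_ratio ** alpha + 1)

for a_ratio in (0.8, 1.0, 1.25):
    print(a_ratio, [round(flow_fraction(a_ratio, a), 3) for a in (7 / 3, 2.7, 3.0)])
# A > 1: the fraction grows with alpha; A < 1: it shrinks; A = 1: always 0.5.
```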

    The simulation results of the three power scaling laws did not differ greatly, but this phenomenon was acceptable. The scatter plots of the differences between 3FFRCT and 2.7FFRCT, 3FFRCT and 7/3FFRCT, and 2.7FFRCT and 7/3FFRCT are shown in Figure 10A. The maximum difference between the simulation results was 0.027, with corresponding 3FFRCT and 7/3FFRCT values of 0.947 and 0.920, respectively, as shown in Figures 10B and 10C. Correspondingly, the difference between 3Rm and 7/3Rm was 84 mmHg/ml/s. Previous works assumed the Rm in the hyperemic state to be 0.24 times the Rm in the resting state [29], which narrows the gap in the exit boundary conditions of the FFRCT simulation. At the same time, FFR is a dimensionless value between 0 and 1 [8,29,31], which further narrows the gap between the simulation results of the different power scaling laws.

    Figure 10.  Scatter plot of the differences between the three types of FFRCT, showing the differences among the three simulation results for the 26 patients. The blue dots represent the difference between 3FFRCT and 2.7FFRCT, the red dots the difference between 3FFRCT and 7/3FFRCT, and the green dots the difference between 2.7FFRCT and 7/3FFRCT. The simulation result of the 25th patient has the largest difference, 0.027, marked with a red circle, together with the FFR cloud charts of this patient under the three blood flow-diameter scaling laws.

    The simulation of FFRCT makes non-invasive FFR calculation possible, and the technique has received FDA approval [1,6]. However, it has disadvantages, including the long duration of the entire process, the large amount of computation and the complicated operation. Non-professionals are therefore unable to operate it, the simulation results cannot be presented in real time, and the clinical development of this technology is limited.

    The rapid development of machine learning (ML) has made the broader application of FFRCT possible. The ML process generally consists of four parts: 1) an ML framework is built; 2) a training set is used for supervised learning; 3) a validation set is used to determine the parameters of the network; 4) a test set is used to verify the final performance of the model. Previous research has focused on the ML framework, such as the random forest (RF), self-organizing map (SOM), support vector machine (SVM), fully connected neural network (FC), long short-term memory (LSTM), TreeVes-D, TreeVes-U and TreeVes-Net [32,33,34,35,36].

    The training set consists of samples of known categories used to adjust the parameters of the classifier to achieve the required performance; hence, the accuracy of the training set is very important. The method of this research improves the accuracy of FFRCT, which can in turn improve the accuracy of the training and validation sets. This lays the foundation for applying machine learning to the flow field in the coronary arteries, as sketched below.
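    For illustration only, the four-step workflow might look as follows, with a random-forest regressor standing in for the cited frameworks; the features and labels are synthetic stand-ins for geometric/physiological inputs and simulated FFRCT values.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))           # synthetic feature vectors
y = rng.uniform(0.5, 1.0, size=200)     # synthetic FFRCT labels

X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)  # 1) framework
model.fit(X_train, y_train)                                      # 2) supervised training
print("validation R^2:", model.score(X_val, y_val))              # 3) parameter selection
print("test R^2:", model.score(X_test, y_test))                  # 4) final evaluation
```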

    smartFFR is a new method for true on-site, real-time, geometry-derived functional assessment of coronary artery stenosis. Its inlet boundary condition is an average static pressure of 100 mmHg, and its outlet boundary condition is an increasing transient flow profile. To obtain the latter, after evaluating the diameter and cross-sectional area of each branch, Murray's law is applied, which states that the flow of a branch is proportional to the third power of its diameter. The present study indicates that the accuracy of the FFRCT simulation with the 7/3 exponent is higher than with the exponent 3; in future work, the 7/3 exponent could be applied to smartFFR to check whether it improves smartFFR's accuracy [37].

    This work had some limitations. Firstly, the enrolled cohort was not large, the selection of data was biased, and there were fewer ischemic vessels than non-ischemic vessels. Secondly, the research was performed under the assumption that the flow distribution ratio between the left and right coronary arteries was 6:4, which was not personalized. Finally, a constant blood viscosity of 0.0035 Pa·s was applied in the simulation, which was also not a personalized value.

    The blood flow-diameter power-law scaling affects the calculation of the blood flow distribution and the microcirculatory resistance and hence the simulation results of FFRCT. This paper simulated FFRCT with the power-law exponents 3, 2.7 and 7/3. Through a retrospective comparison of these results with invasive FFR, the accuracy of FFRCT was shown to be highest when the blood flow-diameter scaling exponent was 7/3. This has positive significance for improving the FFRCT model and its accuracy.

    This study was supported by National Natural Science Foundation of China (11832003, 11772016, 11702008), National Key R & D Program of China (2020YFC2004400), and Key Project of Science and Technology of Beijing Municipal Education Commission (KZ201810005007).

    This study passed the inspection of the medical ethics committee of Peking University People's Hospital. All participants signed an informed consent form.

    The authors declare that there is no conflict of interest regarding this article.



    [1] World Health Organization (2018) Global Status Report on Road Safety. WHO: Geneva, Switzerland.
    [2] Othman K (2021) Public acceptance and perception of autonomous vehicles: a comprehensive review. AI and Ethics 1: 355-387. https://doi.org/10.1007/s43681-021-00041-8
    [3] Autonomous Vehicle Market to Garner Growth 63.5%. Available from: https://www.precedenceresearch.com/autonomous-vehicle-market
    [4] Glon, R, Edelstein, S (2020) The History of Self-Driving Cars. Available from: https://www.digitaltrends.com/cars/history-of-self-driving-cars-milestones/
    [5] Wiggers K (2020) Waymo's Autonomous Cars Have Driven 20 Million Miles on Public Roads. Available from: https://venturebeat.com/2020/01/06/waymos-autonomous-cars-have-driven-20-million-miles-on-public-roads/
    [6] Othman K (2022) Exploring the implications of autonomous vehicles: A comprehensive review. Innovative Infrastructure Solutions 7: 165. https://doi.org/10.1007/s41062-022-00763-6
    [7] Shuttleworth J (2019) SAE Standard News: J3016 Automated-Driving Graphic Update. Available from: https://www.sae.org/news/2019/01/sae-updates-j3016-automated-driving-graphic
    [8] Autopilot. Available from: https://www.tesla.com/en_IE/autopilot
    [9] Othman K (2021) Impact of autonomous vehicles on the physical infrastructure: Changes and challenges. Designs 5: 40. https://doi.org/10.3390/designs5030040
    [10] Othman K (2023) Exploring the evolution of public acceptance towards autonomous vehicles with the level of knowledge. Innovative Infrastructure Solutions 8: 208. https://doi.org/10.1007/s41062-023-01180-z
    [11] Othman K (2022) Multidimension analysis of autonomous vehicles: the future of mobility. Civil Engineering Journal 7: 71-93. https://doi.org/10.28991/CEJ-SP2021-07-06
    [12] Mozaffari S, Al-Jarrah OY, Dianati M, Jennings P, Mouzakitis A (2020) Deep Learning-Based Vehicle Behavior Prediction for Autonomous Driving Applications: A Review. IEEE Trans Intell Transp Syst, 1-15.
    [13] Mehra A, Mandal M, Narang P, Chamola V (2020) ReViewNet: A Fast and Resource Optimized Network for Enabling Safe Autonomous Driving in Hazy Weather Conditions. IEEE Trans Intell Transp Syst, 1-11. https://doi.org/10.1109/TITS.2020.3013099
    [14] Othman K (2023) Public attitude towards autonomous vehicles before and after crashes: A detailed analysis based on the demographic characteristics. Cogent Engineering 10: 2156063.
    [15] Velasco-Hernandez G, Yeong DJ, Barry J, Walsh J (2020) Autonomous Driving Architectures, Perception and Data Fusion: A Review. Proceedings of the 2020 IEEE 16th International Conference on Intelligent Computer Communication and Processing (ICCP 2020), Cluj-Napoca, Romania, 3-5. https://doi.org/10.1109/ICCP51029.2020.9266268
    [16] Giacalone J, Bourgeois L, Ancora A (2019) Challenges in aggregation of heterogeneous sensors of Autonomous Driving Systems. Proceedings of the 2019 IEEE Sensors Applications Symposium (SAS), Sophia Antipolis, France, 11-13. https://doi.org/10.1109/SAS.2019.8706005
    [17] Liu X, Baiocchi O (2016) A comparison of the definitions for smart sensors, smart objects and Things in IoT. Proceedings of the 2016 IEEE 7th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada, 13-15.
    [18] Wojciechowicz T, Smart Sensor vs Base Sensor—What's the Difference? Symmetry Blog. Available from: https://www.semiconductorstore.com/blog/2018/Smart-Sensor-vs-Base-Sensor-Whats-the-Difference-Symmetry-Blog/3538/
    [19] Fayyad J, Jaradat MA, Gruyer D, Najjaran H (2020) Deep Learning Sensor Fusion for Autonomous Vehicle Perception and Localization: A Review. Sensors 20: 4220. https://doi.org/10.3390/s20154220
    [20] What Are Convolutional Neural Networks? IBM. 2020. Available from: https://www.ibm.com/cloud/learn/convolutional-neural-networks
    [21] Saha S (2018) A Comprehensive Guide to Convolutional Neural Networks — the ELI5 way. Data Science and ML. Saturn Cloud. Available from: https://saturncloud.io/blog/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way/
    [22] Brownlee J (2019) A Gentle Introduction to the Rectified Linear Unit (ReLU). In Deep Learning Performance. Machine Learning Mastery. Available from: https://machinelearningmastery.com/rectified-linear-activation-function-for-deep-learning-neural-networks/
    [23] What is LIDAR? Learn How Lidar Works. Velodyne Lidar. 2022. Available from: https://velodynelidar.com/what-is-lidar/
    [24] Wang P (2021) Research on comparison of LIDAR and camera in autonomous driving. Journal of Physics: Conference Series 2093: 012032. https://doi.org/10.1088/1742-6596/2093/1/012032
    [25] ScienceDirect (2018) Inertial measurement. Inertial Measurement - an overview. Available from: https://www.sciencedirect.com/topics/engineering/inertial-measurement
    [26] Camera, radar and LIDAR: A comparison of the three types of sensors and their limitations. 2021. Available from: https://autocrypt.io/camera-radar-lidar-comparison-three-types-of-sensors/
    [27] The use of radar technology in Autonomous Vehicles. 2022. Cadence. Available from: https://resources.system-analysis.cadence.com/blog/msa2022-the-use-of-radar-technology-in-autonomous-vehicles
    [28] Dobler S, Kondel V (2023) LiDAR and Radar Battle For Autonomous Vehicle Turf. Determining the future of autonomous driving system. Available from: https://www.oliverwyman.com/our-expertise/insights/2023/jul/lidar-radar-future-of-autonomous-driving-systems.html
    [29] Minaee S, Boykov Y, Porikli F, Plaza A, Kehtarnavaz N, Terzopoulos D (2021) Image Segmentation Using Deep Learning: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 44: 3523-3542. https://doi.org/10.1109/tpami.2021.3059968
    [30] Sensor fusion. Sensor Fusion - an overview. ScienceDirect Topics. 2017. Available from: https://www.sciencedirect.com/topics/engineering/sensor-fusion
    [31] Nabati R, Qi H (2019) RRPN: Radar Region Proposal Network for Object Detection in Autonomous Vehicles. 2019 IEEE International Conference on Image Processing (ICIP), 3093-3097. https://doi.org/10.1109/ICIP.2019.8803392
    [32] Lewis G (2016) Object Detection for Autonomous Vehicles.
    [33] Satilmis Y, Tufan F, Şara M, Karslı M, Eken S, Sayar A (2019) CNN Based Traffic Sign Recognition for Mini Autonomous Vehicles. Information Systems Architecture and Technology: Proceedings of 39th International Conference on Information Systems Architecture and Technology-ISAT 2018: Part II, 85-94. https://doi.org/10.1007/978-3-319-99996-8_8
    [34] Shen X, Batkovic I, Govindarajan V, Falcone P, Darrell T, Borrelli F (2020) ParkPredict: Motion and Intent Prediction of Vehicles in Parking Lots. 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 1170-1175. https://doi.org/10.1109/IV47402.2020.9304795
    [35] Gao H, Cheng B, Wang J, Li K, Zhao J, Li D (2018) Object Classification using CNN-Based Fusion of Vision and LIDAR in Autonomous Vehicle Environment. IEEE T Ind Inform 14: 4224-4231. https://doi.org/10.1109/TII.2018.2822828
    [36] Saez A, Bergasa L, Romeral E, Guillén M, Barea R, Sanz R (2018) CNN-based Fisheye Image Real-Time Semantic Segmentation. 2018 IEEE Intelligent Vehicles Symposium (IV), 1039-1044. https://doi.org/10.1109/IVS.2018.8500456
    [37] Hofesmann E (2020) IoU a better detection evaluation metric. Towards Data Science. Available from: https://towardsdatascience.com/iou-a-better-detection-evaluation-metric-45a511185be1
    [38] Farag W, Saleh Z (2018) Behavior Cloning for Autonomous Driving using Convolutional Neural Networks. 2018 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT), 1-7. https://doi.org/10.1109/3ICT.2018.8855753
    [39] Iftikhar S, Asim M, Zhang Z, El-Latif AAA (2022) Advance generalization technique through 3D CNN to overcome the false positives pedestrian in autonomous vehicles. Telecommun Syst 80: 545-557.
    [40] Gao Y, Tian F, Li J, Fang Z, Al-Rubaye S, Song W, et al. (2022) Joint optimization of depth and ego-motion for intelligent autonomous vehicles. IEEE T Intell Transp Syst. https://doi.org/10.1109/TITS.2022.3159275
    [41] Liang T, Bao H, Pan W, Pan F (2022) Traffic sign detection via improved sparse R-CNN for autonomous vehicles. J Adv Transport 2022: 1-16. https://doi.org/10.1155/2022/3825532
    [42] Zhu C, Mehrabi A, Xiao Y, Wen Y (2019) CrowdParking: Crowdsourcing Based Parking Navigation in Autonomous Driving Era. 2019 International Conference on Electromagnetics in Advanced Applications (ICEAA), 1401-1405. https://doi.org/10.1109/ICEAA.2019.8879201
    [43] Park M, Kim H, Park S (2021) A Convolutional Neural Network-Based End-to-End Self-Driving Using LiDAR and Camera Fusion: Analysis Perspectives in a Real-World Environment. Electronics 10: 2608. https://doi.org/10.3390/electronics10212608
    [44] Shen X, Lacayo M, Guggilla N, Borrelli F (2022) ParkPredict+: Multimodal Intent and Motion Prediction for Vehicles in Parking Lots with CNN and Transformer. 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), 3999-4004. https://doi.org/10.1109/ITSC55140.2022.9922162
    [45] Heinen MR, Osorio FS, Heinen FJ, Kelber C (2006) SEVA3D: Using Artificial Neural Networks to Autonomous Vehicle Parking Control. 2006 IEEE International Joint Conference on Neural Network Proceedings, 4704-4711. https://doi.org/10.1109/IJCNN.2006.247124
    [46] Wang Y, Ren B (2020) Quadrotor-Enabled Autonomous Parking Occupancy Detection. 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 8287-8292. https://doi.org/10.1109/IROS45743.2020.9341081
    [47] Min C, Xu J, Xiao L, Zhao D, Nie Y, Dai B (2021) Attentional Graph Neural Network for Parking-slot Detection. IEEE Robotic Autom Lett 6: 3445-3450. https://doi.org/10.1109/LRA.2021.3064270
    [48] Bernuth AV, Volk G, Bringmann O (2019) Simulating Photo-realistic Snow and Fog on Existing Images for Enhanced CNN Training and Evaluation. 2019 IEEE Intelligent Transportation Systems Conference (ITSC), 41-46. https://doi.org/10.1109/ITSC.2019.8917367
    [49] Lei Y, Emaru T, Ravankar AA, Kobayashi Y, Wang S (2020) Semantic Image Segmentation on Snow Driving Scenarios. 2020 IEEE International Conference on Mechatronics and Automation (ICMA), 1094-1100. https://doi.org/10.1109/ICMA49215.2020.9233538
    [50] Bijelic M, Gruber T, Mannan F, Kraus F, Ritter W, Dietmayer K, et al. (2020) Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11682-11692. https://doi.org/10.1109/CVPR42600.2020.01170
    [51] Cai Y, Sun X, Wang H, Chen L, Jiang H (2016) Night-Time Vehicle Detection Algorithm Based on Visual Saliency and Deep Learning. Journal of Sensors 2016: 1-7. https://doi.org/10.1155/2016/8046529
    [52] Liu Q, Li X, Yuan S, Li Z (2021) Decision-Making Technology for Autonomous Vehicles: Learning-Based Methods, Applications and Future Outlook. 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), 30-37. https://doi.org/10.1109/ITSC48978.2021.9564580
    [53] Jiménez F, Clavijo M, Cerrato A (2022) Perception, Positioning and Decision-Making Algorithms Adaptation for an Autonomous Valet Parking System Based on Infrastructure Reference Points Using One Single LiDAR. Sensors 22: 979. https://doi.org/10.3390/s22030979
    [54] Ferguson D, Baker C, Likhachev M, Dolan J (2008) A reasoning framework for autonomous urban driving. 2008 IEEE Intelligent Vehicles Symposium, 775-780. https://doi.org/10.1109/IVS.2008.4621247
    [55] Babu M, Oza Y, Singh AK, Krishna KM, Medasani S (2018) Model Predictive Control for Autonomous Driving Based on Time Scaled Collision Cone. 2018 European Control Conference (ECC), 641-648. https://doi.org/10.23919/ECC.2018.8550510
    [56] Zhang X, Liniger A, Sakai A, Borrelli F (2018) Autonomous Parking Using Optimization-Based Collision Avoidance. 2018 IEEE Conference on Decision and Control (CDC), 4327-4332. https://doi.org/10.1109/CDC.2018.8619433
    [57] Gindullina E, Mortag S, Dudin M, Badia L (2021) Multi-Agent Navigation of a Multi-Storey Parking Garage via Game Theory. 2021 IEEE 22nd International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), Pisa, Italy, 280-285. https://doi.org/10.1109/WoWMoM51794.2021.00052
    [58] Sheng W, Li B, Zhong X (2021) Autonomous Parking Trajectory Planning With Tiny Passages: A Combination of Multistage Hybrid A-Star Algorithm and Numerical Optimal Control. IEEE Access 9: 102801-102810. https://doi.org/10.1109/ACCESS.2021.3098676
    [59] Hongbo G, Guotao X, Xinyu Z, Bo C (2017) Autonomous parking control for intelligent vehicles based on a novel algorithm. The Journal of China Universities of Posts and Telecommunications 24: 51-56. https://doi.org/10.1016/S1005-8885(17)60223-1
    [60] Li Q, Li R, Ji K, Dai W (2015) Kalman filter and its application. In 2015 8th International Conference on Intelligent Networks and Intelligent Systems (ICINIS), 74-77. IEEE.
    [61] Kato S, Tokunaga S, Maruyama Y, Maeda S, Hirabayashi M, Kitsukawa Y, et al. (2018) Autoware on Board: Enabling Autonomous Vehicles with Embedded Systems. 2018 ACM/IEEE 9th International Conference on Cyber-Physical Systems (ICCPS), 287-296. https://doi.org/10.1109/ICCPS.2018.00035
    [62] Li Q, Queralta JP, Gia TN, Zou Z, Westerlund T (2020) Multi Sensor Fusion for Navigation and Mapping in Autonomous Vehicles: Accurate Localization in Urban Environments. Unmanned Systems 8: 229-237.
    [63] Realpe M, Vintimilla B, Vlacic L (2016) Multi-sensor fusion module in a fault tolerant perception system for autonomous vehicles. Journal of Automation and Control Engineering 4: 460-466. https://doi.org/10.18178/joace.4.6.460-466
    [64] Saxena S, Isukapati IK, Smith SF, Dolan JM (2019) Multiagent Sensor Fusion for Connected & Autonomous Vehicles to Enhance Navigation Safety. 2019 IEEE Intelligent Transportation Systems Conference (ITSC), 2490-2495. https://doi.org/10.1109/ITSC.2019.8917298
    [65] Nabati R, Qi H (2020) Radar-Camera Sensor Fusion for Joint Object Detection and Distance Estimation in Autonomous Vehicles. arXiv, abs/2009.08428.
    [66] Farag W (2020) Kalman-filter-based sensor fusion applied to road-objects detection and tracking for autonomous vehicles. Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 235: 1125-1138. https://doi.org/10.1177/0959651820975523
    [67] Liu Y, Fan X, Lv C, Wu J, Li L, Ding D (2017) An innovative information fusion method with adaptive Kalman filter for integrated INS/GPS navigation of autonomous vehicles. Mech Syst Signal Process 100: 605-616. https://doi.org/10.1016/j.ymssp.2017.07.051
    [68] Ouyang Z, Cui J, Dong X, Li Y, Niu J (2021) SaccadeFork: A lightweight multi-sensor fusion-based target detector. Information Fusion 77: 172-183. https://doi.org/10.1016/j.inffus.2021.07.004
    [69] Aldibaja M, Kuramoto A, Yanase R, Kim TH, Yonada K, Suganuma N (2018) Lateral Road-mark Reconstruction Using Neural Network for Safe Autonomous Driving in Snow-wet Environments. 2018 IEEE International Conference on Intelligence and Safety for Robotics (ISR), 486-493. https://doi.org/10.1109/IISR.2018.8535758
    [70] Convolutional Neural Network (CNN). Developers Breach. Available from: https://developersbreach.com/convolution-neural-network-deep-learning/
    [71] Jocher G, Keita Z (2022) YOLO Object Detection Explained: A Beginner's Guide. DataCamp. Available from: https://www.datacamp.com/blog/yolo-object-detection-explained
    [72] Chablani M (2017) YOLO — You only look once, real time object detection explained. Towards Data Science. Available from: https://towardsdatascience.com/yolo-you-only-look-once-real-time-object-detection-explained-492dc9230006
    [73] Scanbot SDK (2022) YOLO object detection and its applications in computer vision. Available from: https://www.linkedin.com/pulse/yolo-object-detection-its-applications-computer-vision-scanbotsdk/
    [74] Redmon J, Divvala S, Girshick R, Farhadi A (2016) You only look once: Unified, real-time object detection. Proceedings of the IEEE conference on computer vision and pattern recognition, 779-788.
    [75] Gandhi R (2018) R-CNN, Fast R-CNN, Faster R-CNN, YOLO — Object Detection Algorithms. Towards Data Science. Available from: https://towardsdatascience.com/r-cnn-fast-r-cnn-faster-r-cnn-yolo-object-detection-algorithms-36d53571365e
    [76] Ananth S (2019) Faster R-CNN for object detection. Towards Data Science. Available from: https://towardsdatascience.com/faster-r-cnn-for-object-detection-a-technical-summary-474c5b857b46
    [77] Pujara A (2020) Concept of AlexNet: - Convolutional Neural Network. Analytics Vidhya. Available from: https://medium.com/analytics-vidhya/concept-of-alexnet-convolutional-neural-network-6e73b4f9ee30
    [78] Ertan H (2021) CNN-LSTM based Models for Multiple Parallel Input and Multi-Step Forecast. Towards Data Science. Available from: https://towardsdatascience.com/cnn-lstm-based-models-for-multiple-parallel-input-and-multi-step-forecast-6fe2172f7668
    [79] Romera E, Álvarez JM, Bergasa LM, Arroyo R (2017) ERFNet: Efficient Residual Factorized ConvNet for Real-Time Semantic Segmentation. IEEE T Intell Transp Syst 19: 263-272. https://doi.org/10.1109/TITS.2017.2750080 doi: 10.1109/TITS.2017.2750080
    [80] Sanchez-Lengeling B, Reif E, Pearce A, Wiltschko AB (2021) A Gentle Introduction to Graph Neural Networks. Distill.pub. Available from: https://distill.pub/2021/gnn-intro/
    [81] Wood T, Transformer Neural Network Definition. DeepAI. Available from: https://deepai.org/machine-learning-glossary-and-terms/transformer-neural-network
    [82] Rjoub G, Wahab OA, Bentahar J, Bataineh AS (2021) Improving autonomous vehicles safety in snow weather using federated YOLO CNN learning. In Mobile Web and Intelligent Information Systems: 17th International Conference, MobiWIS 2021, Virtual Event, 121-134. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-83164-6_10
    [83] Zaghari N, Fathy M, Jameii SM, Shahverdy M (2021) The improvement in obstacle detection in autonomous vehicles using YOLO non-maximum suppression fuzzy algorithm. The Journal of Supercomputing 77: 13421-13446. https://doi.org/10.1007/s11227-021-03813-5 doi: 10.1007/s11227-021-03813-5
    [84] Kavitha R, Nivetha S (2021) Pothole and object detection for an autonomous vehicle using YOLO. In 2021 5th International Conference on Intelligent Computing and Control Systems (ICICCS), 1585-1589. IEEE.
    [85] Pandey R, Malik A (2021) Object detection and movement prediction for autonomous vehicle: a review. In 2021 2nd International Conference on Secure Cyber Computing and Communications (ICSCCC), 60-65. IEEE. https://doi.org/10.1109/ICSCCC51823.2021.9478167
    [86] Mseddi WS, Sedrine MA, Attia R (2021) YOLOv5 based visual localization for autonomous vehicles. In 2021 29th European Signal Processing Conference (EUSIPCO), 746-750. IEEE.
    [87] Liang S, Wu H, Zhen L, Hua Q, Garg S, Kaddoum G, et al. (2022) Edge YOLO: Real-time intelligent object detection system based on edge-cloud cooperation in autonomous vehicles. IEEE T Intell Transp Syst 23: 25345-25360. https://doi.org/10.1109/TITS.2022.3158253 doi: 10.1109/TITS.2022.3158253
    [88] Mohanapriya S, Natesan P, Indhumathi P, Mohanapriya STP, Monisha R (2021) Object and lane detection for autonomous vehicle using YOLO V3 algorithm. In AIP Conference Proceedings 2387: 140009. AIP Publishing LLC. https://doi.org/10.1063/5.0068836
    [89] Dewi C, Chen RC, Jiang X, Yu H (2022) Deep convolutional neural network for enhancing traffic sign recognition developed on Yolo V4. Multimed Tools Appl 81: 37821-37845. https://doi.org/10.1007/s11042-022-12962-5 doi: 10.1007/s11042-022-12962-5
    [90] Benjumea A, Teeti I, Cuzzolin F, Bradley A (2021) YOLO-Z: Improving small object detection in YOLOv5 for autonomous vehicles. arXiv preprint arXiv: 2112.11798.
    [91] Kosuru VSR, Venkitaraman AK (2022) Preventing the False Negatives of Vehicle Object Detection in Autonomous Driving Control Using Clear Object Filter Technique. In 2022 Third International Conference on Smart Technologies in Computing, Electrical and Electronics (ICSTCEE), 1-6. IEEE. https://doi.org/10.1109/ICSTCEE56972.2022.10100170
    [92] Fanthony IV, Husin Z, Hikmarika H, Dwijayanti S, Suprapto BY (2021) YOLO Algorithm-Based Surrounding Object Identification on Autonomous Electric Vehicle. In 2021 8th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI), 151-156. IEEE. https://doi.org/10.23919/EECSI53397.2021.9624275
    [93] Motwani NP, Soumya S, Singh U (2022) Object Detection and Tracking for Autonomous Vehicles using Deep Learning Technique-YOLO. In 2022 International Conference on Smart Generation Computing, Communication and Networking (SMART GENCON), 1-6. IEEE. https://doi.org/10.1109/SMARTGENCON56628.2022.10083703
    [94] Valeja Y, Pathare S, Patel D, Pawar M (2021) Traffic Sign Detection using CLARA and YOLO in Python. In 2021 7th International Conference on Advanced Computing and Communication Systems (ICACCS) 1: 367-371. IEEE. https://doi.org/10.1109/ICACCS51430.2021.9442065
    [95] Prakash M, Janarthanan M, Devi D (2023) Multiple Objects Identification for Autonomous Car using YOLO and CNN. 2023 7th International Conference on Intelligent Computing and Control Systems (ICICCS), 597-601. IEEE. https://doi.org/10.1109/ICICCS56967.2023.10142751
    [96] Unlu E, Zenou E, Riviere N, Dupouy PE (2019) An autonomous drone surveillance and tracking architecture. In 2019 Autonomous Vehicles and Machines Conference, AVM 2019 31: 35-1 - 35-7. https://doi.org/10.2352/ISSN.2470-1173.2019.15.AVM-035
    [97] Iftikhar S, Asim M, Zhang Z, El-Latif AAA (2022) Advance generalization technique through 3D CNN to overcome the false positives pedestrian in autonomous vehicles. Telecommun Syst 80: 545-557. https://doi.org/10.1007/s11235-022-00930-1 doi: 10.1007/s11235-022-00930-1
    [98] Dazlee NMAA, Khalil SA, Abdul-Rahman S, Mutalib S (2022) Object detection for autonomous vehicles with sensor-based technology using YOLO. International Journal of Intelligent Systems and Applications in Engineering 10: 129-134. https://doi.org/10.18201/ijisae.2022.276 doi: 10.18201/ijisae.2022.276
    [99] Masmoudi M, Ghazzai H, Frikha M, Massoud Y (2019) Object detection learning techniques for autonomous vehicle applications. In 2019 IEEE International Conference on Vehicular Electronics and Safety (ICVES), 1-5. IEEE. https://doi.org/10.1109/ICVES.2019.8906437
    [100] Farrukh FUD, Zhang C, Jiang Y, Zhang Z, Wang Z, Wang Z, et al. (2020) Power efficient tiny YOLO CNN using reduced hardware resources based on Booth multiplier and Wallace tree adders. IEEE Open Journal of Circuits and Systems 1: 76-87. https://doi.org/10.1109/OJCAS.2020.3007334 doi: 10.1109/OJCAS.2020.3007334
    [101] Wang G, Guo J, Chen Y, Li Y, Xu Q (2019) A PSO and BFO-based learning strategy applied to faster R-CNN for object detection in autonomous driving. IEEE Access 7: 18840-18859. https://doi.org/10.1109/ACCESS.2019.2897283 doi: 10.1109/ACCESS.2019.2897283
    [102] Li X, Xie Z, Deng X, Wu Y, Pi Y (2022) Traffic sign detection based on improved faster R-CNN for autonomous driving. The Journal of Supercomputing, 1-21. https://doi.org/10.1007/s11227-021-04230-4
    [103] Bin Issa R, Das M, Rahman MS, Barua M, Rhaman MK, Ripon KSN, et al. (2021) Double deep Q-learning and Faster R-CNN-based autonomous vehicle navigation and obstacle avoidance in dynamic environment. Sensors 21: 1468. https://doi.org/10.3390/s21041468 doi: 10.3390/s21041468
    [104] Li P, Chen X, Shen S (2019) Stereo R-CNN based 3D object detection for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7644-7652. https://doi.org/10.1109/CVPR.2019.00783
    [105] Chen ST, Cornelius C, Martin J, Chau DH (2019) ShapeShifter: Robust physical adversarial attack on Faster R-CNN object detector. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2018, Proceedings, Part I 18, 52-68. Springer International Publishing. https://doi.org/10.1007/978-3-030-10925-7_4
    [106] Mostafa T, Chowdhury SJ, Rhaman MK, Alam MGR (2022) Occluded Object Detection for Autonomous Vehicles Employing YOLOv5, YOLOX and Faster R-CNN. In 2022 IEEE 13th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), 0405-0410. IEEE. https://doi.org/10.1109/IEMCON56893.2022.9946565
    [107] Yang W, Li Z, Wang C, Li J (2020) A multi-task Faster R-CNN method for 3D vehicle detection based on a single image. Appl Soft Comput 95: 106533. https://doi.org/10.1016/j.asoc.2020.106533 doi: 10.1016/j.asoc.2020.106533
    [108] Liang T, Bao H, Pan W, Pan F (2022) Traffic sign detection via improved sparse R-CNN for autonomous vehicles. J Adv Transport 2022: 1-16. https://doi.org/10.1155/2022/3825532 doi: 10.1155/2022/3825532
    [109] Kukreja R, Rinchen S, Vaidya B, Mouftah HT (2020) Evaluating traffic signs detection using Faster R-CNN for autonomous driving. In 2020 IEEE 25th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD), 1-6. IEEE. https://doi.org/10.1109/CAMAD50429.2020.9209289
    [110] Amin S, Galasso F (2017) Geometric proposals for faster R-CNN. In 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 1-6. IEEE. https://doi.org/10.1109/AVSS.2017.8078518
    [111] Chan PH, Huggett A, Souvalioti G, Jennings P, Donzella V (2022) Influence of AVC and HEVC compression on detection of vehicles through Faster R-CNN. IEEE T Intell Transp Syst. https://doi.org/10.36227/techrxiv.19808566.v1
    [112] Kortmann F, Talits K, Fassmeyer P, Warnecke A, Meier N, Heger J, et al. (2020) Detecting various road damage types in global countries utilizing faster r-cnn. In 2020 IEEE International Conference on Big Data (Big Data), 5563-5571. IEEE. https://doi.org/10.1109/BigData50022.2020.9378245
    [113] Qian R, Liu Q, Yue Y, Coenen F, Zhang B (2016) Road surface traffic sign detection with hybrid region proposal and Fast R-CNN. In 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), 555-559. IEEE. https://doi.org/10.1109/FSKD.2016.7603233
    [114] Nabati R, Qi H (2019) Rrpn: Radar region proposal network for object detection in autonomous vehicles. In 2019 IEEE International Conference on Image Processing (ICIP), 3093-3097. IEEE. https://doi.org/10.1109/ICIP.2019.8803392
    [115] Cheng P, Liu W, Zhang Y, Ma H (2018) LOCO: local context based faster R-CNN for small traffic sign detection. In MultiMedia Modeling: 24th International Conference, MMM 2018, Bangkok, Thailand, February 5-7, 2018, Proceedings, Part I 24, 329-341. Springer International Publishing. https://doi.org/10.1007/978-3-319-73603-7_27
    [116] Bi R, Xiong J, Tian Y, Li Q, Choo KKR (2022) Achieving lightweight and privacy-preserving object detection for connected autonomous vehicles. IEEE Internet Things 10: 2314-2329.
    [117] Fan Q, Brown L, Smith J (2016) A closer look at Faster R-CNN for vehicle detection. In 2016 IEEE intelligent vehicles symposium (IV), 124-129. IEEE. https://doi.org/10.1109/IVS.2016.7535375
    [118] Chen L, Lin S, Lu X, Cao D, Wu H, Guo C, et al. (2021) Deep neural network based vehicle and pedestrian detection for autonomous driving: A survey. IEEE T Intell Transp Syst 22: 3234-3246. https://doi.org/10.1109/TITS.2020.2993926 doi: 10.1109/TITS.2020.2993926
    [119] Saleh K, Hossny M, Hossny A, Nahavandi S (2017) Cyclist detection in LiDAR scans using Faster R-CNN and synthetic depth images. In 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), 1-6. IEEE. https://doi.org/10.1109/ITSC.2017.8317599
    [120] Carranza-García M, Torres-Mateo J, Lara-Benítez P, García-Gutiérrez J (2020) On the performance of one-stage and two-stage object detectors in autonomous vehicles using camera data. Remote Sensing 13: 89. https://doi.org/10.3390/rs13010089 doi: 10.3390/rs13010089
    [121] Adam K, Mohd II, Younis YM (2021) The impact of the soft errors in convolutional neural network on GPUs: AlexNet as case study. Procedia Computer Science 182: 89-94. https://doi.org/10.1016/j.procs.2021.02.012 doi: 10.1016/j.procs.2021.02.012
    [122] Tan L, Yu K, Lin L, Cheng X, Srivastava G, Lin JCW, et al. (2021) Speech emotion recognition enhanced traffic efficiency solution for autonomous vehicles in a 5G-enabled space-air-ground integrated intelligent transportation system. IEEE T Intell Transp Syst 23: 2830-2842. https://doi.org/10.1109/TITS.2021.3119921 doi: 10.1109/TITS.2021.3119921
    [123] Szymak P, Gasiorowski M (2020) Using pretrained AlexNet deep learning neural network for recognition of underwater objects. NAŠE MORE: znanstveni časopis za more i pomorstvo 67: 9-13. https://doi.org/10.17818/NM/2020/1.2 doi: 10.17818/NM/2020/1.2
    [124] Gao H, Cheng B, Wang J, Li K, Zhao J, Li D (2018) Object classification using CNN-based fusion of vision and LIDAR in autonomous vehicle environment. IEEE T Ind Inform 14: 4224-4231. https://doi.org/10.1109/TII.2018.2822828 doi: 10.1109/TII.2018.2822828
    [125] Zhu Z, Hu Z, Dai W, Chen H, Lv Z (2022) Deep learning for autonomous vehicle and pedestrian interaction safety. Safety Sci 145: 105479. https://doi.org/10.1016/j.ssci.2021.105479 doi: 10.1016/j.ssci.2021.105479
    [126] Kocić J, Jovičić N, Drndarević V (2019) An end-to-end deep neural network for autonomous driving designed for embedded automotive platforms. Sensors 19: 2064. https://doi.org/10.3390/s19092064 doi: 10.3390/s19092064
    [127] Kumaar S, Mannar S, Omkar SN (2018) JuncNet: A deep neural network for road junction disambiguation for autonomous vehicles. arXiv preprint arXiv: 1809.01011.
    [128] Magee A (2019) Place-based navigation for autonomous vehicles with deep learning neural networks. Doctoral dissertation, Monterey, CA; Naval Postgraduate School.
    [129] Kaymak Ç, Uçar A (2019) Semantic image segmentation for autonomous driving using fully convolutional networks. In 2019 International Artificial Intelligence and Data Processing Symposium (IDAP), 1-8. IEEE. https://doi.org/10.1109/IDAP.2019.8875923
    [130] Xie G, Shangguan A, Fei R, Ji W, Ma W, Hei X (2020) Motion trajectory prediction based on a CNN-LSTM sequential model. Sci China Inform Sci 63: 1-21. https://doi.org/10.1007/s11432-019-2761-y doi: 10.1007/s11432-019-2761-y
    [131] Kortli Y, Gabsi S, Voon LFLY, Jridi M, Merzougui M, Atri M (2022) Deep embedded hybrid CNN-LSTM network for lane detection on NVIDIA Jetson Xavier NX. Knowl-Based Syst 240: 107941. https://doi.org/10.1016/j.knosys.2021.107941 doi: 10.1016/j.knosys.2021.107941
    [132] Li P, Abdel-Aty M, Yuan J (2020) Real-time crash risk prediction on arterials based on LSTM-CNN. Accident Anal Prev 135: 105371. https://doi.org/10.1016/j.aap.2019.105371 doi: 10.1016/j.aap.2019.105371
    [133] Dong B, Liu H, Bai Y, Lin J, Xu Z, Xu X, Kong Q (2021) Multi-modal trajectory prediction for autonomous driving with semantic map and dynamic graph attention network. arXiv preprint arXiv: 2103.16273.
    [134] Zhao M, Li Y, Asif S, Zhu Y, Tang F (2022) C-LSTM: CNN and LSTM Based Offloading Prediction Model in Mobile Edge Computing (MEC). In 2022 IEEE 23rd International Conference on High Performance Switching and Routing (HPSR), 245-251. IEEE. https://doi.org/10.1109/HPSR54439.2022.9831405
    [135] Li X, Ying X, Chuah MC (2019) Grip++: Enhanced graph-based interaction-aware trajectory prediction for autonomous driving. arXiv preprint arXiv: 1907.07792.
    [136] Zhi Z, Liu D, Liu L (2022) A performance compensation method for GPS/INS integrated navigation system based on CNN-LSTM during GPS outages. Measurement 188: 110516. https://doi.org/10.1016/j.measurement.2021.110516 doi: 10.1016/j.measurement.2021.110516
    [137] Anbalagan S, Raja G, Gurumoorthy S, Suresh RD, Dev K (2023) IIDS: Intelligent Intrusion Detection System for Sustainable Development in Autonomous Vehicles. IEEE T Intell Transp Syst. https://doi.org/10.1109/TITS.2023.3271768
    [138] Tan Z, Karakose M (2020) Comparative study for deep reinforcement learning with CNN, RNN, and LSTM in autonomous navigation. In 2020 International Conference on Data Analytics for Business and Industry: Way Towards a Sustainable Economy (ICDABI), 1-5. IEEE.
    [139] Poibrenski A, Klusch M, Vozniak I, Müller C (2021) Multimodal multi-pedestrian path prediction for autonomous cars. ACM SIGAPP Applied Computing Review 20: 5-17. https://doi.org/10.1145/3447332.3447333 doi: 10.1145/3447332.3447333
    [140] Sáez Á, Bergasa LM, López-Guillén E, Romera E, Tradacete M, Gómez-Huélamo C, et al. (2019) Real-time semantic segmentation for fisheye urban driving images based on ERFNet. Sensors 19: 503. https://doi.org/10.3390/s19030503 doi: 10.3390/s19030503
    [141] Breitenstein J, Löhdefink J, Fingscheidt T (2022) Joint Prediction of Amodal and Visible Semantic Segmentation for Automated Driving. In European Conference on Computer Vision, 633-645. Cham: Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-25056-9_40
    [142] Deng L, Cao H, Dong Q, Jiang Y (2023) Semi-supervised lane detection for continuous traffic scenes. Traffic Inj Prev, 1-6. https://doi.org/10.1080/15389588.2023.2219794
    [143] Yao S, Lan F, Chen J (2022) Visual Odometry Integrated Semantic Constraints towards Autonomous Driving (No. 2022-01-7095). SAE Technical Paper. https://doi.org/10.4271/2022-01-7095
    [144] Divakarla U, Bhat R, Madagaonkar SB, Pranav DV, Shyam C, Chandrashekar K (2023) Semantic Segmentation for Autonomous Driving. In Information and Communication Technology for Competitive Strategies (ICTCS 2022) Intelligent Strategies for ICT, 683-694. Singapore: Springer Nature Singapore. https://doi.org/10.1007/978-981-19-9304-6_61
    [145] Kachhoria R, Jaiswal S, Lokhande M, Rodge J (2023) Lane detection and path prediction in autonomous vehicle using deep learning. In Intelligent Edge Computing for Cyber Physical Applications, 111-127. Academic Press. https://doi.org/10.1016/B978-0-323-99412-5.00012-5
    [146] Chen T, Chen A (2022) Road Sign Recognition Method Based on Segmentation and Attention Mechanism. Mob Inform Syst 2022. https://doi.org/10.1155/2022/6389580
    [147] Song C, Tan SJ, Khor A, Cao P, Zhao Y, Li G (2022) Method of Vehicle Behavior Analysis for Real-Time Video Streaming Based on Mobilenet-YOLOV4 and ERFNET. In 2022 IEEE 7th International Conference on Intelligent Transportation Engineering (ICITE), 473-480. IEEE. https://doi.org/10.1109/ICITE56321.2022.10101430
    [148] Ye D, Han R (2022) Image semantic segmentation method based on improved ERFNet model. The Journal of Engineering 2022: 180-190. https://doi.org/10.1049/tje2.12104 doi: 10.1049/tje2.12104
    [149] Zhang L, Jiang F, Yang J, Kong B, Hussain A (2023) A real‐time lane detection network using two‐directional separation attention. Comput‐Aided Civ Inf. https://doi.org/10.1111/mice.13051
    [150] Fan J, Wang F, Chu H, Hu X, Cheng Y, Gao B (2022) Mlfnet: Multi-level fusion network for real-time semantic segmentation of autonomous driving. IEEE Transactions on Intelligent Vehicles 8: 756-767. https://doi.org/10.1109/TIV.2022.3176860 doi: 10.1109/TIV.2022.3176860
    [151] Mullick K, Jain H, Gupta S, Kale AA (2023) Domain Adaptation of Synthetic Driving Datasets for Real-World Autonomous Driving. arXiv preprint arXiv: 2302.04149.
    [152] Zhang L, Jiang F, Yang J, Kong B, Hussain A, Gogate M, et al. (2022) DNet-CNet: A novel cascaded deep network for real-time lane detection and classification. J Amb Intel Hum Comput 14: 10745-10760. https://doi.org/10.1007/s12652-022-04346-2 doi: 10.1007/s12652-022-04346-2
    [153] Florea H, Petrovai A, Giosan I, Oniga F, Varga R, Nedevschi S (2022) Enhanced perception for autonomous driving using semantic and geometric data fusion. Sensors 22: 5061. https://doi.org/10.3390/s22135061 doi: 10.3390/s22135061
    [154] Bouzidi W, Bouaafia S, Hajjaji MA, Bergasa LM, Enhanced U-Net Approach: Semantic Segmentation for Self-Driving Cars Applications.
    [155] Petrovai A (2022) Deep Learning-based Visual Perception for Autonomous Driving. Doctoral dissertation, Technical University of Cluj-Napoca.
    [156] Breitenstein J, Fingscheidt T (2022) Amodal Cityscapes: A New Dataset, its Generation, and an Amodal Semantic Segmentation Challenge Baseline. In 2022 IEEE Intelligent Vehicles Symposium (IV), 1018-1025. IEEE. https://doi.org/10.1109/IV51971.2022.9827342
    [157] An TH, Kang J, Min KW (2023) Network adaptation for color image semantic segmentation. IET Image Process.
    [158] Karine A, Napoléon T, Jridi M (2022) Semantic Images Segmentation for Autonomous Driving Using Self-Attention Knowledge Distillation. In 2022 16th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), 198-202. IEEE. https://doi.org/10.1109/SITIS57111.2022.00044
    [159] Yang X, Yu Y, Zhang Z, Huang Y, Liu Z, Niu Z, et al. (2023) Lightweight lane marking detection CNNs by self soft label attention. Multimedia Tools and Applications 82: 5607-5626. https://doi.org/10.1007/s11042-022-13442-6 doi: 10.1007/s11042-022-13442-6
    [160] Chniti H, Mahfoudh M (2022) Designing a Model of Driving Scenarios for Autonomous Vehicles. In Knowledge Science, Engineering and Management: 15th International Conference, KSEM 2022, Singapore, August 6-8, 2022, Proceedings, Part II, 396-405. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-031-10986-7_32
    [161] Zhuang Y, Pu Z, Yang H, Wang Y (2022) Edge-Artificial Intelligence-Powered Parking Surveillance With Quantized Neural Networks. IEEE Intel Transp Syst Mag 14: 107-121. https://doi.org/10.1109/MITS.2022.3182358 doi: 10.1109/MITS.2022.3182358
    [162] Serras B, Gonçalves C, Dias T, Osório AL (2022) Extending the Synoptics of Things (SoT) framework to manage iSoS technology landscapes. In 2022 International Young Engineers Forum (YEF-ECE), 80-85. IEEE. https://doi.org/10.1109/YEF-ECE55092.2022.9849899
    [163] Atar S, Singh S, Agrawal S, Chaurasia R, Sule S, Gadamsetty S, et al. (2022) LCPP: Low Computational Processing Pipeline for Delivery Robots. ICAART (3), 130-138. https://doi.org/10.5220/0010786300003116
    [164] Geldhauser C, Matt AD, Stussak C (2022) I AM AI Gradient Descent - an Open-Source Digital Game for Inquiry-Based CLIL Learning. In Proceedings of the AAAI Conference on Artificial Intelligence 36: 12751-12757. https://doi.org/10.1609/aaai.v36i11.21553
    [165] Li J, Xu W, Deng L, Xiao Y, Han Z, Zheng H (2023) Deep learning for visual recognition and detection of aquatic animals: A review. Rev Aquacult 15: 409-433. https://doi.org/10.1111/raq.12726 doi: 10.1111/raq.12726
    [166] Ramalingam S, A Study and Review of Classical, Machine Learning and Deep Learning Methods of Software Reliability Estimation for Safety-Critical Systems.
    [167] Gamal O, Imran M, Roth H, Wahrburg J (2020) Assistive parking systems knowledge transfer to end-to-end deep learning for autonomous parking. In 2020 6th International conference on mechatronics and robotics engineering (ICMRE), 216-221. IEEE. https://doi.org/10.1109/ICMRE49073.2020.9065014
    [168] Kashyap A, Iqbal M, Pattabiraman K, Seltzer M (2021) ReLUSyn: Synthesizing Stealthy Attacks for Deep Neural Network Based Cyber-Physical Systems. arXiv preprint arXiv: 2105.10393.
    [169] Heinen MR, Osório FS, Heinen FJ, Kelber C (2006) SEVA3D: Using artificial neural networks to autonomous vehicle parking control. In The 2006 IEEE International Joint Conference on Neural Network Proceedings, 4704-4711. IEEE. https://doi.org/10.1109/IJCNN.2006.247124
    [170] Heinen MR, Osório FS, Heinen FJ, Kelber C (2006) Autonomous vehicle parking and pull out using artificial neural networks. In Proceedings of the I Workshop on Computational Intelligence (WCI).
    [171] Min C, Xu J, Xiao L, Zhao D, Nie Y, Dai B (2021) Attentional graph neural network for parking-slot detection. IEEE Robot Autom Lett 6: 3445-3450. https://doi.org/10.1109/LRA.2021.3064270 doi: 10.1109/LRA.2021.3064270
    [172] Zhang W, Liu H, Liu Y, Zhou J, Xiong H (2020) Semi-supervised hierarchical recurrent graph neural network for city-wide parking availability prediction. In Proceedings of the AAAI Conference on Artificial Intelligence 34: 1186-1193. https://doi.org/10.1609/aaai.v34i01.5471
    [173] Park J, Chun J, Kim SH, Kim Y, Park J (2021) Learning to schedule job-shop problems: representation and policy learning using graph neural network and reinforcement learning. Int J Prod Res 59: 3360-3377. https://doi.org/10.1080/00207543.2020.1870013 doi: 10.1080/00207543.2020.1870013
    [174] Lee H, Lee S, Kim J, Jung H, Yoon KJ, Gandla S, et al. (2023) Stretchable array electromyography sensor with graph neural network for static and dynamic gestures recognition system. npj Flex Electron 7: 20. https://doi.org/10.1038/s41528-023-00246-3 doi: 10.1038/s41528-023-00246-3
    [175] Meyer E, Brenner M, Zhang B, Schickert M, Musani B, Althoff M (2023) Geometric deep learning for autonomous driving: Unlocking the power of graph neural networks with CommonRoad-Geometric. 2023 IEEE Intelligent Vehicles Symposium (IV), 1-8. https://doi.org/10.1109/IV55152.2023.10186741
    [176] Singh D, Srivastava R (2022) Graph Neural Network with RNNs based trajectory prediction of dynamic agents for autonomous vehicle. Appl Intell 52: 12801-12816. https://doi.org/10.1007/s10489-021-03120-9 doi: 10.1007/s10489-021-03120-9
    [177] Singh D, Srivastava R (2022) Multi-scale graph-transformer network for trajectory prediction of the autonomous vehicles. Intel Serv Robot 15: 307-320. https://doi.org/10.1007/s11370-022-00422-w doi: 10.1007/s11370-022-00422-w
    [178] Klimke M, Völz B, Buchholz M (2022) Cooperative Behavior Planning for Automated Driving using Graph Neural Networks. In 2022 IEEE Intelligent Vehicles Symposium (IV), 167-174. IEEE. https://doi.org/10.1109/IV51971.2022.9827230
    [179] Lee D, Gu Y, Hoang J, Marchetti-Bowick M (2019) Joint interaction and trajectory prediction for autonomous driving using graph neural networks. arXiv preprint arXiv: 1912.07882.
    [180] Jin K, Wang H, Liu C, Zhai Y, Tang L (2022) Graph neural network based relation learning for abnormal perception information detection in self-driving scenarios. In 2022 International Conference on Robotics and Automation (ICRA), 8943-8949. IEEE. https://doi.org/10.1109/ICRA46639.2022.9812411
    [181] Yang F, Li X, Liu Q, Li Z, Gao X (2022) Generalized single-vehicle-based graph reinforcement learning for decision-making in autonomous driving. Sensors 22: 4935. https://doi.org/10.3390/s22134935 doi: 10.3390/s22134935
    [182] Cao D, Li J, Ma H, Tomizuka M (2021) Spectral temporal graph neural network for trajectory prediction. In 2021 IEEE International Conference on Robotics and Automation (ICRA), 1839-1845. IEEE. https://doi.org/10.1109/ICRA48506.2021.9561461
    [183] Diehl F, Brunner T, Le MT, Knoll A (2019) Graph neural networks for modelling traffic participant interaction. In 2019 IEEE Intelligent Vehicles Symposium (IV), 695-701. IEEE. https://doi.org/10.1109/IVS.2019.8814066
    [184] Ma C, Li Y, Yang F, Zhang Z, Zhuang Y, Jia H, et al. (2019) Deep association: End-to-end graph-based learning for multiple object tracking with conv-graph neural network. In Proceedings of the 2019 on International Conference on Multimedia Retrieval, 253-261.
    [185] Wang L, Zhang X, Zeng W, Liu W, Yang L, Li J, et al. (2022) Global perception-based robust parking space detection using a low-cost camera. IEEE Transactions on Intelligent Vehicles 8: 1439-1448. https://doi.org/10.1109/TIV.2022.3186035 doi: 10.1109/TIV.2022.3186035
    [186] Shi R, Yang S, Chen Y, Wang R, Zhang M, Lu J, et al. (2023) CNN‐Transformer for visual‐tactile fusion applied in road recognition of autonomous vehicles. Pattern Recogn Lett 166: 200-208. https://doi.org/10.1016/j.patrec.2022.11.023 doi: 10.1016/j.patrec.2022.11.023
    [187] Singh D, Srivastava R (2022) Multi-scale graph-transformer network for trajectory prediction of the autonomous vehicles. Intel Serv Robot 15: 307-320. https://doi.org/10.1007/s11370-022-00422-w doi: 10.1007/s11370-022-00422-w
    [188] Zhang H, Yang Z, Xiong H, Zhu T, Long Z, Wu W (2023) Transformer Aided Adaptive Extended Kalman Filter for Autonomous Vehicle Mass Estimation. Processes 11: 887. https://doi.org/10.3390/pr11030887 doi: 10.3390/pr11030887
    [189] Li G, Qiu Y, Yang Y, Li Z, Li S, Chu W, et al. (2022) Lane change strategies for autonomous vehicles: a deep reinforcement learning approach based on transformer. IEEE Transactions on Intelligent Vehicles.
    [190] Rafiq G, Rafiq M, Choi GS (2023) Spectral representation learning and fusion for autonomous vehicles trip description exploiting recurrent transformer. IEEE Access. https://doi.org/10.1109/ACCESS.2023.3287783
    [191] Shao H, Wang L, Chen R, Li H, Liu Y (2023) Safety-enhanced autonomous driving using interpretable sensor fusion transformer. In Conference on Robot Learning, 726-737. PMLR.
    [192] Tseng CH, Zhang J, Sun MT, Sakai K, Ku WS (2022) Multi-modal Transformer Path Prediction for Autonomous Vehicle. arXiv preprint arXiv: 2208.07256.
    [193] Hu H, Wang Q, Zhang Z, Li Z, Gao Z (2023) Holistic transformer: A joint neural network for trajectory prediction and decision-making of autonomous vehicles. Pattern Recogn 141: 109592. https://doi.org/10.1016/j.patcog.2023.109592 doi: 10.1016/j.patcog.2023.109592
    [194] Mozaffari S, Koufos K, Dianati M (2023) Multimodal Manoeuvre and Trajectory Prediction for Autonomous Vehicles Using Transformer Networks. IEEE Robot Autom Lett 8: 6123-6130. https://doi.org/10.1109/LRA.2023.3301720 doi: 10.1109/LRA.2023.3301720
    [195] Tian Y, Wang J, Wang Y, Zhao C, Yao F, Wang X (2022) Federated vehicular transformers and their federations: Privacy-preserving computing and cooperation for autonomous driving. IEEE Transactions on Intelligent Vehicles. https://doi.org/10.1109/TIV.2022.3197815
    [196] Xu R, Xiang H, Tu Z, Xia X, Yang MH, Ma J (2022) V2X-ViT: Vehicle-to-everything cooperative perception with vision transformer. In Computer Vision-ECCV 2022: 17th European Conference, 107-124. Cham: Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-19842-7_7
    [197] Chen W, Wang F, Sun H (2021) S2TNet: Spatio-temporal transformer networks for trajectory prediction in autonomous driving. In Asian Conference on Machine Learning, 454-469. PMLR.
    [198] Postnikov A, Gamayunov A, Ferrer G (2021) Transformer based trajectory prediction. arXiv preprint arXiv: 2112.04350.
    [199] Ngiam J, Caine B, Vasudevan V, Zhang Z, Chiang HT, Ling J, et al. (2021) Scene Transformer: A unified architecture for predicting multiple agent trajectories. arXiv preprint arXiv: 2106.08417.
    [200] Khosyi'in M, Budisusila EN, Prasetyowati SAD, Suprapto BY, Nawawi Z (2021) Design of Autonomous Vehicle Navigation Using GNSS Based on Pixhawk 2.1. In 2021 8th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI), 175-180. IEEE. https://doi.org/10.23919/EECSI53397.2021.9624244
    [201] Schütz A, Sánchez-Morales DE, Pany T (2020) Precise positioning through a loosely-coupled sensor fusion of GNSS-RTK, INS and LiDAR for autonomous driving. In 2020 IEEE/ION Position, Location and Navigation Symposium (PLANS), 219-225. IEEE. https://doi.org/10.1109/PLANS46316.2020.9109934
    [202] Swaminathan HB, Sommer A, Becker A, Atzmueller M (2022) Performance Evaluation of GNSS Position Augmentation Methods for Autonomous Vehicles in Urban Environments. Sensors 22: 8419. https://doi.org/10.3390/s22218419 doi: 10.3390/s22218419
    [203] Elsayed H, El-Mowafy A, Wang K (2023) Bounding of correlated double-differenced GNSS observation errors using NRTK for precise positioning of autonomous vehicles. Measurement 206: 112303. https://doi.org/10.1016/j.measurement.2022.112303 doi: 10.1016/j.measurement.2022.112303
    [204] Geng J, Chang H, Guo J, Li G, Wei N (2020) Three multi-frequency and multi-system GNSS high-precision point positioning methods and their performance in complex urban environment. Acta Geodaetica et Cartographica Sinica 49: 1.
    [205] Lee W, Geneva P, Yang Y, Huang G (2022) Tightly-coupled GNSS-aided Visual-Inertial Localization. In 2022 International Conference on Robotics and Automation (ICRA), 9484-9491. IEEE. https://doi.org/10.1109/ICRA46639.2022.9811362
    [206] Wen W, Hsu LT (2021) 3D LiDAR aided GNSS real-time kinematic positioning. In Proceedings of the 34th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2021), 2212-2220. https://doi.org/10.33012/2021.18072
    [207] Li T, Zhang H, Gao Z, Chen Q, Niu X (2018) High-accuracy positioning in urban environments using single-frequency multi-GNSS RTK/MEMS-IMU integration. Remote sensing 10: 205. https://doi.org/10.3390/rs10020205 doi: 10.3390/rs10020205
    [208] Jia M, Lee H, Khalife J, Kassas ZM, Seo J (2021) Ground vehicle navigation integrity monitoring for multi-constellation GNSS fused with cellular signals of opportunity. In 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), 3978-3983. IEEE. https://doi.org/10.1109/ITSC48978.2021.9564686
    [209] Sadli R, Afkir M, Hadid A, Rivenq A, Taleb-Ahmed A (2022) Map-Matching-Based Localization Using Camera and Low-Cost GPS for Lane-Level Accuracy. Sensors 22: 2434. https://doi.org/10.3390/s22072434 doi: 10.3390/s22072434
    [210] Somogyi H, Soumelidis A (2020) Comparison of High-Precision GNSS systems for development of an autonomous localization system. In 2020 23rd International Symposium on Measurement and Control in Robotics (ISMCR), 1-6. IEEE. https://doi.org/10.1109/ISMCR51255.2020.9263762
    [211] Geng J, Guo J, Chang H, Li X (2019) Toward global instantaneous decimeter-level positioning using tightly coupled multi-constellation and multi-frequency GNSS. J Geodesy 93: 977-991. https://doi.org/10.1007/s00190-018-1219-y doi: 10.1007/s00190-018-1219-y
    [212] Liu S (2020) Engineering autonomous vehicles and robots: the dragonfly modular-based approach. John Wiley & Sons. https://doi.org/10.1002/9781119570516
    [213] Meng Q, Hsu LT (2021) Integrity for autonomous vehicles and towards a novel alert limit determination method. Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 235: 996-1006. https://doi.org/10.1177/0954407020965760 doi: 10.1177/0954407020965760
    [214] Abosekeen A, Noureldin A, Korenberg MJ (2019) Improving the RISS/GNSS land-vehicles integrated navigation system using magnetic azimuth updates. IEEE T Intell Transp Syst 21: 1250-1263. https://doi.org/10.1109/TITS.2019.2905871 doi: 10.1109/TITS.2019.2905871
    [215] Rodriguez-Solano C, Nick T, Gleb Z, Xiaoming C, Ken D, Lorenz G (2021) Protection level of the trimble RTX positioning engine for autonomous applications. In Proceedings of the 34th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2021), 1577-1595. https://doi.org/10.33012/2021.17889
    [216] Bressler J, Reisdorf P, Obst M, Wanielik G (2016) GNSS positioning in non-line-of-sight context—A survey. In 2016 IEEE 19th international conference on intelligent transportation systems (ITSC), 1147-1154. IEEE. https://doi.org/10.1109/ITSC.2016.7795701
    [217] Patel RH, Härri J, Bonnet C (2017) Impact of localization errors on automated vehicle control strategies. In 2017 IEEE Vehicular Networking Conference (VNC), 61-68. IEEE. https://doi.org/10.1109/VNC.2017.8275649
    [218] Tao Z, Bonnifait P (2016) Sequential data fusion of GNSS pseudoranges and Dopplers with map-based vision systems. IEEE Transactions on Intelligent Vehicles 1: 254-265. https://doi.org/10.1109/TIV.2017.2658185 doi: 10.1109/TIV.2017.2658185
    [219] Kato S, Tokunaga S, Maruyama Y, Maeda S, Hirabayashi M, Kitsukawa Y, et al. (2018) Autoware on board: Enabling autonomous vehicles with embedded systems. In 2018 ACM/IEEE 9th International Conference on Cyber-Physical Systems (ICCPS), 287-296. IEEE. https://doi.org/10.1109/ICCPS.2018.00035
    [220] Kato S, Takeuchi E, Ishiguro Y, Ninomiya Y, Takeda K, Hamada T (2015) An open approach to autonomous vehicles. IEEE Micro 35: 60-68. https://doi.org/10.1109/MM.2015.133 doi: 10.1109/MM.2015.133
    [221] Raju VM, Gupta V, Lomate S (2019) Performance of open autonomous vehicle platforms: Autoware and Apollo. In 2019 IEEE 5th International Conference for Convergence in Technology (I2CT), 1-5. IEEE. https://doi.org/10.1109/I2CT45611.2019.9033734
    [222] Tsukada M, Oi T, Ito A, Hirata M, Esaki H (2020) AutoC2X: Open-source software to realize V2X cooperative perception among autonomous vehicles. In 2020 IEEE 92nd Vehicular Technology Conference (VTC2020-Fall), 1-6. IEEE. https://doi.org/10.1109/VTC2020-Fall49728.2020.9348525
    [223] Kawabata N, Kuwabara Y, Kawasaki T (2021) Self-Localization of Autonomous Car Using Autoware. IEICE Technical Report 120: 103-108.
    [224] Carballo A, Wong D, Ninomiya Y, Kato S, Takeda K (2019) Training engineers in autonomous driving technologies using autoware. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC), 3347-3354. IEEE. https://doi.org/10.1109/ITSC.2019.8917152
    [225] Dhakal S, Qu D, Carrillo D, Yang Q, Fu S (2021) OASD: An open approach to self-driving vehicle. In 2021 Fourth International Conference on Connected and Autonomous Driving (MetroCAD), 54-61. IEEE. https://doi.org/10.1109/MetroCAD51599.2021.00017
    [226] Tun WN, Kim S, Lee JW, Darweesh H (2019) Open-source tool of vector map for path planning in autoware autonomous driving software. 2019 IEEE International Conference on Big Data and Smart Computing (BigComp), 1-3. IEEE. https://doi.org/10.1109/BIGCOMP.2019.8679340
    [227] Rong G, Shin BH, Tabatabaee H, Lu Q, Lemke S, Možeiko M, et al. (2020) LGSVL Simulator: A high fidelity simulator for autonomous driving. In 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), 1-6. IEEE. https://doi.org/10.1109/ITSC45102.2020.9294422
    [228] Akai N, Morales LY, Yamaguchi T, Takeuchi E, Yoshihara Y, Okuda H, et al. (2017) Autonomous driving based on accurate localization using multilayer LiDAR and dead reckoning. In 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), 1-6. IEEE. https://doi.org/10.1109/ITSC.2017.8317797
    [229] Garcia J, Feng Y, Shen J, Almanee S, Xia Y, Chen QA (2020) A comprehensive study of autonomous vehicle bugs. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering, 385-396. https://doi.org/10.1145/3377811.3380397
    [230] Tsukada M, Oi T, Kitazawa M, Esaki H (2020) Networked roadside perception units for autonomous driving. Sensors 20: 5320. https://doi.org/10.3390/s20185320 doi: 10.3390/s20185320
    [231] Chishiro H, Suito K, Ito T, Maeda S, Azumi T, Funaoka K, et al. (2019) Towards heterogeneous computing platforms for autonomous driving. In 2019 IEEE International Conference on Embedded Software and Systems (ICESS), 1-8. IEEE. https://doi.org/10.1109/ICESS.2019.8782446
    [232] Pang S, Kent D, Cai X, Al-Qassab H, Morris D, Radha H (2018) 3D scan registration based localization for autonomous vehicles: A comparison of NDT and ICP under realistic conditions. In 2018 IEEE 88th Vehicular Technology Conference (VTC-Fall), 1-5. IEEE. https://doi.org/10.1109/VTCFall.2018.8690819
    [233] Munir F, Azam S, Sheri AM, Ko Y, Jeon M (2019) Where Am I: Localization and 3D Maps for Autonomous Vehicles. In VEHITS, 452-457. https://doi.org/10.5220/0007718400002179
    [234] Wen W, Hsu LT, Zhang G (2018) Performance analysis of NDT-based graph SLAM for autonomous vehicle in diverse typical driving scenarios of Hong Kong. Sensors 18: 3928. https://doi.org/10.3390/s18113928 doi: 10.3390/s18113928
    [235] Lin X, Wang F, Yang B, Zhang W (2021) Autonomous vehicle localization with prior visual point cloud map constraints in GNSS-challenged environments. Remote Sensing 13: 506. https://doi.org/10.3390/rs13030506 doi: 10.3390/rs13030506
    [236] Akai N, Morales LY, Takeuchi E, Yoshihara Y, Ninomiya Y (2017) Robust localization using 3D NDT scan matching with experimentally determined uncertainty and road marker matching. In 2017 IEEE Intelligent Vehicles Symposium (IV), 1356-1363. IEEE. https://doi.org/10.1109/IVS.2017.7995900
    [237] Akai N, Morales LY, Yamaguchi T, Takeuchi E, Yoshihara Y, Okuda H, et al. (2017) Autonomous driving based on accurate localization using multilayer LiDAR and dead reckoning. In 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), 1-6. IEEE. https://doi.org/10.1109/ITSC.2017.8317797
    [238] Li Q, Queralta JP, Gia TN, Zou Z, Westerlund T (2020) Multi-sensor fusion for navigation and mapping in autonomous vehicles: Accurate localization in urban environments. Unmanned Systems 8: 229-237. https://doi.org/10.1142/S2301385020500168 doi: 10.1142/S2301385020500168
    [239] Saarinen J, Andreasson H, Stoyanov T, Lilienthal AJ (2013) Normal distributions transform Monte-Carlo localization (NDT-MCL). In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 382-389. IEEE. https://doi.org/10.1109/IROS.2013.6696380
    [240] Ahmed SZ, Saputra VB, Verma S, Zhang K, Adiwahono AH (2019) Sparse-3D lidar outdoor map-based autonomous vehicle localization. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 1614-1619. IEEE. https://doi.org/10.1109/IROS40897.2019.8967596
    [241] Elhousni M, Huang X (2020) A survey on 3D LiDAR localization for autonomous vehicles. In 2020 IEEE Intelligent Vehicles Symposium (IV), 1879-1884. IEEE. https://doi.org/10.1109/IV47402.2020.9304812
    [242] Srinara S, Lee CM, Tsai S, Tsai GJ, Chiang KW (2021) Performance analysis of 3D NDT scan matching for autonomous vehicles using INS/GNSS/3D LiDAR-SLAM integration scheme. In 2021 IEEE International Symposium on Inertial Sensors and Systems (INERTIAL), 1-4. IEEE. https://doi.org/10.1109/INERTIAL51137.2021.9430476
    [243] Poulose A, Baek M, Han DS (2022) Point Cloud Map Generation and Localization for Autonomous Vehicles Using 3D Lidar Scans. In 2022 27th Asia Pacific Conference on Communications (APCC), 336-341. IEEE. https://doi.org/10.1109/APCC55198.2022.9943630
    [244] Javanmardi E, Javanmardi M, Gu Y, Kamijo S (2020) Pre-estimating self-localization error of NDT-based map-matching from map only. IEEE T Intell Transp Syst 22: 7652-7666. https://doi.org/10.1109/TITS.2020.3006854 doi: 10.1109/TITS.2020.3006854
    [245] Javanmardi E, Javanmardi M, Gu Y, Kamijo S (2018) Adaptive resolution refinement of NDT map based on localization error modeled by map factors. 2018 21st International Conference on Intelligent Transportation Systems (ITSC), 2237-2243. IEEE. https://doi.org/10.1109/ITSC.2018.8569236
    [246] Jang KW, Jeong WJ, Kang Y (2022) Development of a GPU-Accelerated NDT Localization Algorithm for GNSS-Denied Urban Areas. Sensors 22: 1913. https://doi.org/10.3390/s22051913 doi: 10.3390/s22051913
    [247] Javanmardi E, Javanmardi M, Gu Y, Kamijo S (2017) Autonomous vehicle self-localization based on multilayer 2D vector map and multi-channel LiDAR. 2017 IEEE Intelligent Vehicles Symposium (IV), 437-442. IEEE. https://doi.org/10.1109/IVS.2017.7995757
    [248] Wen W, Zhan W, Hsu LT (2019) Robust Localization Using 3D NDT Matching and Beam Model for Autonomous Vehicles in an Urban Scenario with Dynamic Obstacles. Proceedings of Mobile Mapping Technology, Shenzhen, China.
    [249] Javanmardi E, Gu Y, Javanmardi M, Kamijo S (2019) Autonomous vehicle self-localization based on abstract map and multi-channel LiDAR in urban area. IATSS research 43: 1-13. https://doi.org/10.1016/j.iatssr.2018.05.001 doi: 10.1016/j.iatssr.2018.05.001
    [250] Kan YC, Hsu LT, Chung E (2021) Performance evaluation on map-based NDT scan matching localization using simulated occlusion datasets. IEEE Sensors Letters 5: 1-4. https://doi.org/10.1109/LSENS.2021.3060097 doi: 10.1109/LSENS.2021.3060097
    [251] Fayyad J, Jaradat MA, Gruyer D, Najjaran H (2020) Deep learning sensor fusion for autonomous vehicle perception and localization: A review. Sensors 20: 4220. https://doi.org/10.3390/s20154220 doi: 10.3390/s20154220
    [252] Laconte J, Kasmi A, Aufrère R, Vaidis M, Chapuis R (2021) A survey of localization methods for autonomous vehicles in highway scenarios. Sensors 22: 247. https://doi.org/10.3390/s22010247 doi: 10.3390/s22010247
    [253] Spangenberg R, Goehring D, Rojas R (2016) Pole-based localization for autonomous vehicles in urban scenarios. 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2161-2166. IEEE. https://doi.org/10.1109/IROS.2016.7759339
    [254] Reid TG, Houts SE, Cammarata R, Mills G, Agarwal S, Vora A, et al. (2019) Localization requirements for autonomous vehicles. SAE Intl J CAV 2: 173-190. https://doi.org/10.4271/12-02-03-0012 doi: 10.4271/12-02-03-0012
    [255] Elhousni M, Huang X (2020) A survey on 3D LiDAR localization for autonomous vehicles. 2020 IEEE Intelligent Vehicles Symposium (IV), 1879-1884. IEEE. https://doi.org/10.1109/IV47402.2020.9304812
    [256] de Miguel MÁ, García F, Armingol JM (2020) Improved LiDAR probabilistic localization for autonomous vehicles using GNSS. Sensors 20: 3145. https://doi.org/10.3390/s20113145 doi: 10.3390/s20113145
    [257] Wang L, Zhang Y, Wang J (2017) Map-based localization method for autonomous vehicles using 3D-LIDAR. IFAC-PapersOnLine 50: 276-281. https://doi.org/10.1016/j.ifacol.2017.08.046 doi: 10.1016/j.ifacol.2017.08.046
    [258] Meng X, Wang H, Liu B (2017) A robust vehicle localization approach based on GNSS/IMU/DMI/LiDAR sensor fusion for autonomous vehicles. Sensors 17: 2140. https://doi.org/10.3390/s17092140 doi: 10.3390/s17092140
    [259] Kamijo S, Gu Y, Hsu L (2015) Autonomous vehicle technologies: Localization and mapping. Fundam Rev 9: 131-141. https://doi.org/10.1587/essfr.9.2_131
    [260] Lin X, Wang F, Yang B, Zhang W (2021) Autonomous vehicle localization with prior visual point cloud map constraints in GNSS-challenged environments. Remote Sensing 13: 506. https://doi.org/10.3390/rs13030506 doi: 10.3390/rs13030506
    [261] Werries A, Dolan J (2016) Adaptive Kalman filtering methods for low-cost GPS/INS localization for autonomous vehicles (No. CMU-RI-TR-16-18). Carnegie-Mellon University.
    [262] Jalal F, Nasir F (2021) Underwater navigation, localization and path planning for autonomous vehicles: A review. 2021 International Bhurban Conference on Applied Sciences and Technologies (IBCAST), 817-828. IEEE. https://doi.org/10.1109/IBCAST51254.2021.9393315
    [263] Luo Q, Cao Y, Liu J, Benslimane A (2019) Localization and navigation in autonomous driving: Threats and countermeasures. IEEE Wirel Commun 26: 38-45. https://doi.org/10.1109/MWC.2019.1800533 doi: 10.1109/MWC.2019.1800533
    [264] Wang H, Xue C, Zhou Y, Wen F, Zhang H (2021) Visual semantic localization based on HD map for autonomous vehicles in urban scenarios. 2021 IEEE International Conference on Robotics and Automation (ICRA), 11255-11261. IEEE. https://doi.org/10.1109/ICRA48506.2021.9561459
    [265] Park M, Kang Y (2021) Experimental verification of a drift controller for autonomous vehicle tracking: A circular trajectory using LQR method. Int J Control Autom Syst 19: 404-416. https://doi.org/10.1007/s12555-019-0757-2 doi: 10.1007/s12555-019-0757-2
    [266] Pang H, Liu N, Hu C, Xu Z (2022) A practical trajectory tracking control of autonomous vehicles using linear time-varying MPC method. Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 236: 709-723. https://doi.org/10.1177/09544070211022904 doi: 10.1177/09544070211022904
    [267] Borrelli F, Morari M (2007) Offset free model predictive control. 2007 46th IEEE conference on decision and control, 1245-1250. IEEE. https://doi.org/10.1109/CDC.2007.4434770
    [268] Cheng S, Li L, Chen X, Wu J (2020) Model-predictive-control-based path tracking controller of autonomous vehicle considering parametric uncertainties and velocity-varying. IEEE T Ind Electron 68: 8698-8707. https://doi.org/10.1109/TIE.2020.3009585 doi: 10.1109/TIE.2020.3009585
    [269] Williams G, Drews P, Goldfain B, Rehg JM, Theodorou EA (2018) Information-theoretic model predictive control: Theory and applications to autonomous driving. IEEE T Robot 34: 1603-1622. https://doi.org/10.1109/TRO.2018.2865891 doi: 10.1109/TRO.2018.2865891
    [270] Petrovskaya A, Thrun S (2008) Model based vehicle tracking for autonomous driving in urban environments. Proceedings of robotics: science and systems IV, Zurich, Switzerland, 34. https://doi.org/10.15607/RSS.2008.IV.023
    [271] Galceran E, Olson E, Eustice RM (2015) Augmented vehicle tracking under occlusions for decision-making in autonomous driving. 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 3559-3565. IEEE. https://doi.org/10.1109/IROS.2015.7353874
    [272] Wang H, Wang B, Liu B, Meng X, Yang G (2017) Pedestrian recognition and tracking using 3D LiDAR for autonomous vehicle. Robot Auton Syst 88: 71-78. https://doi.org/10.1016/j.robot.2016.11.014 doi: 10.1016/j.robot.2016.11.014
    [273] Kothare MV, Balakrishnan V, Morari M (1996) Robust constrained model predictive control using linear matrix inequalities. Automatica 32: 1361-1379. https://doi.org/10.1016/0005-1098(96)00063-5 doi: 10.1016/0005-1098(96)00063-5
    [274] Falcone P, Borrelli F, Tseng HE, Asgari J, Hrovat D (2008) Linear time‐varying model predictive control and its application to active steering systems: Stability analysis and experimental validation. International Journal of Robust and Nonlinear Control: IFAC‐Affiliated Journal 18: 862-875. https://doi.org/10.1002/rnc.1245 doi: 10.1002/rnc.1245
    [275] Wang Y, Shao Q, Zhou J, Zheng H, Chen H (2020) Longitudinal and lateral control of autonomous vehicles in multi-vehicle driving environments. IET Intell Transp Syst 14: 924-935. https://doi.org/10.1049/iet-its.2019.0846 doi: 10.1049/iet-its.2019.0846
    [276] Cui J, Liew LS, Sabaliauskaite G, Zhou F (2019) A Review on Safety Failures, Security Attacks, and Available Countermeasures for Autonomous Vehicles. Ad Hoc Networks 90: 101823. https://doi.org/10.1016/j.adhoc.2018.12.006 doi: 10.1016/j.adhoc.2018.12.006
    [277] Ferdowsi A, Challita U, Saad W, Mandayam NB (2018) Robust Deep Reinforcement Learning for Security and Safety in Autonomous Vehicle Systems. 2018 21st International Conference on Intelligent Transportation Systems (ITSC), 307-312. https://doi.org/10.1109/ITSC.2018.8569635
    [278] Xu W, Yan C, Jia W, Ji X, Liu J (2018) Analyzing and Enhancing the Security of Ultrasonic Sensors for Autonomous Vehicles. IEEE Internet Things 5: 5015-5029. https://doi.org/10.1109/JIOT.2018.2867917 doi: 10.1109/JIOT.2018.2867917
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)