Research article

Determinants of switching behavior to wear helmets when riding e-bikes, a two-step SEM-ANFIS approach


  • E-bikes have become one of China's most popular travel modes. The authorities have issued helmet-wearing regulations to increase wearing rates and protect e-bike riders' safety, but the effect has been unsatisfactory. To reveal the factors influencing the helmet-wearing behavior of e-bike riders, this study constructed a theoretical Push-Pull-Mooring (PPM) model to analyze the relationships among factors from the perspective of travel behavior switching. A two-step SEM-ANFIS method is proposed to test relationships, rank importance and analyze the combined effects of psychological variables. The Partial Least Squares Structural Equation Model (PLS-SEM) was used to obtain the significant influencing factors. The Adaptive Network-based Fuzzy Inference System (ANFIS), a nonlinear approach, was applied to analyze the importance of the significant influencing factors and to draw refined conclusions and suggestions from the analysis of the combined effects. The PPM model we constructed has a good model fit and high predictive validity (GOF = 0.381, R² = 0.442). We found that three significant factors tested by PLS-SEM, perceived legal norms (β = 0.234, p < 0.001), perceived inconvenience (β = -0.117, p < 0.001) and conformity tendency (β = 0.241, p < 0.05), are the most important factors among the push, mooring and pull effects. The results also demonstrated that the legal norm is the most important factor but has less effect on people with low perceived vulnerability, and that low subjective norms lead people with a high conformity tendency to follow the crowd blindly. This study could contribute to developing refined interventions to improve the helmet-wearing rate effectively.

    Citation: Peng Jing, Weichao Wang, Chengxi Jiang, Ye Zha, Baixu Ming. Determinants of switching behavior to wear helmets when riding e-bikes, a two-step SEM-ANFIS approach[J]. Mathematical Biosciences and Engineering, 2023, 20(5): 9135-9158. doi: 10.3934/mbe.2023401




    Compared to traditional surgery, minimally invasive surgery offers less trauma, a lower risk of infection, and less pain [1], and has become a trend in modern surgical treatment. In modern minimally invasive surgery, the endoscope and specialist instruments are inserted into the internal cavity through a small incision at the surgical site, and the images captured by the endoscope are displayed on a screen in real time [2]. The surgeon can monitor the entire operation through the images transmitted by the endoscope and judge the patient's surgical condition [3]. In traditional minimally invasive surgery, however, the image information obtained from the endoscope is still limited: the field of view is narrow, illumination is uneven, reflection intensity is high, and it is difficult to obtain depth information of the target area and to locate the endoscope accurately. Such complex environmental conditions place higher demands on the computer vision processing system. Therefore, the SLAM system is applied to minimally invasive surgery to address these bottlenecks [4]. The endoscope acts as the robot in this setting: it moves through the cavity of the human body, measures the spatial information of the visible area of the cavity, reconstructs the soft tissue, and displays the result to the doctor. Combining SLAM with minimally invasive surgery provides better positioning of the endoscope and depth estimation of the target area, helping doctors make the right decisions. It therefore has important research significance and application value [5].

    The concept of minimally invasive surgery and the problem of visual SLAM were first proposed in 1983 and 1986, respectively. Using vision techniques based on endoscope images in minimally invasive surgery has clear advantages, since no additional equipment is introduced into an already very complex surgical setup [6]. SLAM is most frequently used in minimally invasive surgery of the abdomen, where deformation and tissue movement are minimal. In the early stage of application, SLAM was added to the algorithm as an auxiliary means of positioning [7]. In 2009, Grasa et al. used a monocular laparoscope to generate a sparse map of the environment with SLAM and then estimated the three-dimensional information of the scene from sequential images. In 2009 and 2010, Mountney's team proposed SLAM systems based on monocular and binocular endoscopes for endoscope positioning and 3D modeling in minimally invasive surgery: the monocular system builds a motion model within the SLAM framework to effectively estimate the motion of the camera and the soft tissue [8], while the binocular system constructs a 3D textured model of the minimally invasive surgical environment together with a method for dynamic view expansion [9]. In recent years, there have been more and more studies on the application of visual SLAM in minimally invasive surgery [10,11,12], most of which are devoted to overcoming its bottlenecks. Vision-based techniques are good at recovering 3D structure and estimating endoscope motion, but this research still has a long way to go in terms of accuracy [13].

    This paper mainly studies visual SLAM in minimally invasive surgery. To address the deficiencies of traditional minimally invasive surgery, visual SLAM technology is used to position the endoscope and build a 3D map [14,15]. The K-means algorithm is combined with the SuperPoint algorithm to extract feature information from images of the inner cavity, which addresses the difficulty of obtaining depth information of the target area and accurately locating the endoscope in an environment with a narrow field of view, uneven illumination and high reflection intensity. The iterative closest point method is then used to estimate the position and attitude of the endoscope. Finally, a stereo matching method is used to reconstruct the lumen image and recover the point cloud of the surgical area. The approach improves surgical accuracy and shortens operation time, thus optimizing the operation of the entire system [16].

    The structure of this paper is as follows. Section 2 introduces visual SLAM and the method of extracting image features with the SuperPoint algorithm, as well as the method of extracting image feature points by combining the SuperPoint and K-means algorithms. In Section 3, experiments are carried out to verify the effectiveness of the proposed method. Section 4 summarizes the whole article.

    The SuperPoint (self-supervised interest point detection and description) algorithm outputs feature points together with their descriptors [17]. The network structure is shown in Figure 1.

    Figure 1.  SuperPoint network diagram.

    In this network, an initial feature detector is obtained by training on basic shapes with undisputed feature points in a self-supervised way, and the image feature information is then further extracted by the neural network. This improves the stability of feature extraction while reducing the sensitivity of the feature extraction algorithm to illumination. The network is divided into two parts [18]: a network that detects the corners of basic shapes, and a network that extracts image features and outputs descriptors. The specific steps are as follows:

    (1) Feature point detector pre-training. For a natural image, it is difficult to define which points are the true feature points, but for a simple synthetic shape with well-defined corner positions, the ground truth is easy to determine. Therefore, pre-training is carried out by extracting undisputed feature points from unlabelled synthetic images to obtain pseudo-ground-truth labels. These pseudo-ground-truth labels are then combined with the true labels to retrain the feature detector and obtain the final feature point detection model.

    (2) The feature point detection network is divided into two parts: encoding and decoding. In the encoding stage, the image to be processed is fed into the shared encoder, which reduces its spatial dimension to cut down the network's computation; after downsampling, the feature map is 1/8 the size of the original image. The encoder's main function is to map the input image to a spatial tensor with a smaller spatial dimension and a larger channel depth. A probability is then computed for each pixel, representing the probability that the pixel is a feature point. When decoding feature points, sub-pixel convolution is adopted to avoid the network overload that excessive computation would cause during feature extraction. The decoder input has dimension $\mathbb{R}^{H_c \times W_c \times 65}$ (the 65 channels correspond to the pixels of an $8 \times 8$ local region of the original image plus one extra "non-feature-point" channel), and the output has dimension $\mathbb{R}^{H \times W}$. After softmax normalization, the non-feature-point channel is removed and the tensor is reshaped from $\mathbb{R}^{H_c \times W_c \times 64}$ to $\mathbb{R}^{H \times W}$, as shown in Figure 2. The resulting feature point locations are used as labels to train the feature detector and obtain an optimized detector. The optimized detector then re-detects the features to obtain an image with feature points, and the image with the superimposed feature points is taken as the final output feature map [19].

    Figure 2.  Homographic adaptation.
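    As a concrete illustration of the decoding step in (2), the following sketch (a minimal example assuming a PyTorch detector output `semi` of shape 1 × 65 × H_c × W_c; the function name and the confidence threshold are illustrative, not taken from the paper) applies the softmax over the 65 channels, drops the non-feature-point "dustbin" channel, and rearranges the remaining 64 channels back into a full-resolution H × W probability map from which keypoints are thresholded.

```python
import torch
import torch.nn.functional as F

def decode_keypoints(semi: torch.Tensor, conf_thresh: float = 0.015):
    """Decode the SuperPoint detector head into a keypoint probability map.

    semi: raw detector logits of shape (1, 65, Hc, Wc); 64 channels
          correspond to the pixels of an 8x8 cell and the 65th channel
          is the "non-feature-point" (dustbin) bin.
    """
    probs = F.softmax(semi, dim=1)            # per-cell distribution over the 65 bins
    probs = probs[:, :-1, :, :]               # drop the dustbin -> (1, 64, Hc, Wc)
    heatmap = F.pixel_shuffle(probs, 8)       # sub-pixel rearrangement -> (1, 1, Hc*8, Wc*8)
    heatmap = heatmap.squeeze(0).squeeze(0)   # (H, W) keypoint probability per pixel
    ys, xs = torch.nonzero(heatmap > conf_thresh, as_tuple=True)
    scores = heatmap[ys, xs]
    return heatmap, torch.stack([xs, ys], dim=1), scores
```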

    (3) Descriptor detection network. Extracting the descriptors is a decoding operation, and the descriptors are obtained from the feature points. First, the image size and feature point positions are normalized; $(x, y)$ and $K$ denote the coordinates and the number of feature points, respectively, and the normalized feature points form a tensor of size $1 \times 1 \times K \times 2$ holding the $(x, y)$ coordinates. The actual position of each feature point on the descriptor map is obtained by inverse normalization, and bilinear interpolation is used to handle non-integer pixel positions. The output is a complete descriptor tensor of dimension $1 \times C \times 1 \times K$, where $C$ is the number of channels. Finally, the descriptors are scaled to unit length by L2 normalization, as shown in Figure 3. The descriptors of the feature points can be calculated through the above steps and output by the deep learning network. However, this result may not yet satisfy the requirement that descriptors of the same feature point should be as close as possible while descriptors of different feature points should be as far apart as possible, and the ground truth cannot be determined directly. Since the pose transformation (a homography) between a pair of training images is known, the correspondence between feature points can be computed; the loss of any pair of feature points can then be calculated, and further optimization makes the obtained descriptors conform to these properties.

    Figure 3.  Descriptor extraction.
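    The bilinear sampling and L2 normalization in step (3) can be sketched as follows (a minimal example assuming a coarse descriptor map `coarse_desc` of shape 1 × C × H_c × W_c; the names and the use of `grid_sample` illustrate the interpolation step and are not taken from the paper's implementation). The sampled tensor has shape 1 × C × 1 × K, matching the dimension given above, and is then squeezed and normalized to unit length.

```python
import torch
import torch.nn.functional as F

def sample_descriptors(coarse_desc: torch.Tensor, keypoints: torch.Tensor,
                       image_h: int, image_w: int) -> torch.Tensor:
    """Bilinearly sample per-keypoint descriptors from the coarse descriptor map.

    coarse_desc: (1, C, Hc, Wc) descriptor head output.
    keypoints:   (K, 2) pixel coordinates (x, y) in the full-resolution image.
    Returns:     (C, K) L2-normalized descriptors.
    """
    # Normalize pixel coordinates to the [-1, 1] range expected by grid_sample.
    grid = keypoints.clone().float()
    grid[:, 0] = grid[:, 0] / (image_w / 2.0) - 1.0
    grid[:, 1] = grid[:, 1] / (image_h / 2.0) - 1.0
    grid = grid.view(1, 1, -1, 2)                              # (1, 1, K, 2)
    desc = F.grid_sample(coarse_desc, grid,
                         mode='bilinear', align_corners=True)  # (1, C, 1, K)
    desc = F.normalize(desc.squeeze(0).squeeze(1), p=2, dim=0) # (C, K), unit length
    return desc
```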

    The K-means clustering algorithm treats all the data as a whole. The initial cluster centers are K points selected at random from the data, and every other data point is assigned to the center with the minimum Euclidean distance; this, however, is not the final partition. The distances within each cluster are then recomputed and averaged to obtain K new cluster centers, and the data are partitioned again in a new round. Clustering stops when the result no longer changes between iterations or the preset maximum number of iterations is reached. The algorithm presets K categories without knowing any specific information about these categories or the type of data to be clustered, and divides the data into the K categories by minimizing the error function. When processing massive data, the clustering does not depend on pre-defined classes [20].

    After the SuperPoint algorithm extracts the feature points and descriptors of the image, a heatmap of the feature map is obtained. Each feature point in the heatmap has an index value, and the K-means method is used to cluster these index values. During feature matching, a confidence probability of a correct match is set for each index value, and this probability determines whether a point is accepted as the correct match for a candidate point. A maximum number of matching point pairs is set, and matching proceeds according to the indices of the feature points; when all feature points have been matched or the maximum number of matches is reached, the matching graph is output. The K-means clustering algorithm is applied to prevent reflective spots and strong highlights in the inner cavity from being matched to each other, reducing the probability of mismatching. Because the K-means algorithm is very simple and fast, with nearly linear time complexity, and is suitable for mining large-scale datasets, combining it with the SuperPoint algorithm not only improves the matching quality but also keeps the computation time short.
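    One possible reading of this combination is sketched below. It is an illustration only: the paper clusters the heatmap index values, but since the exact procedure, cluster count, confidence threshold and cap on match pairs are not fully specified, the sketch instead clusters keypoints by their local image brightness so that the brightest cluster, which tends to contain specular highlights in the lumen, can be discarded before mutual-nearest-neighbour descriptor matching with a capped number of accepted pairs. All parameter values are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def filter_and_match(kp1, desc1, kp2, desc2, img1_gray, k=3,
                     ratio=0.8, max_matches=400):
    """Illustrative combination of K-means filtering and descriptor matching.

    kp1, kp2:     (N, 2) keypoint pixel coordinates from the two frames.
    desc1, desc2: (C, N) L2-normalized SuperPoint descriptors.
    img1_gray:    grayscale image used to look up keypoint brightness.
    """
    # Cluster frame-1 keypoints by image brightness at their locations and
    # discard the brightest cluster (likely specular reflections).
    brightness = img1_gray[kp1[:, 1].astype(int), kp1[:, 0].astype(int)].reshape(-1, 1)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(brightness)
    highlight_cluster = np.argmax([brightness[labels == c].mean() for c in range(k)])
    keep = labels != highlight_cluster

    # Mutual-nearest-neighbour matching with a ratio test on the survivors.
    sims = desc1[:, keep].T @ desc2               # cosine similarity matrix
    nn12 = sims.argmax(axis=1)
    nn21 = sims.argmax(axis=0)
    matches = []
    kept_idx = np.flatnonzero(keep)
    for i, j in enumerate(nn12):
        if nn21[j] != i:
            continue                              # not mutually nearest
        row = np.sort(sims[i])[::-1]
        if len(row) > 1 and row[1] / row[0] > ratio:
            continue                              # ambiguous match, reject
        matches.append((kept_idx[i], j))
        if len(matches) >= max_matches:           # cap on accepted match pairs
            break
    return matches
```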

    The pose estimation process essentially finds the correspondence between points in the 2D image and their 3D spatial positions. When representing an object's 3D position and orientation, a coordinate system that rotates with the object is usually established with the object as the origin. The transformation between this rotating coordinate system and the reference coordinate system determines the spatial position of the object; that is, pose estimation is a matter of the transformation relationship between coordinate systems.

    In visual SLAM, there are usually two ways to estimate the pose of the sensor. One is to use a linear method to estimate an initial pose of the visual sensor and then refine it by constructing and solving a least-squares problem. The other is to solve for the pose by jointly optimizing the positions of the spatial points and the sensor. Since the motion of the endoscope is neither continuous nor uniform, the first method is chosen here: the initial pose is estimated first, and the result is then optimized to increase the accuracy and robustness of the SLAM system.

    The iterative closest point (ICP) algorithm is a standard method for estimating the pose between matched 3D point pairs. Suppose there are two sets of matched 3D points and their centroids:

    $$P=\{p_1,\dots,p_n\},\quad P'=\{p'_1,\dots,p'_n\},\quad p=\frac{1}{n}\sum_{i=1}^{n}p_i,\quad p'=\frac{1}{n}\sum_{i=1}^{n}p'_i \tag{1}$$

    where $p_1, p_2, \dots, p_n$ and $p'_1, p'_2, \dots, p'_n$ are the coordinates of the matched target and source points, and $p$ and $p'$ are their centroids.

    Pose estimation then amounts to finding a Euclidean transform $(R, t)$ that satisfies:

    $$\forall i,\quad p_i = R\,p'_i + t \tag{2}$$

    This transform is solved algebraically by singular value decomposition. The error term for point $i$ is defined as:

    $$e_i = p_i - (R\,p'_i + t) \tag{3}$$

    Constructing the least-squares problem:

    $$\min_{R,t} J=\frac{1}{2}\sum_{i=1}^{n}\left\|p_i-(R\,p'_i+t)\right\|_2^2=\frac{1}{2}\sum_{i=1}^{n}\left(\left\|p_i-p-R(p'_i-p')\right\|_2^2+\left\|p-R\,p'-t\right\|_2^2\right) \tag{4}$$

    Solving the above problem yields the $R$ and $t$ that minimize the sum of squared errors. First, compute the de-centered coordinates of each point:

    $$q_i = p_i - p,\qquad q'_i = p'_i - p' \tag{5}$$

    The rotation matrix and translation vector are calculated according to the following formula:

    $$\begin{cases} R^*=\arg\min_{R}\dfrac{1}{2}\displaystyle\sum_{i=1}^{n}\left\|q_i-R\,q'_i\right\|_2^2 \\ t^*=p-R\,p' \end{cases} \tag{6}$$

    Define matrix:

    $$W=\sum_{i=1}^{n} q_i\,{q'_i}^{T} \tag{7}$$

    Applying singular value decomposition to the matrix in Eq (7) gives $W = U\Sigma V^{T}$, where $\Sigma$ is the diagonal matrix of singular values. When $W$ has full rank, the rotation matrix can be solved as:

    $$R = U V^{T} \tag{8}$$

    Substituting the solved rotation matrix into Eq (6) then gives the translation vector.
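    The closed-form solution of Eqs (1)-(8) can be written compactly in a few lines of NumPy (a minimal sketch; the determinant check that guards against a reflection is a common practical safeguard not mentioned in the derivation above):

```python
import numpy as np

def estimate_rigid_transform(P, P_prime):
    """Closed-form ICP step (Eqs (1)-(8)): estimate R, t such that
    p_i ~ R p'_i + t for matched 3D point sets P and P' of shape (n, 3)."""
    p = P.mean(axis=0)                 # centroid of target points, Eq (1)
    p_prime = P_prime.mean(axis=0)     # centroid of source points
    Q = P - p                          # de-centered coordinates, Eq (5)
    Q_prime = P_prime - p_prime
    W = Q.T @ Q_prime                  # 3x3 matrix of Eq (7): sum_i q_i q_i'^T
    U, _, Vt = np.linalg.svd(W)        # W = U Sigma V^T
    R = U @ Vt                         # Eq (8)
    if np.linalg.det(R) < 0:           # reflection guard (practical safeguard)
        U[:, -1] *= -1
        R = U @ Vt
    t = p - R @ p_prime                # translation from Eq (6)
    return R, t
```

    Since the correspondences here come from the feature matching stage, a single closed-form step of this kind already gives the frame-to-frame pose estimate; a full ICP loop would alternate it with re-establishing correspondences.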

    Firstly, the disparity values of the target region are obtained and optimized with the SGBM stereo matching method; secondly, the depth values are calculated from the disparity values to form the depth image; finally, the 3D point cloud data are obtained by further calculation.

    Parallax (disparity) arises when binocular stereo vision captures two images at the same time through the left and right cameras: the same three-dimensional spatial point is imaged at corresponding points on the left and right imaging planes, and the correspondence between these features is what allows the human eye to perceive depth. The difference between the imaging positions of a point on the two camera planes is called the disparity. In short, disparity describes the horizontal pixel difference between the corresponding imaging points of the left and right views [21].

    Figure 4.  Disparity generation process.

    Consider a point $P(X, Y, Z)$ in the world coordinate system; the process by which stereo vision produces disparity when capturing images is shown in Figure 4. When the left and right cameras image the same spatial point $P$, the two imaging positions are not identical because of the offset between the cameras. Taking the left camera as the reference, obtaining the disparity at the current position requires projecting the right view onto the left view. The distance between the right and left camera coordinates is obtained by calibration, so this projection is equivalent to projecting the point to $(x - T_x, y)$ in the left view. By the principle of similar triangles, the disparity obtained is $d = x_l - x_r$. After stereo rectification of the binocular views, for a point identified in the left image, the matching point can be found along the corresponding epipolar line.

    The disparity values in the disparity map can be converted into the corresponding depth values, and the depth map presents these depth values in image form: each pixel value in the depth map is the depth of that pixel. According to the geometry of parallel binocular vision, the relation between the disparity value and the object's depth is as follows:

    $$\frac{b}{dep}=\frac{(b+x_r)-x_l}{dep-f} \tag{9}$$

    where $dep$ represents the image depth; $f$ is the normalized focal length, i.e., the parameter $f_x$ in the endoscope's intrinsic matrix; $b$ is the baseline length, that is, the distance between the optical centers of the endoscope's left and right cameras; and $x_l$ and $x_r$ are the distances between the imaging points on the left and right imaging planes and the left and right edges of the image, respectively. The relation between the disparity value and the depth value can then be derived as:

    $$dep=\frac{b\times f}{x_l-x_r}=\frac{f\times b}{d} \tag{10}$$

    where $d$ is the disparity value of the pixel. Since the right-hand side of the equation is known, the depth value is easy to calculate. From the formula it follows that the closer a point is to the imaging plane, the larger its disparity between the left and right cameras, and vice versa.
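    As a sketch of this disparity-to-depth step (illustrative only: the SGBM parameter values below are placeholders, not the settings used in the paper), OpenCV's semi-global block matching can produce the disparity map, which Eq (10) then converts to depth:

```python
import cv2
import numpy as np

def disparity_and_depth(left_gray, right_gray, fx, baseline):
    """Compute a disparity map with SGBM and convert it to depth via
    dep = f * b / d (Eq (10)). fx and baseline come from calibration."""
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,        # must be a multiple of 16; illustrative value
        blockSize=7,
        P1=8 * 1 * 7 * 7,         # smoothness penalties as suggested in OpenCV docs
        P2=32 * 1 * 7 * 7,
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )
    # SGBM returns 16.4 fixed-point disparities, hence the division by 16.
    disparity = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.zeros_like(disparity)
    valid = disparity > 0                      # pixels where a disparity was found
    depth[valid] = fx * baseline / disparity[valid]
    return disparity, depth
```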

    According to the coordinate transformation relation, the transformation relation between the world coordinate system and the pixel coordinate system can be obtained as follows:

    $$z_p\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}f_x & 0 & c_x\\ 0 & f_y & c_y\\ 0 & 0 & 1\end{bmatrix}\,[R\mid t]\,\begin{bmatrix}X\\ Y\\ Z\\ 1\end{bmatrix} \tag{11}$$

    where the first matrix on the right of the equation is the endoscope's intrinsic parameter matrix, and $R$ and $t$ are the rotation matrix and translation vector, respectively. If the camera coordinate system coincides with the origin of the world coordinate system, i.e., there is no translation or rotation between the two coordinate systems, the object has the same depth in both. The depth of each point is given directly by the depth map, and from this depth information the corresponding point cloud data, namely the three-dimensional spatial coordinates of the pixels, can be calculated through the coordinate transformation between the world coordinate system and the image coordinate system given above:

    $$\begin{cases} z = z_p \\ x = (u-c_x)\times z_p / f_x \\ y = (v-c_y)\times z_p / f_y \end{cases} \tag{12}$$

    The computed coordinates can be stored as point cloud data, and the point cloud can be displayed using a point cloud library. Conversely, a depth image can be recovered from the point cloud data by inverting the above operations.
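    A minimal NumPy sketch of this back-projection (Eq (12)) is given below; the function name is illustrative, and visualization with a point cloud library is omitted:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image to 3D points using Eq (12).
    depth: (H, W) array of depth values z_p; intrinsics from calibration."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates (u, v)
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # keep pixels with valid depth
```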

    In order to verify whether the improved algorithm achieves better feature extraction and matching in the strongly reflective environment of minimally invasive surgery, the feature extraction results of the original and improved algorithms on lumen images are compared and the simulation results are analyzed. The original image pair used for feature extraction is shown in a) and b) of Figure 5. Comparing the feature extraction results of the various methods on this image pair demonstrates the effectiveness of the improved extraction network [22].

    Figure 5.  SuperPoint algorithm feature extraction and matching results.

    The results of image feature extraction using the original SuperPoint algorithm are shown in c) of Figure 5, and the results using the improved SuperPoint algorithm are shown in d) of Figure 5. As can be seen from the figure, the feature points extracted by the SuperPoint algorithm in the strongly reflective inner cavity environment are more numerous and more uniformly distributed than those extracted by the traditional algorithms, but there are too many failed matches and many invalid points. These failed and invalid points are caused by the narrow field of view, uneven lighting and high reflection intensity of the minimally invasive surgical endoscope: the value of the depth map physically represents the distance from the camera, and large dark or bright regions lie at similar distances, so feature points are difficult to extract there, the effective resolution of the depth map is low, the number of extractable feature points is reduced, and false matches are to be expected. In contrast, the feature points extracted by the improved SuperPoint algorithm are more uniformly distributed, and effective features can be extracted even from poorly lit edge regions. Panel e) of Figure 5 shows the matching result of the feature points extracted by the original SuperPoint algorithm, and panel f) shows the matching result of the improved SuperPoint algorithm. It can be seen that the improved algorithm matches feature points better and leaves no invalid feature points, outperforming both the original and the traditional algorithms. As can be seen from Figure 5 and Table 1, after combining the SuperPoint algorithm with the clustering algorithm, the number of extracted features in the inner cavity image increases and the features are evenly distributed, without redundant points that fail to match; the percentage of successfully matched points among all extracted points is the highest and the false match rate is the lowest. Although the proportion of effective points extracted by the SIFT, SURF and ORB algorithms reaches 100%, they are very slow when there are many matching points and produce many wrong match pairs. The original SuperPoint algorithm ensures both the number of feature points and the speed, but its proportion of effective points is very low. Compared with the traditional feature extraction algorithms and the original SuperPoint algorithm, the improved algorithm shows the best performance.

    Table 1.  The results of each extraction algorithm.
                                       SIFT     SURF     ORB      SuperPoint   Improved SuperPoint
    Extraction time (s)                1.282    0.182    0.125    0.103        0.101
    Matching time (s)                  0.187    0.078    0.031    0.018        0.021
    Extracted point pairs              334      121      268      352          349
    Matched point pairs                334      121      268      263          349
    False match point pairs            79       25       31       13           15
    Effective point proportion (%)     100.00   100.00   100.00   74.72        100.00
    False match rate (%)               23.65    20.66    11.57    4.94         4.30


    Table 1 lists the numerical results of each algorithm. In terms of extraction time, the improved algorithm is the fastest, and its matching time is close to that of the original SuperPoint and far below that of SIFT, SURF and ORB. The improved algorithm also has the largest number of successfully matched point pairs and the lowest false match rate, and its proportion of effective points is markedly higher than before the improvement.

    Unlike large-scale scenarios, in the special environment of minimally invasive surgery only the numerical estimate of the endoscope's position is needed; since the endoscope moves irregularly, there is no need to build a map of its trajectory [23]. Here, the camera pose estimation algorithm of the SLAM system is applied to the endoscope in minimally invasive surgery, and image information from a real surgical environment is used only to judge whether the rotation and translation of the endoscope between frames are estimated accurately. The feature point mapping result for the lumen image is shown in Figure 6.

    Figure 6.  Feature point mapping of cavity image.

    The blue points in Figure 6 are the feature points of the current frame, and the green points are the feature points of the next frame; the current frame is used directly without any further processing. The positional difference between the blue and green points is caused by the camera moving between frames, and the red points show the positions of the green points after applying the estimated rotation matrix and translation vector. The distance between the blue and red points is smaller than the distance between the blue and green points, with an average error of 5.43 pixels.
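    Such an average mapping error can be computed with a few lines of NumPy (an illustrative sketch, assuming the next frame's matched 3D points, the current frame's pixel locations and the intrinsic matrix are available; all names are hypothetical):

```python
import numpy as np

def mean_mapping_error(pts3d_next, uv_current, R, t, K):
    """Average pixel error between current-frame feature locations and
    next-frame 3D points transformed by the estimated motion (R, t) and
    projected with the intrinsic matrix K.

    pts3d_next: (N, 3) 3D points of the next frame (camera coordinates).
    uv_current: (N, 2) matched pixel locations in the current frame.
    """
    transformed = (R @ pts3d_next.T).T + t      # apply the estimated motion
    proj = (K @ transformed.T).T                # project into the image plane
    uv_proj = proj[:, :2] / proj[:, 2:3]        # perspective division
    errors = np.linalg.norm(uv_proj - uv_current, axis=1)
    return errors.mean()
```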

    The disparity image and depth image before and after optimization are obtained with the algorithm described in this paper, as follows:

    Figure 7 shows the point cloud of the lumen image in the actual scene before and after optimization, displayed in grayscale, in which the cavity region is more clearly visible. For image regions where the illumination is too dark, no disparity could be computed and therefore no depth value can be obtained, which appears as empty areas in the point cloud. The point cloud before optimization has obvious segmentation lines; after optimization this phenomenon is weakened and most of the information is effectively restored. The details of the main areas are more delicate, and some smaller blood vessels that are difficult to extract can also be recovered [24].

    Figure 7.  Point cloud image.

    Figure 8 is a pure point cloud image without mesh or texture; the color depth in the figure indicates the distance between each point and the endoscope. Although the edges of the disparity map and depth map are not satisfactory, it can be seen from b) of Figures 7 and 8 that the adopted reconstruction method restores images of the lumen environment well.

    Figure 8.  Pure point cloud image.

    As can be seen from a) and b) in Figure 9, except for the empty parts where the light is extremely weak, the image is filled in and the edges are partially optimized. Although the "boundary" between the core area and the fatty part of the image is not restored perfectly, the edges are smoothed. Panel c) is a pure point cloud image without mesh or texture, in which the color depth represents the distance between the point and the endoscope. Panel d) is the point cloud of the inner cavity image in the real scene. For image regions where the light is too dark in the original image, the disparity is not computed, so the depth value cannot be calculated, which appears as empty areas in the point cloud. Most of the information in the image is effectively restored, the details of the main areas are delicate, and some small vessels that are difficult to extract can also be recovered. The reconstruction method used in this paper restores the inner cavity image with good effect.

    Figure 9.  Comprehensive experimental results.

    To address the difficulty of obtaining depth information of the target area and accurately locating the endoscope in an environment with a narrow field of view, uneven illumination and high reflection intensity, a method combining the K-means and SuperPoint algorithms is first used to extract and match image feature points. Compared with the traditional algorithms, the simulation results are more accurate and stable. Compared with the original SuperPoint, the number of successfully matched point pairs increases by 32.69%, the proportion of effective points increases by 25.28%, the false match rate decreases by 0.64%, and the extraction time decreases by 1.98%. In a real minimally invasive surgery scene, the feature points in the lumen image are mapped to verify the accuracy of the computed rotation and translation, with an average error of about 5 pixels. Finally, the stereo matching algorithm is used to compute the disparity map of the original image, the disparity map is used to compute the depth information, and from this a point cloud with depth information is obtained, which reconstructs small structures such as blood vessels well and also reconstructs the poorly lit edge regions with good effect [25].

    Funding: This research was supported by the Natural Science Foundation of Heilongjiang Province (LH2019F024), China, 2019–2022, and the Key R&D Program Guidance Projects of Heilongjiang Province (Grant No. GZ20210065), 2021–2024.

    The authors declare there is no conflict of interest.



    [1] J. A. Zhu, S. Dai, X. Y. Zhu, Characteristics of Electric Bike Accidents and Safety Enhancement Strategies, Urban Transp. China, 2018 (2018), 15–20.
    [2] CNBN, Electric Bicycles Are Nearly 300 Million in China, Available from: http://news.cnr.cn/rebang/20211011/t20211011_525629773.shtml.
    [3] D. Zhang, T. F. Ren, M. M. Zhang, Y. C. Zheng, H. Y Zhou, Analysis and prevention of the causes of electric bicycle accidents based on safety checklists (in Chinese), Sci. Technol. Innovation, 07 (2021), 37–39. https://doi.org/10.15913/j.cnki.kjycx.2021.07.011 doi: 10.15913/j.cnki.kjycx.2021.07.011
    [4] H. Leijdesdorff, J. van Dijck, P. Krijnen, C. Vleggeert-Lankamp, I. Schipper, Injury pattern, hospital triage, and mortality of 1250 patients with severe traumatic brain injury caused by road traffic accidents, J. Neurotrauma, 31 (2014), 459–465. https://doi.org/10.1089/neu.2013.3111 doi: 10.1089/neu.2013.3111
    [5] M. F. Zavareh, A. M. Hezaveh, T. Nordfjærn, Intention to use bicycle helmet as explained by the Health Belief Model, comparative optimism and risk perception in an Iranian sample, Transp. Res. Part F Psychol. Behav., 54 (2018), 248–263. https://doi.org/10.1016/j.trf.2018.02.003 doi: 10.1016/j.trf.2018.02.003
    [6] J. Olivier, P. Creighton, Bicycle injuries and helmet use: a systematic review and meta-analysis, Int. J. Epidemiol., 46 (2017), 278–292. https://doi.org/10.1093/ije/dyw153 doi: 10.1093/ije/dyw153
    [7] Y. N. Song, W. W. Ma, J. Shen, J. G. Shen, Analysis of the relationship between helmet wearing and casualty among electric vehicle drivers (in Chinese), Urban Rural Enterp. Health China, 35 (2020), 7–9. https://doi.org/10.16286/j.1003-5052.2020.12.003 doi: 10.16286/j.1003-5052.2020.12.003
    [8] J. Kumphong, T. Satiennam, W. Satiennam, The determinants of motorcyclists helmet use: urban arterial road in Khon Kaen City, Thailand, J. Saf. Res., 67 (2018), 93–97. https://doi.org/10.1016/j.jsr.2018.09.011 doi: 10.1016/j.jsr.2018.09.011
    [9] N. Xu, N. Gao, J. H. Su, Y. Yan, D. D. Zhou, J. J. Peng, Investigation on knowledge, attitude and behavior of electric bicycle drivers and riders wearing safety helmets based on wechat public account (in Chinese), Shanghai J. Preventative Med., 30 (2018), 744–749. https://doi.org/10.19428/j.cnki.sjpm.2018.18803 doi: 10.19428/j.cnki.sjpm.2018.18803
    [10] C. X. Ma, D. Yang, J. B. Zhou, Z. X. Feng, Q. Yuan, Risk riding behaviors of urban e-bikes: a literature review, Int. J. Environ. Res. Public Health, 16 (2019), 2308. https://doi.org/10.3390/ijerph16132308 doi: 10.3390/ijerph16132308
    [11] J. B. Zhou, Y. Y. Guo, Y. Wu, S. Dong, Assessing factors related to e-bike crash and e-bike license Plate Use, J. Transp. Syst. Eng. Inf. Technol., 17 (2017), 229–234.
    [12] L. T. Truong, H. T. T. Nguyen, C. De Gruyter, Mobile phone use among motorcyclists and electric bike riders: a case study of Hanoi, Vietnam, Accid. Anal. Prev., 91 (2016), 208–215. https://doi.org/10.1016/j.aap.2016.03.007 doi: 10.1016/j.aap.2016.03.007
    [13] N. Haworth, A. K. Debnath, How similar are two-unit bicycle and motorcycle crashes, Accid. Anal. Prev., 58 (2013), 15–25. https://doi.org/10.1016/j.aap.2013.04.014 doi: 10.1016/j.aap.2013.04.014
    [14] J. Zhou, T. Zheng, S. Dong, X. Mao, C. Ma, Impact of helmet-wearing policy on e-bike safety riding behavior: a bivariate ordered probit analysis in Ningbo, China, Int. J. Environ. Res. Public Health, 19 (2022), 2830. https://doi.org/10.3390/ijerph19052830 doi: 10.3390/ijerph19052830
    [15] X. S. Wang, J. Chen, M. Quddus, W. Zhou, M. Shen, Influence of familiarity with traffic regulations on delivery riders' e-bike crashes and helmet use: two mediator ordered logit models, Accid. Anal. Prev., 159 (2021), 106277. https://doi.org/10.1016/j.aap.2021.106277 doi: 10.1016/j.aap.2021.106277
    [16] The Ministry of Public Security of the People's Republic of China, The Traffic Management Bureau of the Ministry of Public Security has deployed a "One Helmet, One Belt" security operation, 2020, Available from: http://www.gov.cn/xinwen/2020-04/21/content_5504613.htm.
    [17] M. Karkhaneh, Effectiveness of bicycle helmet legislation to increase helmet use: a systematic review, Inj. Prev., 12 (2006), 76–82. https://doi.org/10.1136/ip.2005.010942 doi: 10.1136/ip.2005.010942
    [18] Guangzhou Public Security Bureau, In January, these cities in Guangdong had the lowest helmet wearing rate, 2021, Available from: https://baijiahao.baidu.com/s?id = 1691405130964298455 & wfr = spider & for = pc
    [19] T. Tang, H. Wang, B. Guo, Study on helmet wearing intention of electric bicycle riders (in Chinese), J. Transp. Eng. Inf., 20 (2022), 1–17. https://doi.org/10.19961/j.cnki.1672-4747.2021.11.024 doi: 10.19961/j.cnki.1672-4747.2021.11.024
    [20] B. Foroughi, P. V. Nhan, M. Iranmanesh, M. Ghobakhloo, M. Nilashi, E. Yadegaridehkordi, Determinants of intention to use autonomous vehicles: findings from PLS-SEM and ANFIS, J. Retailing Consum. Serv., 70 (2023), 103158. https://doi.org/10.1016/j.jretconser.2022.103158 doi: 10.1016/j.jretconser.2022.103158
    [21] Q. F. Li, O. Adetunji, C. V. Pham, N. T. Tran, E. Chan, A. M. Bachani, Helmet use among motorcycle riders in ho chi minh city, vietnam: results of a five-year repeated cross-sectional study, Accid. Anal. Prev., 144 (2020), 105642. https://doi.org/10.1016/j.aap.2020.105642 doi: 10.1016/j.aap.2020.105642
    [22] K. Brijs, T. Brijs, S. Sann, T. A. Trinh, G. Wets, R. A. C. Ruiter, Psychological determinants of motorcycle helmet use among young adults in Cambodia, Transp. Res. Part F Traffic Psychol. Behav., 26 (2014), 273–290. https://doi.org/10.1016/j.trf.2014.08.002 doi: 10.1016/j.trf.2014.08.002
    [23] Y. C. Ho, C. T. Tsai, Comparing ANFIS and SEM in linear and nonlinear forecasting of new product development performance, Expert Syst. Appl., 38 (2011), 6498–6507. https://doi.org/10.1016/j.eswa.2010.11.095 doi: 10.1016/j.eswa.2010.11.095
    [24] E. Yadegaridehkordi, M. Nilashi, M. H. N. B. M. Nasir, O. Ibrahim, Predicting determinants of hotel success and development using structural equation modelling (SEM)-ANFIS method, Tourism Manage., 66 (2018), 364–386. https://doi.org/10.1016/j.tourman.2017.11.012 doi: 10.1016/j.tourman.2017.11.012
    [25] B. Moon, Paradigms in migration research: exploring 'moorings' as a schema, Prog. Hum. Geogr., 19 (1995), 504–524. https://doi.org/10.1177/030913259501900404 doi: 10.1177/030913259501900404
    [26] J. K. Hsieh, Y. C. Hsieh, H. C. Chiu, Y. C. Feng, Post-adoption switching behavior for online service substitutes: a perspective of the push–pull–mooring framework, Comput. Hum. Behav., 28 (2012), 1912–1920. https://doi.org/10.1016/j.chb.2012.05.010 doi: 10.1016/j.chb.2012.05.010
    [27] Y. Sun, D. Liu, S. J. Chen, X. R. Wu, X. L. Shen, X. Zhang, Understanding users' switching behavior of mobile instant messaging applications: an empirical study from the perspective of push-pull-mooring framework, Comput. Hum. Behav., 75 (2017), 727–738. https://doi.org/10.1016/j.chb.2017.06.014 doi: 10.1016/j.chb.2017.06.014
    [28] J. Y. Lai, J. Wang, Switching attitudes of taiwanese middle-aged and elderly patients toward cloud healthcare services: an exploratory study, Technol. Forecasting Social Change, 92 (2015), 155–167. https://doi.org/10.1016/j.techfore.2014.06.004 doi: 10.1016/j.techfore.2014.06.004
    [29] H. S. Bansal, S. Taylor, Y. St. James, "Migrating" to new service providers: toward a unifying framework of consumers' switching behaviors, J. Acad. Mark. Sci., 33 (2005), 96–115. https://doi.org/10.1177/0092070304267928 doi: 10.1177/0092070304267928
    [30] S. Wang, J. Wang, F. Yang, From willingness to action: do push-pull-mooring factors matter for shifting to green transportation, Transp. Res. Part D Transp. Environ., 79 (2020), 102242. https://doi.org/10.1016/j.trd.2020.102242 doi: 10.1016/j.trd.2020.102242
    [31] H. Kim, The role of legal and moral norms to regulate the behavior of texting while driving, Transp. Res. Part F Traffic Psychol. Behav., 52 (2018), 21–31. https://doi.org/10.1016/j.trf.2017.11.004 doi: 10.1016/j.trf.2017.11.004
    [32] M. J. Paschall, J. W. Grube, S. Thomas, C. Cannon, R. Treffers, Relationships between local enforcement, alcohol availability, drinking norms, and adolescent alcohol use in 50 california cities, J. Stud. Alcohol Drugs, 73 (2012), 657–665. https://doi.org/10.15288/jsad.2012.73.657 doi: 10.15288/jsad.2012.73.657
    [33] M. Limayem, S. G. Hirt, W. W. Chin, Intention does not always matter: the contingent role of habit on it usage behavior, in the 9th European Conference on Information Systems, 13 (2001).
    [34] C. Barbarossa, P. De Pelsmacker, Positive and negative antecedents of purchasing eco-friendly products: a comparison between green and non-green consumers, J. Bus. Ethics, 134 (2016), 229–247. https://doi.org/10.1007/s10551-014-2425-z doi: 10.1007/s10551-014-2425-z
    [35] A. Mehrabian, C. A. Stefl, Basic temperament components of loneliness, shyness, and conformity, Social Behav. Pers. Int. J., 23 (1995), 253–263. https://doi.org/10.2224/sbp.1995.23.3.253 doi: 10.2224/sbp.1995.23.3.253
    [36] R. Zhou, W. J. Horrey, Predicting adolescent pedestrians' behavioral intentions to follow the masses in risky crossing situations, Transp. Res. Part F Traffic Psychol. Behav., 13 (2010), 153–163. https://doi.org/10.1016/j.trf.2009.12.001 doi: 10.1016/j.trf.2009.12.001
    [37] T. P. Tang, Y. T. Guo, X. Z. Zhou, S. Labi, S. L. Zhu, Understanding electric bike riders' intention to violate traffic rules and accident proneness in China, Travel Behav. Soc., 23 (2021), 25–38. https://doi.org/10.1016/j.tbs.2020.10.010 doi: 10.1016/j.tbs.2020.10.010
    [38] P. Janmaimool, Application of protection motivation theory to investigate sustainable waste management behaviors, Sustainability, 9 (2017), 1079. https://doi.org/10.3390/su9071079 doi: 10.3390/su9071079
    [39] K. Chamroonsawasdi, S. Chottanapund, R. A. Pamungkas, P. Tunyasitthisundhorn, B. Sornpaisarn, O. Numpaisan, Protection motivation theory to predict intention of healthy eating and sufficient physical activity to prevent Diabetes Mellitus in Thai population: A path analysis, Diabetes Metab. Syndr. Clin. Res. Rev., 15 (2021), 121–127. https://doi.org/10.1016/j.dsx.2020.12.017 doi: 10.1016/j.dsx.2020.12.017
    [40] L. Ajzen, From intentions to actions: a theory of planned behavior, in Action Control, Springer, (1985), 11–39. https://doi.org/10.1007/978-3-642-69746-3_2
    [41] A. Shafiei, H. Maleksaeidi, Pro-environmental behavior of university students: application of protection motivation theory, Glob. Ecol. Conserv., 22 (2020), e00908. https://doi.org/10.1016/j.gecco.2020.e00908 doi: 10.1016/j.gecco.2020.e00908
    [42] R. Meade, W. Barnard, Conformity and anticonformity among Americans and Chinese, J. Social Psychol., 89 (1973), 15–24. https://doi.org/10.1080/00224545.1973.9922563 doi: 10.1080/00224545.1973.9922563
    [43] M. N. Borhan, A. N. H. Ibrahim, M. A. A. Miskeen, Extending the theory of planned behaviour to predict the intention to take the new high-speed rail for intercity travel in Libya: Assessment of the influence of novelty seeking, trust and external influence, Transp. Res. Part A Policy Pract., 130 (2019), 373–384. https://doi.org/10.1016/j.tra.2019.09.058 doi: 10.1016/j.tra.2019.09.058
    [44] L. Ross, T. Ross, S. Farber, C. Davidson, M. Trevino, A. Hawkins, The theory of planned behavior and helmet use among college students, Am. J. Health Behav., 35 (2011), 581–590. https://doi.org/10.5993/AJHB.35.5.7 doi: 10.5993/AJHB.35.5.7
    [45] S. O. Olsen, J. Scholderer, K. Brunsø, W. Verbeke, Exploring the relationship between convenience and fish consumption: A cross-cultural study, Appetite, 49 (2007), 84–91. https://doi.org/10.1016/j.appet.2006.12.002 doi: 10.1016/j.appet.2006.12.002
    [46] T. N. Nguyen, A. Lobo, S. Greenland, Pro-environmental purchase behaviour: the role of consumers' biospheric values, J. Retailing Consum. Serv., 33 (2016), 98–108. https://doi.org/10.1016/j.jretconser.2016.08.010 doi: 10.1016/j.jretconser.2016.08.010
    [47] National Bureau of Statistics, China Statistical Yearbook (2022). Available from: http://www.stats.gov.cn/tjsj./ndsj/.
    [48] SUHO.COM, Characteristics of electric bicycle traffic accidents and safety improvement measures, (2019). Available from: https://www.sohu.com/a/289681314_782444.
    [49] J. Mandhani, J. K. Nayak, M. Parida, Interrelationships among service quality factors of Metro Rail Transit System: An integrated Bayesian networks and PLS-SEM approach, Transp. Res. Part A Policy Pract., 140 (2020), 320–336. https://doi.org/10.1016/j.tra.2020.08.014 doi: 10.1016/j.tra.2020.08.014
    [50] T. F. Golob, Structural equation modeling for travel behavior research, Transp. Res. Part B Methodol., 37 (2003), 1–25. https://doi.org/10.1016/S0191-2615(01)00046-7 doi: 10.1016/S0191-2615(01)00046-7
    [51] J. Henseler, G. Hubona, P. A. Ray, Using PLS path modeling in new technology research: updated guidelines, Ind. Manage. Data Syst., 116 (2016), 2–20. https://doi.org/10.1108/IMDS-09-2015-0382 doi: 10.1108/IMDS-09-2015-0382
    [52] A. Leguina, A primer on partial least squares structural equation modeling (PLS-SEM), Int. J. Res. Method Educ., 38 (2015), 220–221. https://doi.org/10.1080/1743727X.2015.1005806 doi: 10.1080/1743727X.2015.1005806
    [53] J. F. Hair Jr, M. Sarstedt, L. Hopkins, V. Kuppelwieser, Partial least squares structural equation modeling (PLS-SEM): An emerging tool in business research, Eur. Bus. Rev., 26 (2014), 106–121. https://doi.org/10.1108/EBR-10-2013-0128 doi: 10.1108/EBR-10-2013-0128
    [54] M. Tenenhaus, V. E. Vinzi, Y. M. Chatelin, C. Lauro, PLS path modeling, Comput. Stat. Data Anal., 48 (2005), 159–205. https://doi.org/10.1016/j.csda.2004.03.005 doi: 10.1016/j.csda.2004.03.005
    [55] J. R. Jang, ANFIS: adaptive-network-based fuzzy inference system, IEEE Trans. Syst. Man Cybern., 23 (1993), 665–685.
    [56] D. Nauck, F. Klawonn, R. Kruse, Foundation of Neuro-fuzzy Systems, Wiley, 1997.
    [57] S. Zhang, P. Jing, D. Yuan, C. Yang, On parents' choice of the school travel mode during the covid-19 pandemic, Math. Biosci. Eng., 19 (2022), 9412–9436. https://doi.org/10.3934/mbe.2022438 doi: 10.3934/mbe.2022438
    [58] W. Chin, A. Gopal, W. D. Salisbury, Advancing the theory of adaptive structuration: the development of a scale to measure faithfulness of appropriation, Inf. Syst. Res., 8 (1997), 342–367. https://doi.org/10.1287/isre.8.4.342 doi: 10.1287/isre.8.4.342
    [59] C. Fornell, D. F. Larcker, Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res., 18 (1981), 39–50. https://doi.org/10.2307/3151312 doi: 10.2307/3151312
    [60] A. García-Ferrer, A. de Juan, P. Poncela, Forecasting traffic accidents using disaggregated data. Int. J. Forecasting, 22 (2006), 203–222. https://doi.org/10.1016/j.ijforecast.2005.11.001 doi: 10.1016/j.ijforecast.2005.11.001
    [61] H. Zhou, S. B. Romero, X. Qin, An extension of the theory of planned behavior to predict pedestrians' violating crossing behavior using structural equation modeling, Accid. Anal. Prev., 95 (2016), 417–424. https://doi.org/10.1016/j.aap.2015.09.009 doi: 10.1016/j.aap.2015.09.009
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
