Research article

Three-dimensional mandibular motion trajectory-tracking system based on BP neural network

  • Received: 08 July 2020 Accepted: 23 August 2020 Published: 28 August 2020
  • The aim of this study was to develop a prototype three-dimensional optical motion capture system based on binocular stereo vision, a back-propagation (BP) neural network and a 3D compensation method for accurate, real-time recording of mandibular movement. A specialized 3D compensation method is designed to eliminate the involuntary vibrations caused by heartbeat and respiration. A binocular 3D measurement method based on projection lines, together with a calibration method based on the BP neural network, is proposed to address the high complexity of the camera calibration process and the low accuracy of 3D measurement. The accuracy of the proposed system is systematically evaluated by means of an electric platform and clinical trials, and the root-mean-square error is 0.0773 mm. Finally, comparisons with state-of-the-art methods demonstrate that our system has higher reliability and accuracy. The motion trajectory-tracking system is expected to be used in the diagnosis of clinical oral diseases and in the digital design of restorations.

    Citation: Sukun Tian, Ning Dai, Linlin Li, Weiwei Li, Yuchun Sun, Xiaosheng Cheng. Three-dimensional mandibular motion trajectory-tracking system based on BP neural network[J]. Mathematical Biosciences and Engineering, 2020, 17(5): 5709-5726. doi: 10.3934/mbe.2020307



    The research of mandibular movement systems is grounded in mathematics, mechanics and physiology, and constitutes a new multidisciplinary field spanning digital image processing, computer graphics and human anatomy [1,2,3]. The condyle is a key growth site of the temporomandibular joint (TMJ), and its movement characteristics are commonly used as an important index of mandibular function; condylar motion underlies physiological activities such as mastication and speech [4,5]. In clinical practice, three specific movements are used for the functional evaluation of the mandible: opening-closing, protrusion and laterotrusion. Analyzing the characteristics of mandibular movements provides a reference for orthognathic surgery and virtual occlusal adjustment, and is of great significance for studying occlusal morphology, complete-denture restoration, inlay restoration and the coordination of joint function [6,7].

    The recording of 3D movements of the mandible was first reported in the mid-1960s using a bulky and sophisticated articulator device with two face bows [1]. The mechanical articulator was commonly used to analyze human mandibular movement, but it did not support quantitative analysis and could detect only 70% of the mandibular occlusal points [8]. Walker [9] proposed a graphic recording method to record masticatory motion; however, its common weakness is the invasiveness of the tracking probe, which reduces the reliability and accuracy of recording. Pinheiro [1] adopted a mathematical compensation method for head movements to describe mandibular movements in 2D space, but the mean error was high. In addition, without computer assistance, the sample size and the scope of quantitative analysis were naturally limited. Since then, researchers have sought precise techniques to describe mandibular movements and to record target motions that faithfully reflect actual occlusal movements in vivo [10,11]. At present, the measuring precision of the mandibular movement trajectory remains a research focus in stomatology [12,13,14]. To improve measuring accuracy, a number of methods and techniques for recording lower-jaw movements have been proposed. Arcus DigMa is an ultrasonic capture system comprising ultrasonic sensors and emitters. It is minimally invasive, easy to operate and relatively precise. However, instead of being fixed to the skull directly, the ultrasonic sensors are attached to a face bow fixed to the subject's head, so facial morphology may influence the measured accuracy of the system; moreover, ultrasound is vulnerable to ambient temperature and noise.
With this system, the authors reported mean deviations of the measured condylar angles below 1.5 degrees and a positional accuracy of ~0.1 mm [15]. Building on this precision, Enciso [16] achieved 3D mandibular movement simulation from lower-jaw tracking data. Similarly, electromagnetic tracking devices obtain position and orientation through electromagnetic induction, but their precision is easily degraded by external factors such as metal objects and mobile telephones [4]. To evaluate the tracking accuracy of recording equipment, Yoon [17] introduced a technique for recording TMJ kinematics with an electromagnetic system and a custom dental appliance; its accuracy in digitizing static target points was 0.32 ± 0.60 mm after calibration. However, with this approach, measuring tools with a head frame or face bow may make patients feel unnatural and introduce trajectory-capture errors. Likewise, because facial markers move relative to the skeleton during mandibular movements, the measured TMJ kinematics is subject to soft-tissue artifacts, so correct placement of the facial markers is essential for describing maxillary kinematics. Addressing this problem, Chen [12] systematically evaluated soft-tissue artifacts for facial markers placed on an optical frame using an optoelectronic movement tracking system; the results showed that nose-bridge markers were preferable for representing the maxilla. In addition, several studies have evaluated mandibular motion with inexpensive CCD cameras [18,19,20,21]. Fang [22] developed a feasible method to reconstruct individual 3D mandibular movement models and evaluated the occlusal surface for crown restoration using a pair of CCD cameras.
Considering the simultaneous effects of rotation and translation, the model can reproduce the natural occlusal behavior of an individual; a root-mean-square (RMS) accuracy of 0.198 mm was reported. Mostashiri [23] developed a portable, compact and low-cost motion-capture system (PFMS) for acquiring spatial mandibular motion using two generic webcams. Although the PFMS can obtain a fairly accurate mandibular trajectory, it records the mandible only in three two-dimensional planes, which is inconsistent with true 3D motion analysis. With the continuing development of 3D tracking techniques, further methods have appeared: Chen [24] reported a method for measuring 3D motion in vivo using single-plane fluoroscopy, and Tanaka [25] developed a markerless three-dimensional system. However, the measurement errors of these systems exceed 1.0 mm under dynamic conditions, making them nearly unusable in clinical practice.

    In this paper, we propose a specialized 3D compensation method for associated head and body motions to improve the accuracy of the capture system. More specifically, a non-invasive system is built from two low-cost cameras and two ceramic tracking plates, providing real-time 3D acquisition, reconstruction, real-time motion simulation and generation of trajectory surfaces. Finally, the precision of the system is evaluated in bench and clinical trials. It is expected to support the clinical diagnosis of oral diseases and to assist specialists in the treatment of TMDs.

    The whole trajectory-tracking capture procedure proposed in this study comprises the steps shown in Figure 1. (1) Camera calibration is performed with a BP neural network to enable reconstruction of 3D trajectories. (2) The iterative closest point (ICP) algorithm registers the 3D digital dental model with the point-cloud data of the front-teeth region. (3) The involuntary vibration of the head and body is compensated to obtain 3D relative trajectories using the proposed trajectory compensation method. (4) The registered digital dental models and the relative trajectories are fused to perform the personalized design of occlusal surfaces.

    Figure 1.  Outline of the proposed method.

    The purpose of camera calibration is to establish the correspondence between an object point in the spatial coordinate system and its pixel on the image plane. Zhang's method is the classical calibration method: it is robust and precise, and reduces the dependence on the calibration object [26]. However, it cannot obtain an accurate initial value in its initial linear stage, so it cannot provide an ideal starting point for nonlinear optimization. Artificial neural networks are an emerging technology with strong self-adaptation and self-learning capabilities, able to handle systems that are difficult to describe with explicit mathematical models. The BP network is a multilayer feedforward network trained with the error back-propagation algorithm and is one of the most widely used neural network models [27]. It can learn a large number of input-output mappings without an explicit mathematical description of the mapping. Aiming at the complex imaging and distortion models in the camera calibration process, the cameras in the binocular vision three-dimensional measurement system are calibrated implicitly using the BP network's powerful ability to approximate complex nonlinear mappings [28], so that the measurement system can directly recover the 3D information of target feature points without a complicated explicit camera calibration.

    Considering the trajectory accuracy of the lower jaw and the ease of operation of the capture system, we propose a specialized optical motion capture system based on computer binocular stereo vision and a BP neural network. The proposed binocular vision measurement system integrates two engineering methods: an optical motion capture technique that records the maxillary/mandibular movements with two cameras, and an individual three-dimensional point-cloud reconstruction method for the anterior-teeth region using a projection module and the two cameras. The optical tracking system is shown in Figure 2. The cameras synchronously capture images of the two sets of ceramic targets, and the centers of the circles in the images are extracted with Canny edge detection [29]. Each camera captures continuously at up to 100 frames per second, which is sufficient for reconstructing 3D trajectories and, with the aid of the projector, registering the point-cloud data of the front-teeth region. Registration is a necessary process for the reliability and accuracy of the optical motion capture system; the system registers the subject's 3D digital dental model with the point-cloud data of the front-teeth region. To ensure realistic virtual occlusion motion simulation, we use the least-squares method to match the initial positions of the trajectories to the target points.
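The least-squares matching between the initial trajectory positions and the target points can be sketched as a rigid-body alignment. The following is a minimal illustration assuming the standard SVD-based Kabsch solution; the paper does not specify its exact solver, and the function name is ours:

```python
import numpy as np

def kabsch_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst."""
    src_c = src - src.mean(axis=0)          # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Applying the recovered `(R, t)` to the trajectory's initial points then superimposes them on the target points in the least-squares sense.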

    Figure 2.  Schematic diagram of the optical tracking system.

    Before recording three-dimensional data, both cameras need to be calibrated. Camera calibration is the key process in a binocular vision 3D capture system: it determines the position and orientation of the camera in 3D space together with its intrinsic parameters. To avoid the influence of nonlinear distortion in camera calibration, we propose a new calibration method that exploits the nonlinear approximation ability of a BP neural network to fit the complicated nonlinear mapping. According to Kolmogorov's theorem and the BP network principle, a three-layer BP neural network with one hidden layer can approximate any continuous function provided the number of hidden-layer neurons can be chosen freely [28]. Therefore, the BP neural network is composed of three layers, the input layer, the hidden layer and the output layer, as described in Figure 3.

    Figure 3.  BP neural network model.

    To obtain the training samples for the BP neural network, a calibration board with a black background and an evenly spaced array of white circles at 5 mm intervals is made, as shown in Figure 4. There are 316 white marker circles, and their arrangement follows the coordinate directions (x, y) of the calibration board, so the coordinates of each circle in the board coordinate system are known.

    Figure 4.  Calibration board diagram.

    With reference to Figure 4, the image pixel coordinates of the white circles are the input data of the BP neural network, and the actual two-dimensional coordinates of the circles on the calibration board are the output data. $\omega_{ji}$ and $\omega_{kj}$ are the weights of the BP neural network. When the expected value of output node $k$ is $t_k$, the error at the output is:

    $$E=\frac{1}{2}\sum_{k=1}^{N}(t_k-y_k)^2=\frac{1}{2}\sum_{k=1}^{N}\left\{t_k-f_2\left[\sum_{j=1}^{l}\omega_{kj}\,f_1\left(\sum_{i=1}^{N}\omega_{ji}x_i\right)\right]\right\}^2 \qquad (2.1)$$

    where $f_1$ is the transfer function of the hidden layer and $f_2$ is that of the output layer.
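As an illustration of Eq (2.1), the following is a minimal sketch of the three-layer network's forward pass and output error. A sigmoid hidden activation $f_1$ and a linear output $f_2$ are assumptions for illustration, not stated above:

```python
import numpy as np

def forward(x, W1, W2):
    """Three-layer BP network: input -> hidden (f1) -> output (f2)."""
    f1 = lambda z: 1.0 / (1.0 + np.exp(-z))  # assumed sigmoid hidden activation
    h = f1(W1 @ x)   # hidden layer, weights w_ji
    return W2 @ h    # output layer, weights w_kj (linear f2 assumed)

def output_error(t, y):
    """Eq (2.1): E = 1/2 * sum_k (t_k - y_k)^2."""
    return 0.5 * np.sum((t - y) ** 2)
```

For the calibration task, `x` holds a pixel coordinate pair and the trained output approximates the corresponding board coordinates.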

    According to the imaging characteristics of the cameras and their implicit calibration on the near and far calibration planes, combined with the measurement principle of the system in Figure 2, the equations of the projection lines $L$ and $R$ are calculated from four points ($P_{lf}$, $P_{ln}$, $P_{rf}$, $P_{rn}$) on the calibration planes, so that the measurement system directly recovers the 3D spatial coordinates of the target feature points.
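Recovering a 3D feature point from the two projection lines can be sketched as follows, assuming each line is already available as a point and a direction obtained from the calibrated planes. Taking the midpoint of the common perpendicular between the (possibly skew) lines is a common choice that the paper does not spell out:

```python
import numpy as np

def line_midpoint(p1, d1, p2, d2):
    """Closest point between two projection lines, each given by a point
    and a direction; returns the midpoint of the common perpendicular."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b              # zero only for parallel lines
    s = (b * e - c * d) / denom        # parameter on line 1
    t = (a * e - b * d) / denom        # parameter on line 2
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))
```

When the two lines intersect exactly, the midpoint coincides with the intersection; with noise it gives the least-squares compromise between them.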

    Mandibular movements are a combination of head and mandibular condyle motions [4]. Heartbeat, arterial pulse and respiration cause involuntary movements of the human body. Because the deformation of the mandible and maxilla during motion is very small and can be neglected, functional mandibular behavior can be treated as 3D rigid-body movement [22,30]. According to the helical motion of a rigid body, the 3D position of the mandible during natural occlusal movement is expressed as a combination of rotation and translation from its original position relative to the coordinate axes. Local coordinate systems are constructed at points $U_0$ and $L_0$, as shown in Figure 5. Following the rigid-body screw motion principle, an affine transformation is applied to the points $L_1(U_1)$, $L_2(U_2)$ and $L_3(U_3)$.

    Figure 5.  Schematic representation of the screw theory.

    With reference to the occlusion of the maxilla and mandible in Figure 5: during occlusal movement, the head and body move involuntarily due to heartbeat and respiration. To eliminate these involuntary vibrations, we translate and rotate the maxilla and mandible back to their initial positions ($U_0$ and $L_0$):

    $$P_{U_n}=M_{U_n}P_{U_0}+d_{U_n} \qquad (2.2)$$
    $$P_{L_n}=M_{L_n}P_{L_0}+d_{L_n} \qquad (2.3)$$

    where $M$ is a $3\times 3$ rotation matrix, $d$ is a $3\times 1$ displacement vector, and $n$ indexes the trajectory points. The motion of the mandible relative to the head is then established:

    $$P_{(Lower,Upper)_n}=P_{(L,U)_n}=M_{(L,U)_n}P_{L_0}+d_{(L,U)_n} \qquad (2.4)$$

    where, in the index (Lower, Upper), the first word denotes the mandible and the second the maxilla; $M_{(L,U)_n}$ is the relative rotation matrix and $d_{(L,U)_n}$ is the relative displacement vector.

    According to Chasles' theorem [22]:

    $$P_{(L,U)_n}=(M_{U_n})^{-1}(P_{L_n}-d_{U_n}) \qquad (2.5)$$

    Here we translate and rotate the maxilla back to its original position. Note that $M^{-1}=M^{T}$, the transpose of $M$, since $M$ is orthogonal.

    By combining Eqs (2.2)-(2.5), the relative rotation matrix and relative displacement vector are obtained:

    $$M_{(L,U)_n}=\left[(M_{U_n})^{-1}(M_{U_{n-1}})^{-1}\cdots(M_{U_2})^{-1}(M_{U_1})^{-1}\right]M_{L_n}\left[(M_{U_1})(M_{U_2})\cdots(M_{U_{n-1}})(M_{U_n})\right]=\begin{bmatrix}m_{11}&m_{12}&m_{13}\\m_{21}&m_{22}&m_{23}\\m_{31}&m_{32}&m_{33}\end{bmatrix} \qquad (2.6)$$
    $$d_{(L,U)_n}=(M_{U_n})^{-1}(d_{L_n}-d_{U_n})=\begin{bmatrix}d_x\\d_y\\d_z\end{bmatrix} \qquad (2.7)$$

    Substituting Eqs (2.6) and (2.7) into Eq (2.4) for the relative motion, we obtain the fully expanded equation:

    $$P_{(L,U)}=\begin{bmatrix}m_{11}&m_{12}&m_{13}\\m_{21}&m_{22}&m_{23}\\m_{31}&m_{32}&m_{33}\end{bmatrix}P_{L_0}+\begin{bmatrix}d_x\\d_y\\d_z\end{bmatrix} \qquad (2.8)$$

    From the relative rotation matrix thus obtained, the rotation angles can be determined directly from the relevant elements of the $3\times 3$ matrix.
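A single-frame sketch of the compensation in Eqs (2.5)-(2.8): given the maxillary and mandibular pose of one frame, compute the relative rotation and displacement and apply them to an initial mandibular point. The product chain of Eq (2.6) reduces to this form for one frame, and the function names are ours:

```python
import numpy as np

def relative_pose(M_U, d_U, M_L, d_L):
    """Eqs (2.6)-(2.7), single frame: mandibular motion expressed in the
    (moving) maxillary frame. M_U is orthogonal, so its inverse is M_U.T."""
    M_rel = M_U.T @ M_L
    d_rel = M_U.T @ (d_L - d_U)
    return M_rel, d_rel

def compensated_point(P_L0, M_rel, d_rel):
    """Eq (2.8): relative trajectory point from the initial mandibular point."""
    return M_rel @ P_L0 + d_rel
```

As a sanity check, if the head and mandible undergo the same motion (pure involuntary vibration), the relative pose is the identity and the compensated point stays at its initial position.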

    This research has been approved by the Bioethics Committee of Peking University School and the Hospital of Stomatology, China. Written informed consent was obtained from the study participant.

    To record mandibular motion, the 3D trajectory-tracking system for recording functional mandibular motions was set up with two industrial cameras (DMK 33GP1300, Germany) and a projection module (DLP3000, Texas Instruments). The projection module was mounted equidistant between the two cameras at the same angle (55.0°) to the horizontal plane, and the experimental system is shown in Figure 6. The trajectory compensation method was implemented in C++ on the Visual Studio 2015 platform and tested on a computer with an Intel i5-4460 CPU and 8 GB RAM. Image processing, rendering and display were based on the OpenCV 3.0 and OpenGL 3.2 libraries.

    Figure 6.  3D trajectory-tracking testing experimental system.

    Prior to trajectory collection, the subjects were instructed to sit approximately 400 mm from the cameras with the trunk at approximately 90° to the ground. An occlusal splint was made on each subject's dentition plaster model to connect and stabilize the ceramic tracking plate; the splint is small and light, and is non-invasive with respect to occlusal movement. The maxillary occlusal splint is firmly fixed to the upper jaw to estimate head movement, and an identical mandibular occlusal splint is used on the lower jaw. The ceramic tracking plate is used to perform transformations between the optical system and the image-based dental model. With reference to Figure 5, the optical motion capture system tracks the positions in space of six points defining two triangular planes (the U- and L-triangles), and the target points coincide with the corresponding points on the two triangular planes. The target points (A, B, C) are 5.2-mm-diameter black-and-white circles corresponding to the vertices of a right triangle (right-angle side 6 mm), as described in Figure 6.

    In this paper, a three-layer BP neural network model is constructed to calibrate the binocular stereo vision and analyze the accuracy of the trajectory-tracking system. The input and output layers each contain two nodes, representing the pixel coordinates on the image and the actual board coordinates, respectively. Experimental results show that training is optimal when the number of hidden-layer nodes is 7. For rapid convergence, the training samples are fed repeatedly through the BP neural network using the Levenberg-Marquardt (LM) algorithm until satisfactory output errors of $y_1$ and $y_2$ are obtained.

    To verify the three-dimensional measurement accuracy of our system, the calibration board shown in Figure 7 is used for testing and is placed at different positions within the depth of field of the cameras. In our experiment, 3160 marker circles on 10 calibration-plate planes were obtained; the 1930 samples with x < 0 and y < 0 (Figure 4) were taken as training samples, and the remaining 1230 were used as test samples. The network was trained with the following main hyper-parameters: learning rate = 0.02, target error = 1e-6, image refresh rate = 10, and iterations = 20,000. By extracting the circle-center coordinates, the intrinsic and extrinsic parameters of the camera are obtained, as shown in Table 1.
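The training loop for the 2-7-2 calibration network can be sketched as follows. This is a hedged stand-in: plain full-batch gradient descent replaces the paper's Levenberg-Marquardt step, the learning rate here is tuned for that stand-in (the paper's 0.02 applies to LM), and the pixel-to-board pairs are replaced by a synthetic affine mapping:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-ins for the (pixel -> board) training pairs.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
T = X @ np.array([[0.8, 0.1], [-0.2, 0.9]]) + 0.05

W1 = rng.normal(scale=0.5, size=(7, 2)); b1 = np.zeros(7)   # hidden layer (7 nodes)
W2 = rng.normal(scale=0.5, size=(2, 7)); b2 = np.zeros(2)   # output layer (2 nodes)
lr, iters = 0.2, 20000   # 20000 iterations as in the paper; lr is ours

for _ in range(iters):
    H = sigmoid(X @ W1.T + b1)          # hidden activations (f1)
    Y = H @ W2.T + b2                   # linear output (f2)
    err = Y - T
    dH = (err @ W2) * H * (1.0 - H)     # back-propagated hidden error
    W2 -= lr * err.T @ H / len(X); b2 -= lr * err.mean(axis=0)
    W1 -= lr * dH.T @ X / len(X);  b1 -= lr * dH.mean(axis=0)

mse = float(np.mean((sigmoid(X @ W1.T + b1) @ W2.T + b2 - T) ** 2))
```

In the real system, the held-out test samples (the remaining 1230 circles) would be pushed through the trained network and compared against the board coordinates, as in Table 2.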

    Figure 7.  10 positions of the calibration board.
    Table 1.  The intrinsic and extrinsic parameters of the camera.
    Intrinsic parameter:
    $$\begin{bmatrix} 3219.045 & 0 & 531.157 \\ 0 & 3212.062 & 373.211 \\ 0 & 0 & 1 \end{bmatrix}$$
    Extrinsic parameters (rotation matrix, translation vector):
    $$\begin{bmatrix} 0.112 & 0.920 & 0.379 \\ 0.942 & 0.028 & 0.343 \\ 0.332 & 0.416 & 0.861 \end{bmatrix}, \qquad \begin{bmatrix} 23.481 \\ 33.183 \\ 611.021 \end{bmatrix}$$


    To verify the accuracy of the calibration results, Table 2 lists the errors between the network test results and the actual values at the 10 positions of the calibration board (Figure 7). All errors are within 10 μm, demonstrating that the BP neural network is effective for camera calibration and achieves relatively high calibration accuracy.

    Table 2.  The error values between the network test results and the actual values.
    Position 1 2 3 4 5 6 7 8 9 10
    Error (μm) 9.0 8.7 8.1 7.8 9.1 7.7 7.9 8.2 8.0 8.1


    By tracking the target points, reconstructing the target centers, and calculating the distance between the two centers, we obtain the deviation between the measured and standard values, as shown in Figure 8. The figure shows that the accuracy of target-center reconstruction is high, with the absolute error stable within 10 μm. The deviation between any two measurement results is less than 16 μm, which satisfies the requirements for reconstructing 3D trajectories.

    Figure 8.  Variation trend of trajectory tracking error.

    To evaluate the accuracy of the optical motion capture system, an electronic translation platform (Figure 9a) was used to execute right-angle and circular movements. In our experiment, the translation stage performed right-angle (10.0 mm cathetus) and semi-circular (5.0 mm radius) movements at 1.0 mm/s in the three coordinate planes (XOY, YOZ, XOZ). The platform was tapped slightly to simulate a person's involuntary shaking while the trajectories of the three ceramic targets (L) were recorded five times (I, II, III, IV and V) in real time. Using Imageware 13.0 software, we calculated the movement distances and radii, the distance between every point and the fitted circles, and the right-angle movements. Figure 9b, c show the relative trajectory of a single ceramic target calculated with Eq (2.6), and the distribution of trajectory points can be seen clearly; the deviations of the relative trajectory are much smaller than those of the raw experimental trajectory.

    Figure 9.  Three-axis electric displacement platform.

    The differences between the radii of the fitted curves and the set value (5.0 mm), computed with Imageware 13.0 software, are listed in Table 3. The deviation in the XOZ plane is the largest, yet the accuracy remains suitable for clinical application [20,21]; we therefore choose the XOZ plane as the object of analysis.

    Table 3.  The differences between the fitted curves' radii and the set value (5.0 mm).
    Plane Ceramic target Data (μm) Average value (μm)
    I II III IV V
    XOY A 5 15 3 4 7 7.80
    B 3 17 4 10 3
    C 7 16 9 6 8
    YOZ A 12 4 7 9 11 11.33
    B 15 20 19 10 13
    C 14 6 10 12 8
    XOZ A 6 4 13 12 18 13.27
    B 15 8 10 20 19
    C 14 14 17 16 13

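The radius fitting behind Table 3 can be reproduced with a least-squares circle fit. This sketch uses the algebraic (Kasa) fit rather than Imageware's (unspecified) fitting routine, and the function name is ours:

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit in a plane:
    solves x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    cx, cy = -a / 2.0, -b / 2.0            # center from linear coefficients
    r = np.sqrt(cx**2 + cy**2 - c)         # radius from constant term
    return (cx, cy), r
```

Comparing the fitted radius `r` against the set value (5.0 mm) yields the deviations tabulated per plane and per target.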

    Table 4 shows the differences between the movement angles of the fitted curves of the relative trajectory (Figure 9b) and the set value (90°). The average values are less than 0.2°; more importantly, the maximum error of the measured angles is smaller than the 0.4° reported for the system in [30].

    Table 4.  The differences between the movement angles and the set value (90°).
    Plane Ceramic target Edge ID Data (°) Average value (°)
    I II III IV V
    XOZ A 1 0.2490 0.3026 -0.0859 0.3206 0.1409 0.1943
    2 0.2232 0.1287 -0.1694 -0.3633 0.1182
    3 -0.0065 0.0473 0.1309 0.1883 -0.3586
    4 -0.0040 0.2212 0.2144 -0.2304 -0.3818
    B 1 0.2093 0.2793 -0.0414 0.5195 -0.0669 0.1900
    2 0.1417 0.1483 -0.1189 -0.5258 0.0159
    3 0.0570 0.1225 0.1029 0.1867 -0.2144
    4 0.1248 0.2556 0.1797 -0.1923 -0.2972
    C 1 0.1471 0.2423 -0.0582 0.5241 -0.1293 0.1988
    2 0.1840 0.1204 -0.1302 -0.4892 -0.0916
    3 0.0289 0.0692 0.0919 0.2882 -0.3666
    4 -0.0018 0.1910 0.1647 -0.2526 -0.4045


    Our experiments measure known trajectories (linear and circular) and analyze the deviation of the measured results from them. Accuracy is evaluated by the RMS error associated with the measured distances, and precision is estimated by calculating the standard deviation of those distances and the maximum distance error. The calculated deviation values are shown in Table 5.

    Table 5.  The results of the trajectory-tracking system validation test.
    Mean deviation (mm) RMS deviation (mm) Standard deviation (mm) Minimum deviation (mm) Maximum deviation (mm)
    0.0387 0.0773 0.0252 0.002623 0.2039

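The statistics of Table 5 follow directly from the list of measured-vs-known distances; a minimal sketch (the function name is ours):

```python
import numpy as np

def deviation_stats(deviations):
    """Summary statistics of the measured-vs-known trajectory distances,
    as reported in Table 5 (all values in the same unit, e.g. mm)."""
    d = np.asarray(deviations, dtype=float)
    return {
        "mean": d.mean(),
        "rms": np.sqrt(np.mean(d**2)),   # root-mean-square deviation
        "std": d.std(ddof=1),            # sample standard deviation
        "min": d.min(),
        "max": d.max(),
    }
```

Feeding in the full set of per-point deviations from the platform trials produces the five columns of Table 5.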

    To further verify the accuracy and investigate the clinical application of the trajectory-tracking system, 20 subjects with no masticatory system disorders, no symptoms of temporomandibular disorders and no signs of malocclusion were selected to evaluate mandibular movements; the results are shown in Figure 10. Figure 10a shows the upper- and lower-jaw trajectories of two subjects at rest: the raw upper- and lower-jaw trajectory points (white and red) are scattered, whereas the relative trajectory points (yellow), calculated by subtracting the head and mandible movements, are concentrated. Opening-closing movements performed by one subject are shown in Figure 10c; subjects exhibit a slight forward motion during the movement, which affects the accuracy of the raw upper/lower-jaw trajectories. Interestingly, the relative trajectories calculated by the compensation method form an "8-shape" [22] (Figure 10b), quite similar to the conventional sagittal schematic diagrams of the envelope of motion [30]. The composite motion results of opening-closing, protrusion and laterotrusion are shown in Figure 10d.

    Figure 10.  The upper/lower jaw trajectories in different states: (a) stationary state, (b) sagittal path of the lower jaw incisor, (c) opening-closing, (d) composite motions.

    Due to the complexity of occlusal surface morphology and the randomness of the missing-tooth position, it is difficult to design the occlusal surface of a missing tooth relying solely on the residual feature information of the tooth [31]. Our system instead drives mandibular movement from the calculated relative trajectories, realizing real-time motion simulation (Figure 11a-c) and the personalized design of occlusal surfaces (Figure 11e-j).

    Figure 11.  Movement simulation and occlusal surfaces construction: (a) stationary state, (b) Mandibular movement to the right, (c) Move from right to left, (d) Mandibular movement to the left, (e) Extraction of spee curve and compensation curve, (f) enveloping surface generated by laterotrusion movement, (g) enveloping surface generated by protrusion movement, (h) zoomed view of (f), (i) another view of (h), (j) zoomed view of (g).

    The Spee curve on the incisor and the compensation curve on the molar are chosen as generatrices to study the movement of the curve along the relative trajectory (Figure 11e). Using the system's single-step simulation function, the curve at each trajectory point is output. The generated enveloping surface provides an important reference for the individualized design of the occlusal surfaces of missing teeth, and can also be used to diagnose occlusal interference and poor morphology. Figure 11f illustrates the trajectory surface of the compensation curve of the opposing teeth on the upper or lower jaw during lateral excursion. The enveloping surfaces of the compensation curve on the mandible and maxilla are shown in Figure 11h, i. Figure 11g, j show the trajectory surface generated by the Spee curve on the incisor during protrusion. The results show that this method can be used effectively in the provision of dental restorations as a dynamic virtual articulator to identify eccentric premature occlusal contacts during mastication.

    In recent years, with advances in optics and electromagnetics and the widespread application of computers, mandibular kinesiography (MKG) has continuously improved in performance and quality. Many kinds of mandibular movement recording devices now exist, but most are complicated to use, of low accuracy, and expensive [18]. Moreover, most devices and methods track mandibular motion without accounting for vibration of the head and body. Yuan et al. [20] and Zhao et al. [21] presented a trajectory recording system for acquiring 2D single-jaw movement using an electronic translator, but the system cannot capture the real motion trajectory in space. Dai et al. [30] presented a 3D optical motion capture system based on binocular stereo vision and a subtraction algorithm, with a mean error of 0.057 mm; however, its trajectory reconstruction method, based on Canny edge detection and center location, has low robustness. Unlike other existing mandibular movement recording systems, our system obtains the mandibular movement relative to the head without complicated mechanical devices mounted on the patient's head, and its trajectory reconstruction method based on a BP neural network has higher reliability and robustness. The comparison results are shown in Table 6.
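    The essence of BP-based calibration is to regress 3D coordinates directly from paired pixel coordinates of the two cameras, bypassing explicit camera-parameter estimation. The following minimal sketch trains a one-hidden-layer network by batch gradient descent on synthetic data; the network size, learning rate, and iteration count are illustrative assumptions, not the configuration used in our system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for calibration data: pixel coordinates from the two
# cameras (u1, v1, u2, v2) and the corresponding known 3D target positions.
X = rng.uniform(-1.0, 1.0, size=(200, 4))
Y = np.tanh(X @ rng.normal(size=(4, 3)))        # smooth stand-in mapping

H = 16                                          # hidden units (illustrative)
W1 = rng.normal(scale=0.5, size=(4, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 3)); b2 = np.zeros(3)

def forward(X):
    h = np.tanh(X @ W1 + b1)                    # hidden layer
    return h, h @ W2 + b2                       # linear output layer

rmse0 = np.sqrt(np.mean((forward(X)[1] - Y) ** 2))   # error before training

lr = 0.05
for _ in range(2000):                           # plain batch gradient descent
    h, pred = forward(X)
    err = pred - Y                              # dL/dpred for squared error
    gW2 = (h.T @ err) / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)          # backprop through tanh
    W1 -= lr * (X.T @ dh) / len(X); b1 -= lr * dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2

rmse = np.sqrt(np.mean((forward(X)[1] - Y) ** 2))
print(bool(rmse < rmse0))
```

A trained network of this form replaces the conventional two-stage camera calibration and triangulation pipeline with a single learned mapping.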

    Table 6.  A comparison between our system and some recently published techniques.
    | Method | Camera calibration | Algorithm stability | Object | Real-time | Mean error (mm) | RMS error (mm) | Compensation object |
    |---|---|---|---|---|---|---|---|
    | Zhao [21] | Dual plane method | high | 2D | √ | 0.089 | - | × |
    | Pinheiro [1] | - | low | 2D | × | 0.4 | - | head |
    | Furtado [18] | Direct linear transformation method | moderate | 3D | √ | 0.156 | 0.259 | × |
    | Dai [30] | Canny operator | low | 3D | √ | 0.0572 | 0.0974 | head |
    | Ours | BP neural network | high | 3D | √ | 0.0387 | 0.0773 | head and mandible |


    Compared with the other systems and techniques mentioned above, our self-developed trajectory-tracking system has higher accuracy and stability. To evaluate the accuracy of our system, we compared it with Dai's method [30], which performs very well in 3D mandibular analysis. Figure 12a shows the upper- and lower-jaw trajectories of a subject during laterotrusive motion. The standard deviation between each relative trajectory point and the corresponding fitted curve of the relative trajectory was measured with Imageware 13.0, as shown in Figure 12b, c. The deviations of the relative trajectory in Figure 12b are much smaller than those in Figure 12c, and the difference between the maximum deviations of the two methods is 0.133 mm. Therefore, the accuracy of our method is higher than that of [30].
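    The deviation statistics used in this comparison (mean, maximum, and RMS distance between measured trajectory points and the corresponding points on the fitted curve) can be computed as in the following sketch. The sample points are purely illustrative, not our measurements.

```python
import numpy as np

def trajectory_deviation(points, fitted):
    """Mean, maximum, and RMS point-to-curve distance between measured
    trajectory points (N x 3) and corresponding fitted-curve points (N x 3)."""
    d = np.linalg.norm(points - fitted, axis=1)
    return d.mean(), d.max(), np.sqrt(np.mean(d ** 2))

# Illustrative data only: three measured points against a straight fit.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.1, 0.0], [2.0, -0.1, 0.0]])
fit = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
mean_d, max_d, rms_d = trajectory_deviation(pts, fit)
print(round(max_d, 3))  # 0.1
```

In the actual evaluation, the fitted curve comes from curve fitting in Imageware and each measured point is paired with its closest point on that curve.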

    Figure 12.  The standard deviation between trajectory points and the fitted curves of the relative trajectory: (a) the upper- and lower-jaw trajectories in laterotrusive motion, (b) relative trajectory and fitted curve calculated by our method, (c) relative trajectory and fitted curve calculated by the method of [30].

    In this paper, we proposed a specialized three-dimensional optical motion capture system based on binocular stereo vision, together with a 3D compensation method based on the helical motion of a rigid body. The accuracy of the system was systematically evaluated through an electronic translation platform and clinical trials. Extensive experiments demonstrate that the proposed 3D compensation method is effective; the RMS accuracy of the system is 0.0773 mm, an improvement of roughly 50% over comparable systems. The system will provide crucial technological support for the personalized design of the occlusal plane, as well as a tool for clinical diagnosis and treatment. Although our method achieves considerable accuracy and efficiency, the current study has several limitations: (1) the electronic translation platform can only measure the accuracy of motion trajectories in a plane and cannot truly evaluate three-dimensional motion in space; (2) the current network training time is relatively long, and we will optimize the network to improve training efficiency in future work.
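    The core of the compensation step can be sketched as follows: estimate the rigid transform of the head markers away from their reference pose (here via the Kabsch algorithm, a common choice for this sub-problem) and map the mandible markers back into the reference head frame, so that involuntary head motion from heartbeat and respiration cancels. This is an illustrative sketch on synthetic data, not our exact implementation.

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) mapping reference head markers P onto their
    current positions Q (both N x 3), via the Kabsch algorithm."""
    cP, cQ = P.mean(0), Q.mean(0)
    Hm = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(Hm)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def compensate(head_ref, head_now, mand_now):
    """Express current mandible points in the reference head frame,
    cancelling involuntary head motion (heartbeat, respiration)."""
    R, t = kabsch(head_ref, head_now)
    return (mand_now - t) @ R                   # inverse transform R^T (x - t)

# Illustrative example: the head translates by (0, 0, 0.2); a mandible point
# that is stationary relative to the head maps back to its reference position.
head_ref = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
shift = np.array([0.0, 0.0, 0.2])
head_now = head_ref + shift
mand_now = np.array([[0.5, 0.5, 0.5]]) + shift
print(np.allclose(compensate(head_ref, head_now, mand_now), [[0.5, 0.5, 0.5]]))  # True
```

With four or more non-coplanar head markers the rigid transform is over-determined, which makes the estimate robust to marker noise.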

    This study was supported by the National Natural Science Foundation of China (No. 51775273), the National Key R&D Program of China (2018YFB1106903), and the Jiangsu Province Science and Technology Support Plan Project (No. BE2018010-2). We also thank the editors for their attention to our paper.

    The authors declare that they have no conflict of interest regarding this paper.



    [1] A. P. Pinheiro, A. A. Pereira, A. O. Andrade, D. Bellomo, Measurement of jaw motion: the proposal of a simple and accurate method, J. Med. Eng. Technol., 35 (2011), 125-133.
    [2] I. C. Santos, J. M. Tavares, J. Mendes, M. P. Paulo, Acquisition and analysis of 3D mandibular movement using a device based on electromagnetic sensors and a neural network, J. Med. Eng. Technol., 33 (2009), 437-441.
    [3] A. Boccaccio, P. J. Prendergast, C. Pappalettere, D. J. Kelly, Tissue differentiation and bone regeneration in an osteotomized mandible: a computational analysis of the latency period, Med. Biol. Eng. Comput., 46 (2008), 283-298.
    [4] B. Wiesinger, B. Haggmanhenrikson, A. Wanman, M. Lindkvist, F. Hellstrom, Jaw-opening accuracy is not affected by masseter muscle vibration in healthy men, Exp. Brain Res., 232 (2014), 3501-3508.
    [5] P. F. Pinheiro, D. A. Cunha, M. G. Filho, A. S. Caldas, T. M. Melo, H. J. Silva, The Use of Electrognathography in Jaw Movement Research: A Literature Review, CRANIO, 30 (2012), 293-303.
    [6] M. O. Ahlers, O. Bernhardt, H. A. Jakstat, B. Kordas, J. C. Turp, H. J. Schindler, et al., Motion analysis of the mandible: guidelines for standardized analysis of computer-assisted recording of condylar movements, Int. J. Comput. Dent., 18 (2015), 201-223.
    [7] J. P. Baeyens, H. Gilomen, B. Erdmann, R. Clijsen, J. Cabri, D. Vissers, In vivo measurement of the 3D kinematics of the temporomandibular joint using miniaturized electromagnetic trackers: technical report, Med. Biol. Eng. Comput., 51 (2013), 479-484.
    [8] B. Jankelson, G. M. Hoffman, J. A. Hendron, The physiology of the stomatognathic system, J. Am. Dent. Assoc., 46 (1953), 375-386.
    [9] W. E. Walker, Movements of the mandibular condyles and dental articulation, JADA, 6 (1897), 254-259.
    [10] F. P. M. Oliveira, T. C. Pataky, J. M. R. S. Tavares, Registration of pedobarographic image data in the frequency domain, Comput. Methods Biomech. Biomed. Eng., 13 (2010), 731-740.
    [11] T. C. S. Azevedo, J. M. R. S. Tavares, M. A. P. Vaz, Three-dimensional reconstruction and characterization of human external shapes from two-dimensional images using volumetric methods, Comput. Method Biomec., 13 (2010), 359-369.
    [12] C. C. Chen, Y. J. Chen, S. C. Chen, H. S. Lin, T. W. Lu, Evaluation of soft-tissue artifacts when using anatomical and technical markers to measure mandibular motion, J. Dent. Sci., 6 (2011), 95-101.
    [13] I. C. T. Santos, J. M. R. S. Tavares, J. Mendes, M. P. F. Paulo, A prototype system for acquisition and analysis of 3D mandibular movement, Int. J. Mech. Mater. Des., 4 (2008), 173-180.
    [14] D. Kim, S. Choi, S. Lee, M. Heo, K. Huh, S. Hwang, et al., Correlation between 3-dimensional facial morphology and mandibular movement during maximum mouth opening and closing, Oral Surg. Oral Med. O., 110 (2010), 648-658.
    [15] M. M. O. Mazzetto, M. M. A. Anacleto, M. C. A. Rodrigues, R. M. F. Braganca, G. Paiva, M. L. V. Magri, Comparison of mandibular movements in TMD by means of a 3D ultrasonic system and digital caliper rule, CRANIO, 35 (2017), 46-51.
    [16] R. Enciso, A. Memon, D. Fidaleo, U. Neumann, J. Mah, The virtual craniofacial patient: 3D jaw modeling and animation, Studies Health Technol. Inf., 94 (2003), 65-71.
    [17] H. J. Yoon, K. D. Zhao, J. Rebellato, K. N. An, E. E. Keller, Kinematic study of the mandible using an electromagnetic tracking device and custom dental appliance: introducing a new technique, J. Biomech., 39 (2006), 2325-2330.
    [18] D. A. Furtado, A. A. Pereira, A. O. Andrade, D. P. Junior, M. R. Silva, A specialized motion capture system for real-time analysis of mandibular movements using infrared cameras, Biomed. Eng. Online, 12 (2013), 1-17.
    [19] I. C. T. Santos, J. M. R. S. Tavares, J. Mendes, M. P. F. Paulo, A system for analysis of the 3D mandibular movement using magnetic sensors and neuronal networks, Proceedings of the 2nd International Workshop on Artificial Neural Networks and Intelligent Information Processing, 2006. Available from: https://www.scitepress.org.
    [20] F. S. Yuan, H. X. Sui, Z. K. Li, H. F. Yang, P. J. Lu, Y. Wang, et al., A Method of Three-Dimensional Recording of Mandibular Movement Based on Two-Dimensional Image Feature Extraction, Plos One, 10 (2015), e0137507.
    [21] T. Zhao, H. F. Yang, H. X. Sui, S. S. Salvi, Y. Wang, Y. C. Sun, Computerized, Binocular, Three-Dimensional Trajectory-Tracking Device for Recording Functional Mandibular Movements, Plos One, 11 (2016), e0163934.
    [22] J. J. Fang, T. H. Kuo, Modelling of mandibular movement, Comput. Biol. Med., 38 (2008), 1152-1162.
    [23] N. Mostashiri, J. S. Dhupia, A. Verl, W. L. Xu, A Novel Spatial Mandibular Motion-Capture System Based on Planar Fiducial Markers, IEEE Sens. J., 18 (2018), 10096-10104.
    [24] C. C. Chen, C. C. Lin, Y. J. Chen, S. W. Hong, T. W. Lu, A method for measuring three-dimensional mandibular kinematics in vivo using single-plane fluoroscopy, Dentomaxillofacial Radiol., 42 (2012), 95958184-95958184.
    [25] Y. Tanaka, T. Yamada, Y. Maeda, K. Ikebe, Markerless three-dimensional tracking of masticatory movement, J. Biomech., 49 (2016), 442-449.
    [26] L. L. Li, Y. C. Sun, Y. Wang, W. W. Li, N. Dai, S. K. Tian, et al., Accuracy of a novel virtual articulator for recording three-dimensional dentition, Int. J. Prosthodont., 33 (2020), 441-451.
    [27] C. Cheng, X. S. Cheng, N. Dai, X. T. Jiang, Y. C. Sun, W. W. Li, Prediction of facial deformation after complete denture prosthesis using BP neural network, Comput. Biol. Med., 66 (2015), 103-112.
    [28] K. Funahashi, On the approximate realization of continuous mappings by neural networks, Neural Networks, 2 (1989), 183-192.
    [29] J. Canny, A computational approach to edge detection, IEEE Trans. Pattern Anal. Mach. Intell., 8 (1986), 679-698.
    [30] S. K. Tian, N. Dai, X. S. Cheng, L. L. Li, Y. C. Sun, H. H. Cui, Relative trajectory-driven virtual dynamic occlusal adjustment for dental restorations, Med. Biol. Eng. Comput., 57 (2019), 59-70.
    [31] C. D. Zhang, T. T. Liu, W. H. Liao, T. Yang, L. Y. Jiang, Computer-aided design of dental inlay restoration based on dual-factor constrained deformation, Adv. Eng. Soft., 114 (2017), 71-84.
  • © 2020 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
