Research article

A fuzzy set theory-based fast fault diagnosis approach for rotators of induction motors

Academic editor: Vladimir Mityushev

Abstract: Induction motors have been widely used in industry, agriculture, transportation, national defense engineering, etc. Defects in these motors not only cause abnormal operation of production equipment but also make the motor run at low energy efficiency before evolving into a fault shutdown. The former may suspend the production process, while the latter incurs additional energy loss. This paper studies a fuzzy rule-based expert system for diagnosing such faults and focuses on the analysis of several knowledge representation methods and reasoning techniques. The rotator fault of induction motors is analyzed and diagnosed using this knowledge, and the diagnosis result is displayed. The simulation model can effectively simulate the broken-rotator fault by changing the resistance value of the equivalent rotor winding, and the influence of the broken rotor bar fault on the motors is described, which provides a basis for the analysis of fault characteristics. The simulation results show that the proposed method can realize fast fault diagnosis for the rotators of induction motors.

    Citation: Tangsheng Zhang, Hongying Zhi. A fuzzy set theory-based fast fault diagnosis approach for rotators of induction motors[J]. Mathematical Biosciences and Engineering, 2023, 20(5): 9268-9287. doi: 10.3934/mbe.2023406




    The visual system enables humans to see and observe the events happening in the environment by converting the electromagnetic waves (belonging to a specific frequency range, also known as the visible spectrum) into electrochemical signals (neuronal activation) and processing them to extract useful information. The process of extracting and organizing information about the surroundings through the visual system is known as visual perception. Visual perception profoundly affects the interaction between a person and their surroundings. From driving a car on a busy highway to playing sports, the visual perception of a moving object is crucial for survival in the natural environment. Modulating the perception of moving stimuli could have appreciable consequences and can affect the quality of life. Therefore, it is vital to understand the underlying neuronal dynamics and anatomical correlates, along with the quantifiable model of perception of the moving object.

    In the case of a moving object, the position of the object changes with time. Therefore, the visual perception of a moving object involves visual-spatial perception, as well as time perception. The process of perceiving a moving object starts with light from the visual field (surroundings) entering the eye and projecting onto the retina. The photosensitive cells of the retina convert the light into neural signals, which transmit through the optic nerve to the brain regions. These neuronal signals are projected into the neuropil forming the retinotopic map, where the relationship between adjacent locations in the visual field is maintained in terms of nearby neuronal activation [1]. Retinotopic maps are isomorphic representations of the outside world based on information sensed by the retinal system, where the spatiotemporal neuronal activation represents the spatiotemporal movement of the object in the physical space (visual field) [2]. The nonlinear representation of retinal images on the retinotopic map has been analyzed by Schwartz [3]. Retinotopic maps exist in several brain regions, e.g., the primary visual cortex (V1) [4], lateral occipital cortex [5] and cerebellum [6]. High-resolution functional magnetic resonance imaging (fMRI) enables experimental observation of human retinotopic maps [7,8]. Retinotopic mapping of the visual field is the initial stage of the processing of visual perception functions such as motion [9], object recognition [10] and color [11]. The retinotopic map belongs to the broad category of topographic maps, which are a fundamental neuroanatomical feature in the cerebral cortex of humans and other primates [12,13]. We will refer to the 'retinotopic map' as 'retinotopic space' for linguistic convenience.

    Visual motion perception has been studied over the years to discover its underlying mechanisms [14,15,16]. Various experiments have been carried out to find the anatomical and physiological basis of the perception of moving objects using techniques such as fMRI and transcranial magnetic stimulation [17,18,19,20]. In addition, theoretical models addressing different aspects of motion perception are available, namely, the Reichardt detector for motion detection [21] and the spatiotemporal energy model [22]. Thus, much work has been done to identify the dominant brain regions active while perceiving a moving object and to provide theoretical models. Nevertheless, very few studies have addressed the modulation of the perception of a moving object, and those that have focused mainly on the perceived speed of the moving object. For example, Mashour found that the perceived speed and the actual speed are related to each other by a power law [23]. Algom and Cohen-Raz reported similar results in another study [24]. Therefore, as far as we know, an integrative global mathematical and theoretical framework for the modulation of the perception of a moving object is yet to be formalized, especially from a causality perspective.

    In this paper, we quantify and provide a mathematical model describing how the speed of a moving object modulates visual-spatial perception and time perception. The changes in the position of the object occur in temporal order and indicate a temporal causal relationship between the successive positions of the object. We take temporal causality to be invariant across the retinotopic space and perceptual space and then find the relationship between the spatiotemporal coordinates of the moving object represented in the retinotopic space and the perceptual space. Here, perceptual space is the subjective experience of the physical space, and it represents the geometry of the perception. Thereafter, we apply our mathematical model to different experimental settings to predict the results and compare them with the actually observed results to validate the outcomes of our investigation. After validation, we demarcate the anatomical regions where this perceptual transformation occurs and corroborate them experimentally by performing diffusion MRI-based tractography studies. Furthermore, we form a mathematical framework to model the neuronal-level biochemical mechanism responsible for the modulation of visual-spatial perception and time perception, based on neurotransmitter interaction and dynamics.

    We have organized our paper as follows:

    (i) In Section 2, we formulate the perception of a moving object as a mapping from retinotopic space to perceptual space and then derive the mathematical model of the modulation of the perception of a moving object.

    (ii) In Section 3, we describe the methodology used for the experimental investigations.

    (iii) Section 4 is divided into four subsections:

    • Section 4.1: We applied our mathematical model to two experiments for empirical validation.

    • Section 4.2: The anatomical correlates of the modulation of perception of a moving object are provided and verified by the MRI tractography investigations.

    • Section 4.3: The neurotransmitter dynamics-based neuronal-level mechanism is delineated, along with a mathematical model.

    • Section 4.4: Formal analysis of the perception of a moving object is undertaken.

    (iv) We conclude the paper by discussing the key results, implications and general significance in Section 5.

    In this section, we will formulate the geometrical representation of the moving object in the retinotopic space and its projection onto the perceptual space, presuming invariance of the temporal causality. We will derive a transformation matrix (Z) that consolidates the coordinate transformation from the retinotopic space to the perceptual space when the object changes its position with time. Then, we will show how the transformation equations translate to neural systems and how they can be used in practical scenarios.

    Suppose that the position of an object in the visual field varies with time such that the change of position with time (speed) is constant. Curve AB in Figure 1 represents the spatiotemporal activation of the neural tissue in the retinotopic map due to the changes in the position of the moving object with time (T). Curve AB is a straight line because of the constant speed of the moving object. Every point on the curve AB belongs to the instantaneous neural activation in the retinotopic space corresponding to the instantaneous position of the moving object. We assume that the position varies along a single spatial dimension for mathematical convenience. Points A and B in Figure 1 denote the start and end of the projection of a moving object on the human observer's retinotopic space.

    Figure 1.  Changes in spatial position of the moving object with time in the retinotopic map.
    Figure 2.  Transformation of the representation of the moving object from retinotopic space to perceptual space.

    Let us take a random point R between points A and B in retinotopic space (Figure 2(a)). Every point on curve AB appearing before point R in temporal order has a temporal causal connection with point R. Points A and R in the retinotopic space shown in Figure 2(a) are projected in perceptual space as points Ap and Rp (Figure 2(b)), respectively, under the assumption of invariance of the temporal causality. Therefore, points Ap and Rp in perceptual space also have a causal relationship. In Figure 2(b), T* and X* represent the time and position of the object in the perceptual space, respectively.

    Suppose that, in Figure 2(a), the coordinates of points R and A are (t, x) and (0, 0), respectively. As points A and B mark the start and end of the movement of the object in the retinotopic space, the transformation of points A and B in perceptual space should have the same coordinates. An example can explain this: if somebody is watching a ball being thrown in the air, the ball's starting and ending position in the retinotopic and perceptual space will be the same. However, anything in between may be modified. Hence, the coordinates of point Ap in the perceptual space can be taken to be the same as A, i.e., (0, 0), and we can assume the coordinates of point Rp in the perceptual space to be (t*, x*). (However, under ideal conditions, we expect the coordinates of points Rp and R to be the same so that the observer perceives the moving object without any modifications.) For generalization, we can presume that the difference in the coordinates of points Ap and Rp in the perceptual space is a function of the difference in the coordinates of points A and R in the retinotopic space (Eq (1a), (1c)).

    $$x^* - 0 = g(t - 0,\; x - 0) \tag{1a}$$
    $$\text{i.e.,}\quad x^* = g(t, x) \tag{1b}$$
    $$t^* - 0 = f(t - 0,\; x - 0) \tag{1c}$$
    $$\text{i.e.,}\quad t^* = f(t, x) \tag{1d}$$

    In Eq (1b), (1d), the terms f and g are two functions whose formulations we will pursue later. Likewise, the coordinates of A and R can be mathematically calculated from the coordinates of Ap and Rp under the premise of symmetry between the retinotopic and perceptual spaces.

    Therefore,

    $$x = g(t^*, x^*) \tag{2a}$$
    $$t = f(t^*, x^*) \tag{2b}$$

    Even if point A does not lie at the origin, Eqs (1a), (1b) and (2a), (2b) must hold. If the coordinates of point A are (t0, x0) and those of its projection Ap are (t0∗, x0∗), then we can respectively rewrite Eqs (1a), (1c), (2a) and (2b) as:

    $$x^* - x_0^* = g(t - t_0,\; x - x_0) \tag{3a}$$
    $$t^* - t_0^* = f(t - t_0,\; x - x_0) \tag{3b}$$
    $$x - x_0 = g(t^* - t_0^*,\; x^* - x_0^*) \tag{4a}$$
    $$t - t_0 = f(t^* - t_0^*,\; x^* - x_0^*) \tag{4b}$$

    Partial differentiation of Eq (1b) with respect to x yields

    $$\frac{\partial x^*}{\partial x} = \frac{\partial g(t, x)}{\partial x} \tag{5}$$

    Partial differentiation of Eq (3a) with respect to x yields

    $$\frac{\partial x^*}{\partial x} = \frac{\partial g(t - t_0,\; x - x_0)}{\partial x} \tag{6}$$

    From Eqs (5) and (6), we obtain

    $$\frac{\partial g(t,x)}{\partial x}\bigg|_{(t_0,\,x_0)} = \frac{\partial g(t-t_0,\,x-x_0)}{\partial x}\bigg|_{(t_0,\,x_0)}$$
    $$\frac{\partial g(t,x)}{\partial x}\bigg|_{(t_0,\,x_0)} = \frac{\partial g(t-t_0,\,x-x_0)}{\partial (x-x_0)}\bigg|_{(t_0,\,x_0)}$$
    $$\frac{\partial g(t,x)}{\partial x}\bigg|_{(t_0,\,x_0)} = \frac{\partial g(t,x)}{\partial x}\bigg|_{(0,\,0)} \tag{7}$$

    Using Eqs (1b) and (3a), we derived Eq (7). Likewise, from Eqs (1d), (2a), (2b), (3b), (4a) and (4b), we can derive the following Eqs (8)–(10).

    $$\frac{\partial g(t,x)}{\partial t}\bigg|_{(t_0,\,x_0)} = \frac{\partial g(t,x)}{\partial t}\bigg|_{(0,\,0)} \tag{8}$$
    $$\frac{\partial f(t,x)}{\partial x}\bigg|_{(t_0,\,x_0)} = \frac{\partial f(t,x)}{\partial x}\bigg|_{(0,\,0)} \tag{9}$$
    $$\frac{\partial f(t,x)}{\partial t}\bigg|_{(t_0,\,x_0)} = \frac{\partial f(t,x)}{\partial t}\bigg|_{(0,\,0)} \tag{10}$$

    It is evident from Eqs (7)–(10) that the functions g and f are linear functions of x and t, because the derivatives of g and f are the same at the origin and at any arbitrary point (t0, x0), which is possible only for a straight line; thus, Eqs (1b), (1d), (2a) and (2b) become

    $$x^* = A \cdot x + B \cdot t \tag{11a}$$
    $$t^* = C \cdot x + D \cdot t \tag{11b}$$
    $$x = A' \cdot x^* + B' \cdot t^* \tag{12a}$$
    $$t = C' \cdot x^* + D' \cdot t^* \tag{12b}$$

    where A, B, C and D (and their primed counterparts in Eq (12)) are unknown coefficients. In the subsequent mathematical analysis, we will derive mathematical expressions for A, B, C and D.

    Let us represent Eqs (11) and (12) in matrix form, which become Eqs (13) and (14), respectively, below:

    From Eqs (11a) and (11b): $$\begin{bmatrix} x^* \\ t^* \end{bmatrix} = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} x \\ t \end{bmatrix}$$

    $$\text{i.e.,}\quad X^* = ZX \tag{13}$$

    From Eqs (12a) and (12b): $$\begin{bmatrix} x \\ t \end{bmatrix} = \begin{bmatrix} A' & B' \\ C' & D' \end{bmatrix} \begin{bmatrix} x^* \\ t^* \end{bmatrix}$$

    $$\text{i.e.,}\quad X = Z'X^* \tag{14}$$

    where $X = \begin{bmatrix} x \\ t \end{bmatrix}$, $X^* = \begin{bmatrix} x^* \\ t^* \end{bmatrix}$, $Z = \begin{bmatrix} A & B \\ C & D \end{bmatrix}$ and $Z' = \begin{bmatrix} A' & B' \\ C' & D' \end{bmatrix}$.

    In Eqs (13) and (14), Z is a transformation matrix that denotes the coordinate transformation from the retinotopic to the perceptual space. From Eqs (13) and (14), $Z' = Z^{-1}$:

    $$Z' = \begin{bmatrix} A' & B' \\ C' & D' \end{bmatrix} \tag{15}$$

    Thus, $$Z^{-1} = \frac{1}{BC - AD}\begin{bmatrix} -D & B \\ C & -A \end{bmatrix}$$

    For the simplest case, $BC - AD = -1$:

    $$\text{i.e.,}\quad Z^{-1} = \begin{bmatrix} D & -B \\ -C & A \end{bmatrix} \tag{16}$$

    By comparison of Eqs (15) and (16), and noting that, by the symmetry premise between the two spaces, the inverse transformation must take the same form as the forward one, $D = A$.

    Since $BC - AD = -1$, by putting $D = A$, we get $-BC + A^2 = 1$, i.e., $A = +\sqrt{1 + BC}$.

    Therefore,

    $$Z = \begin{bmatrix} \sqrt{1+BC} & B \\ C & \sqrt{1+BC} \end{bmatrix} \tag{17}$$

    In the retinotopic space, the position of the object changes with time; the rate of change can be obtained by putting $x^* = 0$ in Eq (11a), under the assumption that the rate of change of position with time is constant. Thus:

    $$0 = A \cdot x + B \cdot t$$

    $$\text{i.e.,}\quad \frac{x}{t} = -\frac{B}{A}$$

    Let P be the rate of change of the position of the moving object in the retinotopic space (i.e., its speed):

    Then,

    $$P = \frac{x}{t} = -\frac{B}{A}$$

    $$\text{i.e.,}\quad P = -\frac{B}{\sqrt{1+BC}}$$

    $$\text{or}\quad \sqrt{1+BC} = -\frac{B}{P} \tag{18}$$
    $$\text{thereby}\quad C = -\left(\frac{1}{B} - \frac{B}{P^2}\right) \tag{19}$$

    Putting the values from Eqs (18) and (19) into Eq (17) yields

    $$Z = \begin{bmatrix} -\dfrac{B}{P} & B \\ -\left(\dfrac{1}{B} - \dfrac{B}{P^2}\right) & -\dfrac{B}{P} \end{bmatrix} \tag{20}$$
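    As a quick sanity check (ours, not part of the paper), the matrix in Eq (20) can be verified symbolically to have unit determinant, consistent with the simplifying choice BC − AD = −1 made above:

```python
# A minimal SymPy check (ours): the Z of Eq (20) has unit determinant,
# matching the simplifying assumption BC - AD = -1 used for Eq (16).
import sympy as sp

B, P = sp.symbols('B P', nonzero=True)
Z = sp.Matrix([[-B / P, B],
               [-(1 / B - B / P**2), -B / P]])
assert sp.simplify(Z.det()) == 1
```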

    Retinotopic maps exist in multiple brain regions. For example, it is present in the primary visual cortex (V1), the cerebellum and the lateral occipital cortex [5,6,25]. Let us take two retinotopic maps (t1 and t2) such that the relationship between the coordinates of the retinotopic map (t1), another retinotopic map (t2) and the perceptual space (m), as per our preceding formulation (Eqs (1)–(20)), is as follows (i.e., Eqs (21)–(23)), where Xt1, Xt2, and Xm are the column matrices whose components are the spatiotemporal coordinates of the moving object in the t1 retinotopic map, t2 retinotopic map and perceptual space (m), respectively; and, Z1, Z2 and Z3 are transformation matrices representing the coordinate transformation. Thus we can come to the following relations.

    Relationship between retinotopic maps t1 and t2:

    $$X_{t_1 \to t_2}:\quad X_{t_2} = Z_1 X_{t_1} \tag{21}$$

    Relationship between retinotopic map t2 and perceptual space (m):

    $$X_m = Z_2 X_{t_2} \tag{22}$$

    Relationship between retinotopic map t1 and perceptual space (m):

    $$X_m = Z_3 X_{t_1} \tag{23}$$

    By comparing the previous three equations (Eqs (21)–(23)), we obtained the following:

    $$Z_3 = Z_2 Z_1$$

    Applying the transformation matrix Z given in Eq (20) gives

    $$\begin{bmatrix} -\frac{B_3}{P_3} & B_3 \\ -\left(\frac{1}{B_3}-\frac{B_3}{P_3^2}\right) & -\frac{B_3}{P_3} \end{bmatrix} = \begin{bmatrix} -\frac{B_2}{P_2} & B_2 \\ -\left(\frac{1}{B_2}-\frac{B_2}{P_2^2}\right) & -\frac{B_2}{P_2} \end{bmatrix} \begin{bmatrix} -\frac{B_1}{P_1} & B_1 \\ -\left(\frac{1}{B_1}-\frac{B_1}{P_1^2}\right) & -\frac{B_1}{P_1} \end{bmatrix}$$

    After matrix multiplication and comparison of the matrix components, we obtained four equations. After solving those equations, we obtained the following (please see Supplementary material section S1 for the analytical solution):

    $$\left(\frac{1}{B_1^2} - \frac{1}{P_1^2}\right)^2 = \left(\frac{1}{B_2^2} - \frac{1}{P_2^2}\right)^2 = \left(\frac{1}{B_3^2} - \frac{1}{P_3^2}\right)^2 \tag{24}$$

    In Eq (24), the squared term (with the same mathematical arrangement and equivalent variables) is equal across the different cases, showing that this particular term is invariant in different situations. Therefore, let us denote Eq (24) for the general situation of any space (such as perceptual space, cerebral retinotopic space or cerebellar retinotopic space) by Eq (25) below:

    $$\frac{1}{B^2} - \frac{1}{P^2} = -\frac{1}{k^2} \tag{25}$$

    In Eq (25), k can be considered as a fidelity parameter denoting the invariance across different representational spaces (such as perceptual space, cerebral retinotopic space or cerebellar retinotopic space).

    By putting the value of B from Eq (25) into Eq (20) (taking the root $B = -P\big/\sqrt{1-(P/k)^2}$, so that $\sqrt{1+BC} = -B/P$ remains positive), we get

    $$Z = \begin{bmatrix} \dfrac{1}{\sqrt{1-\left(\frac{P}{k}\right)^2}} & \dfrac{-P}{\sqrt{1-\left(\frac{P}{k}\right)^2}} \\ \dfrac{-P}{k^2\sqrt{1-\left(\frac{P}{k}\right)^2}} & \dfrac{1}{\sqrt{1-\left(\frac{P}{k}\right)^2}} \end{bmatrix} \tag{26}$$

    Now, putting the value of the transformation matrix Z from Eq (26) into Eq (13) gives

    $$\begin{bmatrix} x^* \\ t^* \end{bmatrix} = \begin{bmatrix} \dfrac{1}{\sqrt{1-\left(\frac{P}{k}\right)^2}} & \dfrac{-P}{\sqrt{1-\left(\frac{P}{k}\right)^2}} \\ \dfrac{-P}{k^2\sqrt{1-\left(\frac{P}{k}\right)^2}} & \dfrac{1}{\sqrt{1-\left(\frac{P}{k}\right)^2}} \end{bmatrix} \begin{bmatrix} x \\ t \end{bmatrix} \tag{27}$$
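    As a consistency check (ours, not from the paper), one can verify symbolically that composing two transformations of the form in Eq (26) yields a transformation of the same family with the same k, in agreement with the invariance expressed by Eqs (24) and (25):

```python
# SymPy sketch (ours): composing two transformations of the form in Eq (26)
# preserves the invariant of Eq (25), 1/B^2 - 1/P^2 = -1/k^2.
import sympy as sp

P1, P2, k = sp.symbols('P1 P2 k', positive=True)

def Z(P):
    g = 1 / sp.sqrt(1 - (P / k)**2)   # common factor in Eq (26)
    return sp.Matrix([[g, -P * g],
                      [-P * g / k**2, g]])

Z3 = Z(P2) * Z(P1)                    # composed map, as in Eqs (21)-(23)
B3 = Z3[0, 1]                         # plays the role of B in Eq (20)
P3 = sp.simplify(-B3 / Z3[0, 0])      # composed speed, since the diagonal equals -B/P

assert sp.simplify(1 / B3**2 - 1 / P3**2 + 1 / k**2) == 0
```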

    Now, in Eqs (11) and (12), the transformation of the coordinates from retinotopic to perceptual space or vice versa takes the same form, which holds when the spatial dimensions x and x∗ point toward opposite directions; since reversing a spatial axis merely reverses the sign of P, we can make the axes unidirectional without changing the form of the transformation. We get the following:

    $$\begin{bmatrix} x^* \\ t^* \end{bmatrix} = \begin{bmatrix} \dfrac{1}{\sqrt{1-\left(\frac{P}{k}\right)^2}} & \dfrac{-P}{\sqrt{1-\left(\frac{P}{k}\right)^2}} \\ \dfrac{-P}{k^2\sqrt{1-\left(\frac{P}{k}\right)^2}} & \dfrac{1}{\sqrt{1-\left(\frac{P}{k}\right)^2}} \end{bmatrix} \begin{bmatrix} x \\ t \end{bmatrix}$$

    Hence, the relationship between the coordinates of the retinotopic and perceptual spaces is

    $$x^* = \frac{x - Pt}{\sqrt{1-\left(\frac{P}{k}\right)^2}} \qquad\text{and}\qquad t^* = \frac{t - \frac{Px}{k^2}}{\sqrt{1-\left(\frac{P}{k}\right)^2}} \tag{28}$$
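    For concreteness, the transformation of Eq (28) can be implemented directly. The following minimal sketch (ours, not the paper's code) maps retinotopic coordinates to perceptual coordinates; the units of P and k are left abstract and need only match:

```python
# Sketch (ours) of Eq (28): retinotopic (x, t) -> perceptual (x*, t*).
import math

def retinotopic_to_perceptual(x, t, P, k):
    """Apply Eq (28); requires |P| < k (P and k in the same units)."""
    if abs(P) >= k:
        raise ValueError("Eq (28) requires |P| < k")
    denom = math.sqrt(1.0 - (P / k)**2)
    x_star = (x - P * t) / denom
    t_star = (t - P * x / k**2) / denom
    return x_star, t_star

# For P = 0 the mapping reduces to the identity, as expected:
assert retinotopic_to_perceptual(1.0, 2.0, 0.0, 0.74) == (1.0, 2.0)
```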

    The moving object in the visual field (physical space) is projected onto the retina, and then the position of the moving object is mapped from the retinal surface in the retinotopic space. In the transformation equation, Eq (27), x and t are physical coordinates in the retinotopic space. However, the physical size of the visual field (where the object is moving in external space) differs from the physical size of the cortical tissues (where the retinotopic space is situated). Figure 3 illustrates the cortical magnification factor (CMF) [26], which defines the cortical tissue allotted (in mm) for one degree of the visual angle subtended at the eye. The CMF gives the ratio of the size of the neuronal tissue (z) activated due to the movement of an object in the visual field, subtending a particular visual angle (θ) [27]. The visual field projected around the fovea obtains more neural tissue than the peripheral regions, as shown by decreasing values of the CMF in the peripheral region of the retina [28].

    Figure 3.  Cortical magnification factor denotes the cortical tissue involved in the retinotopic representation of the given size of the visual field. The cortical magnification factor is the ratio of the size of the cortical tissue (z) to the visual angle subtended on the eye (θ).

    In contrast, x∗ represents the perceived extent of the position of the external event in the perceptual space. To make x∗ compatible with the physical size of the moving object, it is necessary to remap x∗ so that its value can be compared with the physical position of the moving object. Therefore, applying the CMF in reverse modifies x∗ to become compatible with the physical scale of the visual field. Suppose that γ represents the CMF function (which quantitatively describes the mapping of the position of the moving object from the visual field to the retinotopic map) and w is the physical position of the moving object. Then, the relationship between w, x, x∗ and η is given by Eq (29), where η represents the perceived position of the moving object. The inverse function γ−1 represents the mechanism by which the brain makes neural activation compatible with the physical extent of external events.

    $$x = \gamma(w) \qquad\text{and}\qquad \eta = \gamma^{-1}(x^*) \tag{29}$$

    In Eq (29), γ and γ−1 are mathematical functions; the term inside the parentheses is the input to the function, and γ−1 is the inverse of the function γ. Our formulation of the perception of a moving object is shown in Figure 4, where the transformation equations of Eq (28) were derived in the previous section.
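    The full pipeline of Figure 4 can also be sketched in code (ours, not the paper's). The constant CMF used below is a simplifying assumption: the actual CMF varies with eccentricity, and the paper uses experiment-specific average values.

```python
# Sketch (ours) of the Figure 4 pipeline: w -> gamma -> Eq (28) -> gamma^{-1} -> eta.
import math

GAMMA = 9.55  # assumed average CMF (mm of cortex per degree of visual field)

def cmf(w_deg):
    """gamma: position in the visual field (deg) -> retinotopic position x (mm)."""
    return GAMMA * w_deg

def cmf_inv(x_star):
    """gamma^{-1}: perceptual-space position x* -> perceived position eta (deg)."""
    return x_star / GAMMA

def perceive_position(w_deg, t, P, k):
    """Perceived position eta of a moving object, following Eq (29) and Figure 4."""
    x = cmf(w_deg)                                     # visual field -> retinotopic map
    x_star = (x - P * t) / math.sqrt(1 - (P / k)**2)   # Eq (28), spatial part
    return cmf_inv(x_star)                             # back to the physical scale
```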

    Apart from this, in the transformation equations of Eq (28), P is the rate of change of the position of the moving object in the retinotopic space (i.e., speed). At the same time, k (fidelity parameter) transpires to be a constant. As highlighted by Eqs (24) and (25), the fidelity parameter is constant, irrespective of the frames of reference. Because neural signals are responsible for transferring information between the retinotopic space and perceptual space, the fidelity parameter might be related to the neuronal signals.

    Figure 4.  Our formulation of the perception of a moving object: The moving object is projected onto the retinotopic map through the retinal surface, and follows the cortical magnification factor in terms of mapping from the visual field to the retinotopic map. Then, the transformation equations relate the spatiotemporal coordinates of the moving object in the retinotopic and perceptual spaces. After that, applying the inverse cortical magnification factor provides the coordinates of the perceived moving object at the scale of physical space.

    A luminous arc was formed on a wheel having a diameter of 61 cm. The length of the arc and the radial distance of the arc from the center were 13 cm and 20.7 cm, respectively. A flat black box with a horizontal line facing the subject was situated ahead of the center of the wheel. The length of this horizontal line could be varied through a rolling shutter as desired by the subject. The presence of the black box did not affect the visibility of the path of the rotating arc. The distance between the subject and the rotating wheel varied between 2 and 4 feet. Speeds of 0, 0.5, 0.7, 1 and 1.3 revolutions per second were used to rotate the wheel. The speed could not be increased beyond 1.3 revolutions per second, as the subjects could then no longer follow the arc and instead saw a complete circle. A total of 12 subjects participated in the experiment. Two groups were formed from the subjects; for one group, the speed varied from lowest to highest, and for the other, vice versa. Subjects varied the length of the stationary line to match it with the length of the moving arc [29]. Figure 5 illustrates the experimental setup.

    Figure 5.  Experimental setup for measuring the perceived length of a moving arc at different speeds.

    During the experiment, subjects needed to fixate on the middle of the horizontal line, which coincides with the center of the circle traced by the moving arc, as well as to judge the length of the moving arc at the different rotational speeds of the wheel. Subjects varied the length of the stationary line to make it subjectively equal to the length of the moving arc.

    Matching and reproduction methods were used to measure the perceived time period for eight subjects. Participants fixated at 6.6° above the Gabor patch while a chin rest restrained any head movement. Vertical Gabor patches with a 6° radius displayed on the cathode-ray tube (CRT) monitor acted as stimuli and were placed at 57 cm from the participants. Sinusoidal luminance modulation drifting left or right of the stationary Gaussian contrast envelope in the vertical Gabor patch was used as moving stimuli. In contrast, a vertical Gabor patch without any luminance modulation was used as stationary stimuli.

    Two methods were used to measure the perceived time while perceiving moving stimuli [30]:

    (i) In the matching method, the moving stimulus was displayed for a fixed time duration, followed by a stationary stimulus whose duration was varied so that the subjects assessed it to be subjectively equal to the moving stimulus' duration.

    (ii) In the reproduction method, the moving stimulus was displayed on the screen for some period; after which time, the subjects reproduced the perceived duration by pressing a switch.

    Diffusion-weighted images were acquired on a 3T Philips Achieva scanner at the National Brain Research Centre, Manesar, India, by using the HARDI scheme (128 directions; b-value: 2000 s/mm²). The human MRI scanning procedure was approved by the Institutional Human Ethics Committee of the National Brain Research Centre, and informed consent was taken. The in-plane resolution and slice thickness were both 2 mm. FSL eddy was used to correct for eddy current distortion through the integrated interface in DSI Studio ("Chen" release). The diffusion MRI data were rotated to align with the AC-PC line. After motion correction, deterministic fiber tracking was performed using DSI Studio with the following tracking parameters: fractional anisotropy threshold: 0.04162; angular threshold: 75 degrees; step size: 0.1 mm; total seeds: 1,000,000. We performed this analysis pipeline for one normal subject (gender: male; age: 24 years).

    Scans were acquired on a 7T Siemens MAGNETOM machine at Maastricht University, Netherlands. Approval was given by the Ethics Committee of the Faculty for Psychology and Neuroscience at Maastricht University, and informed consent was obtained. Diffusion-weighted MRI images were scanned using a multi-band diffusion-weighted spin-echo EPI protocol with the following parameters: b-values = 1000, 2000 and 3000 s/mm²; field of view (FOV) = 200 × 200 mm with partial Fourier 6/8; 132 slices; 1.05 mm isotropic voxel size; repetition time (TR) = 7080 ms; echo time (TE) = 75.6 ms; 66 directions; and 11 additional b = 0 volumes for every b-value [31]. The susceptibility artifact was estimated by using reversed phase-encoding b0 by TOPUP from the Tiny FSL package (http://github.com/frankyeh/TinyFSL), a re-compiled version of FSL TOPUP (FMRIB, Oxford) with multi-thread support. FSL eddy was used to correct for eddy current distortion. After preprocessing the MRI image, we used DSI Studio software (http://dsi-studio.labsolver.org) to enable deterministic tractography using the diffusion tensor imaging technique [32]. We used the Brainnetome Atlas to locate the regions of interest [33]. The tracking parameters were as follows: fractional anisotropy threshold of 0.06, angular threshold of 65 degrees, step size of 0.1 mm and 500,000 seeds. We performed this analysis pipeline for one normal subject (gender: female; age: 27 years).

    The transformation equations of Eq (28) show that the spatiotemporal coordinates of the moving object in the perceptual space can be different from the spatiotemporal coordinates in the retinotopic space, which thus indicates that the perception of a moving object may deviate from the physical reality. Now, we will analyze two experiments by using our formulation of the perception of the moving stimulus (Figure 4). Then, we will theoretically predict the experimental outcomes and compare them with the actual results to validate our theoretical formulation. We will now investigate two such experiments.

    In this experiment, the perceived length of the moving arc was measured by Ansbacher, using the experimental setup explained in Section 3.1 [29]. Figure 6 shows the experimental measurement of the variation in the perceived length as the rotational speed of the moving arc was varied from 0 to 1.3 revolutions per second. Reduction in the perceived length indicates the modulation of visual perception due to a change in the speed of the arc.

    Figure 6.  Alteration in the perceived length of the moving arc at different rotational speeds of the arc.

    Next, we obtained the relationship between the length of the moving arc in the retinotopic space and the perceptual space. The length of the moving arc in the retinotopic space (L) can be described in terms of x1 and x2, which are, respectively, the spatial coordinates of the start and end points of the moving arc in the retinotopic space at a particular moment (t = T0). Thus:

    $$L = x_2 - x_1 \tag{30a}$$

    Then, applying the transformation equations of Eq (28), with the two end points compared at the same perceptual moment (t∗ = T0∗) via the inverse transformation, the length of the moving arc in the perceptual space (L∗) becomes

    $$L = x_2 - x_1 = \frac{x_2^* + PT_0^*}{\sqrt{1-\left(\frac{P}{k}\right)^2}} - \frac{x_1^* + PT_0^*}{\sqrt{1-\left(\frac{P}{k}\right)^2}} = \frac{x_2^* - x_1^*}{\sqrt{1-\left(\frac{P}{k}\right)^2}}$$
    $$L^* = L\sqrt{1-\left(\frac{P}{k}\right)^2} \tag{30b}$$

    In Eq (30b), P is the speed of the moving arc in the retinotopic space. We can find the length of the moving arc in the retinotopic space (L, using the physical length of the arc) and the length of the moving arc in the perceptual space (L*, using the experimental observation shown in Figure 6), as well as the speed (P), by applying the CMF. However, the CMF is not a constant and it decreases for the peripheral visual field. Therefore, we calculated the CMF for the moving arc (physical length: 0.13 m) based on experimental observations from different studies [5,34,35,36,37,38]. Because the CMF varies as the angular distance from the fixation point increases, we can find the average value of the tissue allocated per degree of the visual field used by the observer to make a judgment.

    In this experiment, the circular path followed by the moving arc subtends an angle of 23.4° on the retina. Therefore, the angular distance from the fovea is 11.7°. The calculated CMF (γ) is 9.55 mm of neural tissue per degree of the visual field, represented as γ in Figure 4. Although the arc followed a circular path while the subjects made length judgments, the circle was mapped to a line in the retinotopic space due to logarithmic mapping [3]. The rotational velocity of the moving arc was converted into linear velocity, followed by the application of γ to find the value of P (i.e., the speed of the arc in the retinotopic space).
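    For illustration, this conversion can be reconstructed as follows (our sketch, not the paper's code); the viewing distance of about 1 m is an assumption implied by the reported 23.4° subtense and lies within the reported 2-4 ft range:

```python
# Rough reconstruction (ours) of the conversion from wheel rotation rate to the
# speed P of the arc in the retinotopic space.
import math

GAMMA = 9.55          # calculated CMF for this experiment (mm per degree)
RADIUS_M = 0.207      # radial distance of the arc from the wheel center (m)
VIEW_DIST_M = 1.0     # assumed viewing distance (m), implied by the 23.4 deg subtense

def retinotopic_speed(rev_per_s):
    """Wheel rotation rate (rev/s) -> arc speed in the retinotopic map (mm/s)."""
    linear_speed = 2 * math.pi * RADIUS_M * rev_per_s      # m/s along the circular path
    deg_per_s = math.degrees(linear_speed / VIEW_DIST_M)   # small-angle approximation
    return GAMMA * deg_per_s                               # mm of cortex per second

for f in (0.5, 0.7, 1.0, 1.3):
    print(f"{f} rev/s -> P = {retinotopic_speed(f):.0f} mm/s")
```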

    Since we have experimental observations of L, L∗ and P, we can find k (the fidelity parameter) by putting these values into Eq (30b). As per our earlier analysis (Eqs (24) and (25)), the value of k should be constant regardless of the speed of the moving arc.
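    Solving Eq (30b) for the fidelity parameter gives k = P/√(1 − (L∗/L)²); a small sketch (ours):

```python
# Fidelity parameter from Eq (30b): L* = L * sqrt(1 - (P/k)^2)  =>
# k = P / sqrt(1 - (L*/L)^2). P and k come out in the same units.
import math

def fidelity_parameter(L, L_star, P):
    """k from retinotopic length L, perceived length L* and retinotopic speed P."""
    ratio = L_star / L
    if not 0.0 < ratio < 1.0:
        raise ValueError("Eq (30b) requires 0 < L*/L < 1 for a real k")
    return P / math.sqrt(1.0 - ratio**2)
```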

    We calculated the fidelity parameter (k) for different rotational speeds of the moving arc; the results are shown in Figure 7. As evident in Figure 7, the value of the fidelity parameter (k) is almost constant, irrespective of the speed of the moving arc in the retinotopic space. From Eq (30b), we find that, as the value of P approaches the value of the fidelity parameter (k), the subject's underestimation of the arc's length (L − L∗) increases. From Figure 7, we see that the fidelity parameter (k) has a very small coefficient of variation (~5%), with an average value of 0.74, and it can be taken as a constant. The chi-square (χ²) goodness-of-fit test is also satisfied (p > 0.99). The constant value of the fidelity parameter across different observation conditions supports our mathematical prediction. This constancy of the fidelity parameter ensures full faithfulness and correspondence between the different representations of the moving object in the different spaces (i.e., the perceptual space, cerebral retinotopic space or cerebellar retinotopic space).

    Figure 7.  Constancy of the fidelity parameter, k, while the rotational speed of the moving arc changes. This constancy validates the theoretical prediction of our mathematical model. (Statistical goodness-of-fit test satisfied, p > 0.99).

    Over the years, various researchers have consistently observed temporal overestimation when the external stimulus moves relative to the stationary stimulus [39,40,41]. Now, we will predict the observations of a similar experiment by using our model and compare it with the actual results to validate our model. Kaneko and Murakami performed a similar experiment that measured the perceived time period during stationary and moving stimuli. The perceived time was measured by using two experimental procedures. In the first procedure, the matching method was used such that the duration of the stationary stimuli varied until it was perceived as equal to the duration of the moving stimulus. The second procedure incorporated the reproduction method in which the subjects reproduced the perceived duration of the moving stimulus by pressing the switch [30]. The ratio of perceived time and actual time was used to quantify the perceptual changes in the subjects, termed as the ratio of overestimation.

    Using the transformation equations (Eq (28)), we can derive Eq (31) below, which provides the transformation of a time interval from retinotopic time (Δt) to perceptual time (Δt∗) at a particular spatial coordinate (x = X0). Thus:

    $$\Delta t^* = t_1^* - t_2^* = \frac{t_1 - \frac{PX_0}{k^2}}{\sqrt{1-\left(\frac{P}{k}\right)^2}} - \frac{t_2 - \frac{PX_0}{k^2}}{\sqrt{1-\left(\frac{P}{k}\right)^2}}$$

    $$\text{i.e.,}\quad \Delta t^* = \frac{t_1 - t_2}{\sqrt{1-\left(\frac{P}{k}\right)^2}}$$

    $$\text{or}\quad \Delta t^* = \frac{\Delta t}{\sqrt{1-\left(\frac{P}{k}\right)^2}} \tag{31}$$

    Now, to find the speed of the moving stimulus in the retinotopic space, we calculated the CMF for the current experiment. In this experiment, the subjects fixated their eyes 6.6° above the center of the Gabor patch, whose diameter was 12°. Therefore, the moving stimulus extended up to 12.6° of eccentricity in the visual field. Using the same procedure as that described in Section 4.1.1, we obtained an average CMF of 9.51 mm of neural tissue per degree of the visual field.

    Figure 8.  Experimental observations of the reproduction method (second experiment), validating the theoretical prediction of our mathematical model. (Statistical goodness-of-fit test satisfied, p > 0.99).
    Figure 9.  Experimental observations of the matching method (first experiment), validating the theoretical prediction of our mathematical model. (Statistical goodness-of-fit test satisfied, p > 0.99).

    We used the value of k=0.74, as obtained after analyzing the perception of the moving arc experiment in the previous subsection (Section 4.1.1). After that, we calculated the perceived time interval at different speeds by using Eq (31) and plotted these data along with the corresponding experimental measurements for comparison, as shown in Figures 8 and 9. Then, we performed the chi-square (χ2) goodness-of-fit test to test the congruence of our mathematical prediction with the actual observations. We obtained p>0.99 for both experiments, which indicates that our mathematical model is well corroborated.
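    The prediction step itself is a one-line application of Eq (31); the sketch below (ours) uses arbitrary illustrative speeds below k:

```python
# Predicted ratio of perceived to actual duration from Eq (31), with the
# fidelity parameter k = 0.74 taken from Section 4.1.1.
import math

K = 0.74

def overestimation_ratio(P, k=K):
    """delta_t* / delta_t from Eq (31); P in the same units as k, |P| < k."""
    return 1.0 / math.sqrt(1.0 - (P / k)**2)

for P in (0.1, 0.3, 0.5, 0.7):   # illustrative speeds below k
    print(f"P = {P:.1f}: ratio = {overestimation_ratio(P):.3f}")
```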

    In the previous subsection (Section 4.1), we validated our model based on empirical evidence and showed that our mathematical formulation reliably describes the ongoing perception phenomenon. In this section, we will examine the anatomical region in the brain that implements the coordinate transformation equations (Eq (28)).

    In the transformation equations (Eq (28)):

    (a) The position in perceptual space (x∗) is a function of the spatial (x) and temporal (t) coordinates in the retinotopic space.

    (b) The time in perceptual space (t∗) is a function of the spatial (x) and temporal (t) coordinates in the retinotopic space.

    This indicates that the position in the perceptual space (x∗) depends partly on the time (t) in the retinotopic space. Similarly, the time in the perceptual space (t∗) depends partly on the position (x) in the retinotopic space. These observations are counterintuitive because, when an object is not moving (P = 0), the visual-spatial perception of the object depends only on the spatial position, and the temporal perception depends only on the time information. Nevertheless, the motion of the object causes two different information streams (spatial and temporal) along the motion perception pathways to interact with each other. In other words, temporal information (t) also takes part in the perception of spatial information (in addition to the spatial information (x)) due to this interaction process. Similarly, spatial information (x) also takes part in the perception of time (in addition to the temporal information (t)). This interaction process modulates the perception of spatial position and time, which was also observed in the experiments discussed in the previous subsection (Section 4.1).

    Now, we come to the interaction of the space-time coordinates during motion. Suppose that a pendulum is moving in the fronto-parallel plane of an observer. The observer perceives the oscillating movement of the pendulum in a plane with their naked eyes. However, when the same observer views the scene after placing a neutral density filter in front of one eye, they perceive the pendulum's movement as an elliptical orbit, making the pendulum appear to move close (rightward swing of the pendulum) and then far from them (leftward swing of the pendulum). This phenomenon is known as the Pulfrich illusion, named after its discoverer Carl Pulfrich, who was, ironically, blind in one eye [42]. A neutral density filter introduces a time delay in processing the retinal image [43,44], and, during perception, this time delay affects spatial perception. The magnitude of the perceived depth depends on the change in the pendulum's position with time (speed) [45]. In contrast to the perception of a moving object, the interaction between the spatial and time dimensions is directly observable experimentally in the Pulfrich phenomenon [46].

    Now, we come to the neuroanatomical correlate of the interaction between the spatial and temporal information during the perception of the moving object. Several experimental studies [47,48,49] have indicated that the middle temporal visual area (V5) is active during the perception of moving stimuli. Similarly, during the Pulfrich illusion, the middle temporal visual area (V5) is active, as has been observed experimentally [50]. Hence, we can conclude that, during the process of visual motion perception, which involves interaction between time and spatial information, the middle temporal visual area (V5) is the anatomical locus of interaction.

    Although we can conclude by analyzing earlier studies that area V5 of the visual brain takes part in the space-time interaction, perception is a complex process and it involves several cortical areas. Therefore, we investigated the anatomical connectivity between the area V5 and brain regions that are active during visual-spatial and temporal perception by analyzing the diffusion MRI scans to find out the neural tracts. We analyzed earlier literature findings [29,51,52,53,54,55,56,57,58,59,60,61,62] that delineated the brain regions responsible for (i) time perception and (ii) perception of a position of an object located in the visual field (spatial aspect). Based on the literature analysis, we found the differential brain areas that are activated during time perception and visual spatial perception, as mentioned in Table 1.

    Table 1.  Brain regions active during the perception of time and the spatial location of an object.

    Perception of time                             Perception of the spatial location of an object
    --------------------------------------------   ------------------------------------------------
    Prefrontal cortex (Brodmann area 45)           Posterior parietal cortex (Brodmann area 7)
    Premotor cortex (Brodmann areas 4 & 6)         Intraparietal sulcus
    Inferior parietal cortex (Brodmann area 40)    Superior parietal lobule
    Putamen (basal ganglia)


    Then, we tracked the neural tracts in each hemisphere between the following brain regions:

    (i) brain regions active during time perception and area V5.

    (ii) brain regions active during visual-based spatial perception and area V5.

    (iii) V5 of the left and right hemispheres.

    We performed a tractography experiment for two subjects; the results are shown in Figures 10 and 11. The tractography results suggest that area V5 has anatomical connectivity to brain regions along the time and spatial position information processing streams and can act as a conjoining point or interaction center for these two streams. We also performed a centrality analysis of the network consisting of the brain regions in Table 1 as nodes; we found that area V5 has the highest centrality (please see Tables S1 and S2 in the Supplementary material Section S2).

    Figure 10.  Pathways for spatiotemporal interaction (Subject 1, MRI 3 T scanner). Upper Row: Tracts between the middle temporal visual area (V5) and brain regions active during time perception. Middle Row: Tracts between the middle temporal visual area (V5) and brain regions active during the spatial location of an object. Lower Row: Tracts between the left and right middle temporal visual areas (V5). (The brain regions are listed in Table 1.).
    Figure 11.  Pathways for spatiotemporal interaction (Subject 2, MRI 7 T scanner): Upper Row: Tracts between the middle temporal visual area (V5) and brain regions active during time perception. Middle Row: Tracts between the middle temporal visual area (V5) and brain regions active during the spatial location of an object. Lower Row: Tracts between the left and right middle temporal visual areas (V5). (The brain regions are listed in Table 1.).

    Different experimental studies have pointed out the role of different neurotransmitters during the time and visuospatial perception. While an object moves in the visual field, the perception of time is modulated by dopamine levels [63,64]. Similarly, acetylcholine modulates the spatial perception of the moving object [65]. Therefore, the corresponding biochemical mechanism (of interaction between visual-spatial and temporal information) should be the modulatory effects of acetylcholine and dopamine levels on each other.

    In brain tissue, acetylcholine and dopamine release can affect each other's concentration due to mutual neuromodulation at the synaptic cleft. Here, we show that a similar mechanism will occur in area V5 of the visual cortex. Muscarinic acetylcholine receptors can regulate dopamine release: in the case of low-frequency stimulation (1 to 10 Hz), acetylcholine suppresses dopamine release, whereas for high-frequency stimulation (>25 Hz), the dopamine release probability increases [66,67]. Similarly, dopamine can promote the release of acetylcholine through D1 receptors while suppressing acetylcholine release through D2 receptors [68,69,70,71]. It is also known that the density of dopamine D1 receptors is significantly higher than that of D2 receptors in the visual cortex [72,73]. Therefore, in area V5, dopamine promotes acetylcholine release, and acetylcholine, through muscarinic receptors, suppresses dopamine release, because the activity there is usually of low frequency.

    In other words, the dopamine-acetylcholine interaction will cause the dopamine and acetylcholine concentrations to change with time. We can model this interaction with the Lotka-Volterra system, as developed for chemical reaction dynamics.

    Thereby, we can formulate that

    $$\frac{dA}{dt} = mA - \mu AD \tag{32}$$
    $$\frac{dD}{dt} = -mD + \beta AD \tag{33}$$

    where

    A = Instantaneous concentration of acetylcholine at the synaptic cleft (mmol);

    D = Instantaneous concentration of dopamine at the synaptic cleft (mmol);

    µ = Density of dopamine D1 receptor on dendrites (µmol/m2);

    β = Density of acetylcholine muscarinic receptor on dendrites (µmol/m2);

    m = Interaction parameter (per millisecond) (0 < m ≤ 1).

    We used the Runge-Kutta 4th-order method to find the numerical solution of Eqs (32) and (33). We calculated the density of the dopamine D1 (µ) and acetylcholine muscarinic receptors (β) in the visual cortex by using experimental observations from another study [74]. We obtained µ = 0.0381 µmol/m2 and β = 0.0996 µmol/m2. Using these values, and keeping the interaction parameter (m) equal to 1, we calculated the dopamine and acetylcholine concentration dynamics, as shown in Figure 12.
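    A minimal reconstruction of this integration is sketched below (ours); the initial concentrations are assumptions, since they are not listed here in the paper:

```python
# RK4 integration (ours) of Eqs (32)-(33) with mu = 0.0381, beta = 0.0996, m = 1.
def derivs(A, D, m=1.0, mu=0.0381, beta=0.0996):
    dA = m * A - mu * A * D        # Eq (32): acetylcholine dynamics
    dD = -m * D + beta * A * D     # Eq (33): dopamine dynamics
    return dA, dD

def rk4_step(A, D, h):
    k1A, k1D = derivs(A, D)
    k2A, k2D = derivs(A + 0.5 * h * k1A, D + 0.5 * h * k1D)
    k3A, k3D = derivs(A + 0.5 * h * k2A, D + 0.5 * h * k2D)
    k4A, k4D = derivs(A + h * k3A, D + h * k3D)
    A += h / 6.0 * (k1A + 2 * k2A + 2 * k3A + k4A)
    D += h / 6.0 * (k1D + 2 * k2D + 2 * k3D + k4D)
    return A, D

A, D = 20.0, 20.0                  # assumed initial concentrations (mmol)
h, steps = 0.001, 20000
trajectory = [(A, D)]
for _ in range(steps):
    A, D = rk4_step(A, D, h)
    trajectory.append((A, D))      # oscillatory concentrations, as in Figure 12
```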

    As evident from Figure 12, the resulting concentrations were oscillatory with an orthogonal phase difference (around 90°) between them. When we gradually increased the interaction parameter (m) from 0 to 1, the phase difference increased with the maximum value of 90° at m = 1 (Figure 13a). Thus, the interaction parameter affects the phase shift between oscillatory dopamine and acetylcholine concentrations, and we can formulate that the interaction parameter (m) signifies the interaction between the time and spatial information streams.

    The tilted spatiotemporal receptive field profile of complex neuronal cells in area V5 is tuned to a particular spatiotemporal frequency, resulting in the perception of an equivalent speed [75,76,77,78,79]. Therefore, a change in the perception of a moving object should result from the alterations in neuronal tuning properties in area V5. As the speed of the moving object varies, the interaction between the time and spatial information streams also varies due to changes in the interaction between dopamine and acetylcholine. Dopamine and acetylcholine interact with each other via receptors at dendrites (axon-dendrite synapse), which may modulate the spatiotemporal receptive field of complex cells and change the tuning speed of complex cells. Due to the changes in the tuning properties of the complex cells, the perception of the moving object will be modulated.

    Figure 12.  Temporal dynamics of acetylcholine concentration and dopamine concentration.
    Figure 13.  (a) Alteration of the phase shift between the oscillatory concentrations of dopamine and acetylcholine while the acetylcholine-dopamine interaction parameter varies. (b) Alteration of the oscillatory time period of acetylcholine (or dopamine) while the acetylcholine-dopamine interaction parameter varies.

    This dynamic mechanism induces a process by which spatial information and temporal information can interact and modulate the perception of a moving object (graphically illustrated in Figure 14). Thus, we can observe the significance of the axodendritic synapse for spatiotemporal interaction.

    Figure 14.  Biochemical basis of the interaction of the spatial information stream and the temporal information stream during the perception of a moving object; the interaction occurs at the axodendritic synapse.

    Considering that the spatial position of an object in the visual field is constant, based on their perception, an observer can predict that the object is static and for how much time. Even without any change in the spatial position of the object, the time information stream is present in the brain for time perception. When multiple objects are changing positions in the visual field at different rates, an observer can perceive that different objects are changing their positions differently. Therefore, we can infer that time and spatial information streams are represented separately and independently in the brain. However, these two streams interact to link time and change in spatial position during the visual spatiotemporal perception of a moving object. As already mentioned, this integration happens in area V5 of the visual cortex.

    Since temporal information and spatial information are independent information streams in the brain, in the vectorial representation, they should be orthogonal to each other. Figure 15(a) is the pictorial representation of the spatial vector (X) and time vector (T); because of the orthogonality, the magnitude of the resultant vector (S) follows the Pythagorean theorem.

    $$S = X + T \tag{34a}$$

    Thus,

    $$|S|^2 = |X|^2 + |T|^2 \tag{34b}$$
    Figure 15.  (a) Vectorial representation of the interaction between the spatial information stream (X) and the time information stream (T). (b) Interaction between the orthogonal components of the time information stream and spatial information stream.

    S is mathematically equivalent to displacement or length; therefore, the magnitude of S should be the same, irrespective of the frame of reference (either retinotopic space or perceptual space). Applying the constraint that |S|2 will be equal in the retinotopic space and perceptual space, we found the following (please see Supplementary material Section S3 for derivation):

    $$|S|^2 = x^2 - k^2 t^2 \tag{35a}$$
    $$\text{Hence,}\quad X = x \qquad\text{and}\qquad T = jkt \tag{35b}$$

    where $j = \sqrt{-1}$ is the imaginary unit.
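    The bookkeeping of Eqs (34b)-(35) can be checked symbolically; a tiny SymPy sketch (ours), treating the squares as formal squares rather than moduli:

```python
# SymPy check (ours): with X = x and T = j*k*t (Eq (35b)), the formal square
# X^2 + T^2 of Eq (34b) reproduces the invariant x^2 - k^2*t^2 of Eq (35a).
import sympy as sp

x, t, k = sp.symbols('x t k', real=True)
X = x
T = sp.I * k * t
assert sp.expand(X**2 + T**2) == x**2 - k**2 * t**2
```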

    If we analyze the above equations, we can obtain insight into what happens during the perception of a moving object. X and T represent the spatial and temporal information streams, while S represents the resultant of their interaction. We can now recollect that dopamine and acetylcholine modulate the perception of time and spatial location, respectively. The 90° orthogonal phase shift between the oscillatory dopamine and acetylcholine concentrations (shown in Figure 12) is mathematically expressed as j (= √−1) in Eq (35a), (35b), showing the interaction of time perception and spatial perception. Moreover, in Figure 13(a), the phase shift varies with changes in the interaction parameter (m). This change in phase shift results in a variation in the interaction level because only the orthogonal components of the spatial vector and time vector interact, as shown in Figure 15(b). As the value of the phase shift approaches 90°, the magnitude of the interacting component of the spatial information stream increases; in consequence, the magnitude of the interaction vector S increases too. Hence, the perception of the moving object will vary as the interaction parameter varies. Therefore, the value of the interaction parameter is proportional to the speed of the moving object (from the earlier analysis, we know that the perception of the moving object varies with the speed of the moving object).

    The environment, viewed from any vantage point, is not a static system but a dynamic one. This dynamic nature is observable along the temporal dimension as different events occur in the spatiotemporal arena. Causality defines the framework for assessing the causal or generative relationship between two events. Deducing the cause-effect relationship between events is an innate feature of human cognition and an essential cognitive ability; indeed, causal understanding is one of the fundamental differences between human and nonhuman brains [80,81]. In this paper, we formulated a theoretical framework for perceiving a moving object under the condition of an invariant representation of temporal causality in the retinotopic space and perceptual space. For this, we represented the change in the position of a moving object with time as spatiotemporal coordinates in the retinotopic space and perceptual space. Thus, we could derive transformation equations that explain the transformation of the spatiotemporal coordinates of the moving object from retinotopic space to perceptual space.

    In the transformation equations (Eq (28)), P (speed) quantifies the dynamic nature of the position of the moving object, and the fidelity parameter (k) delineates the possible role of the anatomical characteristics of the brain during perception. The transformation equations predicted that the perception of a moving object would vary with speed, which was later observed in the moving arc and time perception experiments. Equation (25) predicted that the numerical value of the fidelity parameter k would be constant, which was subsequently verified by analyzing the experimental findings of the moving arc experiment using our approach. We calculated the value of the fidelity parameter and showed it to be robustly constant (k = 0.74), as predicted by our theoretical model. The fidelity parameter represents the conformity and correspondence between the different representations of the moving object in different spaces, such as the perceptual space, cerebral retinotopic space and cerebellar retinotopic space. We investigated another experimental study (which measured the perceived time) to validate our mathematical model further. Using the transformation equations and k = 0.74 (the value obtained after analyzing the moving arc experiment), we predicted the perceived time, which satisfactorily followed the experimental outcomes (goodness-of-fit test, p > 0.99). Thus, we verified the transformation equations based on the empirical analysis. Our results indicate that the conservation of causality between the retinotopic and perceptual spaces shapes the observer's perception of a moving object. These findings provide a new dimension to understanding perception through an innovative multi-scale mathematical formulation.

    The transformation equations show that the position of the object in the perceptual space (x∗) depends on both the position (x) and time (t) in the retinotopic space. Similarly, the time in the perceptual space (t∗) depends on both the position (x) and time (t) in the retinotopic space. Thus, we could conclude that, in the perceptual space, the time (t) and position (x) information interact during visual-spatial and temporal perception. During the perception of a moving object, however, this interaction between the temporal and spatial information is not explicitly observable. It is explicitly observable during the perception of a moving pendulum with and without a neutral density filter in front of one eye (the Pulfrich phenomenon). The neutral density filter reduces the luminance and introduces a time delay in processing the retinal image [43,44], and the pendulum is perceived to move in an elliptical path.

    As per our proposed model, the change in temporal information affects the interaction between the spatial and temporal information, which modulates the perception of the position of the moving pendulum. During both the perception of a moving object and the Pulfrich illusion, the same visual area V5 is active, which supports the view that both phenomena involve an interaction between spatial and temporal information. Furthermore, we verified that area V5 is where the visual-spatial information and temporal information interact by performing MRI tractography investigations. We found neural tracts between area V5 and the relevant brain regions, thus linking the areas of visual-spatial perception and temporal perception. The centrality analysis of the network (considering the brain regions as nodes and the neural tracts as the connections between them) shows that area V5 is the most central node.

    Neurons in area V5 are tuned to particular speeds of a moving object [79,82]. We elucidated that the interaction between spatial and temporal information should occur at visual area V5, and we devised a mathematical model based on the Lotka-Volterra system to quantify the interaction between visual-spatial and temporal perception mediated by the acetylcholine and dopamine neurotransmitters in area V5. In our model, the interaction parameter (m) denotes the level of the interaction. For m = 1, we obtained oscillatory acetylcholine and dopamine concentrations with a phase difference of 90°. As the interaction parameter (m) increases, the phase difference decreases and the period of the oscillatory concentrations increases. Using concepts of vector algebra, we represented spatial and temporal perception as orthogonal vectors. Further mathematical analysis showed that the spatiotemporal perception of moving objects can be represented as a complex number (real part: spatial information; imaginary part: temporal information), where the 90° phase difference between the acetylcholine and dopamine concentrations corresponds to the imaginary unit j. We showed that the orthogonal components of spatial and temporal information interact; accordingly, as the interaction parameter (m) decreases, the phase difference increases toward 90° and the interaction between the spatial and temporal information is reduced.
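
    A minimal numerical sketch of this analysis follows. The paper's exact ODEs are not reproduced in this section, so the code assumes the classic Lotka-Volterra form with the interaction parameter m scaling the coupling terms; it recovers the ~90° phase difference at m = 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch: classic Lotka-Volterra form, with the interaction parameter
# m scaling the coupling between acetylcholine (A) and dopamine (D).
def ach_da(t, y, m):
    A, D = y
    dA = A - m * A * D   # ACh: intrinsic growth minus interaction loss
    dD = m * A * D - D   # DA: gain from the interaction minus intrinsic decay
    return [dA, dD]

sol = solve_ivp(ach_da, (0.0, 60.0), [1.2, 0.8], args=(1.0,),
                dense_output=True, max_step=0.01)
t = np.linspace(20.0, 60.0, 4000)        # sample after the initial transient
A, D = sol.sol(t)
A0, D0 = A - A.mean(), D - D.mean()

# Dominant oscillation period from the FFT of the ACh trace.
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
period = 1.0 / freqs[np.abs(np.fft.rfft(A0))[1:].argmax() + 1]

# Phase difference from the lag that maximizes the cross-correlation.
lag = (np.correlate(A0, D0, mode="full").argmax() - (t.size - 1)) * (t[1] - t[0])
print(f"|phase difference| ~ {abs(360.0 * lag / period):.0f} deg")  # ~90 at m = 1
# How the phase and period vary with m depends on the paper's modified
# coupling terms, which are not shown in this section.
```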

    Further, we delineated that the interaction between acetylcholine and dopamine (mathematically denoted by the interaction parameter) modifies the spatiotemporal receptive-field properties of the complex cells in area V5. As a result, a complex cell becomes tuned to a different speed. This change in the tuned speed is minimal and insignificant at low speeds but gradually increases and becomes significant as the speed of the moving object increases. Thus, we interpret the interaction parameter (m) as proportional to the speed of the moving object (P): the interaction parameter governs the acetylcholine-dopamine interaction, which manifests as a modulation of the perception of the moving object, similar to how the speed of the object modulates the perception.
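
    In symbols, with a hypothetical proportionality constant γ (the text asserts only the proportionality itself):

```latex
% \gamma is a hypothetical constant; the argument above asserts only m \propto P.
m = \gamma \, P, \qquad \gamma > 0 .
```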

    It is interesting to note that soldiers blinded by an injury to the primary visual cortex can perceive motion without perceiving properties such as the color and shape of the moving stimulus [83]. This suggests that, even though the transmission of visual information from the primary visual cortex to area V5 is impaired, area MT/V5 remains active and receives information from other brain regions while moving objects are perceived [84]. Hence, we can conclude that spatial and temporal information streams from brain regions other than the primary visual cortex may converge at area MT/V5. Furthermore, these information streams may carry information extracted from other sensory systems, so area MT/V5 may act as a spatiotemporal information interaction point for sensory systems apart from vision. Indeed, experimental studies of visual, auditory and tactile motion processing point in a similar direction [85,86,87]. Our results imply that the proposed approach to understanding the modulation of perception due to the dynamics of the causal states of an event may generalize to different sensory systems. In principle, cerebral area V5 processes information from other sensory systems too [85,86,87]; hence, our mathematical framework and analysis may be scalable to the more wide-ranging role of area V5, which may be useful for understanding general principles of brain function [88]. Furthermore, our model can be helpful when a human operator works in an environment of very fast-moving objects, since at high speed the perception of a moving object is modulated significantly and causes errors in judgment. A perceptual-error warning system based on our mathematical model could issue an alerting signal to a human operator, such as the pilot of a fighter jet or the driver of a speeding vehicle.
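
    A minimal sketch of such a warning system follows. The quadratic distortion law and the constants GAMMA and ERROR_THRESHOLD are hypothetical stand-ins, not the paper's calibrated transformation equations; only k = 0.74 is carried over from the earlier analysis.

```python
# Hypothetical sketch of the proposed perceptual-error warning system.
K_FIDELITY = 0.74       # fidelity value estimated from the moving-arc experiment
GAMMA = 1e-5            # hypothetical scaling of the speed-dependent distortion
ERROR_THRESHOLD = 0.15  # hypothetical tolerable fractional position error

def perceived_position_error(speed: float) -> float:
    """Fractional mismatch between perceived and physical position,
    growing with object speed as the discussion above describes."""
    return (1.0 - K_FIDELITY) * GAMMA * speed ** 2

def operator_alert(speed: float) -> str:
    err = perceived_position_error(speed)
    return "ALERT: perceptual error likely" if err > ERROR_THRESHOLD else "ok"

for v in (50.0, 150.0, 400.0):   # object speeds, e.g. in deg/s
    print(f"speed {v:>5.0f}: {operator_alert(v)}")   # only the fastest triggers
```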

    We now come to the translational aspects of our approach, such as its application in neuroscience clinics, along with the incorporation of more accurate experimentation. We focused on the normal, healthy brain while developing the mathematical framework above. However, the neurochemical basis of the perceptual phenomenon, as shown by the dopamine-acetylcholine interaction, implies that motion perception may change when the balance of these neurotransmitters deviates from the normal levels found in healthy individuals. Such disparities or abnormalities in neurotransmitter levels can arise from several factors, including underlying neurological or psychiatric disorders. Under these conditions, the perception of stimuli moving in the external environment can differ from that of the normal brain. For instance, patients with dopaminergic hyperactivation (as in schizophrenia) [89,90] or cholinergic hypoactivation (as in melancholic depression) experience altered perception of moving objects [91].

    Accordingly, the underlying impaired pathophysiological status of the brain could be detected and estimated by measuring the patient's perception of moving stimuli and comparing it with the mathematical prediction for the normal brain. The perception of moving stimuli in patients may deviate from that of healthy subjects because of relative changes in the patient's neurotransmitter levels. Thus, our model of perceptual alteration could act as a potential biomarker to identify the presence, and gauge the intensity, of a neurological condition; the method could be developed into an affordable visual optometric procedure for psychiatric diagnostics in neuroscience clinics. For precise diagnosis, a high level of experimental accuracy is desirable (for example, when estimating subjective spatial and temporal segments). Such accuracy can be achieved with higher-fidelity visual stimulation apparatus and measurement devices, using experimental platforms and paradigm-design facilities such as the E-Prime system [92] or the PsychoPy platform [93].
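
    Assuming reference [93] denotes the open-source PsychoPy package, a minimal moving-stimulus presentation might look as follows; the window size, speed and duration are placeholder values, not the paradigm used in the experiments above.

```python
from psychopy import core, visual  # PsychoPy: open-source stimulus platform

# Minimal sketch of a single moving-stimulus sweep.
win = visual.Window(size=(800, 600), units="pix", fullscr=False, color="black")
dot = visual.Circle(win, radius=8, fillColor="white", lineColor="white")

speed_px_per_s = 300.0            # hypothetical horizontal stimulus speed
clock = core.Clock()
while clock.getTime() < 2.0:      # 2 s sweep across the screen
    dot.pos = (-400 + speed_px_per_s * clock.getTime(), 0)
    dot.draw()
    win.flip()                    # one frame per refresh; timestamps could be logged here

win.close()
core.quit()
```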

    Our approach may be developed further to understand motion perception in clinical neuroscience scenarios. For potential applicability in medical settings, a mathematical framework needs to be developed that models the modulation and alteration of the perception of a moving object as neurotransmitter levels change due to a neurological disorder. Next, to validate the framework, experiments that simultaneously measure motion perception and neurotransmitter levels need to be devised and performed on healthy controls as well as patients. Behavioral experiments can readily be planned to measure the perception of moving stimuli, while radiological techniques such as positron emission tomography (PET) [94] and magnetic resonance spectroscopy (MRS) [95] can quantify neurotransmitter levels in individuals; the accuracy of the assessment can be improved with analysis modules such as Clinica (for PET) [96] or the FMRIB Software Library's FSL-MRS (for spectroscopy) [97]. This is an area we will pursue in a future investigation. In sum, the methodology of our study may have appreciable implications for the experimental side of clinical neuroscience.
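
    The proposed validation could be analyzed along these lines; the per-subject arrays below are placeholders, not measured data.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical sketch of the validation analysis. Perceptual deviation =
# |measured perception - model prediction for the normal brain|; the
# neurotransmitter values would come from PET or MRS quantification.
perceptual_deviation = np.array([0.02, 0.05, 0.04, 0.11, 0.14, 0.19])
dopamine_excess = np.array([0.1, 0.3, 0.2, 0.9, 1.1, 1.5])  # vs. normative mean

r, p = pearsonr(perceptual_deviation, dopamine_excess)
print(f"r = {r:.2f}, p = {p:.3f}")
# A strong correlation across controls and patients would support using the
# model-based perceptual deviation as a proxy for neurotransmitter imbalance.
```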

    Pratik Purohit is grateful for a studentship provided by the Indian Institute of Technology (BHU), Varanasi, India. Our deepest appreciation is extended to the National Brain Research Centre, Manesar, India for enabling the 3-T MRI scanning. Sincere thanks are also given to Maastricht University, Maastricht, the Netherlands for the 7-T MRI scanning. Prasun K. Roy is grateful to Shiv Nadar University for research facilitation; his mailing address is Dept. of Life Sciences, R-206, Shiv Nadar University, Dadri 201316, India.

    No conflicts of interest, financial or otherwise, are declared by the authors.



    [1] Z. Guo, K. Yu, N. Kumar, W. Wei, S. Mumtaz, M. Guizani, Deep distributed learning-based POI recommendation under mobile edge networks, IEEE Internet Things J., 10 (2022), 303–317. https://doi.org/10.1109/JIOT.2022.3202628
    [2] Y. Li, H. Ma, L. Wang, S. Mao, G. Wang, Optimized content caching and user association for edge computing in densely deployed heterogeneous networks, IEEE Trans. Mob. Comput., 21 (2020), 2130–2142. https://doi.org/10.1109/TMC.2020.3033563
    [3] Z. Guo, K. Yu, A. K. Bashir, D. Zhang, Y. D. Al-Otaibi, M. Guizani, Deep information fusion-driven POI scheduling for mobile social networks, IEEE Network, 36 (2022), 210–216. https://doi.org/10.1109/MNET.102.2100394
    [4] L. Yang, Y. Li, S. X. Yang, Y. Lu, T. Guo, K. Yu, Generative adversarial learning for intelligent trust management in 6G wireless networks, IEEE Network, 36 (2022), 134–140. https://doi.org/10.1109/MNET.003.2100672
    [5] Q. Zhang, K. Yu, Z. Guo, S. Garg, J. J. P. C. Rodrigues, M. M. Hassan, et al., Graph neural networks-driven traffic forecasting for connected internet of vehicles, IEEE Trans. Network Sci. Eng., 9 (2022), 3015–3027. https://doi.org/10.1109/TNSE.2021.3126830
    [6] L. Zhao, Z. Bi, A. Hawbani, K. Yu, Y. Zhang, M. Guizani, ELITE: An intelligent digital twin-based hierarchical routing scheme for softwarized vehicular networks, IEEE Trans. Mob. Comput., 2022 (2022). https://doi.org/10.1109/TMC.2022.3179254
    [7] Y. Zhu, W. Zheng, Observer-based control for cyber-physical systems with DoS attacks via a cyclic switching strategy, IEEE Trans. Autom. Control, 65 (2019), 3714–3721. https://doi.org/10.1109/TAC.2019.2953210
    [8] L. Zhao, Z. Yin, K. Yu, X. Tang, L. Xu, Z. Guo, et al., A fuzzy logic based intelligent multi-attribute routing scheme for two-layered SDVNs, IEEE Trans. Netw. Serv. Manage., 2022 (2022). https://doi.org/10.1109/TNSM.2022.3202741
    [9] L. Chen, Y. Zhu, C. K. Ahn, Adaptive neural network-based observer design for switched systems with quantized measurements, IEEE Trans. Neural Networks Learn. Syst., 2021 (2021), 1–14. https://doi.org/10.1109/TNNLS.2021.3131412
    [10] J. Zhang, Q. Yan, X. Zhu, K. Yu, Smart industrial IoT empowered crowd sensing for safety monitoring in coal mine, Digital Commun. Networks, 2022 (2022). https://doi.org/10.1016/j.dcan.2022.08.002
    [11] Z. Cai, X. Zheng, A private and efficient mechanism for data uploading in smart cyber-physical systems, IEEE Trans. Network Sci. Eng., 7 (2020), 766–775. https://doi.org/10.1109/TNSE.2018.2830307
    [12] Z. Guo, K. Yu, A. Jolfaei, F. Ding, N. Zhang, Fuz-Spam: Label smoothing-based fuzzy detection of spammers in internet of things, IEEE Trans. Fuzzy Syst., 30 (2022), 4543–4554. https://doi.org/10.1109/TFUZZ.2021.3130311
    [13] Z. Cai, X. Zheng, J. Yu, A differential-private framework for urban traffic flows estimation via taxi companies, IEEE Trans. Ind. Inf., 15 (2019), 6492–6499. https://doi.org/10.1109/TII.2019.2911697
    [14] Z. Zhou, Y. Li, J. Li, K. Yu, G. Kou, M. Wang, et al., GAN-Siamese network for cross-domain vehicle re-identification in intelligent transport systems, IEEE Trans. Network Sci. Eng., 2022 (2022), 1–12. https://doi.org/10.1109/TNSE.2022.3199919
    [15] Z. Zhou, Y. Su, J. Li, K. Yu, Q. J. Wu, Z. Fu, et al., Secret-to-image reversible transformation for generative steganography, IEEE Trans. Dependable Secure Comput., 2022 (2022), 1–17. https://doi.org/10.1109/TDSC.2022.3217661
    [16] S. Xia, Z. Yao, Y. Li, S. Mao, Online distributed offloading and computing resource management with energy harvesting for heterogeneous MEC-enabled IoT, IEEE Trans. Wireless Commun., 20 (2021), 6743–6757. https://doi.org/10.1109/TWC.2021.3076201
    [17] Z. Guo, Y. Shen, S. Wan, W. Shang, K. Yu, Hybrid intelligence-driven medical image recognition for remote patient diagnosis in internet of medical things, IEEE J. Biomed. Health Inf., 26 (2022), 5817–5828. https://doi.org/10.1109/JBHI.2021.3139541
    [18] Z. Cai, Q. Chen, Latency-and-coverage aware data aggregation scheduling for multihop battery-free wireless networks, IEEE Trans. Wireless Commun., 20 (2021), 1770–1784. https://doi.org/10.1109/TWC.2020.3036408
    [19] C. Chen, Z. Liao, Y. Ju, C. He, K. Yu, S. Wan, Hierarchical domain-based multi-controller deployment strategy in SDN-enabled space-air-ground integrated network, IEEE Trans. Aerosp. Electron. Syst., 58 (2022), 4864–4879. https://doi.org/10.1109/TAES.2022.3199191
    [20] H. Jafari, J. Poshtan, Fault detection and isolation based on fuzzy-integral fusion approach, IET Sci. Meas. Technol., 13 (2019), 296–302. https://doi.org/10.1049/iet-smt.2018.5005
    [21] Y. Lu, L. Yang, S. X. Yang, Q. Hua, A. K. Sangaiah, T. Guo, et al., An intelligent deterministic scheduling method for ultralow latency communication in edge enabled industrial internet of things, IEEE Trans. Ind. Inf., 19 (2022), 1756–1767. https://doi.org/10.1109/TII.2022.3186891
    [22] S. K. Gundewar, P. V. Kane, Condition monitoring and fault diagnosis of induction motor, J. Vib. Eng. Technol., 9 (2021), 643–674. https://doi.org/10.1007/s42417-020-00253-y
    [23] A. Choudhary, D. Goyal, S. L. Shimi, A. Akula, Condition monitoring and fault diagnosis of induction motors: A review, Arch. Comput. Methods Eng., 26 (2019), 1221–1238. https://doi.org/10.1007/s11831-018-9286-z
    [24] D. Fan, G. P. Jiang, Y. R. Song, Y. W. Li, G. Chen, Novel epidemic models on PSO-based networks, J. Theor. Biol., 477 (2019), 36–43. https://doi.org/10.1016/j.jtbi.2019.06.006
    [25] I. Dilmi, A. Bouguerra, A. Djrioui, L. Chrifi-Alaoui, Interval type-2 fuzzy logic-second order sliding mode based fault detection and active fault-tolerant control of brushless DC motor, J. Eur. Syst. Automatisés, 54 (2021), 475–485. https://doi.org/10.18280/jesa.540311
    [26] O. E. Hassan, M. Amer, A. K. Abdelsalam, B. W. Williams, Induction motor broken rotor bar fault detection techniques based on fault signature analysis – a review, IET Electr. Power Appl., 12 (2018), 895–907. https://doi.org/10.1049/iet-epa.2018.0054
    [27] H. Merabet, T. Bahi, K. Bedoud, D. Drici, A fuzzy logic based approach for the monitoring of open switch fault in a SVM voltage source inverter fed induction motor drive, J. Autom. Syst. Eng., 12 (2018), 48–66.
    [28] P. Kumar, A. S. Hati, Deep convolutional neural network based on adaptive gradient optimizer for fault detection in SCIM, ISA Trans., 111 (2021), 350–359. https://doi.org/10.1016/j.isatra.2020.10.052
    [29] D. K. Soother, J. Daudpoto, A brief review of condition monitoring techniques for the induction motor, Trans. Can. Soc. Mech. Eng., 43 (2019), 499–508. https://doi.org/10.1139/tcsme-2018-0234
    [30] A. Mehta, A. Choudhary, D. Goyal, B. S. Pabla, Infrared thermography based fault diagnosis and prognosis for rotating machines, J. Univ. Shanghai Sci. Technol., 23 (2021), 22–29. https://doi.org/10.1155/2021/9947300
    [31] S. Kavitha, N. S. Bhuvaneswari, R. Senthilkumar, N. R. Shanker, Magnetoresistance sensor-based rotor fault detection in induction motor using non-decimated wavelet and streaming data, Automatika, 63 (2022), 525–541. https://doi.org/10.1080/00051144.2022.2052533
    [32] A. Ebrahimi, H. Ahmad, R. Roshanfekr, Stator winding short circuit fault detection in three-phase induction motors using combination type-2 fuzzy logic and support vector machine classifier optimized by fractional-order chaotic particle swarm optimization algorithm, Comput. Intell. Electr. Eng., 12 (2021), 37–48.
    [33] A. Chouhan, P. Gangsar, R. Porwal, C. K. Mechefske, Artificial neural network–based fault diagnosis for induction motors under similar, interpolated and extrapolated operating conditions, Noise Vibr. Worldwide, 52 (2021), 323–333. https://doi.org/10.1177/09574565211030709
    [34] C. G. Dias, C. M. de Sousa, A neuro-fuzzy approach for locating broken rotor bars in induction motors at very low slip, J. Control Autom. Electr. Syst., 29 (2018), 489–499. https://doi.org/10.1007/s40313-018-0388-5
    [35] D. Bouneb, T. Bahi, H. Merabet, Vibration for detection and diagnosis of bearing faults using an adaptive neuro-fuzzy inference system, J. Electr. Syst., 14 (2018), 95–104.
    [36] Z. Zhu, Y. Lei, G. Qi, Y. Chai, N. Mazur, Y. An, et al., A review of the application of deep learning in intelligent fault diagnosis of rotating machinery, Measurement, 206 (2022), 112346. https://doi.org/10.1016/j.measurement.2022.112346
    [37] X. Huang, G. Qi, N. Mazur, Y. Chai, Deep residual networks-based intelligent fault diagnosis method of planetary gearboxes in cloud environments, Simul. Modell. Pract. Theory, 116 (2022), 102469. https://doi.org/10.1016/j.simpat.2021.102469
    [38] G. Qi, Z. Zhu, K. Erqinhu, Y. Chen, Y. Chai, J. Sun, Fault-diagnosis for reciprocating compressors using big data and machine learning, Simul. Modell. Pract. Theory, 80 (2018), 104–127. https://doi.org/10.1016/j.simpat.2017.10.005
    [39] X. Shen, G. Shi, H. Ren, W. Zhang, Biomimetic vision for zoom object detection based on improved vertical grid number YOLO algorithm, Front. Bioeng. Biotechnol., 10 (2022), 905583. https://doi.org/10.3389/fbioe.2022.905583
    [40] X. Zhang, T. Feng, Q. Niu, X. Deng, A novel swarm optimization algorithm based on a mixed-distribution model, Appl. Sci., 8 (2018), 632. https://doi.org/10.3390/app8040632
    [41] A. Glowacz, Thermographic fault diagnosis of shaft of BLDC motor, Sensors, 22 (2022), 8537. https://doi.org/10.3390/s22218537
    [42] A. Głowacz, W. Głowacz, Z. Głowacz, Recognition of armature current of DC generator depending on rotor speed using FFT, MSAF-1 and LDA, Ekspl. Niezawodność, 17 (2015), 64–69. https://doi.org/10.17531/ein.2015.1.9
    [43] A. Glowacz, Fault diagnostics of acoustic signals of loaded synchronous motor using SMOFS-25-EXPANDED and selected classifiers, Tehnički vjesnik, 23 (2016), 1365–1372. https://doi.org/10.17559/TV-20150328135652
    [44] O. AlShorman, F. Alkahatni, M. Masadeh, M. Irfan, A. Glowacz, F. Althobiani, et al., Sounds and acoustic emission-based early fault diagnosis of induction motor: A review study, Adv. Mech. Eng., 13 (2021). https://doi.org/10.1177/1687814021996915
    [45] A. Glowacz, R. Tadeusiewicz, S. Legutko, W. Caesarendra, M. Irfan, H. Liu, et al., Fault diagnosis of angle grinders and electric impact drills using acoustic signals, Appl. Acoust., 179 (2021), 108070. https://doi.org/10.1016/j.apacoust.2021.108070
    © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).