Research article

Interaction between spatial perception and temporal perception enables preservation of cause-effect relationship: Visual psychophysics and neuronal dynamics


  • Introduction 

    Visual perception of moving objects is integral to our day-to-day life, integrating visual spatial and temporal perception. Most research studies have focused on finding the brain regions activated during motion perception. However, an empirically validated general mathematical model is required to understand the modulation of motion perception. Here, we develop a mathematical formulation of the modulation of the perception of a moving object due to a change in speed, under the assumption of the invariance of causality.

    Methods 

    We formulated the perception of a moving object as a coordinate transformation from retinotopic space onto perceptual space and derived a quantitative relationship between the spatiotemporal coordinates. To validate our model, we analyzed two experiments: (i) the perceived length of a moving arc and (ii) the perceived time while observing moving stimuli. We performed a magnetic resonance imaging (MRI) tractography investigation of subjects to identify the anatomical correlate of the modulation of the perception of moving objects.

    Results 

    Our theoretical model shows that the interaction between visual-spatial and temporal perception during the perception of a moving object is described by coupled linear equations, and experimental observations validate our model. We observed that cerebral area V5 may be an anatomical correlate of this interaction. The physiological basis of the interaction is shown by a Lotka-Volterra system delineating the interplay between the neurotransmitters acetylcholine and dopamine, whose concentrations vary periodically with an orthogonal phase shift between them, occurring at the axodendritic synapses of complex cells in area V5.

    Conclusion 

    Under the invariance of causality in the representation of events in retinotopic space and perceptual space, speed modulates the perception of a moving object. This modulation may be due to variations in the tuning properties of complex cells in area V5 caused by the dynamic interaction between acetylcholine and dopamine. To our knowledge, our analysis is the first study to establish a mathematical linkage between motion perception and causality invariance.

    Citation: Pratik Purohit, Prasun K. Roy. Interaction between spatial perception and temporal perception enables preservation of cause-effect relationship: Visual psychophysics and neuronal dynamics[J]. Mathematical Biosciences and Engineering, 2023, 20(5): 9101-9134. doi: 10.3934/mbe.2023400

  • Introduction 



    The visual system enables humans to see and observe the events happening in the environment by converting the electromagnetic waves (belonging to a specific frequency range, also known as the visible spectrum) into electrochemical signals (neuronal activation) and processing them to extract useful information. The process of extracting and organizing information about the surroundings through the visual system is known as visual perception. Visual perception profoundly affects the interaction between a person and their surroundings. From driving a car on a busy highway to playing sports, the visual perception of a moving object is crucial for survival in the natural environment. Modulating the perception of moving stimuli could have appreciable consequences and can affect the quality of life. Therefore, it is vital to understand the underlying neuronal dynamics and anatomical correlates, along with the quantifiable model of perception of the moving object.

    In the case of a moving object, the position of the object changes with time. Therefore, the visual perception of a moving object involves visual-spatial perception as well as time perception. The process of perceiving a moving object starts with light from the visual field (surroundings) entering the eye and projecting onto the retina. The photosensitive cells of the retina convert the light into neural signals, which are transmitted through the optic nerve to the brain regions. These neuronal signals are projected onto the neuropil, forming the retinotopic map, where the relationship between adjacent locations in the visual field is maintained in terms of nearby neuronal activations [1]. Retinotopic maps are isomorphic representations of the outside world based on information sensed by the retinal system, where the spatiotemporal neuronal activation represents the spatiotemporal movement of the object in the physical space (visual field) [2]. The nonlinear representation of retinal images on the retinotopic map has been analyzed by Schwartz [3]. Retinotopic maps exist in several brain regions, e.g., the primary visual cortex (V1) [4], lateral occipital cortex [5] and cerebellum [6]. High-resolution functional magnetic resonance imaging (fMRI) enables experimental observation of human retinotopic maps [7,8]. Retinotopic mapping of the visual field is the initial stage of the processing of visual perception functions such as motion [9], object recognition [10] and color [11]. The retinotopic map belongs to the broad category of topographic maps, a fundamental neuroanatomical feature in the cerebral cortex of humans and other primates [12,13]. We will refer to the 'retinotopic map' as 'retinotopic space' for linguistic convenience.

    Visual motion perception has been studied over the years to discover its underlying mechanisms [14,15,16]. Various experiments have been carried out to find the anatomical and physiological basis of the perception of moving objects using techniques such as fMRI and transcranial magnetic stimulation [17,18,19,20]. In addition, theoretical models addressing different aspects of motion perception are available, namely, the Reichardt detector for motion detection [21] and the spatiotemporal energy model [22]. Thus, much work has been done to identify the dominant brain regions active while perceiving a moving object and to provide theoretical models. Nevertheless, very few studies have addressed the modulation of the perception of the moving object, and those mainly focused on the perceived speed. For example, Mashour found that the perceived speed and actual speed are related by a power law [23]. Algom and Cohen-Raz reported similar results in another study [24]. Therefore, as far as we know, an integrative global mathematical and theoretical framework for the modulation of the perception of a moving object is yet to be formalized, especially from a causality perspective.

    In this paper, we quantify and provide a mathematical model to describe how the speed of a moving object modulates visual-spatial perception and time perception. The changes in the position of the object occur in temporal order and indicate a temporal causality relationship between the successive positions of the object. We will assume the constancy of temporal causality in the retinotopic space and perceptual space and then find the relationship between the spatiotemporal coordinates of the moving object represented in the two spaces. Here, perceptual space is the subjective experience of physical space, and it represents the geometry of perception. Thereafter, we will apply our mathematical model to different experimental settings to predict the results and compare them with the actually observed results to validate the outcomes of our investigation. After validation, we will demarcate the anatomical regions where this perceptual transformation occurs, with experimental corroboration through diffusion MRI-based tractography studies. Furthermore, we will build a mathematical framework to model the neuronal-level biochemical mechanism responsible for the modulation of visual-spatial perception and time perception, based on neurotransmitter interaction and dynamics.

    We have organized our paper as follows:

    (i) In Section 2, we formulate the perception of a moving object as the mapping from retinotopic space to perceptual space and then derive the mathematical model of the modulation of the perception of a moving object.

    (ii) In Section 3, we have described the methodology used for experimental investigations.

    (iii) Section 4 is divided into four subsections:

    • Section 4.1: We applied our mathematical model to two experiments for empirical validation.

    • Section 4.2: The anatomical correlates of the modulation of perception of a moving object are provided and verified by the MRI tractography investigations.

    • Section 4.3: The neurotransmitter dynamics-based neuronal-level mechanism is delineated, along with a mathematical model.

    • Section 4.4: Formal analysis of the perception of a moving object is undertaken.

    (iv) We conclude the paper by discussing the key results, implications and general significance in Section 5.

    In this section, we will formulate the geometrical representation of the moving object in the retinotopic space and its projection onto perceptual space, presuming invariance of the temporal causality. We will derive a transformation matrix (Z) that consolidates the coordinate transformation from the retinotopic space to the perceptual space when the object changes its position with time. Then, we will show the translation of the transformation equations to the neural systems and demarcate their utilization in the practical scenario.

    Suppose that the position of an object in the visual field varies with time such that the change of position with time (speed) is constant. Curve AB in Figure 1 represents the spatiotemporal activation of the neural tissue in the retinotopic map due to the changes in the position of the moving object with time (T). Curve AB is a straight line because of the constant speed of the moving object. Every point on the curve AB belongs to the instantaneous neural activation in the retinotopic space corresponding to the instantaneous position of the moving object. We assume that the position varies along a single spatial dimension for mathematical convenience. Points A and B in Figure 1 denote the start and end of the projection of a moving object on the human observer's retinotopic space.

    Figure 1.  Changes in spatial position of the moving object with time in the retinotopic map.
    Figure 2.  Transformation of the representation of the moving object from retinotopic space to perceptual space.

    Let us take a random point R between points A and B in retinotopic space (Figure 2(a)). Every point on curve AB appearing before point R in temporal order has a temporal causal connection with point R. Points A and R in the retinotopic space shown in Figure 2(a) are projected in perceptual space as points Ap and Rp (Figure 2(b)), respectively, under the assumption of invariance of the temporal causality. Therefore, points Ap and Rp in perceptual space also have a causal relationship. In Figure 2(b), T* and X* represent the time and position of the object in the perceptual space, respectively.

    Suppose that, in Figure 2(a), the coordinates of points R and A are (t, x) and (0, 0), respectively. As points A and B mark the start and end of the movement of the object in the retinotopic space, the projections of points A and B in the perceptual space should have the same coordinates. An example can explain this: if somebody is watching a ball being thrown in the air, the ball's starting and ending position in the retinotopic and perceptual space will be the same. However, anything in between may be modified. Hence, the coordinates of point Ap in the perceptual space can be taken to be the same as A, i.e., (0, 0), and we can assume the coordinates of point Rp in the perceptual space to be (t*, x*). (However, under ideal conditions, we expect the coordinates of points Rp and R to be the same so that the observer perceives the moving object without any modifications.) For generalization, we can presume that the difference in the coordinates of points Ap and Rp in the perceptual space is a function of the difference in the coordinates of points A and R in the retinotopic space (Eqs (1a) and (1c)).

    $x^* - 0 = g(t - 0,\ x - 0)$ (1a)
    i.e., $x^* = g(t, x)$ (1b)
    $t^* - 0 = f(t - 0,\ x - 0)$ (1c)
    i.e., $t^* = f(t, x)$ (1d)

    In Eqs (1b) and (1d), the terms f and g are two functions whose forms we will determine later. Likewise, the coordinates of A and R can be mathematically obtained from the coordinates of Ap and Rp under the premise of symmetry between the retinotopic and perceptual spaces.

    Therefore,

    $x = g(t^*, x^*)$ (2a)
    $t = f(t^*, x^*)$ (2b)

    Even if point A does not lie at the origin, Eqs (1a)–(1d) and (2a), (2b) must hold. If the coordinates of point A are (t0, x0) in the retinotopic space and (t0*, x0*) in the perceptual space, then we can respectively rewrite Eqs (1a), (1c), (2a), (2b) as:

    $x^* - x_0^* = g(t - t_0,\ x - x_0)$ (3a)
    $t^* - t_0^* = f(t - t_0,\ x - x_0)$ (3b)
    $x - x_0 = g(t^* - t_0^*,\ x^* - x_0^*)$ (4a)
    $t - t_0 = f(t^* - t_0^*,\ x^* - x_0^*)$ (4b)

    Partial differentiation of Eq (1b) with respect to x yields

    $\dfrac{\partial x^*}{\partial x} = \dfrac{\partial g(t, x)}{\partial x}$ (5)

    Partial differentiation of Eq (3a) with respect to x yields

    $\dfrac{\partial x^*}{\partial x} = \dfrac{\partial g(t - t_0,\ x - x_0)}{\partial x}$ (6)

    From Eqs (5) and (6), we obtain

    $\left.\dfrac{\partial g(t, x)}{\partial x}\right|_{\{t_0, x_0\}} = \left.\dfrac{\partial g(t - t_0,\ x - x_0)}{\partial x}\right|_{\{t_0, x_0\}}$
    $\left.\dfrac{\partial g(t, x)}{\partial x}\right|_{\{t_0, x_0\}} = \left.\dfrac{\partial g(t - t_0,\ x - x_0)}{\partial (x - x_0)}\right|_{\{t_0, x_0\}}$
    $\left.\dfrac{\partial g(t, x)}{\partial x}\right|_{\{t_0, x_0\}} = \left.\dfrac{\partial g(t, x)}{\partial x}\right|_{\{0, 0\}}$ (7)

    Using Eqs (1b) and (3a), we derived the Eq (7). Likewise, we can derive, from Eqs (1d), (2a), (2b), (3b), (4a) and (4b), the following Eqs (8)–(10).

    $\left.\dfrac{\partial g(t, x)}{\partial t}\right|_{\{t_0, x_0\}} = \left.\dfrac{\partial g(t, x)}{\partial t}\right|_{\{0, 0\}}$ (8)
    $\left.\dfrac{\partial f(t, x)}{\partial x}\right|_{\{t_0, x_0\}} = \left.\dfrac{\partial f(t, x)}{\partial x}\right|_{\{0, 0\}}$ (9)
    $\left.\dfrac{\partial f(t, x)}{\partial t}\right|_{\{t_0, x_0\}} = \left.\dfrac{\partial f(t, x)}{\partial t}\right|_{\{0, 0\}}$ (10)

    It is evident from Eqs (7)–(10) that functions g and f are linear functions of x and t, because their partial derivatives take the same value at the origin and at any arbitrary point (t0, x0), which is possible only for linear (straight-line) functions; thus, Eqs (1b), (1d), (2a) and (2b) become

    $x^* = A \cdot x + B \cdot t$ (11a)
    $t^* = C \cdot x + D \cdot t$ (11b)
    $x = A' \cdot x^* + B' \cdot t^*$ (12a)
    $t = C' \cdot x^* + D' \cdot t^*$ (12b)

    where A, B, C and D (and, correspondingly, A′, B′, C′ and D′) are unknown coefficients in Eqs (11) and (12). In the subsequent mathematical analysis, we will derive mathematical expressions for A, B, C and D.
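As a quick numerical sanity check (not part of the original derivation), the translation-invariance property required by Eqs (7)–(10) can be verified for the linear form of Eq (11a); the coefficient values below are arbitrary illustrative numbers:

```python
# Sketch of the linearity argument behind Eqs (7)-(10): for a linear function
# g(t, x) = A*x + B*t (the form of Eq (11a)), the partial derivative with
# respect to x is the same at the origin and at an arbitrary point (t0, x0),
# which is exactly the translation-invariance property those equations demand.
A, B = 1.7, -0.4              # arbitrary illustrative coefficients
h = 1e-6                      # finite-difference step

def g(t, x):
    return A * x + B * t

def dg_dx(t, x):
    # central finite-difference approximation of the partial derivative in x
    return (g(t, x + h) - g(t, x - h)) / (2.0 * h)

t0, x0 = 2.3, -5.1
# same slope at the origin and at (t0, x0), as Eq (7) requires
assert abs(dg_dx(t0, x0) - dg_dx(0.0, 0.0)) < 1e-6
print(dg_dx(0.0, 0.0))
```
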

    Let us represent Eqs (11) and (12) in matrix form, which become Eqs (13) and (14), respectively, below:

    From Eqs (11a) and (11b): $\begin{bmatrix} x^* \\ t^* \end{bmatrix} = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} x \\ t \end{bmatrix}$

    i.e., $X^* = Z X$ (13)

    From Eqs (12a) and (12b): $\begin{bmatrix} x \\ t \end{bmatrix} = \begin{bmatrix} A' & B' \\ C' & D' \end{bmatrix} \begin{bmatrix} x^* \\ t^* \end{bmatrix}$

    i.e., $X = Z' X^*$ (14)

    where $X = \begin{bmatrix} x \\ t \end{bmatrix}$, $X^* = \begin{bmatrix} x^* \\ t^* \end{bmatrix}$, $Z = \begin{bmatrix} A & B \\ C & D \end{bmatrix}$ and $Z' = \begin{bmatrix} A' & B' \\ C' & D' \end{bmatrix}$.

    In Eqs (13) and (14), Z is a transformation matrix that denotes the coordinate transformation from the retinotopic to the perceptual space. From Eqs (13) and (14), $Z' = Z^{-1}$:

    $Z' = \begin{bmatrix} A' & B' \\ C' & D' \end{bmatrix}$ (15)

    Thus, $Z^{-1} = \dfrac{1}{BC - AD}\begin{bmatrix} -D & B \\ C & -A \end{bmatrix}$

    For the simplest case, $BC - AD = -1$:

    i.e., $Z^{-1} = \begin{bmatrix} D & -B \\ -C & A \end{bmatrix}$ (16)

    By comparison of Eqs (15) and (16), invoking the symmetry between the retinotopic and perceptual spaces (so that Z and Z′ share the same form), $D = A$.

    Since $BC - AD = -1$, by putting $D = A$, we get $A^2 - BC = 1$, i.e., $A = +\sqrt{1 + BC}$.

    Therefore,

    $Z = \begin{bmatrix} +\sqrt{1 + BC} & B \\ C & +\sqrt{1 + BC} \end{bmatrix}$ (17)

    In the retinotopic space, the position of the object changes with time; the rate of change of position with time can be obtained by putting $x^* = 0$ in Eq (11a), under the assumption that the rate of change of position with time is constant. Thus:

    $0 = A \cdot x + B \cdot t$

    i.e., $\dfrac{x}{t} = -\dfrac{B}{A}$

    Let P be the rate of change of the position of the moving object in the retinotopic space (i.e., the speed):

    Then,

    $P = \dfrac{x}{t} = -\dfrac{B}{A}$

    i.e., $P = -\dfrac{B}{+\sqrt{1 + BC}}$

    or $+\sqrt{1 + BC} = -\dfrac{B}{P}$ (18)
    thereby $C = \dfrac{B}{P^2} - \dfrac{1}{B}$ (19)

    Putting the values from Eqs (18) and (19) into Eq (17) yields

    $Z = \begin{bmatrix} -\dfrac{B}{P} & B \\ \dfrac{B}{P^2} - \dfrac{1}{B} & -\dfrac{B}{P} \end{bmatrix}$ (20)
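The algebraic chain from Eq (15) to Eq (20) can be checked numerically; this is an illustrative sketch with arbitrary values of B and P, not part of the derivation itself:

```python
# Numerical sanity sketch: for arbitrary nonzero B and P, the matrix Z of
# Eq (20), with A = D = -B/P (Eq (18)) and C = B/P**2 - 1/B (Eq (19)),
# satisfies the "simplest case" condition BC - AD = -1, i.e. det Z = 1.
for B, P in [(0.5, 2.0), (-1.2, 3.0), (4.0, -0.7)]:
    A = -B / P
    C = B / P**2 - 1.0 / B
    D = A
    assert abs(C * B - A * D + 1.0) < 1e-12      # BC - AD = -1
    assert abs((A * D - B * C) - 1.0) < 1e-12    # det Z = 1
print("Eq (20) consistency check passed")
```
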

    Retinotopic maps exist in multiple brain regions; for example, the primary visual cortex (V1), the cerebellum and the lateral occipital cortex [5,6,25]. Let us take two retinotopic maps (t1 and t2). As per our preceding formulation (Eqs (1)–(20)), let Xt1, Xt2 and Xm be the column matrices whose components are the spatiotemporal coordinates of the moving object in the t1 retinotopic map, the t2 retinotopic map and the perceptual space (m), respectively, and let Z1, Z2 and Z3 be the transformation matrices representing the corresponding coordinate transformations. Thus, we arrive at the following relations (Eqs (21)–(23)).

    Relationship between retinotopic maps t1 and t2:

    $X_{t_2} = Z_1 X_{t_1}$ (21)

    Relationship between retinotopic map t2 and perceptual space (m):

    $X_m = Z_2 X_{t_2}$ (22)

    Relationship between retinotopic map t1 and perceptual space (m):

    $X_m = Z_3 X_{t_1}$ (23)

    By comparing the previous three equations (Eqs (21)–(23)), we obtained the following:

    $Z_3 = Z_2 Z_1$

    Applying the transformation matrix Z given in Eq (20) gives

    $\begin{bmatrix} -\dfrac{B_3}{P_3} & B_3 \\ \dfrac{B_3}{P_3^2} - \dfrac{1}{B_3} & -\dfrac{B_3}{P_3} \end{bmatrix} = \begin{bmatrix} -\dfrac{B_2}{P_2} & B_2 \\ \dfrac{B_2}{P_2^2} - \dfrac{1}{B_2} & -\dfrac{B_2}{P_2} \end{bmatrix} \begin{bmatrix} -\dfrac{B_1}{P_1} & B_1 \\ \dfrac{B_1}{P_1^2} - \dfrac{1}{B_1} & -\dfrac{B_1}{P_1} \end{bmatrix}$

    After matrix multiplication and comparison of the matrix components, we obtained four equations. After solving those equations, we obtained the following (please see Supplementary material section S1 for the analytical solution):

    $\left(\dfrac{1}{B_1^2} - \dfrac{1}{P_1^2}\right)^2 = \left(\dfrac{1}{B_2^2} - \dfrac{1}{P_2^2}\right)^2 = \left(\dfrac{1}{B_3^2} - \dfrac{1}{P_3^2}\right)^2$ (24)

    In Eq (24), the squared term (with the same mathematical arrangement across the equivalent variables) is equal in the different cases, showing that this particular term is invariant across the different situations. Therefore, let us denote Eq (24) for the general situation of any space (such as perceptual space, cerebral retinotopic space or cerebellar retinotopic space) as in Eq (25) below:

    $\left(\dfrac{1}{B^2} - \dfrac{1}{P^2}\right) = -\dfrac{1}{k^2}$ (25)

    In Eq (25), k can be considered as a fidelity parameter denoting the invariance across different representational spaces (such as perceptual space, cerebral retinotopic space or cerebellar retinotopic space).

    By putting the value of B from Eq (25) into Eq (20), we get

    $Z = \begin{bmatrix} \dfrac{1}{\sqrt{1 - (P/k)^2}} & \dfrac{P}{\sqrt{1 - (P/k)^2}} \\ \dfrac{P}{k^2\sqrt{1 - (P/k)^2}} & \dfrac{1}{\sqrt{1 - (P/k)^2}} \end{bmatrix}$ (26)

    Now, putting the value of the transformation matrix Z from Eq (26) into Eq (13) gives

    $\begin{bmatrix} x^* \\ t^* \end{bmatrix} = \begin{bmatrix} \dfrac{1}{\sqrt{1 - (P/k)^2}} & \dfrac{P}{\sqrt{1 - (P/k)^2}} \\ \dfrac{P}{k^2\sqrt{1 - (P/k)^2}} & \dfrac{1}{\sqrt{1 - (P/k)^2}} \end{bmatrix} \begin{bmatrix} x \\ t \end{bmatrix}$ (27)
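The closure property underlying Eqs (21)–(26) can be sketched numerically: composing two transformations of the form of Eq (26) yields another matrix of the same form, with the same fidelity parameter k. The composed-speed formula used below is our inference from the matrix algebra (a velocity-composition analog), and the values of P1, P2 and k are arbitrary illustrative numbers:

```python
# Numerical sketch: Z(P2, k) @ Z(P1, k) equals Z(P3, k) with
# P3 = (P1 + P2) / (1 + P1*P2/k**2), so the family of matrices of Eq (26)
# is closed under composition with an invariant fidelity parameter k.
import math

def Z(P, k):
    """Transformation matrix of Eq (26), as a nested list."""
    g = 1.0 / math.sqrt(1.0 - (P / k) ** 2)
    return [[g, g * P], [g * P / k**2, g]]

def matmul(M, N):
    """2x2 matrix product."""
    return [[sum(M[i][r] * N[r][j] for r in range(2)) for j in range(2)]
            for i in range(2)]

k = 10.0
P1, P2 = 2.0, 3.0
P3 = (P1 + P2) / (1.0 + P1 * P2 / k**2)   # composed speed (inferred formula)

Zp = matmul(Z(P2, k), Z(P1, k))
assert all(abs(Zp[i][j] - Z(P3, k)[i][j]) < 1e-12
           for i in range(2) for j in range(2))
print(P3)
```
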

    Now, in Eqs (11) and (12), the transformation of the coordinates from retinotopic to perceptual space, or vice versa, is the same and occurs when the spatial dimensions x and x* point in opposite directions. To make them unidirectional, we substitute $-x$ for $x$ (and, correspondingly, $-x^*$ for $x^*$) in Eq (27). We get the following:

    $\begin{bmatrix} x^* \\ t^* \end{bmatrix} = \begin{bmatrix} \dfrac{1}{\sqrt{1 - (P/k)^2}} & -\dfrac{P}{\sqrt{1 - (P/k)^2}} \\ -\dfrac{P}{k^2\sqrt{1 - (P/k)^2}} & \dfrac{1}{\sqrt{1 - (P/k)^2}} \end{bmatrix} \begin{bmatrix} x \\ t \end{bmatrix}$

    Hence, the relationship between the coordinates of the retinotopic and perceptual space is

    $x^* = \dfrac{x - Pt}{\sqrt{1 - (P/k)^2}}$  and  $t^* = \dfrac{t - \dfrac{Px}{k^2}}{\sqrt{1 - (P/k)^2}}$ (28)
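The transformation equations of Eq (28) can be sketched as a small function; the parameter values below are illustrative only:

```python
# Minimal sketch applying Eq (28): mapping a retinotopic event (x, t) to its
# perceptual counterpart (x*, t*) for a given speed P and fidelity parameter k.
import math

def to_perceptual(x, t, P, k):
    """Eq (28): retinotopic (x, t) -> perceptual (x*, t*)."""
    g = 1.0 / math.sqrt(1.0 - (P / k) ** 2)
    x_star = (x - P * t) * g
    t_star = (t - P * x / k**2) * g
    return x_star, t_star

# A stationary scene (P = 0) is perceived without modification:
assert to_perceptual(1.5, 2.0, 0.0, 10.0) == (1.5, 2.0)

# With motion, the perceived coordinates deviate from the retinotopic ones:
x_s, t_s = to_perceptual(1.5, 2.0, 3.0, 10.0)
print(x_s, t_s)
```
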

    The moving object in the visual field (physical space) is projected onto the retina, and then the position of the moving object is mapped from the retinal surface into the retinotopic space. In the transformation equation, Eq (27), x and t are physical coordinates in the retinotopic space. However, the physical size of the visual field (where the object is moving in external space) differs from the physical size of the cortical tissues (where the retinotopic space is situated). Figure 3 illustrates the cortical magnification factor (CMF) [26], which defines the cortical tissue allotted (in mm) for one degree of the visual angle subtended at the eye. The CMF is the ratio of the size of the neuronal tissue (z) activated by the movement of an object in the visual field to the visual angle (θ) it subtends [27]. The visual field projected around the fovea obtains more neural tissue than the peripheral regions, as shown by the decreasing values of the CMF toward the peripheral region of the retina [28].

    Figure 3.  Cortical magnification factor denotes the cortical tissue involved in the retinotopic representation of the given size of the visual field. The cortical magnification factor is the ratio of the size of the cortical tissue (z) to the visual angle subtended on the eye (θ).

    On the contrary, x* represents the perceived extent of the position of the external event in the perceptual space. To make x* compatible with the physical size of the moving object, it is necessary to remap x* to enable comparison of its value with the physical position of the moving object. Therefore, applying the CMF in reverse modifies x* to become compatible with the physical scale of the visual field. Suppose that γ represents the CMF function (which quantitatively describes the mapping of the position of the moving object from the visual field to the retinotopic map) and w is the physical position of the moving object. Then, the relationship between w, x, x* and η is given by Eq (29), where η represents the perceived position of the moving object. The inverse function γ⁻¹ represents the mechanism by which the brain makes the neural activation compatible with the physical extent of external events.

    $x = \gamma(w)$  and  $\eta = \gamma^{-1}(x^*)$ (29)

    In Eq (29), γ and γ⁻¹ are mathematical functions, the term inside the parentheses is the input to the function, and γ⁻¹ is the inverse of the function γ. Our formulation of the perception of a moving object is shown in Figure 4, where the transformation equations of Eq (28) were derived in the previous section.
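A minimal sketch of Eq (29), assuming (purely for illustration) a commonly used inverse-eccentricity form of the CMF, M(E) = M0/(E + E2); the parameter values are assumptions for the sketch, not values estimated in this study:

```python
# Sketch of Eq (29): gamma maps a visual-field position w (deg of eccentricity)
# to a cortical position x (mm) by integrating the CMF, and gamma_inv maps a
# cortical/perceptual coordinate back to the scale of the visual field.
# M(E) = M0/(E + E2) and the parameter values are illustrative assumptions.
import math

M0, E2 = 17.3, 0.75          # hypothetical CMF parameters (mm, deg)

def gamma(w):
    """Cortical distance (mm) for eccentricity w (deg): integral of M(E) dE."""
    return M0 * math.log(1.0 + w / E2)

def gamma_inv(x):
    """Inverse mapping: cortical distance (mm) back to eccentricity (deg)."""
    return E2 * (math.exp(x / M0) - 1.0)

w = 5.0                                    # an object 5 deg from fixation
x = gamma(w)                               # its retinotopic position (mm)
assert abs(gamma_inv(x) - w) < 1e-9        # gamma_inv undoes gamma
# the central visual field gets more tissue per degree than the periphery:
assert gamma(1.0) - gamma(0.0) > gamma(10.0) - gamma(9.0)
print(round(x, 2))
```
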

    Apart from this, in the transformation equations of Eq (28), P is the rate of change of the position of the moving object in the retinotopic space (i.e., the speed), while k (the fidelity parameter) is a constant. As highlighted by Eqs (24) and (25), the fidelity parameter is constant irrespective of the frame of reference. Because neural signals are responsible for transferring information between the retinotopic space and perceptual space, the fidelity parameter might be related to the neuronal signals.

    Figure 4.  Our formulation of the perception of a moving object: The moving object is projected onto the retinotopic map through the retinal surface, and follows the cortical magnification factor in terms of mapping from the visual field to the retinotopic map. Then, the transformation equations relate the spatiotemporal coordinates of the moving object in the retinotopic and perceptual spaces. After that, applying the inverse cortical magnification factor provides the coordinates of the perceived moving object at the scale of physical space.

    A luminous arc was formed on a wheel having a diameter of 61 cm. The length of the arc and the radial distance of the arc from the center were 13 cm and 20.7 cm, respectively. A flat black box with a horizontal line facing the subject was situated ahead of the center of the wheel. The length of this horizontal line could be varied through a rolling shutter as desired by the subject. The presence of the black box did not affect the visibility of the path of the rotating arc. The distance between the subject and the rotating wheel varied from 2 to 4 feet. Speeds of 0, 0.5, 0.7, 1 and 1.3 revolutions per second were used to rotate the wheel. The speed could not be increased beyond 1.3 revolutions per second, as the subjects could no longer follow the arc and instead saw a complete circle. A total of 12 subjects participated in the experiment. The subjects were divided into two groups; for one group, the speed varied from lowest to highest, and for the other, vice versa. Subjects varied the length of the stationary line to match it with the length of the moving arc [29]. Figure 5 illustrates the experimental setup.

    Figure 5.  Experimental setup for measuring the perceived length of a moving arc at different speeds.

    During the experiment, subjects needed to fixate on the middle of the horizontal line, which coincides with the center of the circle traced by the moving arc, as well as to judge the length of the moving arc at the different rotational speeds of the wheel. Subjects varied the length of the stationary line to make it subjectively equal to the length of the moving arc.

    Matching and reproduction methods were used to measure the perceived time period for eight subjects. Participants fixated at 6.6° above the Gabor patch while a chin rest restrained any head movement. Vertical Gabor patches with a 6° radius displayed on the cathode-ray tube (CRT) monitor acted as stimuli and were placed at 57 cm from the participants. Sinusoidal luminance modulation drifting left or right of the stationary Gaussian contrast envelope in the vertical Gabor patch was used as moving stimuli. In contrast, a vertical Gabor patch without any luminance modulation was used as stationary stimuli.

    Two methods were used to measure the perceived time while perceiving moving stimuli [30]:

    (i) In the matching method, the moving stimulus was displayed for a fixed time duration, followed by a stationary stimulus whose duration was varied so that the subjects assessed it to be subjectively equal to the moving stimulus' duration.

    (ii) In the reproduction method, the moving stimulus was displayed on the screen for some period; after which time, the subjects reproduced the perceived duration by pressing a switch.
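As context for these duration measurements, the transformation equations of Eq (28) derived in Section 2 imply that a physical duration at a fixed retinotopic position maps to a longer perceived duration when the stimulus moves. A minimal sketch, with illustrative values of P and k:

```python
# From Eq (28), t* = (t - P*x/k**2) / sqrt(1 - (P/k)**2); for two events at the
# same retinotopic position x, a physical duration dt therefore maps to
# dt* = dt / sqrt(1 - (P/k)**2), i.e. perceived time lengthens with speed.
import math

def perceived_duration(dt, P, k):
    """Perceived duration for a physical duration dt at fixed x (Eq (28))."""
    return dt / math.sqrt(1.0 - (P / k) ** 2)

dt = 0.6                          # physical stimulus duration (s)
k = 10.0                          # hypothetical fidelity parameter
assert perceived_duration(dt, 0.0, k) == dt          # stationary: no change
assert perceived_duration(dt, 4.0, k) > perceived_duration(dt, 2.0, k)
print(perceived_duration(dt, 4.0, k))
```
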

    Diffusion-weighted images were acquired on a 3T Philips Achieva scanner at the National Brain Research Centre, Manesar, India, by using the HARDI scheme (128 directions, b-value: 2000 s/mm2). The human MRI scanning procedure was approved by the Institutional Human Ethics Committee of the National Brain Research Centre, and informed consent was obtained. The in-plane resolution and slice thickness were 2 mm. FSL eddy was used to correct for eddy-current distortion through the integrated interface in DSI Studio ("Chen" release). The diffusion MRI data were rotated to align with the AC-PC line. After motion correction, deterministic fiber tracking was performed by using DSI Studio with the following tracking parameters: fractional anisotropy threshold: 0.04162, angular threshold: 75 degrees, step size: 0.1 mm and total seeds: 1,000,000. We performed this analysis pipeline for one normal subject (gender: male, age: 24 years).

    Scans were acquired on a 7T Siemens MAGNETOM machine at Maastricht University, Netherlands. Approval was given by the Ethics Committee of the Faculty for Psychology and Neuroscience at Maastricht University, and informed consent was obtained. Diffusion-weighted MRI images were acquired by using a multi-band diffusion-weighted spin-echo EPI protocol with the following parameters: b-values = 1000, 2000 and 3000 s/mm2, field of view (FOV) = 200 × 200 mm with partial Fourier 6/8, 132 slices, 1.05 mm isotropic voxel size, repetition time (TR) = 7080 ms, echo time (TE) = 75.6 ms, 66 directions and 11 additional b = 0 volumes for every b-value [31]. The susceptibility artifact was estimated by using reversed phase-encoding b0 images with TOPUP from the Tiny FSL package (http://github.com/frankyeh/TinyFSL), a re-compiled version of FSL TOPUP (FMRIB, Oxford) with multi-thread support. FSL eddy was used to correct for eddy-current distortion. After preprocessing the MRI images, we used DSI Studio software (http://dsi-studio.labsolver.org) to perform deterministic tractography using the diffusion tensor imaging technique [32]. We used the Brainnetome Atlas to locate the regions of interest [33]. The tracking parameters were as follows: fractional anisotropy threshold of 0.06, angular threshold of 65 degrees, step size of 0.1 mm and 500,000 seeds. We performed this analysis pipeline for one normal subject (gender: female, age: 27 years).

    The transformation equations of Eq (28) show that the spatiotemporal coordinates of the moving object in the perceptual space can differ from the spatiotemporal coordinates in the retinotopic space, which indicates that the perception of a moving object may deviate from physical reality. We will now analyze two experiments by using our formulation of the perception of the moving stimulus (Figure 4): we theoretically predict the experimental outcomes and compare them with the actual results to validate our theoretical formulation.
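    For concreteness, the coordinate transformation can be sketched numerically. The snippet below assumes the Lorentz-like form of Eq (28) that is implied by the derivations of Eqs (30b) and (31); the function name and the default value k = 0.74 (taken from Section 4.1.1) are our choices for illustration.

```python
import math

def to_perceptual(x, t, P, k=0.74):
    """Map retinotopic coordinates (x, t) onto perceptual coordinates (x*, t*).

    P is the stimulus speed in retinotopic space and k is the fidelity
    parameter; the mapping is defined for |P| < k.
    """
    gamma = 1.0 / math.sqrt(1.0 - (P / k) ** 2)
    x_star = gamma * (x - P * t)            # spatial coordinate picks up a time term
    t_star = gamma * (t - P * x / k ** 2)   # temporal coordinate picks up a space term
    return x_star, t_star

# A stationary stimulus (P = 0) is perceived veridically:
assert to_perceptual(1.0, 2.0, P=0.0) == (1.0, 2.0)
```

    At P = 0 the mapping reduces to the identity, so spatial and temporal perception decouple; for P > 0 each perceptual coordinate mixes both retinotopic coordinates.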

    In this experiment, the perceived length of the moving arc was measured by Ansbacher, using the experimental setup explained in Section 3.1 [29]. Figure 6 shows the experimental measurement of the variation in the perceived length as the rotational speed of the moving arc was varied from 0 to 1.3 revolutions per second. Reduction in the perceived length indicates the modulation of visual perception due to a change in the speed of the arc.

    Figure 6.  Alteration in the perceived length of the moving arc at different rotational speeds of the arc.

    Next, we obtained the relationship between the length of the moving arc in the retinotopic space and the perceptual space. The length of the moving arc in the retinotopic space (L) can be described in terms of x₁ and x₂, which are, respectively, the spatial coordinates of the start and end points of the moving arc in the retinotopic space at a given moment. Thus:

    L = x₂ − x₁ (30a)

    Then, applying the transformation equations of Eq (28), with both endpoints of the arc registered at the same perceptual moment (t* = T₀*), the length of the moving arc in the perceptual space (L*) is obtained as follows:

    L = x₂ − x₁ = (x₂* + PT₀*)/√(1 − (P/k)²) − (x₁* + PT₀*)/√(1 − (P/k)²) = L*/√(1 − (P/k)²)
    i.e., L* = L √(1 − (P/k)²) (30b)

    In Eq (30b), P is the speed of the moving arc in the retinotopic space. We can find the length of the moving arc in the retinotopic space (L) from the physical length of the arc, the length of the moving arc in the perceptual space (L*) from the experimental observations shown in Figure 6, and the speed (P) by applying the CMF. However, the CMF is not a constant; it decreases toward the peripheral visual field. Therefore, we calculated the CMF for the moving arc (physical length: 0.13 m) based on experimental observations from different studies [5,34,35,36,37,38]. Because the CMF varies as the angular distance from the fixation point increases, we found the average value of the tissue allocated per degree of the visual field used by the observer to make a judgment.

    In this experiment, the circular path followed by the moving arc subtends an angle of 23.4° on the retina; therefore, the angular distance from the fovea is 11.7°. The calculated CMF, represented as γ in Figure 4, is 9.55 mm of neural tissue per degree of the visual field. Although the arc followed a circular path while the subjects made length judgments, the circle was mapped to a line in the retinotopic space due to logarithmic mapping [3]. The rotational velocity of the moving arc was converted into linear velocity, followed by the application of γ to find the value of P (i.e., the speed of the arc in the retinotopic space).

    Since we have experimental observations of L, L* and P, we can find the fidelity parameter (k) by substituting these values into Eq (30b). As per our earlier analysis (Eqs (24) and (25)), the value of the fidelity parameter should be constant regardless of the speed of the moving arc.
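    Inverting Eq (30b) gives k explicitly in terms of the measured quantities, k = P/√(1 − (L*/L)²). A minimal sketch follows; the numbers in the example are illustrative, not Ansbacher's measurements.

```python
import math

def fidelity_parameter(L, L_star, P):
    """Recover k from Eq (30b), L* = L * sqrt(1 - (P/k)^2).

    L: arc length in retinotopic space; L_star: perceived length
    (requires 0 < L_star < L); P: speed in retinotopic space.
    """
    ratio = L_star / L
    return P / math.sqrt(1.0 - ratio ** 2)

# Illustrative numbers: a 20% underestimation at P = 0.444 gives k = 0.74.
k = fidelity_parameter(L=1.0, L_star=0.8, P=0.444)
```

    Applying this to each rotational speed yields one estimate of k per speed, which is how the constancy claim can be checked against the data.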

    We calculated the fidelity parameter (k) for different rotational speeds of the moving arc; the results are shown in Figure 7. As evident in Figure 7, the value of the fidelity parameter (k) is almost constant, irrespective of the speed of the moving arc in the retinotopic space. From Eq (30b), we find that, as the value of P approaches the value of the fidelity parameter (k), the subject's underestimation of the arc's length (L* < L) increases. From Figure 7, we see that the fidelity parameter (k) has a very small coefficient of variation (~5%), with an average value of 0.74, and it can be taken as a constant. The chi-square (χ²) goodness-of-fit test is also satisfied (p > 0.99). The constant value of the fidelity parameter across different observation conditions supports our mathematical prediction. This constancy of the fidelity parameter ensures full faithfulness and correspondence between the different representations of the moving object in different spaces (i.e., the perceptual space, cerebral retinotopic space or cerebellar retinotopic space).

    Figure 7.  Constancy of the fidelity parameter, k, while the rotational speed of the moving arc changes. This constancy validates the theoretical prediction of our mathematical model. (Statistical goodness-of-fit test satisfied, p > 0.99).

    Over the years, various researchers have consistently observed temporal overestimation when an external stimulus moves relative to a stationary stimulus [39,40,41]. Now, we will predict the observations of a similar experiment by using our model and compare them with the actual results to validate our model. Kaneko and Murakami performed such an experiment, measuring the perceived time period for stationary and moving stimuli. The perceived time was measured by using two experimental procedures. In the first procedure, the matching method was used, such that the duration of the stationary stimulus was varied until it was perceived as equal to the duration of the moving stimulus. The second procedure incorporated the reproduction method, in which the subjects reproduced the perceived duration of the moving stimulus by pressing a switch [30]. The ratio of perceived time to actual time, termed the ratio of overestimation, was used to quantify the perceptual changes in the subjects.

    Using the transformation equations (Eq (28)), we can derive Eq (31) below, which transforms a time interval from retinotopic time (Δt) to perceptual time (Δt*) at a particular spatial coordinate (x = X₀). Thus:

    Δt* = t₁* − t₂* = (t₁ − PX₀/k²)/√(1 − (P/k)²) − (t₂ − PX₀/k²)/√(1 − (P/k)²)

    i.e., Δt* = (t₁ − t₂)/√(1 − (P/k)²)

    or  Δt* = Δt/√(1 − (P/k)²) (31)
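    Eq (31) predicts the ratio of overestimation directly: Δt*/Δt = 1/√(1 − (P/k)²). A small sketch, with k = 0.74 (from Section 4.1.1) assumed as the default:

```python
import math

def overestimation_ratio(P, k=0.74):
    """Predicted perceived-to-actual duration ratio from Eq (31).

    P is the stimulus speed in retinotopic space (obtained via the CMF);
    the ratio is defined for |P| < k and diverges as P approaches k.
    """
    return 1.0 / math.sqrt(1.0 - (P / k) ** 2)

# The predicted overestimation grows monotonically with speed:
assert overestimation_ratio(0.0) == 1.0
assert overestimation_ratio(0.5) > overestimation_ratio(0.2) > 1.0
```

    Evaluating this ratio at the stimulus speeds used in the experiment yields the theoretical curves that are compared with the measurements below.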

    Now, to find the speed of the moving stimulus in the retinotopic space, we calculated the CMF for the current experiment. In this experiment, the subjects fixated their eyes 6.6° above the center of the Gabor patch, whose diameter was 12°. Therefore, the moving stimulus spanned up to 12.6° in the visual field. Using the same procedure as that described in Section 4.1.1, we obtained an average CMF equal to 9.51 mm of neural tissue per degree of the visual field.

    Figure 8.  Experimental observations of the reproduction method (second experiment), validating the theoretical prediction of our mathematical model. (Statistical goodness-of-fit test satisfied, p > 0.99).
    Figure 9.  Experimental observations of the matching method (first experiment), validating the theoretical prediction of our mathematical model. (Statistical goodness-of-fit test satisfied, p > 0.99).

    We used the value of k = 0.74, as obtained after analyzing the moving arc experiment in the previous subsection (Section 4.1.1). After that, we calculated the perceived time interval at different speeds by using Eq (31) and plotted these data along with the corresponding experimental measurements for comparison, as shown in Figures 8 and 9. Then, we performed the chi-square (χ²) goodness-of-fit test to assess the congruence of our mathematical prediction with the actual observations. We obtained p > 0.99 for both experiments, which indicates that our mathematical model is well corroborated.

    In the previous subsection (Section 4.1), we validated our model based on empirical evidence and showed that our mathematical formulation reliably describes the ongoing perception phenomenon. In this section, we will examine the anatomical region in the brain that implements the coordinate transformation equations (Eq (28)).

    In the transformation equations (Eq (28)):

    (a) The position in perceptual space (x*) is a function of the spatial (x) and temporal (t) coordinates in the retinotopic space.

    (b) The time in perceptual space (t*) is a function of the spatial (x) and temporal (t) coordinates in the retinotopic space.

    This indicates that the position in the perceptual space (x*) depends partly on the time (t) in the retinotopic space. Similarly, the time in the perceptual space (t*) depends partly on the position (x) in the retinotopic space. These observations are counterintuitive because, when an object is not moving (P = 0), the visual-spatial perception of the object depends only on the spatial position, and the temporal perception depends only on the time information. Nevertheless, the motion of the object causes two different information streams (spatial and temporal) along the motion perception pathways to interact with each other. In other words, due to this interaction process, temporal information (t) also takes part in the perception of spatial position (x*), in addition to spatial information (x). Similarly, spatial information (x) also takes part in the perception of time (t*), in addition to temporal information (t). This interaction process modulates the perception of spatial position and time, which was also observed in the experiments discussed in the previous subsection (Section 4.1).

    Now, we come to the interaction of the space-time coordinates during motion. Suppose that a pendulum is moving in the fronto-parallel plane of an observer. The observer perceives the oscillating movement of the pendulum in a plane with their naked eyes. However, when the same observer views the scene after placing a neutral density filter in front of one eye, they perceive the pendulum's movement as an elliptical orbit, making the pendulum appear to move closer (rightward swing of the pendulum) and then farther from them (leftward swing of the pendulum). This phenomenon is known as the Pulfrich illusion, named after its discoverer Carl Pulfrich, who was, ironically, blind in one eye [42]. A neutral density filter introduces a time delay in the processing of the retinal image [43,44], and, during perception, this time delay affects spatial perception. The magnitude of the perceived depth depends on the change in the pendulum's position with time (speed) [45]. In contrast to the ordinary perception of a moving object, the interaction between the spatial and temporal dimensions is directly observable in the Pulfrich phenomenon [46].

    Now, we come to the neuroanatomical correlate of the interaction between the spatial and temporal information during the perception of the moving object. Several experimental studies [47,48,49] have indicated that the middle temporal visual area (V5) is active during the perception of moving stimuli. Similarly, during the Pulfrich illusion, the middle temporal visual area (V5) is active, as has been observed experimentally [50]. Hence, we can conclude that, during the process of visual motion perception, which involves interaction between time and spatial information, the middle temporal visual area (V5) is the anatomical locus of interaction.

    Although we can conclude, by analyzing earlier studies, that area V5 of the visual brain takes part in the space-time interaction, perception is a complex process that involves several cortical areas. Therefore, we investigated the anatomical connectivity between area V5 and the brain regions that are active during visual-spatial and temporal perception by analyzing diffusion MRI scans to identify the neural tracts. We analyzed earlier literature findings [29,51,52,53,54,55,56,57,58,59,60,61,62] that delineated the brain regions responsible for (i) time perception and (ii) perception of the position of an object located in the visual field (spatial aspect). Based on this literature analysis, we identified the brain areas that are differentially activated during time perception and visual-spatial perception, as listed in Table 1.

    Table 1.  Brain regions active during the perception of time and the spatial location of an object.
    Perception of time: Prefrontal Cortex (Brodmann Area 45); Premotor Cortex (Brodmann Areas 4 & 6); Inferior Parietal Cortex (Brodmann Area 40); Putamen (Basal Ganglia).
    Perception of the spatial location of an object: Posterior Parietal Cortex (Brodmann Area 7); Intraparietal Sulcus; Superior Parietal Lobule.


    Then, we tracked the neural tracts in each hemisphere between the following brain regions:

    (i) brain regions active during time perception and area V5.

    (ii) brain regions active during visual-based spatial perception and area V5.

    (iii) V5 of the left and right hemispheres.

    We performed a tractography experiment for two subjects; the results are shown in Figures 10 and 11. The tractography results suggest that area V5 has anatomical connectivity to brain regions along the time and spatial position information processing streams and can act as a conjoining point or interaction center for these two streams. We also performed a centrality analysis of the network consisting of the brain regions in Table 1 as nodes; we found that area V5 has the highest centrality (please see Tables S1 and S2 in the Supplementary material Section S2).
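    The centrality computation can be sketched with plain degree centrality on the region graph. The edge list below is a hypothetical stand-in; the actual adjacency comes from the tractography results summarized in Tables S1 and S2.

```python
# Hypothetical edge list: each pair stands for a tract between two regions.
edges = [
    ("V5", "Prefrontal Cortex"),
    ("V5", "Premotor Cortex"),
    ("V5", "Inferior Parietal Cortex"),
    ("V5", "Putamen"),
    ("V5", "Posterior Parietal Cortex"),
    ("V5", "Intraparietal Sulcus"),
    ("V5", "Superior Parietal Lobule"),
    ("Posterior Parietal Cortex", "Intraparietal Sulcus"),
]

def degree_centrality(edge_list):
    """Normalized degree centrality: degree / (number of nodes - 1)."""
    nodes = {region for edge in edge_list for region in edge}
    degree = dict.fromkeys(nodes, 0)
    for a, b in edge_list:
        degree[a] += 1
        degree[b] += 1
    n = len(nodes)
    return {node: d / (n - 1) for node, d in degree.items()}

centrality = degree_centrality(edges)
# In this sketch V5 connects to every other region, so its centrality is maximal.
assert centrality["V5"] == max(centrality.values())
```

    A region that links both processing streams, as V5 does here, necessarily attains the highest degree centrality of the network.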

    Figure 10.  Pathways for spatiotemporal interaction (Subject 1, MRI 3 T scanner). Upper Row: Tracts between the middle temporal visual area (V5) and brain regions active during time perception. Middle Row: Tracts between the middle temporal visual area (V5) and brain regions active during the spatial location of an object. Lower Row: Tracts between the left and right middle temporal visual areas (V5). (The brain regions are listed in Table 1.).
    Figure 11.  Pathways for spatiotemporal interaction (Subject 2, MRI 7 T scanner): Upper Row: Tracts between the middle temporal visual area (V5) and brain regions active during time perception. Middle Row: Tracts between the middle temporal visual area (V5) and brain regions active during the spatial location of an object. Lower Row: Tracts between the left and right middle temporal visual areas (V5). (The brain regions are listed in Table 1.).

    Different experimental studies have pointed out the roles of different neurotransmitters during time and visuospatial perception. While an object moves in the visual field, the perception of time is modulated by dopamine levels [63,64]. Similarly, acetylcholine modulates the spatial perception of the moving object [65]. Therefore, the corresponding biochemical mechanism (of interaction between visual-spatial and temporal information) should be the modulatory effects of acetylcholine and dopamine levels on each other.

    In brain tissue, acetylcholine and dopamine release can affect each other's concentration due to mutual neuromodulation at the synaptic cleft. Here, we show that a similar mechanism will occur in area V5 of the visual cortex. Muscarinic acetylcholine receptors can regulate dopamine release in a frequency-dependent manner: in the case of low-frequency stimuli (1 to 10 Hz), acetylcholine suppresses dopamine release, whereas for high-frequency stimuli (>25 Hz), the dopamine release probability increases [66,67]. Similarly, dopamine can promote the release of acetylcholine through D1 receptors while suppressing acetylcholine release through D2 receptors [68,69,70,71]. It is also known that the dopamine D1 receptor's density is significantly higher than the D2 receptor's density in the visual cortex [72,73]. Therefore, because area V5 usually exhibits low-frequency activity, dopamine promotes acetylcholine release there, and acetylcholine, through muscarinic receptors, suppresses dopamine release.

    Taken together, the dopamine-acetylcholine interaction will cause the dopamine and acetylcholine concentrations to change with time. We can model this interaction with the Lotka-Volterra system, as developed for chemical reaction dynamics.

    Thereby, we can formulate that

    dA/dt = mA − µAD (32)
    dD/dt = −mD + βAD (33)

    where

    A = Instantaneous concentration of acetylcholine at the synaptic cleft (mmol);

    D = Instantaneous concentration of dopamine at the synaptic cleft (mmol);

    µ = Density of dopamine D1 receptors on dendrites (µmol/m²);

    β = Density of acetylcholine muscarinic receptors on dendrites (µmol/m²);

    m = Interaction parameter (per millisecond) (0 < m ≤ 1).

    We used the Runge-Kutta 4th-order method to find the numerical solution of Eqs (32) and (33). We calculated the densities of the dopamine D1 receptor (µ) and the acetylcholine muscarinic receptor (β) in the visual cortex by using experimental observations from another study [74]. We obtained µ = 0.0381 µmol/m² and β = 0.0996 µmol/m². Using these values, and keeping the interaction parameter (m) equal to 1, we calculated the dopamine and acetylcholine concentration dynamics, as shown in Figure 12.
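    The numerical integration can be reproduced with a short RK4 routine. The initial concentrations and the time step below are illustrative choices of ours; µ, β and m take the values stated above.

```python
def simulate(A0=1.0, D0=1.0, m=1.0, mu=0.0381, beta=0.0996,
             dt=0.001, steps=20000):
    """Integrate Eqs (32)-(33) with the classical 4th-order Runge-Kutta method.

    dA/dt =  m*A - mu*A*D   (acetylcholine)
    dD/dt = -m*D + beta*A*D (dopamine)
    """
    def f(A, D):
        return m * A - mu * A * D, -m * D + beta * A * D

    A, D, trajectory = A0, D0, []
    for _ in range(steps):
        k1A, k1D = f(A, D)
        k2A, k2D = f(A + 0.5 * dt * k1A, D + 0.5 * dt * k1D)
        k3A, k3D = f(A + 0.5 * dt * k2A, D + 0.5 * dt * k2D)
        k4A, k4D = f(A + dt * k3A, D + dt * k3D)
        A += dt * (k1A + 2 * k2A + 2 * k3A + k4A) / 6.0
        D += dt * (k1D + 2 * k2D + 2 * k3D + k4D) / 6.0
        trajectory.append((A, D))
    return trajectory

traj = simulate()
# The orbit circles the fixed point (m/beta, m/mu), so both
# concentrations stay positive and oscillate.
assert all(A > 0 and D > 0 for A, D in traj)
```

    The resulting A(t) and D(t) oscillate with roughly a quarter-period lag between them, consistent with the ~90° phase shift shown in Figure 12.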

    As evident from Figure 12, the resulting concentrations were oscillatory with an orthogonal phase difference (around 90°) between them. When we gradually increased the interaction parameter (m) from 0 to 1, the phase difference increased with the maximum value of 90° at m = 1 (Figure 13a). Thus, the interaction parameter affects the phase shift between oscillatory dopamine and acetylcholine concentrations, and we can formulate that the interaction parameter (m) signifies the interaction between the time and spatial information streams.

    The tilted spatiotemporal receptive field profile of complex neuronal cells in area V5 is tuned to a particular spatiotemporal frequency, resulting in the perception of an equivalent speed [75,76,77,78,79]. Therefore, a change in the perception of a moving object should result from the alterations in neuronal tuning properties in area V5. As the speed of the moving object varies, the interaction between the time and spatial information streams also varies due to changes in the interaction between dopamine and acetylcholine. Dopamine and acetylcholine interact with each other via receptors at dendrites (axon-dendrite synapse), which may modulate the spatiotemporal receptive field of complex cells and change the tuning speed of complex cells. Due to the changes in the tuning properties of the complex cells, the perception of the moving object will be modulated.

    Figure 12.  Temporal dynamics of acetylcholine concentration and dopamine concentration.
    Figure 13.  (a) Alteration of the phase shift between the oscillatory concentrations of dopamine (or acetylcholine) while the acetylcholine-dopamine interaction parameter varies. (b) Alteration of the oscillatory time period of acetylcholine (or dopamine) while the acetylcholine-dopamine interaction parameter varies.

    This dynamic mechanism induces a process by which spatial information and temporal information can interact and modulate the perception of a moving object (graphically illustrated in Figure 14). Thus, we can observe the significance of the axodendritic synapse for spatiotemporal interaction.

    Figure 14.  Biochemical basis of the interaction of the spatial information stream and the temporal information stream during the perception of a moving object; the interaction occurs at the axodendritic synapse.

    Considering that the spatial position of an object in the visual field is constant, based on their perception, an observer can predict that the object is static and for how much time. Even without any change in the spatial position of the object, the time information stream is present in the brain for time perception. When multiple objects are changing positions in the visual field at different rates, an observer can perceive that different objects are changing their positions differently. Therefore, we can infer that time and spatial information streams are represented separately and independently in the brain. However, these two streams interact to link time and change in spatial position during the visual spatiotemporal perception of a moving object. As already mentioned, this integration happens in area V5 of the visual cortex.

    Since temporal information and spatial information are independent information streams in the brain, in the vectorial representation, they should be orthogonal to each other. Figure 15(a) is the pictorial representation of the spatial vector (X) and time vector (T); because of the orthogonality, the magnitude of the resultant vector (S) follows the Pythagorean theorem.

    S = X + T (34a)

    Thus

    |S|² = |X|² + |T|² (34b)
    Figure 15.  (a) Vectorial representation of the interaction between the spatial information stream (X) and the time information stream (T). (b) Interaction between the orthogonal components of the time information stream and spatial information stream.

    S is mathematically equivalent to a displacement or length; therefore, the magnitude of S should be the same, irrespective of the frame of reference (either retinotopic space or perceptual space). Applying the constraint that |S|² will be equal in the retinotopic space and the perceptual space, we found the following (please see Supplementary material Section S3 for the derivation):

    |S|² = x² − k²t² (35a)
    Hence,  X = x  and  T = jkt (35b)

    where j = √(−1) is the imaginary unit.
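    The complex representation can be checked numerically: with X = x and T = jkt, the Pythagorean sum of Eq (34b) reproduces the invariant of Eq (35a). A minimal sketch, with k = 0.74 taken from Section 4.1.1:

```python
K = 0.74  # fidelity parameter from Section 4.1.1

def interval_squared(x, t, k=K):
    """|S|^2 via the orthogonal components of Eq (35b): X = x, T = j*k*t."""
    X = complex(x, 0.0)       # spatial information stream (real axis)
    T = complex(0.0, k * t)   # temporal information stream (imaginary axis)
    S2 = X ** 2 + T ** 2      # Eq (34b); equals x^2 - k^2*t^2, i.e., Eq (35a)
    return S2.real            # the imaginary part vanishes identically

# The complex sum reproduces the Minkowski-like invariant of Eq (35a):
assert abs(interval_squared(2.0, 1.0) - (2.0 ** 2 - K ** 2 * 1.0 ** 2)) < 1e-12
```

    Note that |T|² = (jkt)² = −k²t², which is why the temporal term enters the invariant with the opposite sign to the spatial term.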

    If we analyze the above equations, we can gain insight into what happens during the perception of a moving object. X and T represent the spatial and temporal information streams, respectively, while S represents the resultant of their interaction. We can now recollect that dopamine and acetylcholine modulate the perception of time and spatial location, respectively. The 90° orthogonal phase shift between the oscillatory dopamine and acetylcholine concentrations (shown in Figure 12) is mathematically expressed as j (= √(−1)) in Eqs (35a) and (35b), showing the interaction of time perception and spatial perception. Moreover, in Figure 13(a), the phase shift varies with changes in the interaction parameter (m). This change in phase shift will result in a variation in the interaction level because only the orthogonal components of the spatial vector and time vector will interact, as shown in Figure 15(b). As the value of the phase shift approaches 90°, the magnitude of the interacting component of the spatial information stream will increase. Consequently, the magnitude of the interaction vector S will increase too. Hence, the perception of the moving object will vary as the interaction parameter varies. Therefore, the value of the interaction parameter is proportional to the speed of the moving object (from our earlier analysis, we know that the perception of the moving object varies with the speed of the moving object).

    The environment, from different vantage points, is not a static system but a dynamic one. This dynamic nature is observable along the temporal dimension as different events occur in the spatiotemporal arena. Causality defines the framework for assessing the causal or generative relationship between two events. Deducing the cause-effect relationship between events is an innate feature of human cognition. It is an essential cognitive ability, as causal understanding is one of the fundamental differences between human and nonhuman brains [80,81]. In this paper, we formulated a theoretical framework for perceiving a moving object under the condition of invariant representation of temporal causality in the retinotopic space and perceptual space. For this, we represented the change in the position of a moving object with time as spatiotemporal coordinates in the retinotopic space and perceptual space. Thus, we could derive transformation equations that explain the transformation of the spatiotemporal coordinates of the moving object from retinotopic space to perceptual space.

    In the transformation equations (Eq (28)), P (speed) quantifies the dynamic nature of the position of the moving object, and the fidelity parameter (k) delineates the possible role of the anatomical characteristics of the brain during perception. The transformation equations predicted that the perception of a moving object would vary with speed, and this was later observed in the moving arc and time perception experiments. Equation (25) predicted that the numerical value of the fidelity parameter k would be constant, which was subsequently verified by analyzing the experimental findings of the moving arc experiment using our approach. We calculated the value of the fidelity parameter and showed it to be robustly constant (k = 0.74), as predicted by our theoretical model. The fidelity parameter represents the conformity and correspondence between the different representations of the moving object in different spaces, such as the perceptual space, cerebral retinotopic space and cerebellar retinotopic space. We investigated another experimental study (that measured the perceived time) to validate our mathematical model further. Using the transformation equations and k = 0.74 (the value obtained after analyzing the moving arc experiment), we predicted the perceived time, which satisfactorily followed the experimental outcomes (goodness-of-fit test, p > 0.99). Thus, we verified the transformation equations based on the empirical analysis. Our results indicate that the conservation of causality between the retinotopic and perceptual spaces shapes the observer's perception of a moving object. These novel findings provide a new dimension to the understanding of perception through an innovative multi-scale mathematical formulation.

    The transformation equations show that the position of the object in the perceptual space (x*) depends on both the position (x) and time (t) in the retinotopic space. Similarly, the time in the perceptual space (t*) depends on both the position (x) and time (t) in the retinotopic space. Thus, we could conclude that, in the perceptual space, the time (t) and position (x) information interact during visual-spatial and temporal perception. However, during the perception of a moving object, this interaction between the temporal and spatial information is not explicitly observable. By contrast, the interaction is explicitly observable during the perception of a moving pendulum with and without a neutral density filter in front of one eye (the Pulfrich phenomenon). The neutral density filter reduces the luminance and introduces a time delay in the processing of the retinal image [43,44], and the pendulum is perceived to move in an elliptical path.

    As per our proposed model, the change in temporal information affects the interaction between the spatial and temporal information, which modulates the perception of the position of the moving pendulum. The same visual area V5 is active during both the perception of a moving object and the Pulfrich illusion, which confirms that both phenomena involve interaction between the spatial and temporal information. Furthermore, by performing the MRI tractography investigations, we verified that area V5 is where the visual-spatial information and temporal information interact. We found neural tracts between area V5 and the relevant brain regions, thus linking the areas of visual-spatial perception and temporal perception. The centrality analysis of the network (considering the brain regions as nodes and the neural tracts as the connections between them) shows that area V5 is the most important node.

    The neurons in area V5 are tuned to particular speeds of the moving object [79,82]. We elucidated that the spatial and temporal information interaction should occur at visual area V5. We devised a mathematical model based on the Lotka-Volterra system to quantify the interaction between visual-spatial and temporal perception mediated by the acetylcholine and dopamine neurotransmitters in area V5. In our model, the interaction parameter (m) denotes the level of the interaction. For m = 1, we obtained oscillatory acetylcholine and dopamine concentrations with a phase difference of 90°. The phase difference decreases and the period of the oscillatory concentrations increases as the interaction parameter (m) decreases. Using the concepts of vector algebra, we represented the spatial and temporal perception as orthogonal vectors. Further mathematical analysis yielded that the spatiotemporal perception of moving objects can be represented as a complex number (real part: spatial information; imaginary part: temporal information). The 90° phase difference between the acetylcholine and dopamine concentrations is denoted by j (= √(−1)) in the complex number representation. We showed that the orthogonal components of the spatial information and temporal information interact. Therefore, as the interaction parameter (m) decreases, the phase difference decreases and, thus, there is a reduction in the interaction between the spatial and temporal information.

    Further, we delineated that the interaction between acetylcholine and dopamine (mathematically denoted by the interaction parameter) modifies the spatiotemporal receptive field properties of the complex cell in the area V5. Due to this, the complex cell will now be tuned for another speed. This change in the tuned speed will be minimal and insignificant for lower speeds but will gradually increase and become significant as the speed of the moving object increases. Thus, we could interpret that the interaction parameter (m) is proportional to the speed of the moving object (P), because the interaction parameter affects the interaction between the acetylcholine and dopamine, which will manifest as modulation in the perception of the moving object, similar to how the speed of the object modulates the perception.

    It is interesting to note that blind wounded soldiers with an injury to the primary visual cortex can perceive motion without perceiving properties like the color and shape of the moving stimulus [83]. This suggests that, even though the transmission of visual information from the primary visual cortex to area V5 is impaired, area MT/V5 is active per se and receives information from the other brain regions while perceiving moving objects [84]. Hence, we can conclude that spatial and temporal information streams from brain regions other than the primary visual cortex may meet at area MT/V5. Furthermore, these information streams from different brain regions may carry information extracted from the other sensory systems. Therefore, area MT/V5 may act as a spatiotemporal information interaction point for other sensory systems apart from vision. Indeed, experimental research studies also point to a similar direction regarding visual, auditory and tactile motion processing [85,86,87]. Our results imply that the proposed approach to understanding the modulation of the perception due to the dynamics of the causal states of an event may be generalized to different sensory systems. In principle, cerebral area V5 processes information from other sensory systems, too [85,86,87]; hence, our mathematical framework and analysis may be scalable and applicable to the more wide-ranging nature of area V5, which may be useful in understanding the general principle of brain function [88]. Furthermore, our model can be helpful when a human operator works in an environment consisting of very fast-moving objects (at high speed, the perception of a moving object is modulated significantly and causes errors in judgment). A perceptual error warning system based on our mathematical model can be used to issue an alerting signal to a human operator, such as to the pilot of a fighter jet or speeding driver of a vehicle.

    We can now come to the translational aspects of our approach, such as its application in neuroscience clinics, along with the incorporation of more accurate experimentation. We focused on the normal healthy brain while developing our mathematical framework. However, the aforesaid neurochemical basis of the perceptual phenomenon, as shown by the dopamine-acetylcholine interaction, signifies the possibility of a change in motion perception due to a relative imbalance of the neurotransmitters away from the normal levels occurring in healthy individuals. A disparity or abnormality of the neurotransmitter levels in the brain can occur due to several factors, such as underlying neurological or psychiatric disorders. Under these conditions, the perception of stimuli moving in the external environment can differ from that of the normal brain. For instance, patients with dopaminergic hyperactivation (such as in schizophrenia) [89,90] or cholinergic hypoactivation (such as in melancholic depression) experience altered perception of moving objects [91].

    Accordingly, an underlying impaired pathophysiological state of the brain could be detected and estimated by measuring the patient's perception of moving stimuli and comparing it with the mathematical prediction for the normal brain. The perception of moving stimuli in patients may deviate from that of healthy subjects owing to relative changes in neurotransmitter levels. Thus, our model, based on perceptual alteration, can act as a potential biomarker for identifying the presence and gauging the intensity of a neurological condition; the method could be developed into an affordable visual optometric procedure for psychiatric diagnostics in neuroscience clinics. Furthermore, precise diagnosis requires highly accurate experimentation (for example, when estimating subjective spatial and temporal segments). For such accuracy, one can use higher-fidelity visual stimulation apparatuses and measurement devices, through experimental platforms and paradigm-design facilities such as the E-Prime system [92] or the PsychoPy platform [93].
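    The comparison between a patient's judgments and the normative model prediction can be condensed into a single deviation score. The following sketch assumes hypothetical perceived-duration data and takes the normative prediction as given; the numbers are illustrative, not measurements from the study:

```python
import numpy as np

def deviation_score(measured, predicted):
    """Normalized root-mean-square deviation of measured perceptual judgments
    from the normative model prediction; a larger score suggests stronger
    perceptual alteration."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean(((measured - predicted) / predicted) ** 2)))

# Illustrative numbers only (perceived durations in seconds across conditions),
# with the normative model prediction taken as 1.0 s in each condition:
healthy = deviation_score([1.02, 0.98, 1.05], [1.0, 1.0, 1.0])
patient = deviation_score([1.30, 1.25, 1.40], [1.0, 1.0, 1.0])
# The patient's score exceeds the healthy score, flagging a case for follow-up.
```

    A clinical threshold on such a score would have to be calibrated against a normative cohort before any diagnostic use.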

    Our approach may be developed further to understand motion perception in the clinical neuroscience setting. For applicability in medical settings, a mathematical framework needs to be developed that models the modulation and alteration of the perception of a moving object as neurotransmitter levels change due to a neurological disorder. Next, to validate the framework, experiments that simultaneously measure motion perception and neurotransmitter levels need to be devised and performed on healthy controls as well as patients. Behavioral experiments can readily be planned to measure the perception of moving stimuli. Furthermore, radiological techniques such as positron emission tomography (PET) [94] and magnetic resonance spectroscopy (MRS) [95] can be used for quantitative evaluation of neurotransmitter levels in individuals, and the accuracy of the assessment can be improved with analysis modules such as the Clinica module (for PET) [96] or the FMRIB Software Library (FSL)-MRS system (for spectroscopy) [97]. This is an area we will pursue in a future investigation. In sum, the methodology of our study may have appreciable implications for the experimental aspects of clinical neuroscience.

    Pratik Purohit is grateful for a student opportunity provided by the Indian Institute of Technology (BHU), Varanasi, India. Our deepest appreciation is extended to the National Brain Research Centre, Manesar, India, for enabling the 3-T MRI scanning. Sincere thanks are also given to Maastricht University, Maastricht, the Netherlands, for the 7-T MRI scanning. For research facilitation, Prasun K. Roy is grateful to Shiv Nadar University (Dept. of Life Sciences, R-206, Shiv Nadar University, Dadri 201316, India).

    No conflicts of interest, financial or otherwise, are declared by the authors.



    [1] M. G. P. Rosa, Visual maps in the adult primate cerebral cortex: Some implications for brain development and evolution, Braz. J. Med. Biol. Res., 35 (2002), 1485–1498. https://doi.org/10.1590/S0100-879X2002001200008 doi: 10.1590/S0100-879X2002001200008
    [2] H. Strasburger, On the cortical mapping function-Visual space, cortical space, and crowding, Vision Res., 194 (2022), 107972. https://doi.org/10.1016/j.visres.2021.107972 doi: 10.1016/j.visres.2021.107972
    [3] E. L. Schwartz, Computational anatomy and functional architecture of striate cortex: A spatial mapping approach to perceptual coding, Vision Res., 20 (1980), 645–669. https://doi.org/10.1016/0042-6989(80)90090-5 doi: 10.1016/0042-6989(80)90090-5
    [4] C. Bordier, J. M. Hupé, M. Dojat, Quantitative evaluation of fMRI retinotopic maps, from V1 to V4, for cognitive experiments, Front. Hum. Neurosci., 9 (2015), 277. https://doi.org/10.3389/fnhum.2015.00277 doi: 10.3389/fnhum.2015.00277
    [5] J. Larsson, D. J. Heeger, Two retinotopic visual areas in human lateral occipital cortex, J. Neurosci., 26 (2006), 13128–13142. https://doi.org/10.1523/JNEUROSCI.1657-06.2006 doi: 10.1523/JNEUROSCI.1657-06.2006
    [6] D. M. van Es, W. van der Zwaag, T. Knapen, Topographic maps of visual space in the human cerebellum, Curr. Biol., 29 (2019), 1689–1694. https://doi.org/10.1016/j.cub.2019.04.012 doi: 10.1016/j.cub.2019.04.012
    [7] C. A. Olman, P. F. Van de Moortele, J. F. Schumacher, J. R. Guy, K. Uǧurbil, E. Yacoub, Retinotopic mapping with spin echo BOLD at 7T, Magn. Reson. Imaging, 28 (2010), 1258–1269. https://doi.org/10.1016/j.mri.2010.06.001 doi: 10.1016/j.mri.2010.06.001
    [8] B. A. Wandell, J. Winawer, Imaging retinotopic maps in the human brain, Vision Res., 51 (2011), 718–737. https://doi.org/10.1016/j.visres.2010.08.004 doi: 10.1016/j.visres.2010.08.004
    [9] A. C. Huk, D. Ress, D. J. Heeger, Neuronal basis of the motion aftereffect reconsidered, Neuron, 32 (2001), 161–172. https://doi.org/10.1016/S0896-6273(01)00452-4 doi: 10.1016/S0896-6273(01)00452-4
    [10] K. Grill-Spector, T. Kushnir, T. Hendler, S. Edelman, Y. Itzchak, R. Malach, A sequence of object-processing stages revealed by fMRI in the human occipital lobe, Hum. Brain Mapp., 6 (1998), 316–328. https://doi.org/10.1002/(SICI)1097-0193(1998)6:4<316::AID-HBM9>3.0.CO;2-6 doi: 10.1002/(SICI)1097-0193(1998)6:4<316::AID-HBM9>3.0.CO;2-6
    [11] S. Engel, X. Zhang, B. Wandell, Colour tuning in human visual cortex measured with functional magnetic resonance imaging, Nature, 388 (1997), 68–71. https://doi.org/10.1038/40398 doi: 10.1038/40398
    [12] R. Hartig, C. Battal, G. Chávez, A. Vedoveli, T. Steudel, E. Krampe, et al., Topographic mapping of the primate primary interoceptive cortex, Front. Neurosci., 11 (2017). https://doi.org/10.3389/conf.fnins.2017.94.00005 doi: 10.3389/conf.fnins.2017.94.00005
    [13] J. A. Bourne, M. G. P. Rosa, Hierarchical development of the primate visual cortex, as revealed by neurofilament immunoreactivity: Early maturation of the middle temporal area (MT), Cereb. Cortex, 16 (2006), 405–414. https://doi.org/10.1093/cercor/bhi119 doi: 10.1093/cercor/bhi119
    [14] S. Nishida, T. Kawabe, M. Sawayama, T. Fukiage, Motion perception: From detection to interpretation, Annu. Rev. Vis. Sci., 4 (2018), 501–523. https://doi.org/10.1146/annurev-vision-091517-034328 doi: 10.1146/annurev-vision-091517-034328
    [15] T. D. Albright, G. R. Stoner, Visual motion perception, Proc. Natl. Acad. Sci. U. S. A., 92 (1995), 2433–2440. https://doi.org/10.1073/pnas.92.7.2433 doi: 10.1073/pnas.92.7.2433
    [16] A. M. Derrington, H. A. Allen, L. S. Delicato, Visual mechanisms of motion analysis and motion perception, Annu. Rev. Psychol., 55 (2004), 181–205. https://doi.org/10.1146/annurev.psych.55.090902.141903 doi: 10.1146/annurev.psych.55.090902.141903
    [17] J. D. Herrington, S. Baron-Cohen, S. J. Wheelwright, K. D. Singh, E. T. Bullmore, M. Brammer, et al., The role of MT+/V5 during biological motion perception in Asperger Syndrome: An fMRI study, Res. Autism Spectr. Disord., 1 (2007), 14–27. https://doi.org/10.1016/j.rasd.2006.07.002 doi: 10.1016/j.rasd.2006.07.002
    [18] A. Antal, M. A. Nitsche, W. Kruse, T. Z. Kincses, K. P. Hoffmann, W. Paulus, Direct current stimulation over V5 enhances visuomotor coordination by improving motion perception in humans, J. Cogn. Neurosci., 16 (2004), 521–527. https://doi.org/10.1162/089892904323057263 doi: 10.1162/089892904323057263
    [19] R. Laycock, D. P. Crewther, P. B. Fitzgerald, S. G. Crewther, Evidence for fast signals and later processing in human V1/V2 and V5/MT+: A TMS study of motion perception, J. Neurophysiol., 98 (2007), 1253–1262. https://doi.org/10.1152/jn.00416.2007 doi: 10.1152/jn.00416.2007
    [20] D. Tadin, J. Silvanto, A. Pascual-Leone, L. Battelli, Improved motion perception and impaired spatial suppression following disruption of cortical area MT/V5, J. Neurosci., 31 (2011), 1279–1283. https://doi.org/10.1523/JNEUROSCI.4121-10.2011 doi: 10.1523/JNEUROSCI.4121-10.2011
    [21] J. P. H. van Santen, G. Sperling, Elaborated Reichardt detectors, J. Opt. Soc. Am. A, 2 (1985), 300. https://doi.org/10.1364/josaa.2.000300 doi: 10.1364/josaa.2.000300
    [22] E. H. Adelson, J. R. Bergen, Spatiotemporal energy models for the perception of motion, J. Opt. Soc. Am. A, 2 (1985), 284. https://doi.org/10.1364/josaa.2.000284 doi: 10.1364/josaa.2.000284
    [23] M. Mashour, The information basis in the perception of velocity, Acta Psychol. (Amst)., 48 (1981), 69–78. https://doi.org/10.1016/0001-6918(81)90049-4 doi: 10.1016/0001-6918(81)90049-4
    [24] D. Algom, L. Cohen-Raz, Visual velocity input-output functions: The integration of distance and duration onto subjective velocity, J. Exp. Psychol. Hum. Percept. Perform., 10 (1984), 486–501. https://doi.org/10.1037/0096-1523.10.4.486 doi: 10.1037/0096-1523.10.4.486
    [25] D. J. Hagler, M. I. Sereno, Spatial maps in frontal and prefrontal cortex, Neuroimage, 29 (2006), 567–577. https://doi.org/10.1016/j.neuroimage.2005.08.058 doi: 10.1016/j.neuroimage.2005.08.058
    [26] S. D. Slotnick, S. A. Klein, T. Carney, E.E. Sutter, Electrophysiological estimate of human cortical magnification, Clin. Neurophysiol., 112 (2001), 1349–1356. https://doi.org/10.1016/S1388-2457(01)00561-2 doi: 10.1016/S1388-2457(01)00561-2
    [27] J. Swearer, Visual Angle, Encycl. Clin. Neuropsychol., (2011), 2626–2627. https://doi.org/10.1007/978-0-387-79948-3_1411 doi: 10.1007/978-0-387-79948-3_1411
    [28] A. Cowey, E. T. Rolls, Human cortical magnification factor and its relation to visual acuity, Exp. Brain Res., 21 (1974), 447–454. https://doi.org/10.1007/BF00237163 doi: 10.1007/BF00237163
    [29] H. L. Ansbacher, Distortion in the perception of real movement, J. Exp. Psychol., 34 (1944), 1–23. https://doi.org/10.1037/h0061686 doi: 10.1037/h0061686
    [30] S. Kaneko, I. Murakami, Perceived duration of visual motion increases with speed, J. Vis., 9 (2009), 1–12. https://doi.org/10.1167/9.7.14 doi: 10.1167/9.7.14
    [31] K. R. Sitek, O. F. Gulban, E. Calabrese, G. A. Johnson, A. Lage-Castellanos, M. Moerel, et al., Mapping the human subcortical auditory system using histology, postmortem MRI and in vivo MRI at 7T, Elife, 8 (2019), 1–36. https://doi.org/10.7554/eLife.48932 doi: 10.7554/eLife.48932
    [32] P. J. Basser, J. Mattiello, D. Lebihan, Estimation of the effective self-diffusion tensor from the NMR spin echo, J. Magn. Reson. Ser. B, 103 (1994), 247–254. https://doi.org/10.1006/jmrb.1994.1037 doi: 10.1006/jmrb.1994.1037
    [33] L. Fan, H. Li, J. Zhuo, Y. Zhang, J. Wang, L. Chen, et al., The human brainnetome atlas: A new brain atlas based on connectional architecture, Cereb. Cortex, 26 (2016), 3508–3526. https://doi.org/10.1093/cercor/bhw157 doi: 10.1093/cercor/bhw157
    [34] R. O. Duncan, G. M. Boynton, Cortical magnification within human primary visual cortex correlates with acuity thresholds, Neuron, 38 (2003), 659–671. https://doi.org/10.1016/S0896-6273(03)00265-4 doi: 10.1016/S0896-6273(03)00265-4
    [35] J. C. Horton, W. F. Hoyt, The representation of the visual field in human striate cortex: A revision of the classic Holmes map, Arch. Ophthalmol., 109 (1991), 816–824. https://doi.org/10.1001/archopht.1991.01080060080030 doi: 10.1001/archopht.1991.01080060080030
    [36] J. Rovamo, V. Virsu, An estimation and application of the human cortical magnification factor, Exp. Brain Res., 37 (1979), 495–510. https://doi.org/10.1007/BF00236819 doi: 10.1007/BF00236819
    [37] M. M. Schira, A. R. Wade, C. W. Tyler, Two-dimensional mapping of the central and parafoveal visual field to human visual cortex, J. Neurophysiol., 97 (2007), 4284–4295. https://doi.org/10.1152/jn.00972.2006 doi: 10.1152/jn.00972.2006
    [38] D. J. Tolhurst, L. Ling, Magnification factors and the organization of the human striate cortex, Hum. Neurobiol., 6 (1988), 247–254.
    [39] J. Mate, A. C. Pires, G. Campoy, S. Estaún, Estimating the duration of visual stimuli in motion environments., Psicológica, 30 (2009), 287–300.
    [40] H. Karşilar, Y. D. Kisa, F. Balci, Dilation and constriction of subjective time based on observed walking speed, Front. Psychol., 9 (2018), 2565. https://doi.org/10.3389/fpsyg.2018.02565 doi: 10.3389/fpsyg.2018.02565
    [41] S. W. Brown, Time, change, and motion: The effects of stimulus movement on temporal perception, Percept. Psychophys., 57 (1995), 105–116. https://doi.org/10.3758/BF03211853 doi: 10.3758/BF03211853
    [42] A. Petzold, E. Pitz, The historical origin of the pulfrich effect: A serendipitous astronomic observation at the border of the Milky Way, Neuro-Ophthalmology, 33 (2009), 39–46. https://doi.org/10.1080/01658100802590829 doi: 10.1080/01658100802590829
    [43] J. A. Wilson, S. M. Anstis, Visual delay as a function of luminance, Am. J. Psychol., 82 (1969), 350–358. https://doi.org/10.2307/1420750 doi: 10.2307/1420750
    [44] A. Reynaud, R. F. Hess, Interocular contrast difference drives illusory 3D percept, Sci. Rep., 7 (2017), 1–6. https://doi.org/10.1038/s41598-017-06151-w doi: 10.1038/s41598-017-06151-w
    [45] N. Qian, R. A. Andersen, A physiological model for motion-stereo integration and a unified explanation of Pulfrich-like phenomena, Vision Res., 37 (1997), 1683–1698. https://doi.org/10.1016/S0042-6989(96)00164-2 doi: 10.1016/S0042-6989(96)00164-2
    [46] A. Anzai, I. Ohzawa, R. D. Freeman, Joint-encoding of motion and depth by visual cortical neurons: Neural basis of the Pulfrich effect, Nat. Neurosci., 4 (2001), 513–518. https://doi.org/10.1038/87462 doi: 10.1038/87462
    [47] A. A. L. D'Alfonso, J. Van Honk, D. J. L. G. Schutter, A. R. Caffé, A. Postma, E. H. F. De Haan, Spatial and temporal characteristics of visual motion perception involving V5 visual cortex, Neurol. Res., 24 (2002), 266–270. https://doi.org/10.1179/016164102101199891 doi: 10.1179/016164102101199891
    [48] G. Beckers, S. Zeki, The consequences of inactivating areas V1 and V5 on visual motion perception, Brain, 118 (1995), 49–60. https://doi.org/10.1093/brain/118.1.49 doi: 10.1093/brain/118.1.49
    [49] R. Laycock, D. P. Crewther, P. B. Fitzgerald, S. G. Crewther, Evidence for fast signals and later processing in human V1/V2 and V5/MT+: A TMS study of motion perception, J. Neurophysiol., 98 (2007), 1253–1262. https://doi.org/10.1152/jn.00416.2007 doi: 10.1152/jn.00416.2007
    [50] K. Spang, M. Morgan, Cortical correlates of stereoscopic depth produced by temporal delay, J. Vis., 8 (2008), 1–12. https://doi.org/10.1167/8.9.10 doi: 10.1167/8.9.10
    [51] R. A. Andersen, G. K. Essick, R. M. Siegel, Encoding of spatial location by posterior parietal neurons, Science, 230 (1985), 456–458. https://doi.org/10.1126/science.4048942 doi: 10.1126/science.4048942
    [52] A. M. Ferrandez, L. Hugueville, S. Lehéricy, J. B. Poline, C. Marsault, V. Pouthas, Basal ganglia and supplementary motor area subtend duration perception: An fMRI study, Neuroimage, 19 (2003), 1532–1544. https://doi.org/10.1016/S1053-8119(03)00159-9 doi: 10.1016/S1053-8119(03)00159-9
    [53] D. L. Harrington, K. Y. Haaland, R. T. Knight, Cortical networks underlying mechanisms of time perception, J. Neurosci., 18 (1998), 1085–1095. https://doi.org/10.1523/jneurosci.18-03-01085.1998 doi: 10.1523/jneurosci.18-03-01085.1998
    [54] R. B. Ivry, R. M. C. Spencer, The neural representation of time, Curr. Opin. Neurobiol., 14 (2004), 225–232. https://doi.org/10.1016/j.conb.2004.03.013 doi: 10.1016/j.conb.2004.03.013
    [55] P. Janssen, M. N. Shadlen, A representation of the hazard rate of elapsed time in macaque area LIP, Nat. Neurosci., 8 (2005), 234–241. https://doi.org/10.1038/nn1386 doi: 10.1038/nn1386
    [56] M. Jazayeri, M. N. Shadlen, A neural mechanism for sensing and reproducing a time interval, Curr. Biol., 25 (2015), 2599–2609. https://doi.org/10.1016/j.cub.2015.08.038 doi: 10.1016/j.cub.2015.08.038
    [57] C. S. Konen, S. Kastner, Representation of eye movements and stimulus motion in topographically organized areas of human posterior parietal cortex, J. Neurosci., 28 (2008), 8361–8375. https://doi.org/10.1523/JNEUROSCI.1930-08.2008 doi: 10.1523/JNEUROSCI.1930-08.2008
    [58] M. I. Leon, M. N. Shadlen, Representation of time by neurons in the posterior parietal cortex of the macaque, Neuron, 38 (2003), 317–327. https://doi.org/10.1016/S0896-6273(03)00185-5 doi: 10.1016/S0896-6273(03)00185-5
    [59] P. A. Lewis, R. C. Miall, Distinct systems for automatic and cognitively controlled time measurement: Evidence from neuroimaging, Curr. Opin. Neurobiol., 13 (2003), 250–255. https://doi.org/10.1016/S0959-4388(03)00036-9 doi: 10.1016/S0959-4388(03)00036-9
    [60] H. Onoe, M. Komori, K. Onoe, H. Takechi, H. Tsukada, Y. Watanabe, Cortical networks recruited for time perception: A monkey positron emission tomography (PET) study, Neuroimage, 13 (2001), 37–45. https://doi.org/10.1006/nimg.2000.0670 doi: 10.1006/nimg.2000.0670
    [61] F. Protopapa, M. J. Hayashi, S. Kulashekhar, W. Van Der Zwaag, G. Battistella, M. M. Murray, et al., Chronotopic maps in human supplementary motor area, PLoS Biol., 17 (2019), e3000026. https://doi.org/10.1371/journal.pbio.3000026 doi: 10.1371/journal.pbio.3000026
    [62] H. Sakata, M. Kusunoki, Organization of space perception: neural representation of three-dimensional space in the posterior parietal cortex, Curr. Opin. Neurobiol., 2 (1992), 170–174. https://doi.org/10.1016/0959-4388(92)90007-8 doi: 10.1016/0959-4388(92)90007-8
    [63] J. G. Mikhael, S. J. Gershman, Adapting the flow of time with dopamine, J. Neurophysiol., 121 (2019), 1748–1760. https://doi.org/10.1152/jn.00817.2018 doi: 10.1152/jn.00817.2018
    [64] T. Liu, P. Hu, R. Cao, X. Ye, Y. Tian, X. Chen, et al., Dopaminergic modulation of biological motion perception in patients with Parkinson's disease, Sci. Rep., 7 (2017), 1–9. https://doi.org/10.1038/s41598-017-10463-2 doi: 10.1038/s41598-017-10463-2
    [65] C. Gratton, S. Yousef, E. Aarts, D. L. Wallace, M. D'Esposito, M. A. Silver, Cholinergic, but not dopaminergic or noradrenergic, enhancement sharpens visual spatial perception in humans, J. Neurosci., 37 (2017), 4405–4415. https://doi.org/10.1523/JNEUROSCI.2405-16.2017 doi: 10.1523/JNEUROSCI.2405-16.2017
    [66] S. Threlfell, M. A. Clements, T. Khodai, I. S. Pienaar, R. Exley, J. Wess, et al., Striatal muscarinic receptors promote activity dependence of dopamine transmission via distinct receptor subtypes on cholinergic interneurons in ventral versus dorsal striatum, J. Neurosci., 30 (2010), 3398–3408. https://doi.org/10.1523/JNEUROSCI.5620-09.2010 doi: 10.1523/JNEUROSCI.5620-09.2010
    [67] S. Threlfell, T. Lalic, N. J. Platt, K. A. Jennings, K. Deisseroth, S. J. Cragg, Striatal dopamine release is triggered by synchronized activity in cholinergic interneurons, Neuron, 75 (2012), 58–64. https://doi.org/10.1016/j.neuron.2012.04.038 doi: 10.1016/j.neuron.2012.04.038
    [68] E. D. Abercrombie, P. DeBoer, Substantia nigra D1 receptors and stimulation of striatal cholinergic interneurons by dopamine: A proposed circuit mechanism, J. Neurosci., 17 (1997), 8498–8505. https://doi.org/10.1523/jneurosci.17-21-08498.1997 doi: 10.1523/jneurosci.17-21-08498.1997
    [69] B. Di Cara, F. Panayi, A. Gobert, A. Dekeyne, D. Sicard, L. De Groote, et al., Activation of dopamine D1 receptors enhances cholinergic transmission and social cognition: A parallel dialysis and behavioural study in rats, Int. J. Neuropsychopharmacol., 10 (2007), 383–399. https://doi.org/10.1017/S1461145706007103 doi: 10.1017/S1461145706007103
    [70] A. Imperato, M. C. Obinu, G. L. Gessa, Stimulation of both dopamine D1 and D2 receptors facilitates in vivo acetylcholine release in the hippocampus, Brain Res., 618 (1993), 341–345. https://doi.org/10.1016/0006-8993(93)91288-4 doi: 10.1016/0006-8993(93)91288-4
    [71] A. Martorana, F. Mori, Z. Esposito, H. Kusayanagi, F. Monteleone, C. Codecà, et al., Dopamine modulates cholinergic cortical excitability in Alzheimer's disease patients, Neuropsychopharmacology, 34 (2009), 2323–2328. https://doi.org/10.1038/npp.2009.60 doi: 10.1038/npp.2009.60
    [72] M. S. Lidow, P. S. Goldman-Rakic, D. W. Gallager, P. Rakic, Distribution of dopaminergic receptors in the primate cerebral cortex: Quantitative autoradiographic analysis using 3H.raclopride, 3H.spiperone and 3H.SCH23390, Neuroscience, 40 (1991), 657–671. https://doi.org/10.1016/0306-4522(91)90003-7 doi: 10.1016/0306-4522(91)90003-7
    [73] A. Mueller, R. M. Krock, S. Shepard, T. Moore, Dopamine receptor expression among local and visual cortex-projecting frontal eye field neurons, Cereb. Cortex, 30 (2020), 148–164. https://doi.org/10.1093/cercor/bhz078 doi: 10.1093/cercor/bhz078
    [74] K. Zilles, N. Palomero-Gallagher, Multiple transmitter receptors in regions and layers of the human cerebral cortex, Front. Neuroanat., 11 (2017), 1–26. https://doi.org/10.3389/fnana.2017.00078 doi: 10.3389/fnana.2017.00078
    [75] J. McLean, L. A. Palmer, Contribution of linear spatiotemporal receptive field structure to velocity selectivity of simple cells in area 17 of cat, Vision Res., 29 (1989), 675–679. https://doi.org/10.1016/0042-6989(89)90029-1 doi: 10.1016/0042-6989(89)90029-1
    [76] A. S. Pawar, S. Gepshtein, S. Savel'ev, T. D. Albright, Mechanisms of spatiotemporal selectivity in cortical area MT, Neuron, 101 (2019), 514–527. https://doi.org/10.1016/j.neuron.2018.12.002 doi: 10.1016/j.neuron.2018.12.002
    [77] N. J. Priebe, C. R. Cassanello, S. G. Lisberger, The neural representation of speed in macaque area MT/V5, J. Neurosci., 23 (2003), 5650–5661. https://doi.org/10.1523/jneurosci.23-13-05650.2003 doi: 10.1523/jneurosci.23-13-05650.2003
    [78] D. Giaschi, A. Zwicker, S. A. Young, B. Bjornson, The role of cortical area V5/MT+ in speed-tuned directional anisotropies in global motion perception, Vision Res., 47 (2007), 887–898. https://doi.org/10.1016/j.visres.2006.12.017 doi: 10.1016/j.visres.2006.12.017
    [79] J. A. Perrone, A. Thiele, A model of speed tuning in MT neurons, Vision Res., 42 (2002), 1035–1051. https://doi.org/10.1016/S0042-6989(02)00029-9 doi: 10.1016/S0042-6989(02)00029-9
    [80] D. C. Penn, K. J. Holyoak, D. J. Povinelli, Darwin's mistake: Explaining the discontinuity between human and nonhuman minds, Behav. Brain Sci., 31 (2008), 109–178. https://doi.org/10.1017/S0140525X08003543 doi: 10.1017/S0140525X08003543
    [81] M. Stuart-Fox, The origins of causal cognition in early hominins, Biol. Philos., 30 (2015), 247–266. https://doi.org/10.1007/s10539-014-9462-y doi: 10.1007/s10539-014-9462-y
    [82] J. A. Perrone, A. Thiele, Speed skills: measuring the visual speed analyzing properties of primate MT neurons, Nat. Neurosci., 4 (2001), 526–532. https://doi.org/10.1038/87480 doi: 10.1038/87480
    [83] G. Riddoch, Dissociation of visual perceptions due to occipital injuries, with especial reference to appreciation of movement, Brain, 40 (1917), 15–57. https://doi.org/10.1093/brain/40.1.15 doi: 10.1093/brain/40.1.15
    [84] S. Zeki, D.H. Ffytche, The Riddoch syndrome: Insights into the neurobiology of conscious vision, Brain, 121 (1998), 25–45. https://doi.org/10.1093/brain/121.1.25 doi: 10.1093/brain/121.1.25
    [85] T. Amemiya, B. Beck, V. Walsh, H. Gomi, P. Haggard, Visual area V5/hMT+ contributes to perception of tactile motion direction: A TMS study, Sci. Rep., 7 (2017), 1–7. https://doi.org/10.1038/srep40937 doi: 10.1038/srep40937
    [86] K. Krug, A common neuronal code for perceptual processes in visual cortex? Comparing choice and attentional correlates in V5/MT, Philos. Trans. R. Soc. B Biol. Sci., 359 (2004), 929–941. https://doi.org/10.1098/rstb.2003.1415 doi: 10.1098/rstb.2003.1415
    [87] C. Poirier, O. Collignon, A. G. DeVolder, L. Renier, A. Vanlierde, D. Tranduy, et al., Specific activation of the V5 brain area by auditory motion processing: An fMRI study, Cogn. Brain Res., 25 (2005), 650–658. https://doi.org/10.1016/j.cogbrainres.2005.08.015 doi: 10.1016/j.cogbrainres.2005.08.015
    [88] S. Zeki, Area V5—a microcosm of the visual brain, Front. Integr. Neurosci., 9 (2015), 1–18. https://doi.org/10.3389/fnint.2015.00021 doi: 10.3389/fnint.2015.00021
    [89] J. Kim, D. Norton, R. McBain, D. Ongur, Y. Chen, Deficient biological motion perception in schizophrenia: Results from a motion noise paradigm, Front. Psychol., 4 (2013), 391. https://doi.org/10.3389/fpsyg.2013.00391 doi: 10.3389/fpsyg.2013.00391
    [90] Y. Chen, Abnormal visual motion processing in schizophrenia: A review of research progress, Schizophr. Bull., 37 (2011), 709–715. https://doi.org/10.1093/schbul/sbr020 doi: 10.1093/schbul/sbr020
    [91] J. D. Golomb, J. R. B. McDavitt, B. M. Ruf, J. I. Chen, A. Saricicek, K. H. Maloney, et al., Enhanced visual motion perception in major depressive disorder, J. Neurosci., 29 (2009), 9072–9077. https://doi.org/10.1523/JNEUROSCI.1003-09.2009 doi: 10.1523/JNEUROSCI.1003-09.2009
    [92] L. Richard, D. Charbonneau, An introduction to E-Prime, Tutor. Quant. Methods Psychol., 5 (2009), 68–76. https://doi.org/10.20982/tqmp.05.2.p068 doi: 10.20982/tqmp.05.2.p068
    [93] J. W. Peirce, PsychoPy-Psychophysics software in Python, J. Neurosci. Methods, 162 (2007), 8–13. https://doi.org/10.1016/j.jneumeth.2006.11.017 doi: 10.1016/j.jneumeth.2006.11.017
    [94] J. Ceccarini, H. Liu, K. Van Laere, E. D. Morris, C. Y. Sander, Methods for quantifying neurotransmitter dynamics in the living brain with PET imaging, Front. Physiol., 11 (2020), 792. https://doi.org/10.3389/fphys.2020.00792 doi: 10.3389/fphys.2020.00792
    [95] E. J. Novotny, R. K. Fulbright, P. L. Pearl, K. M. Gibson, D. L. Rothman, Magnetic resonance spectroscopy of neurotransmitters in human brain, Ann. Neurol., 54 (2003), S25–S31. https://doi.org/10.1002/ana.10697 doi: 10.1002/ana.10697
    [96] A. Routier, N. Burgos, M. Díaz, M. Bacci, S. Bottani, O. El-Rifai, et al., Clinica: An open-source software platform for reproducible clinical neuroscience studies, Front. Neuroinform., 15 (2021), 39. https://doi.org/10.3389/fninf.2021.689675 doi: 10.3389/fninf.2021.689675
    [97] W. T. Clarke, C. J. Stagg, S. Jbabdi, FSL-MRS: An end-to-end spectroscopy analysis package, Magn. Reson. Med., 85 (2021), 2950–2964. https://doi.org/10.1002/mrm.28630 doi: 10.1002/mrm.28630
  • mbe-20-05-400-supplementary.pdf
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)