    Cardiovascular disease (CVD) is a common class of illness affecting the heart and blood vessels. CVD is characterized by shortness of breath, physical weakness, swollen feet, and exhaustion. Risk factors for CVD include a high body-fat percentage, smoking, inactive lifestyles, and high blood pressure [1]. According to the World Health Organization (WHO), CVD is the leading cause of death, killing 18 million people a year. Coronary artery disease is one type of heart disease [2]. Consequently, stroke and heart disease are considered major public health concerns. Clinicians regularly use angiography to diagnose CVD. However, analytical techniques, medical expertise, and other resources are scarce in underdeveloped regions, so this diagnostic procedure is time-consuming and expensive, requiring the investigation of numerous variables [3]. Recently, heart disease has become one of the most pressing medical topics because the death toll from CVD has risen. Prediction helps in the early recognition of the disease and is the most effective intervention. In medical diagnostics, the use of machine learning (ML) has become popular [4]. ML has been shown to enhance the classification and identification of diseases by providing information that aids medical experts in identifying illnesses, supporting human health, and reducing the death rate [5]. ML classification techniques are frequently utilized when estimating the probability of disease occurrence.

    ML is the capability of computers to learn without being explicitly programmed [6]. Generally, in an artificial intelligence (AI) model, computers learn from prior experience and data. The quantity of data is growing quickly, so it is essential to handle it efficiently. It is often quite complex for human beings to manually extract valuable information from raw data because of its imprecision, similarity, variability, and uncertainty [7]. This is where ML is beneficial. With the excess of information in big data, the demand for ML is growing rapidly, as it extracts more precise, helpful, and stable information from raw data. One of the chief aims of ML is to permit machines to learn without being explicitly programmed. ML has seen remarkable innovation in numerous areas, such as pre-processing methods and learning procedures, during the last few years [8]. Deep learning (DL) originated from artificial neural networks (ANNs), which are an important technique for delivering appropriate algorithmic structures. DL permits computational methods composed of numerous processing layers to learn data representations with multiple levels of abstraction and requires little manual work [9]. DL methodology has shown great potential in various fields of health care, with outstanding performance in natural language processing (NLP), computer vision (CV), extraction from electronic health records, health modalities, and sensor data analytics [10].

    This study introduces a new Nature Inspired Metaheuristic Algorithm with Deep Learning for Healthcare Data Analysis (NIMADL-HDA) technique. The NIMADL-HDA technique examines healthcare data to recognize and classify CVD. In the presented NIMADL-HDA technique, Z-score normalization is initially performed to normalize the input data. In addition, the NIMADL-HDA technique makes use of a barnacle mating optimizer (BMO) for the feature selection (FS) process. For healthcare data classification, a convolutional long short-term memory (CLSTM) model is employed. Finally, the prairie dog optimization (PDO) algorithm is exploited for the optimal hyperparameter selection process. The NIMADL-HDA methodology was experimentally validated on a benchmark healthcare dataset. The NIMADL-HDA technique offers several key contributions to healthcare data analysis:

    • An automated NIMADL-HDA method including BMO-based feature subset selection, CLSTM-based classification, and PDO-based hyperparameter tuning is proposed for CVD classification. To the best of our knowledge, the NIMADL-HDA method has not previously been reported in the literature.

    • BMO contributes to the model's efficacy by providing an optimum subset of applicable features for healthcare data analysis, thereby enhancing interpretability and reducing dimensionality.

    • CLSTM captures the temporal dependency in healthcare information, which is crucial to analyzing medical sensor data or time-series patient information and improving the model's capability to recognize trends and patterns over time.

    • PDO efficiently tunes the hyperparameters, ensuring that the model is fine-tuned for better performance on healthcare data.

    Khanna et al. [11] developed an innovative Internet of Things (IoT) and DL-enabled healthcare disease diagnosis (IoTDL-HDD) technique. This methodology uses a bidirectional long short-term memory (BiLSTM) feature extraction model to extract valuable feature vectors from electrocardiogram (ECG) signals. To improve the efficacy of the BiLSTM model, the artificial flora optimization (AFO) methodology was used for hyperparameter optimization. Additionally, a fuzzy deep neural network (FDNN) classification algorithm was used to assign the appropriate class labels to ECG signals. Rath et al. [12] identified appropriate DL and ML technologies and proposed and tested essential classification methods. The Generative Adversarial Network (GAN) technique was selected with the main aim of handling imbalanced data by creating and employing additional synthetic data for recognition purposes. Additionally, an ensemble technique employing LSTM and GAN was presented.

    In [13], a fog-based cardiac health recognition framework, termed FogDLearner, was developed. FogDLearner uses distributed resources to identify a person's cardiac health without compromising Quality of Service (QoS) or accuracy. FogDLearner executes a DL-based classification algorithm to predict the cardiac health of the user. The proposed framework was evaluated on the PureEdgeSim simulator. In [14], a DL-based system, chiefly a convolutional neural network (CNN) with BiLSTM, was developed. Only the most appropriate features were selected through FS, which ranks and chooses the features that are most valuable in the given disease dataset. Afterwards, the CNN + BiLSTM-based hybrid DL technique was employed to predict CVD.

    Hussain et al. [15] developed a new DL architecture that employed one-dimensional CNNs to detect healthy and non-healthy individuals using balanced datasets, overcoming the limitations of traditional ML models. Many medical parameters were utilized to estimate risk profiles in patients, which supported early diagnosis. Several regularization models were applied to avoid overfitting in the presented method. Bensenane et al. [16] proposed a decision support system (DSS) to produce a diagnosis of CVD. It employed DL techniques that classified ECG signals. Specifically, a dual-stage LSTM-based NN framework with extensive pre-processing of ECG signals was designed as a diagnosis-assistance method for cardiac arrhythmia recognition based on ECG signal analysis.

    In [17], a smart healthcare method was proposed that employed ensemble DL and feature fusion techniques. First, the feature fusion model integrates the extracted features to produce valuable healthcare information. Second, an information gain method removes irrelevant and redundant features. Additionally, a conditional probability approach computes a precise feature weight for every class. Lastly, the ensemble DL method was trained for heart disease prediction. Najafi et al. [18] presented an effective and precise method that employed ANNs, FS, and multiple-criteria decision-making (MCDM) models. Suitable features were chosen by employing five FS models. Then, three ANNs for CVD prediction were applied. Furthermore, a Particle Swarm Optimizer (PSO) was utilized. A new combined weighting model that utilized the Best-Worst Method (BWM) and the Method based on the Removal Effects of Criteria (MEREC) was developed.

    The research gap in healthcare data classification highlights the urgent need for advanced methodologies in hyperparameter tuning and feature selection. Existing techniques often lack a comprehensive approach to address the intricate nature of healthcare datasets, which are characterized by diverse feature types and high dimensionality. Insufficient attention to the feature selection method results in poor interpretability and sub-optimal model performance. Moreover, the absence of systematic hyperparameter tuning hampers the ability of models to adapt to the unique characteristics of healthcare information, which limits generalization across different healthcare scenarios. Bridging this gap requires the development of robust methods that incorporate advanced hyperparameter tuning strategies and efficient FS mechanisms, ensuring the creation of interpretable and accurate healthcare classification techniques that are crucial for informed medical decision-making.

    In this research, we focus on the development of the NIMADL-HDA technique. The NIMADL-HDA technique examines healthcare data to recognize and classify CVD. The presented NIMADL-HDA technique comprises Z-score normalization, BMO-based FS, CLSTM-based recognition, and PDO-based hyperparameter tuning. Figure 1 exemplifies the workflow of the NIMADL-HDA technique.

    Figure 1.  Workflow of NIMADL-HDA technique.

    Z-score normalization is also known as standardization. It is a statistical technique employed to transform and rescale data by expressing every data point's value in terms of standard deviations from the mean of the dataset. The procedure consists of subtracting the mean of the dataset from every data point and dividing the result by the standard deviation (SD). The resulting z-scores offer a standardized measure of how many SDs a specific data point lies from the mean. Z-score normalization is widely used in areas such as ML and statistics to ensure that variables with dissimilar scales and units are placed on a common scale, thus enabling meaningful comparisons and analyses across datasets.
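    As an illustration, the transformation above can be sketched as a column-wise operation on a feature matrix (the function name and the sample values are hypothetical):

```python
import numpy as np

def z_score_normalize(X):
    """Rescale each feature (column) to zero mean and unit variance."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    std[std == 0] = 1.0  # guard against constant features
    return (X - mean) / std

# Hypothetical sample: rows are patients, columns are two clinical features.
X = np.array([[120.0, 80.0],
              [140.0, 90.0],
              [100.0, 70.0]])
Z = z_score_normalize(X)
```

    Each column of `Z` now has mean 0 and SD 1, so features measured in different units contribute comparably to downstream models.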

    In this stage, the NIMADL-HDA technique makes use of BMO for the FS process. Barnacles attach firmly to solid surfaces such as ships, rocks, corals, and even sea turtles [19]. They are hermaphroditic animals that possess both male and female reproductive organs. One of the distinctive features of barnacles is the length of their penis, which can stretch to several times (7 to 8 times) the length of their body.

    The mating behavior of barnacles occurs in two modes: normal copulation and sperm-casting. In normal copulation, a male barnacle reaches a female barnacle and the mating process takes place. Sperm-casting enables isolated barnacles to reproduce and is accomplished by releasing fertilized eggs into the water. This behavior produces new offspring and inspired the design of BMO for solving optimization problems.

    Comparable to other evolutionary methods such as the genetic algorithm (GA), BMO reproduces a parent-selection procedure to generate new offspring. However, its selection scheme differs from the GA in that it does not employ any familiar operator such as tournament or roulette-wheel selection. The selection of parent barnacles for reproduction follows the simplifying rules below:

    • Barnacles are hermaphroditic animals, so a female barnacle can in principle be fertilized by many male barnacles; nevertheless, it is assumed that each barnacle is fertilized by only one other barnacle. This assumption avoids complicating the algorithm.

    • The value of $ pl $ must be set by the user, and the selection of parent barnacles is performed randomly. $ pl $ is a control parameter through which the user can obtain good optimization results, in addition to the number of barnacles and the maximum number of iterations.

    • The Hardy-Weinberg principle is applied when the selected parent barnacles lie within the range of $ pl $. Otherwise, the sperm-cast procedure is executed to obtain new offspring.

    The generation of new offspring is governed by the Hardy-Weinberg principle, as expressed by Eqs (1) and (2):

    $ {x}_{i}^{{N}_{-}new} = p{x}_{barnacl{e}_{-}m}^{N}+q{x}_{barnacl{e}_{-}d}^{N}~~ for~~ k\le pl $ (1)
    $ {x}_{i}^{{N}_{-}new} = rand\left(\right)\times {x}_{barnacl{e}_{-}m}^{N}~~ for~~ k > pl $ (2)

    where $ k = |barnacl{e}_{-}m-barnacl{e}_{-}d| $, $ p $ refers to a normally distributed pseudo-random number, $ q = (1-p) $, and $ {x}_{barnacl{e}_{-}m}^{N} $ and $ {x}_{barnacl{e}_{-}d}^{N} $ denote randomly selected variables of the barnacle's parents (Mum and Dad), respectively. $ rand\left(\right) $ signifies a random number in the range $ \left(0\sim 1\right) $. In these equations, $ p $ and $ q $ signify the inheritance percentages from the two parents. For instance, if $ p $ is generated as 0.80, the new offspring inherits 80% of Mum's features and 20% of Dad's features. Eq (1) is essentially treated as the exploitation process of the optimizer, whereas Eq (2) is treated as the exploration process of the proposed BMO. Additionally, it is worth remarking that the exploration process (sperm-cast) involves only the barnacle's mum, with the sperm assumed to be released into the water by other barnacles.

    After offspring generation, the population is doubled relative to the initial population. Like the GA, BMO requires a sorting procedure: the best solutions of a given iteration are placed at the top of the doubled population and retained for the next generation, whereas the bottom half is discarded.
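    A compact sketch of one BMO generation under the rules above, covering Eqs (1) and (2) and the double-and-sort step (the function names, the sphere test objective, and the uniform draw for $ p $ are illustrative assumptions; the text specifies a normally distributed $ p $):

```python
import numpy as np

rng = np.random.default_rng(42)

def bmo_generation(pop, fitness, pl):
    """One BMO generation: mate random parents (Eqs 1-2), double the population, keep the best half."""
    n, dim = pop.shape
    offspring = np.empty_like(pop)
    for i in range(n):
        mum, dad = rng.integers(0, n, size=2)  # random parent selection
        k = abs(int(mum) - int(dad))           # "distance" between the chosen parents
        if k <= pl:                            # Hardy-Weinberg mating, Eq (1)
            p = rng.random()                   # inheritance share (uniform here for simplicity)
            offspring[i] = p * pop[mum] + (1 - p) * pop[dad]
        else:                                  # sperm-cast exploration, Eq (2)
            offspring[i] = rng.random() * pop[mum]
    doubled = np.vstack([pop, offspring])      # population is doubled
    order = np.argsort([fitness(x) for x in doubled])
    return doubled[order[:n]]                  # top half survives, bottom half is discarded

sphere = lambda x: float(np.sum(x ** 2))       # toy minimization objective
pop = rng.uniform(-5.0, 5.0, size=(10, 3))
init_best = min(sphere(x) for x in pop)
for _ in range(50):
    pop = bmo_generation(pop, sphere, pl=2)
best = min(sphere(x) for x in pop)
```

    Because the parent population survives into the doubled pool before truncation, the best fitness can never worsen across generations.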

    When designing an optimization approach, the fitness function (FF) remains the key element that must be taken into account [20]. Since FS is a multi-objective optimization problem, both objectives must be considered when assessing a solution. The fitness of a feature subset is defined by the classifier accuracy (to be maximized) and the number of selected features (to be minimized). The most common approach for multi-objective formulation is aggregation. In the proposed technique, the objectives are combined into a single objective, where each weight reflects the importance of the corresponding objective:

    $ Fitness\left(X\right) = \alpha \cdot E\left(X\right)+\beta \cdot \left(1-\frac{\left|R\right|}{\left|N\right|}\right) $ (3)

    Algorithm 1: Steps involved in BMO Algorithm
    Step 1: Parameter Initialization
    Initialize search space dimension, Population size, limits for variables, and objective function
    Randomly produce initial position for all the barnacles within the given bounds, which represents possible solution.
    Calculate the fitness of all the positions using the objective function.
    Step 2: Selection
    For every barnacle, randomly select two other barnacles as possible mates.
    Each selection is influenced by the "attractiveness" score, which is inversely proportional to the individual's fitness (best fitness = more attractive).
    Step 3: Reproduction
    For all the pairs of mates, produce an offspring location using a weighted average of their positions, similar to the crossover operation in genetic algorithm.
    The weighting is based on the "attractiveness" of all the parents, providing more weight to the parent with best fitness.
    Step 4: Mutation
    With a specific probability, employ a mutation operator to all the offspring positions.
    Step 5: Evaluation and Selection
    Compute the fitness of all the offspring positions using the objective function.
    For every barnacle, compare its fitness with the optimum offspring produced from its mating pairs.
    Replace the barnacle with the best individual (either itself or the offspring) for the next generation.
    Step 6: Repeat steps 2-5 for a predetermined number of iterations.
    Step 7: After all iterations, the barnacle with the best (lowest) fitness represents the optimum solution found by the BMO technique.

    In Eq (3), the fitness value of a subset $ X $ is represented as $ Fitness\left(X\right) $, and the classifier error rate obtained using the features selected in the subset $ X $ is denoted by $ E\left(X\right) $. The number of selected features and of original features in the dataset are $ \left|R\right| $ and $ \left|N\right| $, respectively, and $ \alpha \in \left[\mathrm{0, 1}\right] $ and $ \beta = \left(1-\alpha \right) $ are the weights of the classifier error and the feature-reduction ratio, respectively. The solution representation is an additional factor that must be taken into account when designing an optimization approach for FS problems. In this study, the feature subset is represented by a binary vector of $ N $ components, where $ N $ is the total number of features in the original dataset. Every dimension holds a binary value ($ 0 $ or $ 1 $), where 1 specifies that the corresponding feature is selected and 0 denotes that it is not.
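    The aggregated fitness of Eq (3) over such a binary mask can be sketched as follows (the function name, the weight value, and the example error rate are assumptions for illustration):

```python
def fs_fitness(mask, error_rate, alpha=0.9):
    """Eq (3): Fitness(X) = alpha * E(X) + beta * (1 - |R|/|N|), with beta = 1 - alpha."""
    beta = 1.0 - alpha
    N = len(mask)   # total number of features in the dataset
    R = sum(mask)   # number of selected features (1 = selected, 0 = not selected)
    return alpha * error_rate + beta * (1.0 - R / N)

# Hypothetical subset: 3 of 10 features selected, 5% classifier error.
f = fs_fitness([1, 0, 1, 0, 0, 1, 0, 0, 0, 0], error_rate=0.05)
```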

    For healthcare data classification, the CLSTM model can be employed. The CLSTM builds on the LSTM network, which handles the flow of information within the cell through gates, namely the forget $ \left({f}_{t}\right) $, input $ \left({i}_{t}\right) $, and output $ \left({o}_{t}\right) $ gates [21]. These gates control how information is combined to update the internal cell state (CS), selectively retaining or discarding information. If the input gate is activated, the input is accumulated into the cell. If the forget gate is activated, the preceding CS is forgotten. The output gate determines whether the cell output is propagated to the final hidden layer (HL). CLSTM differs from LSTM in that it uses convolution operations rather than matrix multiplication in the "input-to-state" and "state-to-state" transitions; its inputs $ {X}_{1}, \dots, {X}_{t} $, cell outputs $ {C}_{1}, \dots, {C}_{t} $, hidden states $ {\mathcal{H}}_{1}, \dots, {\mathcal{H}}_{t} $, forget gate $ \left({f}_{t}\right) $, input gate $ \left({i}_{t}\right) $, and output gate $ \left({o}_{t}\right) $ are all three-dimensional tensors. The benefit of this technique is that it can eliminate a large number of redundant spatial features while respecting the temporal structure of the data, extracting spatial information for joint modeling of the temporal and spatial dimensions. Figure 2 depicts the framework of CLSTM.

    Figure 2.  Architecture of CLSTM.

    The transfer relations among the CLSTM gates are given by Eq (4), where $ {i}_{t} $ denotes the input gate, $ {f}_{t} $ denotes the forget gate, $ {C}_{t} $ signifies the cell state, $ {o}_{t} $ represents the output gate, $ {\mathcal{H}}_{t} $ represents the HL output, "*" symbolizes the convolution operator, $ "\mathrm{◯}" $ represents the Hadamard product, and $ \sigma $ represents the sigmoid activation function, whose formula is given in Eq (5). The CLSTM technique employs a peephole LSTM structure, in which the cell state is consulted when computing the forget and input gates so as to retain information. The forget gate controls and removes information that is deemed redundant, retains the beneficial data, and passes it on. The retained information arrives at the input gate, the information to be updated is determined via the sigmoid layer, and new candidate cell information is obtained via the $ \mathrm{t}\mathrm{a}\mathrm{n}\mathrm{h} $ layer to update the cell. Finally, the output of the CLSTM unit is obtained by multiplying the sigmoid output of the output gate with the memory cell state passed through $ tanh $.

    $ {i}_{t} = \sigma \left({W}_{xi}\mathrm{*}{X}_{t}+{W}_{hi}\mathrm{*}{\mathcal{H}}_{t-1}+{W}_{ci}\mathrm{◯}{C}_{t-1}+{b}_{i}\right) $
    $ {f}_{t} = \sigma \left({W}_{xf}\mathrm{*}{X}_{t}+{W}_{hf}\mathrm{*}{\mathcal{H}}_{t-1}+{W}_{cf}\mathrm{◯}{C}_{t-1}+{b}_{f}\right) $
    $ {o}_{t} = \sigma \left({W}_{xo}\mathrm{*}{X}_{t}+{W}_{ho}\mathrm{*}{\mathcal{H}}_{t-1}+{W}_{co}\mathrm{◯}{C}_{t}+{b}_{o}\right) $ (4)
    $ {C}_{t} = {f}_{t}\mathrm{◯}{C}_{t-1}+{i}_{t}\mathrm{◯}\mathrm{t}\mathrm{a}\mathrm{n}\mathrm{h}\left({W}_{xc}\mathrm{*}{X}_{t}+{W}_{hc}\mathrm{*}{\mathcal{H}}_{t-1}+{b}_{c}\right) $
    $ {\mathcal{H}}_{t} = {o}_{t}\mathrm{◯}\mathrm{t}\mathrm{a}\mathrm{n}\mathrm{h}\left({C}_{t}\right) $
    $ \sigma \left(x\right) = \frac{1}{1+{e}^{-x}} $ (5)
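    A numpy sketch of a single CLSTM step following Eq (4), reduced to one channel and 1-D "same"-padded convolutions so the gate structure stays visible (the weight shapes, kernel size, and random initialization are assumptions; a production model would use learned multi-channel 2-D kernels):

```python
import numpy as np

rng = np.random.default_rng(0)
L, K = 8, 3  # spatial width of each input frame, convolution kernel size
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
conv = lambda w, x: np.convolve(x, w, mode="same")  # the "*" operator of Eq (4)

W = {k: rng.normal(0.0, 0.1, K) for k in ("xi", "hi", "xf", "hf", "xo", "ho", "xc", "hc")}
Wp = {k: rng.normal(0.0, 0.1, L) for k in ("ci", "cf", "co")}  # peephole (Hadamard) weights
b = {k: np.zeros(L) for k in ("i", "f", "o", "c")}

def clstm_step(x, h_prev, c_prev):
    """One ConvLSTM update: convolutional gates with peephole Hadamard terms, as in Eq (4)."""
    i = sigmoid(conv(W["xi"], x) + conv(W["hi"], h_prev) + Wp["ci"] * c_prev + b["i"])
    f = sigmoid(conv(W["xf"], x) + conv(W["hf"], h_prev) + Wp["cf"] * c_prev + b["f"])
    c = f * c_prev + i * np.tanh(conv(W["xc"], x) + conv(W["hc"], h_prev) + b["c"])
    o = sigmoid(conv(W["xo"], x) + conv(W["ho"], h_prev) + Wp["co"] * c + b["o"])
    h = o * np.tanh(c)  # hidden output H_t
    return h, c

h, c = np.zeros(L), np.zeros(L)
for t in range(5):  # feed a short random input sequence
    h, c = clstm_step(rng.normal(size=L), h, c)
```

    Since $ {\mathcal{H}}_{t} $ is a sigmoid gate times a tanh of the cell state, every component of the hidden output stays strictly inside $(-1, 1)$.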

    Finally, the PDO algorithm can be exploited for the optimal hyperparameter selection process. The PDO algorithm selects the following hyperparameters: learning rate, number of epochs, and batch size. PDO is an optimization algorithm that simulates the foraging behavior of prairie dogs (PDs) [22]. PDs engage every day in social activities such as foraging, burrow construction, burrow maintenance, and defense against predators. Accordingly, based on the daily activities of PDs, the PDO technique is divided into four time stages, across which exploitation and exploration are separated.

    The foraging activity of every PD is represented by a $ 1\times \mathrm{d}\mathrm{i}\mathrm{m} $ vector in the search space. To prevent PDs from straying off their trajectory, an upper bound $ UB $ and a lower bound $ LB $ limit their movement range. The set of PD positions constitutes the candidate solutions to the problem.

    During the first period, the position of a PD in the foraging activity is connected to the food-source alarm $ \rho $, the quality of the current food, and the position of a randomly generated PD. $ \rho $ is a fixed food-source alarm set at 0.1 kHz. Computationally, the quality of the existing food is defined via the effect of the best solution obtained so far, $ eCBes{t}_{i, j} $. The position of a randomly generated PD is captured by the random cumulative effect $ CP{D}_{i, j} $. The calculation expressions are shown below:

    $ eCBes{t}_{i, j} = GBes{t}_{i, j}\times \Delta +\frac{P{D}_{i, j}\times mean\left(P{D}_{i}\right)}{GBes{t}_{i, j}\times \left(U{B}_{j}-L{B}_{j}\right)+\Delta } $ (6)
    $ CP{D}_{i, j} = \frac{GBes{t}_{i, j}-rP{D}_{i, j}}{GBes{t}_{i, j}+\Delta } $ (7)

    where $ GBes{t}_{i, j} $ is the global optimum solution obtained so far, $ \Delta $ is a very small number that accounts for differences among PDs, and $ rP{D}_{i,j} $ denotes the position of a random solution. The equation that updates the position of a PD searching for food is given below:

    $ P{D}_{i+1, j+1} = GBes{t}_{i, j}-eCBes{t}_{i, j}\times \rho -CP{D}_{i, j}\times Levy\left(n\right) $ (8)

    In the above expression, Levy represents a Levy distribution with intermittent jumps. After discovering new food sources, PDs dig and construct new burrows around them. At this stage, the position of a PD is connected to the digging strength DS of the burrows. The update equation for DS is given below:

    $ DS = 1.5\times r\times {\left(1-\frac{t}{T}\right)}^{\left(2\frac{t}{T}\right)} $ (9)

    where $ r $ alternates between $ -1 $ and 1 according to the parity of the current iteration number, $ t $ denotes the current iteration number, and $ T $ refers to the maximal iteration number. During the second period, Eq (10) gives the position update of a PD:

    $ P{D}_{i+1, j+1} = GBes{t}_{i, j}\times rPD\times DS\times Levy\left(n\right) $ (10)

    During the third time stage, PDs use the quality $ \varepsilon $ of the current food source and the cumulative effect of all PDs to randomly update their positions. Computationally, the quality of the existing food source $ \varepsilon $ is a small number. The procedure for updating the position of a PD is shown below:

    $ P{D}_{i+1, j+1} = GBes{t}_{i, j}-eCBes{t}_{i, j}\times \varepsilon -CP{D}_{i, j}\times rand $ (11)

    where rand represents a random number between $ 0 $ and 1. During the foraging procedure, PDs are frequently attacked by predators. Therefore, a predator attack is modeled as a predation effect PE. The $ PE $ calculation equation is given below:

    $ PE = 1.5\times {\left(1-\frac{t}{T}\right)}^{\left(2\frac{t}{T}\right)} $ (12)

    The position of a PD during the fourth period is updated according to Eq (13):

    $ P{D}_{i+1, j+1} = GBes{t}_{i, j}\times PE\times rand $ (13)

    The PDO technique thus mimics the behavior of PDs in foraging, digging, and avoiding natural enemies, separating this behavior into four time periods, where $ \rho $ refers to the food-source alarm, $ CP{D}_{i, j} $ is the cumulative effect of all PDs, DS signifies the digging strength, $ \varepsilon $ is the quality of the food source, and PE denotes the predation effect of predators, with PDs regularly updating their positions to discover better food sources. Eq (14) summarizes the position updates of PDs over the four time periods.

    $ P{D}_{i+1, j+1} = \begin{cases} GBes{t}_{i, j}-eCBes{t}_{i, j}\times \rho -CP{D}_{i, j}\times Levy\left(n\right) & t < \frac{T}{4} \\ GBes{t}_{i, j}\times rPD\times DS\times Levy\left(n\right) & \frac{T}{4}\le t < \frac{T}{2} \\ GBes{t}_{i, j}-eCBes{t}_{i, j}\times \varepsilon -CP{D}_{i, j}\times rand & \frac{T}{2}\le t < \frac{3T}{4} \\ GBes{t}_{i, j}\times PE\times rand & \frac{3T}{4}\le t < T \end{cases} $ (14)

    The fitness selection is a significant factor that influences the performance of the PDO method. The hyperparameter selection procedure includes a solution-encoding scheme to estimate the efficiency of candidate solutions. In this work, the PDO methodology considers accuracy as the main criterion for designing the FF, expressed as follows:

    $ Fitness = \mathrm{ }\mathrm{m}\mathrm{a}\mathrm{x}\left(P\right) $ (15)
    $ P = \frac{TP}{TP+FP} $ (16)

    where TP represents the true positive value and FP represents the false positive value.
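    The four-period update of Eqs (6)-(14) can be sketched as a generic minimizer; for a self-contained demonstration it is run here on a toy sphere function rather than on CLSTM hyperparameters (the Mantegna Levy sampler, the greedy replacement rule, and the values chosen for $ \Delta $ and $ \varepsilon $ are illustrative assumptions):

```python
import math
import numpy as np

rng = np.random.default_rng(7)

def levy(size, beta=1.5):
    """Levy-distributed steps via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0.0, sigma, size) / np.abs(rng.normal(0.0, 1.0, size)) ** (1 / beta)

def pdo(obj, dim, n=15, T=100, lb=-5.0, ub=5.0, rho=0.1, eps_q=0.1, delta=1e-10):
    """Minimal PDO sketch: the four time periods of Eq (14) with greedy replacement."""
    pop = rng.uniform(lb, ub, size=(n, dim))
    fit = np.array([obj(x) for x in pop])
    g = int(fit.argmin())
    gbest, gfit = pop[g].copy(), float(fit[g])
    for t in range(T):
        ds = 1.5 * (-1 if t % 2 else 1) * (1 - t / T) ** (2 * t / T)  # Eq (9)
        pe = 1.5 * (1 - t / T) ** (2 * t / T)                          # Eq (12)
        for i in range(n):
            rpd = pop[rng.integers(n)]                                 # a random prairie dog
            ecbest = gbest * delta + pop[i] * pop[i].mean() / (gbest * (ub - lb) + delta)  # Eq (6)
            cpd = (gbest - rpd) / (gbest + delta)                      # Eq (7)
            if t < T / 4:                                              # foraging, Eq (8)
                new = gbest - ecbest * rho - cpd * levy(dim)
            elif t < T / 2:                                            # burrow digging, Eq (10)
                new = gbest * rpd * ds * levy(dim)
            elif t < 3 * T / 4:                                        # food quality, Eq (11)
                new = gbest - ecbest * eps_q - cpd * rng.random()
            else:                                                      # predator response, Eq (13)
                new = gbest * pe * rng.random()
            new = np.clip(new, lb, ub)
            fnew = float(obj(new))
            if fnew < fit[i]:                                          # greedy replacement
                pop[i], fit[i] = new, fnew
                if fnew < gfit:
                    gbest, gfit = new.copy(), fnew
    return gbest, gfit

best_x, best_f = pdo(lambda x: float(np.sum(x ** 2)), dim=3)
```

    For hyperparameter tuning as described in the text, `obj` would instead train the CLSTM with a candidate (learning rate, epoch count, batch size) and return the negated fitness of Eq (15).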

    In this section, the healthcare data classification results of the NIMADL-HDA technique are examined in detail. The NIMADL-HDA technique was tested on a healthcare dataset [23], which combines the Cleveland, Hungarian, Switzerland, Long Beach, and Statlog Heart datasets. Table 1 presents the details of the database.

    Table 1.  Details on database.
    Classes No. of Instances
    Normal 561
    Disease Affected 629
    Total Number of Instances 1190


    Figure 3 shows the confusion matrices produced by the NIMADL-HDA model under numerous epochs. The results suggest that the NIMADL-HDA method effectively detects the normal and disease-affected classes.

    Figure 3.  Confusion matrices of NIMADL-HDA technique (a-f) Epochs 500–3000.
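The balanced accuracy ($ acc{u}_{bal} $) reported alongside these matrices is the mean of the per-class recalls, which can be read directly off a confusion matrix. A sketch, using hypothetical counts that are consistent with the epoch-500 row of Table 2:

```python
import numpy as np

def balanced_accuracy(cm):
    """Mean of per-class recalls from a confusion matrix whose
    rows are true classes and columns are predicted classes."""
    per_class_recall = cm.diagonal() / cm.sum(axis=1)
    return per_class_recall.mean()

# Hypothetical confusion matrix consistent with the epoch-500 row:
# 556/561 normal and 625/629 disease-affected samples correct.
cm = np.array([[556, 5],
               [4, 625]])
```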

    Table 2 and Figure 4 report the overall disease detection results of the NIMADL-HDA method under varying numbers of epochs. The results show that the NIMADL-HDA model properly recognizes normal and disease-affected samples.

    Figure 4.  Disease detection outcome of NIMADL-HDA technique (a-f) Epochs 500–3000.
    Table 2.  Disease detection outcome of NIMADL-HDA technique under various epochs.
    Class Accuracy_bal Precision Recall F-Score G-Measure
    Epoch-500
    Normal 99.11 99.29 99.11 99.20 99.20
    Disease Affected 99.36 99.21 99.36 99.29 99.29
    Average 99.24 99.25 99.24 99.24 99.24
    Epoch-1000
    Normal 99.29 99.46 99.29 99.38 99.38
    Disease Affected 99.52 99.37 99.52 99.44 99.44
    Average 99.41 99.41 99.41 99.41 99.41
    Epoch-1500
    Normal 99.29 99.11 99.29 99.20 99.20
    Disease Affected 99.21 99.36 99.21 99.28 99.28
    Average 99.25 99.24 99.25 99.24 99.24
    Epoch-2000
    Normal 98.93 99.28 98.93 99.11 99.11
    Disease Affected 99.36 99.05 99.36 99.21 99.21
    Average 99.15 99.17 99.15 99.16 99.16
    Epoch-2500
    Normal 99.29 99.64 99.29 99.46 99.46
    Disease Affected 99.68 99.37 99.68 99.52 99.52
    Average 99.48 99.50 99.48 99.49 99.49
    Epoch-3000
    Normal 98.75 99.82 98.75 99.28 99.28
    Disease Affected 99.84 98.90 99.84 99.37 99.37
    Average 99.30 99.36 99.30 99.33 99.33


    In Figure 5, the average detection results of the NIMADL-HDA technique are portrayed under varying numbers of epochs. The obtained values highlight that the NIMADL-HDA technique achieves a proper classification performance. With 500 epochs, the NIMADL-HDA technique gains an average $ acc{u}_{bal} $ of 99.24%, $ pre{c}_{n} $ of 99.25%, $ rec{a}_{l} $ of 99.24%, $ {F}_{score} $ of 99.24%, and $ {G}_{measure} $ of 99.24%. Meanwhile, with 1500 epochs, the NIMADL-HDA methodology gains an average $ acc{u}_{bal} $ of 99.25%, $ pre{c}_{n} $ of 99.24%, $ rec{a}_{l} $ of 99.25%, $ {F}_{score} $ of 99.24%, and $ {G}_{measure} $ of 99.24%. Besides, with 2500 epochs, the NIMADL-HDA method gains an average $ acc{u}_{bal} $ of 99.48%, $ pre{c}_{n} $ of 99.50%, $ rec{a}_{l} $ of 99.48%, $ {F}_{score} $ of 99.49%, and $ {G}_{measure} $ of 99.49%. Finally, with 3000 epochs, the NIMADL-HDA method gains an average $ acc{u}_{bal} $ of 99.30%, $ pre{c}_{n} $ of 99.36%, $ rec{a}_{l} $ of 99.30%, $ {F}_{score} $ of 99.33%, and $ {G}_{measure} $ of 99.33%.
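The reported F-score and G-measure can be cross-checked from the precision and recall values: the F-score is their harmonic mean and the G-measure their geometric mean. A quick consistency check against the epoch-2500 averages:

```python
from math import sqrt

def f_score(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

def g_measure(p, r):
    """Geometric mean of precision and recall."""
    return sqrt(p * r)

# Epoch-2500 averages: precision 99.50 and recall 99.48 both
# reduce to ~99.49, matching the reported F-score and G-measure.
```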

    As shown in Figure 6, the training and validation accuracy curves of the NIMADL-HDA method at epoch 2500 deliver valuable insights into its performance over multiple epochs. These curves offer vital insights into the learning procedure and the model's ability to generalize. Additionally, there is a consistent improvement in training (TR) and testing (TS) accuracy over increasing epochs, highlighting the model's capability to learn and identify patterns within both the TR and TS datasets. The growing testing accuracy suggests that the model not only adapts to the training data, but also excels at making precise predictions on previously unseen data, thus highlighting its robust generalization abilities.

    Figure 5.  Average outcome of NIMADL-HDA technique under various epochs.
    Figure 6.  $ Acc{u}_{y} $ curve of NIMADL-HDA technique under epoch 2500.

    In Figure 7, we present a comprehensive view of the TR and TS loss values for the NIMADL-HDA methodology at epoch 2500. The TR loss progressively decreases as the model adjusts its weights to reduce classification errors on both the TR and TS datasets. These loss curves offer a clear picture of how well the model fits the training data, highlighting its ability to efficiently capture patterns in both datasets. It is worth noting that the NIMADL-HDA model continually refines its parameters to diminish discrepancies between predictions and the true training labels.

    Figure 7.  Loss curve of NIMADL-HDA technique under epoch 2500.

    With respect to the precision-recall (PR) curve shown in Figure 8, the results confirm that the NIMADL-HDA approach at epoch 2500 achieves consistently high PR values across all classes. The results highlight the model's effective capability to discriminate between the classes, underscoring its efficiency in the detection of class labels.

    Figure 8.  PR curve of NIMADL-HDA technique under epoch 2500.

    In Figure 9, we present the Receiver Operating Characteristic (ROC) curves produced by the NIMADL-HDA model at epoch 2500, demonstrating its ability to distinguish between classes. These curves offer valuable insights into the trade-off between the true positive rate (TPR) and false positive rate (FPR) across different classification thresholds and epochs. The outcomes highlight the model's accurate classification performance across different class labels, underlining its effectiveness in tackling varied classification tasks.

    Figure 9.  ROC curve of NIMADL-HDA technique under epoch 2500.

    Table 3 reports a detailed comparison study of the NIMADL-HDA technique with other models [24,25,26]. In Figure 10, a comparative result of the NIMADL-HDA methodology is reported in terms of $ acc{u}_{y} $. Based on $ acc{u}_{y} $, the results indicate that the NIMADL-HDA model attains a higher $ acc{u}_{y} $ of 99.48%, while the ACVD-HBOMDL, SC, J48, ANN, Bagging, REPTree, and SVM models obtain lower $ acc{u}_{y} $ values of 99.39%, 98.86%, 97.60%, 95.55%, 94.99%, 92.91%, and 84.49%, respectively.

    Table 3.  Comparative outcome of NIMADL-HDA approach with other methods [24,25,26].
    Methods Accuracy Precision Recall F-Score
    NIMADL-HDA 99.48 99.50 99.48 99.49
    ACVD-HBOMDL 99.39 99.44 99.39 99.41
    SC Algorithm 98.86 98.42 97.60 98.12
    J48 Algorithm 97.60 97.52 98.42 98.21
    ANN Algorithm 95.55 95.16 94.85 95.27
    Bagging Algorithm 94.99 94.38 94.69 94.42
    REPTree Algorithm 92.91 92.92 92.47 93.18
    SVM Algorithm 84.49 84.95 83.95 83.99

    Figure 10.  $ Acc{u}_{y} $ outcome of NIMADL-HDA approach with other methods.

    In Figure 11, a comparative result of the NIMADL-HDA model is conveyed in terms of $ pre{c}_{n} $, $ rec{a}_{l} $, and $ {F}_{score} $. Based on $ pre{c}_{n} $, the results indicate that the NIMADL-HDA model attains a greater $ pre{c}_{n} $ of 99.50%, while the ACVD-HBOMDL, SC, J48, ANN, Bagging, REPTree, and SVM techniques attain lower $ pre{c}_{n} $ values of 99.44%, 98.42%, 97.52%, 95.16%, 94.38%, 92.92%, and 84.95%, respectively. Besides, based on $ rec{a}_{l} $, the results show that the NIMADL-HDA methodology reaches a higher $ rec{a}_{l} $ of 99.48%, while the ACVD-HBOMDL, SC, J48, ANN, Bagging, REPTree, and SVM methods acquire lower $ rec{a}_{l} $ values of 99.39%, 97.60%, 98.42%, 94.85%, 94.69%, 92.47%, and 83.95%, respectively. Lastly, based on $ {F}_{score} $, the results indicate that the NIMADL-HDA model reaches a higher $ {F}_{score} $ of 99.49%, while the ACVD-HBOMDL, SC, J48, ANN, Bagging, REPTree, and SVM models acquire lower $ {F}_{score} $ values of 99.41%, 98.12%, 98.21%, 95.27%, 94.42%, 93.18%, and 83.99%, respectively.

    Figure 11.  Comparative outcome of NIMADL-HDA approach with other methods [24,25,26].

    We carried out an experiment to verify the performance enhancement provided by the feature selection and hyperparameter tuning processes. The overall results of the NIMADL-HDA technique (with FS and hyperparameter tuning), BMOA-CLSTM (with FS but without hyperparameter tuning), and CLSTM (without FS and hyperparameter tuning) are depicted in Table 4 and Figure 12, respectively. The results indicate that the NIMADL-HDA model outperforms the other ones with a maximum performance due to the integration of the FS and hyperparameter tuning processes.

    Table 4.  Comparative outcome of NIMADL-HDA approach with CLSTM and BMOA-CLSTM model.
    Measures (%)
    Models Accuracy Precision Recall F-Score
    NIMADL-HDA 99.48 99.50 99.48 99.49
    BMOA-CLSTM 98.76 98.89 98.65 98.32
    CLSTM 97.15 98.08 97.12 97.96

    Figure 12.  Comparative outcome of NIMADL-HDA approach before and after FS with hyperparameter tuning process.

    The NIMADL-HDA method excels over existing techniques in healthcare data analysis due to its incorporation of bio-inspired optimization techniques: adaptive FS using the BMO, the CLSTM-based classifier, and effective hyperparameter tuning with the PDO method. Thus, the NIMADL-HDA technique can be applied for enhanced detection processes in the healthcare environment.
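As a minimal illustration of the preprocessing stage of this pipeline, the Z-score normalization step can be sketched as follows (an assumed straightforward implementation; the paper does not list its exact code):

```python
import numpy as np

def z_score_normalize(X):
    """Center each feature to zero mean and scale it to unit variance,
    the Z-score preprocessing applied before feature selection."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma = np.where(sigma == 0, 1.0, sigma)   # guard constant features
    return (X - mu) / sigma
```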

    In this paper, we focused on the design and development of the NIMADL-HDA technique. The NIMADL-HDA technique examines healthcare data to recognize and classify CVD. The presented NIMADL-HDA technique comprises Z-score normalization, BMO-based FS, CLSTM-based detection, and PDO-based hyperparameter tuning. In the developed NIMADL-HDA technique, Z-score normalization was initially performed to normalize the input data. In addition, the NIMADL-HDA technique made use of a BMO-based FS process. For healthcare data classification, the CLSTM model was employed. Finally, the PDO algorithm was exploited to optimize the hyperparameter selection procedure. The NIMADL-HDA technique was experimentally validated on a benchmark healthcare dataset. The acquired outcomes showed that the NIMADL-HDA technique achieved an effective performance compared with other models.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors are thankful to the Deanship of Scientific Research at Najran University for funding this work, under the General Research Funding program grant code (NU/DRP/SERC/12/23).

    The authors declare that they have no conflict of interest. The manuscript was written through contributions of all authors. All authors have given approval to the final version of the manuscript.
